In this week’s tour of new research we skip the question of human rationality (we certainly can be rational!), and instead explore the question of why we so often fail to acknowledge when we are not. We also look at how this irrationality can be used for (some definition of) good.
You don’t even know you’re wrong
The Dunning-Kruger Effect: thinking you know what the Dunning-Kruger Effect is. Why does it persist? Because relative novices (all of us, most of the time) are "insensitive to evidence".
So, D-K emerges because novices are bad at estimating the rightness of their ideas. I think this might describe every hackathon and pitch competition I've ever attended. (Except Oxford Saïd's competition, where students pitch problems rather than solutions.)
Vote by combat
Track people's gaze while they read a ballot and you can predict their voting behavior. Sound like another call to ban AI? Read more closely. The researchers used gaze tracking to measure how (often intentionally) confusing language on a ballot influences voters.
I love gaze tracking as a behavioral measure! Our eyes are so promiscuous :)
Do economists make rational choices?
Some economists: frequent payday loan borrowers are rational because they made good choices in our lab experiment.
Brutal reality: 44% of their cash-strapped participants never bothered claiming their $100 thank-you gift cards.
Peer Pairs Pick Pears
Headline: "Peer Pairs Pick Pears (over snickers)" or perhaps "You Are Who You Eat With". Takeaway: work peers have a persistent influence on each other's food choices.
Another paper used a multi-attribute, time-dependent drift diffusion model (it's less complex than it sounds) to deliver a just-in-time healthy-diet nudge, shifting food choices from taste-driven to health-driven.
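For intuition, here's a minimal toy sketch of a two-attribute, time-dependent drift diffusion process (my own illustration, not the paper's actual model or parameters): taste evidence enters the accumulator immediately, health evidence only after a delay, and the nudge is hypothetically modeled as removing that delay.

```python
import random

def simulate_choice(taste_diff, health_diff, health_onset=50,
                    threshold=1.0, noise=0.5, rate=0.02, rng=None):
    """One trial of a toy two-attribute drift diffusion model.

    Evidence for the tasty option (positive) vs the healthy option
    (negative) accumulates in small noisy steps. Taste evidence
    contributes from the first step; health evidence only enters after
    `health_onset` steps -- the "time-dependent" part. Returns the
    chosen option and the reaction time in steps.
    """
    rng = rng or random.Random()
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        drift = taste_diff + (health_diff if t >= health_onset else 0.0)
        evidence += rate * drift + rng.gauss(0.0, noise) * rate ** 0.5
        t += 1
    return ("tasty" if evidence > 0 else "healthy"), t

# A nudge that surfaces health information early = a smaller health_onset.
rng = random.Random(0)
trials = 2000
def healthy_share(onset):
    return sum(simulate_choice(1.0, -1.5, health_onset=onset, rng=rng)[0]
               == "healthy" for _ in range(trials)) / trials

baseline = healthy_share(50)  # health info arrives late
nudged = healthy_share(0)     # health info available immediately
```

With these illustrative parameters, delivering the health evidence earlier markedly increases the share of simulated health-driven choices, which is the intuition behind timing the nudge to the moment of choice.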
I'm working on a new project combining these two ideas to reduce chronic stress. It combines a reinforcement learning model of participants' choice behavior with, among other things, peer role modeling interventions. I'm getting very excited about it :)
Yes, every expert is biased
As a trained scientist, I am unbiased in every way, as are economists, investors, and all good-hearted people. Imagine how surprised I was to learn that my brain reacts to appealing but wrong ideas in science just like a high schooler's…
In both professors and high schoolers, "a network of frontal brain regions need to overcome our biases in favor of naive ideas." In fact, "scientists are slower and have lower accuracy when judging the scientific value of naive ideas compared to matched control ideas."
Even for universally acclaimed super-geniuses like me, "naive ideas are likely to persist" as we "still rely on high order executive functions like inhibitory control to overcome naive ideas".
The entire point of the philosophy of science is that we are all irreducibly biased but that we can still learn about our world by mercilessly and systematically questioning our assumptions.
Every expert is biased, and overcoming that bias requires effortful frontal computation. The financial incentives of fund managers don't change this. The professionalism of hiring managers doesn't change this.
We are not villains because we are biased; we're just human. We fail only when we refuse to acknowledge our biases and act on that knowledge.
Pulling rank out of your ass
Years ago, I did an analysis that found that for every highly credentialed elite programmer, there were 10x as many equally qualified job-seekers without the credentials. A recent paper helps explain why: our obsession with “rank” blinds us to other possibilities. “The power of rank information”
From hiring decisions to consumer choice to awards in sports, “rank information [vs ratings alone] increases preference for the top ranked option.” As a result, decision makers “invest a disproportionate amount of attention to the top ranked option” and “fail to process the strengths of the other options.”
It’s awfully hard to see value in things that you ignore, whether it's science, music, or human beings.
Condemn the angry
Point 1,324,110 in favor of intellectual humility: average people AND “professionals such as fraud investigators and auditors see ‘suspects’ angry responses to accusations as cues of guilt” when in fact, “accused individuals...were angrier when they are falsely relative to accurately accused.”
And remember, our very perception of facial expressions is biased by our out-group stereotypes.
The Unbearable Effortfulness of Being
A great deal of research indicates that without a good reason “people choose to be idle,” and “when given a choice, humans and non-human animals alike tend to avoid effort.”
We might call this The Unbearable Effortfulness of Being. “Idleness Aversion and the Need for Justifiable Busyness” & “The Effort Paradox: Effort Is Both Costly and Valued”
But it turns out, effort can make us happier...
“People who are busy are happier than people who are idle...even if people are forced to be busy.”
In fact, “the same outcomes [can] be more rewarding if we apply more (not less) effort” and people often “select options precisely because they require effort.”
So the path to happiness is, by definition, hard. We are so weird!
A better question: why do some of us embrace (meaningful) effort while so many others flee it, and how do we change that relationship?
One clue is that “acute stress” increases our “preference for less demanding behavior”, suggesting both a competition for cognitive resources (managing my stress vs solving the problem) and a decrease in subjective utility, the belief that our hard work will pay off.
But it might be possible to change...
One study found that “effort-contingent reward increased participants’ demand seeking and preference for the exertion of cognitive effort” on a completely different task. In other words, “people can learn to assign positive value to mental effort”.
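Here's a hedged sketch of how that kind of finding is often modeled: a toy delta-rule learner (not the study's actual paradigm) in which reward is contingent on choosing the demanding option, so the net learned value of exerting effort turns positive and preferences follow. All names and parameter values are illustrative.

```python
import math
import random

def train_effort_preference(effort_cost=0.5, reward=1.0, alpha=0.1,
                            trials=200, rng=None):
    """Toy delta-rule sketch of learning to value effort.

    Reward is contingent on choosing the demanding option, so its
    learned net value (reward minus effort cost) comes to exceed
    that of the easy option, and softmax choice shifts accordingly.
    """
    rng = rng or random.Random(0)
    q = {"easy": 0.0, "hard": 0.0}
    for _ in range(trials):
        # softmax (logistic) choice between the two options
        p_hard = 1 / (1 + math.exp(-(q["hard"] - q["easy"])))
        choice = "hard" if rng.random() < p_hard else "easy"
        # hard choices pay a reward but cost effort; easy pays nothing
        outcome = (reward - effort_cost) if choice == "hard" else 0.0
        q[choice] += alpha * (outcome - q[choice])
    return q

q = train_effort_preference()
```

After training, the demanding option carries the higher learned value, a minimal version of "assigning positive value to mental effort."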
That’s huge! But it leaves one more question: how easy is it to transfer that effort-based reward expectation to a results-based reward expectation?
Efficient Choice Coding
The title of my dissertation (all those years ago) was “Efficient auditory coding”. It explored how the fundamentals of information theory shape our senses. I always believed that the same principles could explain higher-level behavior. Growing evidence suggests choice itself follows this same principle.
In perception and choice, “several possible representations allow a reinforcement learning agent to model animals’ overall performance”, but similar to perception, “only a small subset of highly compressed representations simultaneously reproduced the co-variability in animals’ choice behavior and dopaminergic activity.”
This bit of nerdery won’t be of too much value in optimizing your resume or increasing innovation in your organization; still, it is a revelation to see the incredible richness of our thoughts derived from Shannon’s “simple” mathematics of communication.
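To make the compression idea concrete, here's a minimal toy example (my own illustration, not the paper's model): when rewards depend on only one feature of the state, a 1-bit code supports exactly the same choices as a full 3-bit code.

```python
import math
from collections import Counter

# Eight environmental states, but the reward for action "go" depends
# only on whether the state is in the upper half, so 1 bit suffices.
STATES = range(8)

def reward(state, action):
    return 1.0 if (action == "go") == (state >= 4) else 0.0

full_code = {s: s for s in STATES}             # 3 bits per state
compressed = {s: int(s >= 4) for s in STATES}  # 1 bit per state

def best_action(code, s):
    # pick the action with the higher average reward over all states
    # that share this state's code word
    pool = [t for t in STATES if code[t] == code[s]]
    return max(("go", "stay"),
               key=lambda a: sum(reward(t, a) for t in pool) / len(pool))

def accuracy(code):
    # fraction of states where the code still yields the rewarded action
    return sum(reward(s, best_action(code, s)) for s in STATES) / len(STATES)

def bits(code):
    # Shannon entropy of the code-word distribution under uniform states
    counts = Counter(code.values())
    n = len(code)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Here the compressed code matches the full code's choice accuracy at a third of the bits; the Shannon-flavored point is that choice behavior constrains which compressions remain viable, not that more bits are always better.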
Fairness follows self-interest
Which is more fair: everyone on a team shares equally in the rewards, or the rewards are proportional to effort? It turns out most people “prefer the rule that most benefits them”, even judging “it to be more fair and moral".
In fact, people’s “views about equality and equity change in a matter of minutes as they learn where their interests lie".
But self-interest isn’t the only factor shaping our perception of fairness…
Across cultures, policies that reduce inequality but “reverse pre-existing income rankings” are “twice as likely to be rejected”. And this was in an economic game (the “Dictator” game) where the income rankings were essentially arbitrary.
Lastly, when inequality increases “in the outcomes of an economic game…participants…take greater risks to try to achieve higher outcomes”.
i.e., high-stakes + low perceived fairness = irrational risk taking.
If it sounds too good to be true…cite the shit out of it!
In science, “papers in top psychology, economics, and general interest journals that fail to replicate are cited more than those that do”, even after the failed replication.
This perverse disparity exists despite the fact that “experts predict well which papers will be replicated.”
It's too good to be true, but too hard to pass up.
Before you go pointing fingers at your neighborhood scientists, remember that a similar effect can be found in many domains. For example, mayors who are more likely to lie are also more likely to be elected and re-elected…by you.