We’ve established that there are two interdependent cognitive processes, automatic (elephant) and controlled (rider), that are active when we make a decision. For some decisions, such as jumping out of the way of a speeding car, the elephant takes the lead. For other decisions, such as voting for President, we’d like to believe that the rider takes the reins, but, in reality, the elephant plays a large, often dominant, role. Surely, this isn’t a pretty thought to dwell on, but it’s going to get even uglier before we break out the scalpel and explore how to fix this mess.
(If you want to take a step back, check out 1) book to own: happiness hypothesis, 2) the evolution of the elephant and the rider, 3) the mind and morality.)
Jonathan Haidt quotes from Robert Wright’s The Moral Animal (…on my to-read list), “Human beings are a species splendid in their array of moral equipment, tragic in their propensity to misuse it, and pathetic in their constitutional ignorance of the misuse.”
Most of us are likely willing to accept that, at times, we’ll employ tenuous reasoning to justify not doing what we would consider the ‘right thing’ (at least when doing so would be inconvenient…). Haidt argues that these cases are exceptional ONLY in that they mark the few times we are actually aware of how immoral our moral decisions are.
He cites one study in which Person A was told that two tasks, one pleasant and one unpleasant, were to be assigned to Person A and Person B, and that Person A was allowed to delegate the tasks. Person A was then left alone in a room with a coin.
The experimenters found that “people who think they are particularly moral are in fact more likely to ‘do the right thing’ and flip the coin.” No surprise there, “but when the coin flip comes out against them, they find a way to ignore it and follow their own self-interest.”
But how does this happen? Why doesn’t the rider step in and take control of the cognitive process?
For one, the rider isn’t giving orders, he’s taking the role of lawyer:
“Although many lawyers won’t tell a direct lie, most will do what they can to hide inconvenient facts while weaving a plausible alternative story for the judge and jury … For example, whether the minimum wage should be raised – they generally lean one way or the other right away, and then put a call in to reasoning to see whether support for that position is forthcoming.” If the person asked about the minimum wage has an aunt who earns minimum wage and can’t support her family, the person will support raising it.
Haidt cites Deanna Kuhn as one researcher that has found that decisions are mostly made based on such pseudoevidence, precluding the search for any contradictory evidence that might be more robust.
Haidt continues: “Studies show that people set out on a cognitive mission to bring back reasons to support their preferred belief or action. And because we are usually successful in this mission, we end up with the illusion of objectivity. We really believe that our position is rationally and objectively justified.”
Even the people who WANT to be fair, and make a dedicated effort TO BE fair, still end up being unfair.
At this point, I expect that most readers will agree that this flawed decision-making exists, but, if questioned directly, would still refuse to believe that their own partisan alliances, policy preferences, and everyday moral judgments are so baseless and hypocritical.
Haidt channels this position: “Everyone is influenced by ideology and self-interest. Except for me. I see things as they are.”
As I read this book, I tried to bear in mind man’s poor ability to assess his limitations. Here are three quotes that helped me focus on getting past my own biases, rather than simply dismissing others as biased:
- “We think we have special information about ourselves – we know what we are “really like” inside, so we can easily find ways to explain away our selfish acts and cling to the illusion that we are better than others.”
- “Subjects used base rate information [average/mean] properly to revise their predictions of others, but they refused to apply it to their rosy self-assessments.”
- “When comparing ourselves to others, the general process is this: Frame the question (unconsciously, automatically) so that the trait in question is related to a self-perceived strength, then go out and look for evidence that you have the strength.” At that point, you can stop thinking.
Haidt terms mankind’s distorted worldview “naïve realism,” and proceeds to assail it as “the biggest obstacle to world peace and social harmony.” Why? Because naïve realists form naïve realist groups. No one cares if Joe always thinks he’s getting the short end of the stick because of the people he doesn’t like at work, but it becomes everyone’s problem when there’s a group of 1,000 Joes with the same distorted perception.
Naïve realism creates a narrative of pure virtue (our side) versus pure vice (those who disagree with us). We’re fair and they are not. We’re just trying to do the right thing; they are selfish and immoral.
Haidt argues that the root causes of evil within naïve realism are high self-esteem and moral idealism. But why?
“Threatened self-esteem accounts for a large portion of violence at the individual level, but to really get a mass atrocity going you need idealism – the belief that your violence is a means to a moral end. … [For instance,] when people have strong moral feelings about a controversial issue – when they have a “moral mandate” – they care much less about procedural fairness in court cases.”
As we wrap this installment, let me return to Robert Wright’s excellent quote:
“Human beings are a species splendid in their array of moral equipment, tragic in their propensity to misuse it, and pathetic in their constitutional ignorance of the misuse.”
First, it was necessary to convince ourselves that we do indeed misuse our moral equipment, and that we have only begun to understand the depths of this misuse. The next post will look at how we can improve our use of our wide array of moral equipment.
Filed under: Cognition