Publius has moved! PrincipledAgent.com

The conversation has moved. Please join us at www.principledagent.com.

the hot-hand fallacy

Of all the things that people are terribly wrong about, sports may reign supreme. Perhaps because most have played a sport (at a terribly uncompetitive level), or because of the ubiquitous media commentary by fat, uneducated former players, the average American is thoroughly convinced he understands how his game of choice works, why a team is losing, why a player is struggling, etc. What’s more, the fan’s understanding will sound a lot like an ancient mystic reading tea leaves.  “Of course the Lakers won the NBA championship, did you see the look on Kobe’s face in the finals? He’s not smiling and has a meaner look than last year.” (While a made-up quote, that was indeed a real topic of discussion for those who missed it.)


Filed under: Cognition

normal accident theory: explaining outliers

Leonard Mlodinow explains outliers better than Malcolm Gladwell ever could in The Drunkard’s Walk: How Randomness Rules Our Lives. For example:

That string of events spurred Yale sociologist Charles Perrow to create a new theory of accidents, in which is codified the central argument of this chapter: in complex systems (among which I count our lives) we should expect that minor factors we can usually ignore will by chance sometimes cause major incidents. In his theory Perrow recognized that modern systems are made up of thousands of parts, including fallible human decision makers, which interrelate in ways that are, like Laplace’s atoms, impossible to track and anticipate individually.

Filed under: Cognition

making the grade(s)

Leonard Mlodinow’s The Drunkard’s Walk: How Randomness Rules Our Lives (NY Times review) takes on the absurdity of the grading process below.

For instance, a group of researchers at Clarion University of Pennsylvania collected 120 term papers and treated them with a degree of scrutiny you can be certain your own child’s work will never receive: each term paper was scored independently by eight faculty members. The resulting grades, on a scale from A to F, sometimes varied by two or more grades. On average they differed by nearly one grade.
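Mlodinow’s point is easy to check with a toy simulation. The sketch below is my own illustration, not the Clarion study’s method: the score-to-letter mapping and the `noise_sd` grading noise are assumed numbers, chosen only to show how independent, noisy graders produce the kind of spread the study found.

```python
import random

random.seed(0)

def letter(score):
    """Map a 0-100 numeric score to grade points (F=0 ... A=4)."""
    return min(4, max(0, int((score - 50) // 10)))

def grade_spread(n_papers=120, n_graders=8, noise_sd=7.0):
    """Average max-min spread (in letter-grade steps) when each paper
    is scored independently by several noisy graders."""
    spreads = []
    for _ in range(n_papers):
        true_quality = random.uniform(55, 95)   # the paper's "true" score
        grades = [letter(random.gauss(true_quality, noise_sd))
                  for _ in range(n_graders)]
        spreads.append(max(grades) - min(grades))
    return sum(spreads) / n_papers

print(grade_spread())  # typically a full letter grade or more
```

Even modest grading noise, run through a coarse letter scale, routinely produces papers whose eight grades span two or more letters.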

Filed under: Cognition, Education

meditation, prozac, and cognitive therapy

The final chapter of the Happiness Hypothesis series (1) book to own: happiness hypothesis; 2) the evolution of the elephant and the rider; 3) the mind and morality; 4) man’s misuse of morality) looks at what works and what doesn’t when it comes to improving our flawed cognitive processes. The elephant and rider aren’t perfect, but by understanding their nature, we can improve their functioning.

To begin, let’s remember that the elephant (often equated with our intuition or instinct) “was shaped by natural selection to win at the game of life and part of its strategy is to impress others, gain their admiration, and rise in relative rank. The elephant cares about prestige, not happiness.” I want to stay on topic, but I’ll note that Haidt distinguishes between the interest in relative social status and happiness, which have been conflated in modern discussion about inequality.

Back to the main show: what to do with our intelligent but slow-acting rider and stubborn, hyper-emotional elephant?

The answer isn’t whipping the stubborn elephant into submission, but rather to “drop the brute force method and take a more psychologically sophisticated approach to self-improvement. … Human rationality depends critically on sophisticated emotionality. It is only because our emotional brains work so well that our reasoning can work at all. … Reason and emotion must both work together to create intelligent behavior.”

Leave it to Ben Franklin to put the point most succinctly, “If Passion drives, let Reason hold the Reins.”

Reason, in this case, “knows how to distract and coax the elephant without having to engage in a direct contest of wills.”

Enough with the vagueness; Haidt points to three methods for improving our cognition: meditation, cognitive therapy, and Prozac. I’m only going to spend a moment on meditation (its utility is well-documented), a few more on Prozac (for the eyebrows raised by its inclusion), and concentrate on cognitive therapy.

The “goal of meditation is to change automatic thought processes … proof of taming is the breaking of attachments.” These types of attachments “are like a game of roulette … the more you play, the more you lose. The only way to win is to step away from the table … Although you give up the pleasures of winning, you also give up the larger pains of losing.”

Prozac is controversial because it appears to be a shortcut — “cosmetic psychopharmacology” — that shapes minds like a cosmetic surgeon augments breasts. Haidt notes that our culture endorses two partly opposing perspectives — “relentless self-improvement as well as authenticity – but we often escape the contradiction by framing self-improvement as authenticity. … As long as change is gradual and a result of the child’s hard work, the child is given the moral credit for the change, and that change is in the service of authenticity. But what if there were a pill that enhanced tennis skills? … Such a separation of self-improvement from authenticity would make many people recoil in horror.”

Haidt explores the stigma on cosmetic surgery as well, but I’ll focus on his criticism of those who criticize Prozac as a chemical shortcut — “It’s easy for those who did well in the cortical lottery to preach about the importance of hard work and the unnaturalness of chemical shortcuts.”

Haidt supplies research arguing that each person is born with an inherited chemical balance, which goes largely unchanged throughout life and dictates the range of happiness and sadness the person is susceptible to — “many people really do need a mechanical adjustment. It’s as though they had been driving for years with the emergency brake halfway engaged.”

Prozac shouldn’t be seen as cosmetic for the “worried well”, but “like giving contact lenses to a person with poor but functional eyesight who has learned ways of coping with her limitations.” Contact lenses and Prozac both are a “reasonable shortcut to proper functioning.”

Fascinating.

Finally, cognitive therapy was born as a means for therapists to engage depressed people, who weren’t being reached by the Freudian exploration of painful memories and forced sexual innuendo. Cognitive therapy allowed patients to get beyond the bad memories and critical thoughts by questioning “the legitimacy of his patients’ irrational and self-critical thoughts.” The key was to “[map] out the distorted thought processes characteristic of depressed people and [train] his patients to catch and challenge these thoughts.”

Just as depressed patients are convinced of their self-critical beliefs, we also deploy distorted thought processes “not to find the truth but to invent arguments to support our deep and intuitive beliefs (residing in the elephant).” For depressed people, the three types of irrational distortions are “personalization” (seeing events as reflections of the self), “overgeneralization” (taking one event and believing it ALWAYS happens), and “magnification” (arbitrary inference, or jumping to a conclusion without evidence).

These should sound familiar, as they are cousins of the cognitive biases and distortions that are well documented in non-depressed people. I think this is meaningful. Accurate and realistic judgment is good for your mental health.

Cognitive therapy is about “challenging automatic thoughts and engaging in simple tasks” to create positive habits that will further shape your automatic thought processes — “it teaches the rider how to train the elephant rather than how to defeat it directly in an argument.” You get better at thinking the same way you do at anything — practice — “write down your thoughts, learn to recognize the distortions in your thoughts, and then think of a more appropriate thought.”

Specifically, Haidt refers to psychological studies finding that writing about the impact of biases doesn’t change behavior (though it does help one better predict the behavior of others), and neither does writing an essay arguing the opposing view. The only thing that worked was asking subjects to read an essay on biases and then write an essay about the weaknesses of their own case; this made study participants far more fair-minded. That said, the study didn’t ask them to question the deeply held beliefs one associates with personal character, only recently assumed positions. Still, it’s a start.

In sum, man comes “equipped with cognitive processes that predispose us to hypocrisy, self-righteousness, and moralistic conflict. … By knowing the mind’s structure and strategies, we can step out of the ancient game of social manipulation and enter into a game of our choosing. … By seeing the log in your own eye you can become less biased, less moralistic, and therefore less inclined toward argument and conflict.”


Filed under: Cognition

why are the other guys wrong?

It’s been a while since I’ve posted, as I’ve waited to see if I’d be hit with a stroke of inspiration (…no, not yet.) So I thought I would see if any of my few (but dedicated) readers could get the game going.

One of the reasons I am sympathetic to both conservative and liberal perspectives is that I agree with both sides’ criticisms of the other — both sides fail to recognize distortions in their perspectives, which, in turn, undermine the intellectual integrity of their arguments.

My theory is that each of us internalizes one or more insights about different policy issues. For instance, if someone talks to me about the problems with public schools, I will be drawn to the inefficiencies that public schools have in common with other government programs. On the other hand, a more liberal friend will point to the fact that public schools don’t get enough funding.

I think it’s important both for self-awareness and public discourse to explore these prepackaged insights, as they can get you in trouble. Consider those who yelled “liberalization!” in poor countries over the last 40 years, when privatization would only exchange public corruption for private corruption; or those on the other side of the aisle who have routinely demanded more and more funding for government programs (e.g., those public schools) without positive results. In both cases, even when the results aren’t positive, each side simply says, ‘Well, the problem is you need more liberalization/funding.’

So, fair readers, why do you think the other side is routinely wrong? What don’t they get? I’ll be supplying my own thoughts later on, but I’d like to respond to what YOU think as well.

Filed under: Cognition

man’s misuse of morality

We’ve established that there are two interdependent cognitive processes, automatic (elephant) and controlled (rider), that are active when we make a decision. For some decisions, such as jumping out of the way of a speeding car, the elephant takes the lead. For other decisions, such as voting for President, we’d like to believe that the rider takes the reins, but, in reality, the elephant plays a large, often dominant, role. Surely, this isn’t a pretty thought to dwell on, but it’s going to get even uglier before we break out the scalpel and explore how to fix this mess.

(If you want to take a step back, check out 1) book to own: happiness hypothesis, 2) the evolution of the elephant and the rider, 3) the mind and morality.)

Jonathan Haidt quotes from Robert Wright’s The Moral Animal (…on my to-read list), “Human beings are a species splendid in their array of moral equipment, tragic in their propensity to misuse it, and pathetic in their constitutional ignorance of the misuse.”

Most are likely willing to accept that, at times, we’ll employ tenuous reasoning to justify not doing what we would consider the ‘right thing’ (at a moment when doing so would be inconvenient…). Haidt argues that these cases are exceptional ONLY in that they mark the few times we are actually aware of how immoral our moral decisions are.

He cites one study where Person A was told that two tasks, one pleasant and one not pleasant, were to be assigned to Person A and Person B. Furthermore, Person A was allowed to delegate the tasks. Person A was left alone in a room with a coin.

The experimenters found that “people who think they are particularly moral are in fact more likely to ‘do the right thing’ and flip the coin.” No surprise there, “but when the coin flip comes out against them, they find a way to ignore it and follow their own self-interest.”

But how does this happen? Why doesn’t the rider step in and take control of the cognitive process?

For one, the rider isn’t giving orders, he’s taking the role of lawyer:

“Although many lawyers won’t tell a direct lie, most will do what they can to hide inconvenient facts while weaving a plausible alternative story for the judge and jury … For example, whether the minimum wage should be raised – they generally lean one way or the other right away, and then put a call in to reasoning to see whether support for that position is forthcoming.” If the person asked about the minimum wage has an aunt who works for minimum wage and can’t support her family, that person will support the raise.

Haidt cites Deanna Kuhn as one researcher who has found that decisions are mostly made based on such pseudoevidence, precluding the search for any contradictory evidence that might be more robust.

Haidt continues: “Studies show that people set out on a cognitive mission to bring back reasons to support their preferred belief or action. And because we are usually successful in this mission, we end up with the illusion of objectivity. We really believe that our position is rationally and objectively justified.”

Even the people who WANT to be fair, and make a dedicated effort TO BE fair, still end up being unfair.

At this point, I expect that most readers agree that this flawed decision-making exists, but if questioned directly, would still refuse to believe that their partisan alliances, policy preferences, and everyday moral judgments are so baseless and hypocritical.

Haidt channels this position: “Everyone is influenced by ideology and self-interest. Except for me. I see things as they are.”

As I read this book I tried to constantly bear in mind man’s poor ability to assess his limitations. Here are three quotes that helped me focus on getting past my own biases, rather than simply dismissing others as biased:

  • “We think we have special information about ourselves – we know what we are “really like” inside, so we can easily find ways to explain away our selfish acts and cling to the illusion that we are better than others.”
  • “Subjects used base rate information [average/mean] properly to revise their predictions of others, but they refused to apply it to their rosy self-assessments.”
  • “When comparing ourselves to others, the general process is this: Frame the question (unconsciously, automatically) so that the trait in question is related to a self-perceived strength, then go out and look for evidence that you have the strength.” At that point, you can stop thinking.
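The base-rate finding in the second quote can be made concrete with a toy calculation. This sketch uses my own hypothetical numbers, not Haidt’s: it blends a population base rate with an individual’s gut signal, and setting the signal weight to 1 reproduces the “rosy self-assessment” pattern in which the base rate is simply ignored.

```python
def base_rate_prediction(base_rate, signal, weight_on_signal):
    """Blend a population base rate with an individual signal.
    weight_on_signal=1.0 ignores the base rate entirely, which is
    the 'rosy self-assessment' pattern the quote describes."""
    return weight_on_signal * signal + (1 - weight_on_signal) * base_rate

# Hypothetical numbers: the population average percentile is 50,
# but the gut says "I'm a 90th-percentile driver."
print(base_rate_prediction(50, 90, 0.3))  # judging others: pulled toward the mean
print(base_rate_prediction(50, 90, 1.0))  # judging ourselves: the base rate is ignored
```

The asymmetry the researchers found is exactly the difference between those two calls: we apply a sensible signal weight to others and a weight of one to ourselves.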

Haidt terms mankind’s distorted worldview “naïve realism,” and proceeds to assail it as “the biggest obstacle to world peace and social harmony.” Why? Because naïve realists form naïve realist groups. No one cares if Joe always thinks he’s getting the short end of the stick because of the people he doesn’t like at work, but it becomes everyone’s problem when there’s a group of 1,000 Joes with the same distorted perception.

Naïve realism creates a narrative of pure virtue (our side) versus pure vice (those who disagree with us). We’re fair and they are not. We’re just trying to do the right thing; they are selfish and immoral.

Haidt argues that the root causes of evil within naïve realism are high self-esteem and moral idealism. But why?

“Threatened self-esteem accounts for a large portion of violence at the individual level, but to really get a mass atrocity going you need idealism – the belief that your violence is a means to a moral end. … [For instance,] when people have strong moral feelings about a controversial issue – when they have a “moral mandate” – they care much less about procedural fairness in court cases.”

As we wrap this installment, let me return to Robert Wright’s excellent quote:

“Human beings are a species splendid in their array of moral equipment, tragic in their propensity to misuse it, and pathetic in their constitutional ignorance of the misuse.”

First it was necessary to convince ourselves that we indeed do misuse our moral equipment, and that we have only begun to understand the depths of this misuse. The next post will look at how we improve our use of our wide array of moral equipment.


Filed under: Cognition

the mind and morality

For those that missed the preceding posts on this topic (book to own: happiness hypothesis, the evolution of the elephant and the rider) or haven’t yet committed their content to memory, allow me to reintroduce some of the key terms and concepts. Our brain understands the world by processing the information received by our senses. These processes can be grouped by whether they are automatic/subconscious or controlled/deliberate. This post will focus on my primary interest: how cognitive processes affect moral judgments.

The automatic system has a long history and has evolved to serve elemental needs linked to survival (e.g., fight/flight, don’t eat the green berries, etc.). The controlled system is a relatively new adaptation, which separates us from (most, if not all) animals, and has evolved to allow humans to make better long-term decisions and to expand their ability to cooperate in large-scale communities. Jonathan Haidt likens the dynamic between the controlled and automatic systems to a rider atop an elephant:

“The automatic system [the elephant] was shaped by natural selection to trigger quick and reliable action, and it includes parts of the brain that make us feel pleasure and pain (such as the orbitofrontal cortex) and trigger survival-related motivations (such as the hypothalamus) … The controlled system, in contrast, is better seen as an advisor. It’s a rider placed on the elephant’s back to help the elephant make better choices. The rider can see farther into the future, and the rider can learn valuable information by talking to other riders or by reading maps, but the rider cannot order the elephant around against its will.”

It would be pleasant to believe that the human brain is a model of efficiency – seamlessly switching to and from the elephant (automatic system) and the rider (controlled) based on what is most appropriate. A car runs a red light and starts careening toward you as you walk down the sidewalk? The elephant throws you to the ground before the rider even puts together what’s happened. Deciding your position on a political or social issue? The elephant steps aside to let the rider judge the merits of each side.

This last step, the peaceful transfer of power from the elephant to the rider, is the destructive delusion that will be the subject of this post. The elephant will not go quietly into the night. Or rather – to be less poetic and more precise – the elephant is unable to see when its services are productive (the decision to respond to a runaway car) and unproductive (a decision on a complex policy issue). The elephant takes the lead no matter what the issue. So what does the rider do?

When confronted with a moral issue or really any issue that elicits a strong feeling, the rider is relegated to the role of the lawyer for the elephant: “It is the elephant holding the reins, guiding the rider. It is the elephant who decides what is good or bad, beautiful or ugly. Gut feelings, intuitions, and snap judgments happen constantly, and automatically.”

Haidt compares moral judgment to aesthetic judgment: “When you see a painting, you usually know instantly and automatically whether you like it. If someone asks you to explain your judgment, you confabulate. You don’t really know why you think something is beautiful.”

I think some would disagree that they don’t know why something is beautiful, but I don’t think they would deny the chronology of events:

1. See a Painting
2. Instantly like/dislike the painting
3. Begin to think about why you like/dislike the painting
4. Decide on a reason why you like/dislike the painting

This understanding of moral and aesthetic decision-making is humbling. No one wants to think they are simply confabulating post-hoc explanations for a gut reaction to complex issues like trade agreements or environmental issues. No one really wants to believe that their rider is reduced to “[stringing] sentences together and [creating] arguments to give to other people … fighting in the court of public opinion to persuade others of the elephant’s point of view.”

Haidt brings up a situation we’re all accustomed to: “When you refute a person’s argument, does she generally change her mind and agree with you? Of course not, because the argument you defeated was not the cause of her position; it was made up after the judgment was already made.”

It’s quite common to hear people decry “strawmen” arguments. Haidt argues that all of our arguments are (to some degree) strawmen. They are all post-hoc justifications. When “two people feel strongly about an issue, their feelings come first, and their reasons are invented on the fly, to throw at each other.”

To set aside the elephant/rider metaphor, different parts of the brain correspond to different mental activities – the frontal insula is active during “unpleasant emotional states, particularly anger and disgust,” while the “dorsolateral prefrontal cortex, just behind the sides of the forehead, [is] known to be active during reasoning and calculation.” Haidt’s argument squares with my perception that judgments are made in the parts of the brain associated with emotion and the subconscious, while the arguments defending the judgments are constructed after the fact in the parts of the brain associated with reasoning and calculation.

The more emotionally detached you are, the more likely the rider can take the reins from the elephant and steer it on a rational, calculated path.

Oomph. I don’t want to dilute this crucial point about how our brains take moral positions and make moral arguments with any additional content. In the next post, I’ll look at how the elephant further muddles our moral judgments by filtering the information that gets to the rider. For now, are there doubts about this theory of making and supporting judgments? If so, I could dig back and supply some of the studies that have looked at brain activity and behavior, which support this understanding, but I don’t want to lose the theory amidst the details if it’s not necessary.

The final post in the series has arrived: man’s misuse of morality.


Filed under: Cognition

the evolution of the elephant and rider

This post is the second in a series responding to Jonathan Haidt’s excellent book, “Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom.” You can read my initial brief review of the book here. This post will set the stage for a more substantive discussion about how and why our brains will sometimes lead us down paths we wouldn’t deliberately choose for ourselves, and what can be done to guard against this tendency. First, we need to understand our mind’s evolution.

Mankind is one of the few species able to live in large, relatively peaceful societies. So what do we have in common with ants, termites, and naked mole rats? We’ve all been able to overcome “the laws of evolution (such as competition and survival of the fittest)” through kin altruism.

Kin altruism is short-hand for the expansion of an organism’s genetic self-interest to family members beyond its own young. Bees, for instance, are all siblings, and it’s in their genetic self-interest to sacrifice themselves for the hive – “selfishness becomes genetic suicide.” This kin altruism, however, only takes a species so far; it breaks down quickly, especially for species that aren’t all brothers and sisters. For humans, “gratitude and vengeance are big steps on the road that led to human ultrasociality.”

Robin Dunbar has demonstrated that animal brain size correlates with social group size. Evolutionary success depends on playing the social game well. Brain power allows the animal to do just that, improving the animal’s odds of surviving and reproducing. Mix in some natural selection, and you explain the evolution of larger, increasingly sophisticated brains in humans.

The evolutionary backstory is essential to understanding why the brain functions as it does. Haidt identifies three cognitive developments that allowed humans to live in large, cooperative societies: language, reciprocity and vengeance.

Reciprocity and vengeance are two sides of the same coin to Haidt. He notes, “Reciprocity is a deep instinct; it is the basic currency of social life.” Social cohesion and cooperation depend on the promise of reciprocity and the fear of vengeance. Haidt restates Jane Jacobs’ observation that a neighborhood has a lot of problems when a parent doesn’t feel comfortable castigating someone else’s unruly child.

While language has evolved to serve a variety of purposes, Haidt infers that one of its principal uses to early man was gossip, which serves as a “policeman and a teacher.” What hit home for me was Haidt’s insight that “many species reciprocate, but only humans gossip, and much of what we gossip about is the value of other people as partners for reciprocal relationships.”

Language, reciprocity, and vengeance all work together to a socially productive end; “Gossip paired with reciprocity allows karma to work here on earth, not in the next life. As long as everyone plays tit-for-tat augmented by gratitude, vengeance, and gossip, the whole system should work beautifully.”
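The “tit-for-tat augmented by gratitude and vengeance” system Haidt invokes can be sketched as a tiny iterated prisoner’s dilemma. The payoff values below are the standard textbook numbers, not Haidt’s, and the strategy names are my own:

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my points.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Total scores for two strategies over repeated rounds."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then vengeance: (9, 14)
```

Gratitude (cooperating with cooperators) and vengeance (defecting against defectors) fall straight out of the mirroring rule, which is why two reciprocators prosper while an exploiter is quickly shut down.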

Yet Haidt notes that the system doesn’t work perfectly. Evolutionary quirks leave us biased and hypocritical, sabotaging our collective efforts for social cooperation. My primary interest is identifying these destructive quirks, figuring out why they exist, and developing a plan of action to minimize their destructive impact.

The reason for our cognitive hiccups is simple – “evolution never looks ahead” (terrific insight). For instance, “linguistic ability spread to the extent that it helped the elephant do something important in a better way.” Vocalization didn’t evolve so that we could reproduce every sound perfectly. Language itself didn’t evolve so that we could communicate with perfect efficiency and clarity. Natural selection finds better ways to do things, not the best ways.

Haidt compares the human mind to a rider on top of an elephant. The elephant is our ancestral, subconscious system of automatic processing that drives our most elemental impulses. Buddha offers further insight: “In days gone by this mind of mine used to stray wherever selfish desire or lust or pleasure would lead it. Today this mind does not stray and is under the harmony of control, even as a wild elephant is controlled by the trainer.” Buddha’s elephant trainer, the rider, is a relatively new addition to the cognitive space, as “language, reasoning, and conscious planning arrived in the most recent eye-blink of evolution.”

In sum:
“The automatic system [the elephant] was shaped by natural selection to trigger quick and reliable action, and it includes parts of the brain that make us feel pleasure and pain (such as the orbitofrontal cortex) and trigger survival-related motivations (such as the hypothalamus) … The controlled system, in contrast, is better seen as an advisor. It’s a rider placed on the elephant’s back to help the elephant make better choices. The rider can see farther into the future, and the rider can learn valuable information by talking to other riders or by reading maps, but the rider cannot order the elephant around against its will.”

If the rider and the elephant sound like the basis for a Disney buddy film, you’re right on track. Haidt describes the rider/elephant dynamic: “I was a rider on the back of an elephant. I’m holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn’t have desires of his own. When the elephant really wants to do something, I’m no match for him.”

The rider couldn’t do without the elephant, because “the mind performs hundreds of operations each second, all but one of them must be handled automatically,” but, likewise, the elephant depends on the rider for its chance at evolutionary success. For it’s the rider that “allows people to think about long-term goals and thereby escape the tyranny of the here-and-now, the automatic triggering of temptation by the sight of tempting objects.”

As Haidt notes in the block quote above, the elephant and rider (or automatic and controlled processing systems) correspond to different areas of the brain. The distinction does have basis in the way different parts of our brain are active when we are emotional and rushed versus detached and deliberate.

The next posts in this series (the mind and morality, man’s misuse of morality) look at where and why these dual systems of processing information go awry.


Filed under: Cognition

book to own: happiness hypothesis

When pitching Jonathan Haidt’s “Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” to friends, I often find myself explaining away the title — no, it’s not another self-help book and yes, it’s about more than just plastering a silly smile on your face. With that said, the title is appropriate; Haidt is chiefly concerned with what’s responsible for making humans happy.

The title fails, however, to convey the breadth and depth of Haidt’s search, which touches on philosophy, psychology, economics, evolution, and cognitive science, and skips effortlessly across the centuries, from the Stoics’ philosophical minimalism to Ben Franklin’s pragmatism to Robert Cialdini’s work on Influence.

Haidt documents the evolution of the human mind, producing an overarching narrative that explains everything from the use of gossip and Prozac to the mental tendencies that steer men away from their stated values and towards self-destruction.

Along with “Kluge,” this book has profoundly shaped the way I view my brain. Before Haidt, I was aware that our brains appeared to systematically work against our best interest, and that these tendencies manifested in more general cognitive biases. Haidt, however, takes you behind the curtain, and provides a look at what exactly is going on in your brain and the evolutionary logic behind it. This book provided a more systematic take on cognition than the discrete observational work I had previously encountered.

My interest in correcting my cognitive failings largely emanates from my concern with my ability to grasp the truth. Haidt rightly adds that it’s profoundly important to happiness in general. Cognitive therapy has allowed many to escape depression by directly attacking distortions in thought. These depressive distortions are direct relatives to those that scare up trouble in all of our lives, and Haidt provides an excellent primer on how to exorcise your cognitive demons through a few different means, thereby improving the way you think and possibly making you happier.

This isn’t the end of my cognitive kick; I’m working on a series of posts that explore Haidt’s ideas in greater detail, which will dovetail nicely with Kluge, which I’m currently finishing up.


Filed under: Cognition

don’t be bob bias

Meet Bob Bias. Bob is an insufferable know-it-all (no wonder, he assumes the worst about everyone) and truly believes that if he wills it, it will be done. He’s statistically incompetent, his memory is pretty bad, and yet he still thinks he is always right.

Unfortunately, we all have a little more Bob in us than we would like.

Bob, along with the rest of us, is simply a human being trying to find his way within an infinite flux of particles and processes with a human mind that is finite and cannot comprehend the infinite. Bob has to comprehend reality through thinking about simplified models of it. For instance, Bob thinks of Pi as 3.14, or simply as “Pi,” because he can’t really comprehend or write out its infinite decimal expansion.

These models work very well for most of our tasks. My mental representation of toothpaste is unbelievably superficial, but it captures the value that is important to me.

Our comprehension, however, isn’t infinite, and it runs into trouble in a few different areas. I’ve been checking out the cognitive bias work of Max Bazerman and have grouped his biases into the traps below, each of which sabotages our decision-making.

General traps
Seen-it-all trap
Bob thinks his perception of a variable or process captures it in its entirety
Result: Bob thinks that his odds of winning a dice game have fundamentally changed since he has been “hot” recently
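The seen-it-all trap is easy to test empirically. As a minimal sketch (my own illustration, not from the post), the Python simulation below checks whether a streak of three “high” die rolls changes the odds of the next roll being high — independence says it shouldn’t:

```python
import random

random.seed(0)

def roll_high():
    """Roll a fair die; 'high' means 4, 5, or 6 (probability 1/2)."""
    return random.randint(1, 6) >= 4

# Simulate many rolls, then compare the chance of a high roll
# overall vs. immediately after a "hot" streak of three highs.
rolls = [roll_high() for _ in range(200_000)]

after_streak = [
    rolls[i] for i in range(3, len(rolls))
    if rolls[i - 3] and rolls[i - 2] and rolls[i - 1]
]

overall = sum(rolls) / len(rolls)
post_streak = sum(after_streak) / len(after_streak)

print(f"P(high) overall:       {overall:.3f}")
print(f"P(high) after 3 highs: {post_streak:.3f}")
```

Both estimates come out near 0.5: the dice have no memory, whatever Bob’s streak feels like.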

Never-wrong trap
Bob thinks that all of his simple finite models are accurate representations
Result: Bob plays a dumb bet in the stock market based on a superficial analysis

All-powerful trap
Bob thinks that every process or variable can be controlled by his action
Result: When Bob rolls a pair of dice, he throws harder when he’s going for high numbers, softer when he wants low numbers

Dealing with data
The human mind is good at registering extremes: you remember the feeling of fire burning your skin. That same strength hurts our ability to process large amounts of data. We aren’t good at abstraction or at dealing with big numbers, and we have a hard time comprehending probabilities. We’re also oversensitive to causal relationships; we see causes everywhere, from lucky hats to less ridiculous, but no more causal, variables.

Descriptive-recall trap
Bob’s ability to recall events and people is based on the vividness and recency of the memory rather than the actual frequency of the events
Result: Bob is more afraid of dying in a plane crash than in a car crash or of heart failure

Rain-dance trap
Bob is very bad at differentiating between correlation and causation
Result: Bob believes his team won the game because he wore his lucky hat

Big-number trap
Bob is very bad at measuring statistically significant relationships and dealing with probabilities
Result 1: Bob estimates his travel time based on his last trip, which was exceptionally quick, rather than on his last 30 trips
Result 2: Bob thinks that three events, each with an 80% chance of occurring, will almost certainly ALL occur, when the joint probability is only about 51%
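The arithmetic behind the big-number trap takes two lines to check. As a sketch (my own illustration, not from the post), assuming the three events are independent:

```python
# Three independent events, each with an 80% chance of occurring.
p_each = 0.80
p_all_three = p_each ** 3  # joint probability that ALL three occur

print(f"P(all three occur) = {p_all_three:.3f}")  # 0.512
```

Each event on its own feels like a near-certainty, but requiring all of them drags the joint probability down to roughly a coin flip — and the drop gets steeper with every event you add.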

Self-assessment
This should come as no surprise. The mind has a lot of information that has been validated internally, and it shouldn’t be surprising that it tends to trump all else. I think of it this way: the signals from the self (the feelings, the motivations, etc.) create a dynamic digital signal to the brain of what Bob enjoys, didn’t enjoy, and so on. Bob’s mind has a hard time testing the veracity of what it thinks. This is common sense; it’s hard to be objective about what you think. As I said before, this leads to the never-wrong trap, but it also has added negatives when dealing with others.

What-about-me trap
Bob’s brain registers the pain and pleasure of friends and others, but only through a relatively weak analog signal. This weak analog signal has to stack up against the dynamic digital input from Bob’s mind, the same mind now tasked with weighing the value of Bob’s digital signal vs. the outside analog input. The dice are clearly loaded.

Result 1: Bob thinks his cup is worth more than he would think it was worth if it were not his cup
Result 2: Bob thinks that he cut off the guy because he had to, while that OTHER car cut him off because the driver is a jerk
Result 3: Bob hears that a person was killed in the next town over, but that news causes him less anxiety than his migraine.
Result 4: Bob dismisses Frank because he sees that he is biased, but doesn’t think about his own biases

This isn’t an exhaustive list. Bryan Caplan, for instance, has come up with a list of biases relevant to voters – anti-market, anti-foreign, pessimistic, and make-work – but the traps above are a good place to start.

Filed under: Cognition