Economists were not at the time very interested in hearing these stories of irrationality: Homo economicus, as we said, is a normative concept. While they could easily buy the “Simon” argument that we are not perfectly rational and that life implies approximations, particularly when the stakes are not large enough, they were not willing to accept that people were flawed rather than merely imperfect. But they are. Kahneman and Tversky showed that these biases do not disappear when there are incentives, which means that they are not necessarily cost saving. They reflect a different form of reasoning, one in which probabilistic reasoning is weak.

WHERE IS NAPOLEON WHEN WE NEED HIM?

If your mind operates by a series of different, disconnected rules, these may not necessarily be consistent with each other, and while they may still do the job locally, they will not necessarily do so globally. Consider them stored as a rulebook of sorts. Your reaction will depend on which page of the book you open to at any point in time. I will illustrate it with another socialist example.

After the collapse of the Soviet Union, Western businesspeople involved in what became Russia discovered an annoying (or entertaining) fact about the legal system: It had conflicting and contradictory laws. It just depended on which chapter you looked up. I don’t know whether the Russians wanted it as a prank (after all, they lived through long, humorless years of oppression), but the confusion led to situations where someone had to violate one law to comply with another. I have to say that lawyers are quite dull people to talk to; talking to a dull lawyer who speaks broken English with a strong accent and vodka breath can be quite straining, so you give up. This spaghetti legal system came from the piecewise development of the rules: You add a law here and there, and the situation becomes too complicated, as there is no central system consulted every time to ensure that all the parts remain compatible with one another. Napoleon faced a similar situation in France and remedied it by setting up a top-down code of law that aimed at full logical consistency. The problem with us humans is not so much that no Napoleon has shown up so far to dynamite the old structure and then reengineer our minds like one big central program; it is that our minds are far more complicated than just a system of laws, and the requirement for efficiency is far greater.

Consider that your brain reacts differently to the same situation depending on which chapter you open to. The absence of a central processing system makes us engage in decisions that can be in conflict with each other. You may prefer apples to oranges, oranges to pears, but pears to apples—it depends on how the choices are presented to you. The fact that your mind cannot retain and use everything you know at once is the cause of such biases. One central aspect of a heuristic is that it is blind to reasoning.

“I’m As Good As My Last Trade” and Other Heuristics

There exist plenty of different catalogues of these heuristics in the literature (many of them overlapping); the object of this discussion is to provide the intuition behind their formation rather than to list them. For a long time we traders were totally ignorant of the behavioral research and saw, with strange regularity, situations where a wedge appeared between simple probabilistic reasoning and people’s perception of things. We gave them names such as the “I’m as good as my last trade” effect, the “sound-bite effect,” the “Monday morning quarterback” heuristic, and the “It was obvious after the fact” effect. It was both vindicating for traders’ pride and disappointing to discover that they already existed in the heuristics literature as “anchoring,” the “affect heuristic,” and “hindsight bias” (it makes us feel that trading is true, experimental scientific research). The correspondence between the two worlds is shown in Table 11.1.

I start with the “I’m as good as my last trade” heuristic (or the “loss of perspective” bias): the fact that the counter is reset at zero and you start a new day or month from scratch, whether it is your accountant who does it or your own mind. This is the most significant distortion and the one that carries the most consequences. You do not hold everything you know in your mind at all times, so you retrieve the knowledge you require at any given moment in a piecemeal fashion, which places these retrieved chunks in their local rather than general context. This means that you have an arbitrary reference point and react to differences from that point, forgetting that you are only looking at differences from that particular local perspective, not at absolutes.

Table 11.1 Trader and Scientific Approach

There is the well-known trader maxim “life is incremental.” Consider that as an investor you examine your performance, like the dentist in Chapter 3, at some set interval. What do you look at: your monthly, your daily, your life-to-date, or your hourly performance? You can have a good month and a bad day. Which period should dominate?

When you take a gamble, do you say: “My net worth will end up at $99,000 or $101,500 after the gamble,” or do you say, “I lose $1,000 or make $1,500”? Your attitude toward the risks and rewards of the gamble will vary according to whether you look at your net worth or at changes in it. But in fact in real life you will be put in situations where you will only look at your changes. The fact that the losses hurt more than the gains, and differently, makes your accumulated performance, that is, your total wealth, less relevant than the last change in it.

This dependence on the local rather than the global status (coupled with the effect of the losses hitting harder than the gains) has an impact on your perception of well-being. Say you get a windfall profit of $1 million. The next month you lose $300,000. You adjust to a given wealth (unless of course you are very poor), so the subsequent loss hurts you emotionally, something that would not have happened had you received the net amount of $700,000 in one block, or, better, two sums of $350,000 each. In addition, it is easier for your brain to detect differences than absolutes, hence rich or poor will be (above the minimum level) in relation to something else (remember Marc and Janet). Now, when something is in relation to something else, that something else can be manipulated. Psychologists call this effect of comparing to a given reference anchoring. If we take it to its logical limit, we realize that, because of this resetting, wealth itself does not really make one happy (above, of course, some subsistence level); but positive changes in wealth may, especially if they come as “steady” increases. More on that later with my discussion of option blindness.
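The arithmetic of this example can be made concrete with a standard formalization from the behavioral literature that the text alludes to but does not spell out: Kahneman and Tversky’s value function, in which outcomes are felt relative to a reference point, losses weigh roughly twice as much as equivalent gains, and sensitivity diminishes as the amounts grow. The sketch below is only an illustration under those assumed parameters (the conventional estimates of 0.88 and 2.25); it is not a calculation from the book.

```python
# A minimal sketch of reference-dependent valuation in the Kahneman-Tversky style.
# Assumption (not from the text): v(x) = x**ALPHA for gains and -LAM*(-x)**ALPHA for
# losses, with the conventional parameter estimates ALPHA = 0.88 and LAM = 2.25.
# Each change is felt relative to the current anchor, and the anchor then resets.

ALPHA = 0.88   # diminishing sensitivity: the second $350,000 feels smaller than the first
LAM = 2.25     # loss aversion: losses weigh roughly 2.25 times as much as gains

def felt_value(change):
    """Psychological value of a single change measured from the current reference point."""
    if change >= 0:
        return change ** ALPHA
    return -LAM * (-change) ** ALPHA

def total_feeling(changes):
    """Sum of felt values when each change is experienced (and anchored) separately."""
    return sum(felt_value(c) for c in changes)

# The example from the paragraph above: a $1,000,000 windfall followed by a $300,000 loss,
# versus the same net $700,000 in one block, versus two separate gains of $350,000.
print(total_feeling([1_000_000, -300_000]))
print(total_feeling([700_000]))
print(total_feeling([350_000, 350_000]))
```

Under these assumed parameters the windfall-followed-by-loss sequence comes out feeling the worst and the two separate gains the best, which is the ordering the paragraph above describes.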

Other aspects of anchoring. Given that you may use two different anchors in the same situation, the way you act depends on so little. When people are asked to estimate a number, they will position it with respect to a number they have in mind or one they just heard, so “big” or “small” will be comparative. Kahneman and Tversky asked subjects to estimate the proportion of African countries in the United Nations after making them consciously pull a random number between 0 and 100 (they knew it was a random number). People guessed in relation to that number, which they used as anchor: Those who randomized a high number guessed higher than those who randomized a low one. This morning I did my bit of anecdotal empiricism and asked the hotel concierge how long it takes to go to the airport. “40 minutes?” I asked. “About 35,” he answered. Then I asked the lady at the reception if the journey was 20 minutes. “No, about 25,” she answered. I timed the trip: 31 minutes.

This anchoring to a number is the reason people do not react to their total accumulated wealth, but to differences of wealth from whatever number they are currently anchored to. This is the major conflict with economic theory, as, according to economists, someone with $1 million in the bank would be more satisfied than if he had half a million. But we saw John reaching $1 million after having had a total of $10 million; he was happier when he only had half a million (starting at nothing) than where we left him in Chapter 1. Also recall the dentist whose emotions depended on how frequently he checked his portfolio.

Degree in a Fortune Cookie

I used to attend a health club in the middle of the day and chat with an interesting Eastern European fellow with two Ph.D. degrees, one in physics (statistical no less), the other in finance. He worked for a trading house and was obsessed with the anecdotal aspects of the markets. He once asked me doggedly what I thought the stock market would do that day. Clearly I gave him a social answer of the kind “I don’t know, perhaps lower”—quite possibly the opposite answer to what I would have given him had he asked me an hour earlier. The next day he showed great alarm upon seeing me. He went on and on discussing my credibility and wondering how I could be so wrong in my “predictions,” since the market went up subsequently. The man was able to derive conclusions about my ability to predict and my “credibility” with a single observation. Now, if I went to the phone and called him and disguised my voice and said, “Hello, this is Doktorr Talebski from the Academy of Lodz and I have an interrresting prrroblem,” then presented the issue as a statistical puzzle, he would laugh at me. “Doktorr Talevski, did you get your degree in a fortune cookie?” Why is it so?

Clearly there are two problems. First, the quant did not use his statistical brain when making the inference, but a different one. Second, he made the mistake of overstating the importance of small samples (in this case just one single observation, the worst possible inferential mistake a person can make). Mathematicians tend to make egregious mathematical mistakes outside of their theoretical habitat. When Tversky and Kahneman sampled mathematical psychologists, some of whom were authors of statistical textbooks, they were puzzled by their errors: “Respondents put too much confidence in the result of small samples and their statistical judgment showed little sensitivity to sample size.” The puzzling aspect is that not only should they have known better, “they did know better.” And yet . . .
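How little a single observation reveals can be shown with a small Monte Carlo experiment of the kind the book runs elsewhere. The sketch below is purely hypothetical: the hit rates (55 percent for a mildly skilled forecaster, 50 percent for a coin-flipper) and the horizons are assumptions chosen for illustration, not figures from the text.

```python
# A hypothetical Monte Carlo sketch of inference from small samples.
# A "skilled" forecaster calls the market correctly on 55% of days, a pure
# coin-flipper on 50%.  After n daily calls, how often does the skilled
# forecaster actually show the better record?
import random

random.seed(1)

def record(hit_rate, n):
    """Number of correct calls out of n days for a forecaster with the given hit rate."""
    return sum(random.random() < hit_rate for _ in range(n))

def prob_skill_looks_better(n, trials=10_000):
    """Fraction of trials in which the skilled forecaster strictly beats the coin-flipper."""
    wins = sum(record(0.55, n) > record(0.50, n) for _ in range(trials))
    return wins / trials

for n in (1, 20, 250):   # one observation, a month of calls, a year of calls
    print(f"n = {n:3d} days: skilled forecaster ahead in {prob_skill_looks_better(n):.0%} of trials")
```

In this sketch, with one day’s call, ties and luck dominate and the skilled forecaster pulls ahead in only about a quarter of the trials; only over hundreds of observations does the difference become reliable, which is exactly the point about sample size that the quant, and the sampled psychologists, missed.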

I will next list a few more heuristics. (1) The availability heuristic, which we saw in Chapter 3 with the earthquake in California deemed more likely than a catastrophe in the entire country, or death from terrorism being more “likely” than death from all possible sources (including terrorism). It corresponds to the practice of estimating the frequency of an event according to the ease with which instances of it can be recalled. (2) The representativeness heuristic: gauging the probability that a person belongs to a particular social group by assessing how similar the person’s characteristics are to those of the “typical” group member. A feminist-style philosophy student is deemed more likely to be a feminist bank teller than to be just a bank teller. This problem is known as the “Linda problem” (the feminist’s name was Linda) and has caused plenty of academic ink to flow (some of the people engaged in the “rationality debate” believe that Kahneman and Tversky put overly normative demands on us humans). (3) The simulation heuristic: the ease of mentally undoing an event, that is, playing the alternative scenario. It corresponds to counterfactual thinking: Imagine what might have happened had you not missed your train (or how rich you would be today had you liquidated your portfolio at the height of the NASDAQ bubble). (4) The affect heuristic, which we discussed in Chapter 3: the emotions elicited by an event determine its probability in your mind.

Two Systems of Reasoning

Later research refines the problem as follows: There are two possible ways for us to reason, the heuristics being part of one and rationality part of the other. Recall from Chapter 2 the colleague who used a different brain in the classroom than in real life. Didn’t you wonder why the person you think knows physics so well cannot apply the basic laws of physics when driving? Researchers divide the activities of our mind into the following two polarized parts, called System 1 and System 2.

System 1 is effortless, automatic, associative, rapid, parallel, opaque (i.e., we are not aware of using it), emotional, concrete, specific, social, and personalized.

System 2 is effortful, controlled, deductive, slow, serial, self-aware, neutral, abstract, sets, asocial, and depersonalized.

I have always believed that professional option traders and market makers, by dint of practicing their probabilistic game, build an innate probabilistic machine that is far more developed than that of the rest of the population, even that of probabilists. I found confirmation of this in the work of researchers in the heuristics and biases tradition, who believe that System 1 can be modified by experience and can integrate elements from System 2. For instance, when you learn to play chess, you use System 2. After a while things become intuitive and you are able to gauge the relative strength of an opponent by glancing at the board.
