Hello! Now that I've told you what a heuristic is, what I'd like to do next is give you actual examples of the kinds of predictable errors, examples of these heuristic-driven biases, that we, as humans, are prone to make in the way we think or make decisions, most of the time without even being aware of it. Specifically, you're going to learn about cognitive errors that we make. These range from basic statistical errors, to information processing errors, to memory errors, and they cause investors to deviate from the rational homo economicus assumption of the traditional finance field. Cognitive errors stem naturally from the way that we think, for example when we're faced with a new piece of information that challenges our prior beliefs, or due to some blind spot or some distortion in our minds. They arise subconsciously from the mental procedures that we use for processing information.

So let's start with a few exercises. I don't know what you answered, but typically most people think that Lisa must be an athlete whereas Mildred must be a librarian. While it seems obvious from the description that Lisa is more likely than Mildred to be a jock, it's entirely possible that Mildred is a professional athlete, too. After all, you were told that 90% of these women are athletes. However, often when we're asked to evaluate how likely things are, we instead judge how alike they are. Why do you think that happens?

So here's another one. Again, evidence shows that many people bet on getting a white ball, since they note that the first person's draw from bowl A was 80% white, while the second person's draw from bowl B was only 60% red. But of course, the sample from bowl B was four times larger: 20 balls were drawn from bowl B, as opposed to only 5 from A. That bigger drawing means that bowl B is more likely to be mostly red than bowl A is to be mostly white.
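To see why the larger sample from bowl B is stronger evidence, here is a minimal sketch. It assumes a standard setup that the lecture doesn't spell out: each bowl is either two-thirds one color and one-third the other, with equal prior odds, and the draw counts (4 white / 1 red from A, 12 red / 8 white from B) are illustrative numbers consistent with the percentages above.

```python
def posterior_mostly(hits, misses, p=2/3):
    """Posterior probability that a bowl is 'mostly' the observed color,
    assuming it is either fraction p that color or fraction p the other
    color, with equal prior odds (an assumed setup, not stated above)."""
    # Each net draw of the majority color multiplies the odds by p/(1-p).
    odds = (p / (1 - p)) ** (hits - misses)
    return odds / (1 + odds)

# Bowl A: 5 draws, 4 white and 1 red -> evidence it is mostly white
print(round(posterior_mostly(4, 1), 3))    # 0.889
# Bowl B: 20 draws, 12 red and 8 white -> evidence it is mostly red
print(round(posterior_mostly(12, 8), 3))   # 0.941, stronger despite 60% vs 80%
```

Even though bowl B's sample is only 60% red versus bowl A's 80% white, the four-times-larger sample carries more evidential weight.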
Most of us know that large samples of data are more reliable, but we still get distracted by small samples nevertheless. Why is that?

Okay, are you ready for one more? Imagine that you and I are flipping coins. We're going to flip six times and track the outcomes by recording heads as an H and tails as a T. Suppose you go first and you flip H, T, T, H, T, H: heads, tails, tails, heads, tails, heads. A 50-50 result that looks exactly like what you should get by random chance. And let's suppose that I go next and toss heads, heads, heads, heads, heads, heads. A perfect streak of heads. That makes me feel like a coin-flipping genius. But in fact, in six coin flips the odds of getting six heads in a row are exactly equal to the odds of getting heads, tails, tails, heads, tails, heads. Both sequences have a 1 in 64, or about 1.6%, chance of occurring. Yet we think nothing of your coin flips, but we are both astounded by my streak of heads. Why?

Well, the answer is that we are programmed to detect patterns. Humans have a phenomenal ability to detect and interpret simple patterns. And thank goodness for that, because that's probably how our ancestors survived in the primeval world. But when it comes to investing, our incurable search for patterns leads us to assume that order exists when in fact it does not.

So all of these examples illustrate the biases that are related to what we call the representativeness heuristic. Representativeness refers to judgements based on stereotypes. When people try to determine the probability that an object A belongs to class B, they often use their representativeness heuristic. They tend to evaluate the probability by the degree to which A reflects the essential characteristics of B. They rely on stereotypes. They try to get a best-fit approximation.
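The coin-flip arithmetic above can be checked directly. Any one specific sequence of six fair flips has probability (1/2)^6 = 1/64; the reason a mixed result feels ordinary is that there are many mixed sequences but only one all-heads sequence:

```python
from itertools import product

# Enumerate every possible sequence of six fair coin flips.
sequences = list(product("HT", repeat=6))
print(len(sequences))        # 64 equally likely sequences
print(1 / len(sequences))    # 0.015625, about 1.6%, for any ONE sequence

# HTTHTH and HHHHHH are each a single sequence out of 64, so they are
# equally likely -- but there are 20 sequences with exactly three heads,
# so the *category* "three heads, three tails" occurs 20/64 of the time.
three_heads = sum(1 for s in sequences if s.count("H") == 3)
print(three_heads)           # 20
```

That asymmetry between a specific sequence and a category of sequences is why the streak astounds us while the mixed result does not.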
That's why we tend to think, given the description that we heard, that Mildred is more likely to be a librarian, even though we know the base rate, the prior probability, that 90% of these women are athletes. So in our mental processing, what happens is we underweight the base rate, the prior information that we have, and/or overweight the new information. This is what we call the base rate neglect bias.

Now closely related to this is the sample size neglect bias, which is what the bowls-of-balls example illustrated. People often fail to take the sample size into account and tend to infer too quickly based on too few data points. This is sometimes called the law of small numbers: not the law of large numbers, the law of small numbers. For example, people are usually very ready to believe that an analyst is great if he had four good stock picks, because four good stock picks are not representative of a bad or even a mediocre analyst.

Finally, most people have a poor intuitive understanding of random processes. Therefore one of the oddest mental errors that we make is what is known as the gambler's fallacy: the belief that if a coin has come up heads several times, it must be due for tails. In truth, of course, the odds that a fair coin will turn up tails are always 50%, no matter how many consecutive times it has come up heads. Sometimes the gambler's fallacy can have very tragic results. In Italy, for example, about two years went by in which the number 53 never came up in the Venice lottery. It just wouldn't come up. Finally, the number 53 was drawn in early 2005, after this very long spell in which it was never drawn. But not before a woman drowned herself, and another man shot his wife and his son and then himself, because they had spent all their money betting in vain on number 53, thinking it was due to come up.
In this lecture, we looked at some examples of the kinds of cognitive errors that we are likely to make when we rely on the representativeness heuristic. We learned how we tend to underweight the prior belief, the base rate, ignore the sample size, or fall prey to what is called the gambler's fallacy.