- We often have to make decisions based on our estimates of how likely something is – how do we do that?
- People rely on a small number of heuristics that reduce these estimates to simpler judgments. These are useful but often lead to systematic errors
- Most judgments we make are based on incomplete data
**Representativeness**- A common method of making estimates is similarity to things that have been experienced previously
- This is a problem, however, because it leaves out a number of important things, such as:

- Prior probabilities – people seem to ignore these entirely unless they have absolutely no other information to go on. This is true even if they are told explicitly about the priors
- Incorrect appreciation of the impact of small sample sizes – people judge a hospital that delivers 3 babies a day to be as likely to see at least 60% male babies on a given day as a hospital that delivers many more, even though small samples stray from 50% far more often
- Misconceptions of chance
- Gambler's fallacy – people treat outcomes of i.i.d. processes as "due" when they haven't occurred recently
- People want random sequences to look random – they judge a coin-flip sequence like HTHTH as more likely than HHHTT, even though both are equally probable
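Both of these points can be checked numerically. Below is a minimal sketch: part (a) uses the binomial distribution with the note's hospital sizes (3 births/day vs. an assumed 45 for the larger hospital) to show how much more often a small sample exceeds 60% male; part (b) enumerates all 32 length-5 coin-flip sequences to show that HTHTH and HHHTT are equally likely.

```python
from itertools import product
from math import ceil, comb

# (a) Sample size: probability that at least 60% of a day's births are
# male, assuming each birth is independently male with p = 0.5.
def p_at_least_60pct_male(n, p=0.5):
    k_min = ceil(0.6 * n)  # smallest male count that reaches 60% of n
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

print(p_at_least_60pct_male(3))   # small hospital: 0.5
print(p_at_least_60pct_male(45))  # larger hospital: much smaller

# (b) Misconceptions of chance: every specific length-5 sequence of a
# fair coin is equally likely (1/32), however "patterned" it looks.
sequences = ["".join(s) for s in product("HT", repeat=5)]
print(sequences.count("HTHTH"), sequences.count("HHHTT"))  # 1 1
```

The small hospital hits the 60% threshold on half of all days, while the larger one does so far less often – exactly the sensitivity to sample size that subjects in the study ignored.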

- <Some items are less clear>
- Insensitivity to predictability: people base predictions on information, such as how representative a description is, even when it has no real bearing on the quantity they are trying to predict
- Unfounded confidence: people's confidence in a prediction tracks how representative the item is, even when the thing being predicted is fundamentally hard to predict. The effect persists even when this is pointed out explicitly
- Regression to the mean is commonly ignored (if a father is very tall, his child is likely to be closer to the mean)
- When people observe the phenomenon, “… they often invent spurious causal explanations for it.”
- An example of why these mistakes are problematic: in flight school, instructors found that praise after a good landing was often followed by a bad landing, and harsh criticism after a bad landing was often followed by a good one. They concluded that praise was harmful because it led to worse outcomes, but this pattern is exactly what regression to the mean predicts
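The flight-school pattern can be reproduced with a toy simulation in which feedback plays no role at all. All numbers below are invented: a landing score is a stable skill component plus independent noise, and pilots selected for an unusually good first landing score worse on the second purely by regression.

```python
import random
from statistics import fmean

# Toy model: score = skill + noise, with fresh noise on each landing.
random.seed(0)
skill = [random.gauss(0, 1) for _ in range(10_000)]
first = [s + random.gauss(0, 1) for s in skill]
second = [s + random.gauss(0, 1) for s in skill]

# Pilots whose first landing was unusually good (arbitrary cutoff).
top = [i for i, score in enumerate(first) if score > 1.5]
first_mean = fmean(first[i] for i in top)
second_mean = fmean(second[i] for i in top)
print(first_mean, second_mean)  # second mean falls back toward 0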

**Availability**- Availability is the heuristic of estimating how likely something is by how many examples of it you can call to mind
- This makes sense as a heuristic because something common should be able to be recalled more easily, but it is subject to bias
- Retrievability of instances influences this; relatedly, salient examples are easier to remember and carry more weight
- The way indexing occurs may affect retrievability. For example, when asked whether more words start with 'r' or have 'r' as the third letter, it's simply easier to recall items in the first category

- It may also be the case that we don't recall concrete instances but have to construct them – in some cases we are worse at generating examples, which skews the estimate
- For example, when people had to estimate a 10-choose-x quantity, they gave higher answers when x was smaller, because small groups are easier to enumerate mentally
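The counts are in fact symmetric: choosing a group of k from 10 people picks out the same groupings as choosing the 10 − k people left out, so the easy-to-enumerate small committees are exactly as numerous as the large ones.

```python
from math import comb

# C(10, k) is symmetric in k and 10 - k: every committee of 2 from
# 10 people corresponds to a unique committee of the 8 left out.
for k in (2, 8):
    print(k, comb(10, k))  # both are 45
```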

- Illusory correlation: people perceive correlations that the data don't actually support, and will incorporate clearly unreliable information, sometimes weighting it heavily
**Adjustment and Anchoring**- When dealing with a problem, people often start with an initial estimate that is then adjusted to produce the final answer
- This initial estimate, however, is usually not adjusted enough, and the way a question is presented can significantly shift the starting point, and therefore the final answer
- They call this *anchoring*: the final answer is anchored in the initial estimate
- Even arbitrary values (like spinning a wheel and asking participants whether the answer to a question is higher or lower than the number shown) can influence the answer
- People are also more likely to overestimate the probability of conjunctive (AND) events and underestimate probability of disjunctive (OR) events
- This may partially explain why we tend to underestimate the time big projects require – completing a project typically involves many conjunctive requirements, and we judge that conjunction as easier to achieve than it actually is
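A quick sketch of the gap, with invented numbers: a project made of 10 independent steps, each succeeding with probability 0.9. Each individual step looks very safe, yet the conjunction (all steps succeed) is unlikely, and the disjunction (at least one step fails) is likely.

```python
# Invented illustration: 10 independent steps, each with a 90%
# chance of succeeding on schedule.
p_step, n_steps = 0.9, 10
p_all_succeed = p_step ** n_steps    # conjunctive: every step succeeds
p_some_failure = 1 - p_all_succeed   # disjunctive: at least one fails
print(p_all_succeed, p_some_failure) # roughly 0.35 vs 0.65
```

Anchoring on the 0.9 per-step figure and adjusting too little produces exactly the overestimate of conjunctions (and underestimate of disjunctions) the notes describe.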

- Also, both naive and expert judges estimate the likelihood of likely events as too high and of unlikely events as too low
- Framing functionally the same question differently will give different results – they give the example of questions about the Dow Jones
**Discussion**- Errors in judgment are very common, and the errors laypeople make are often made by experts in the area as well
- For example, ignoring priors is common even in those who are experts in statistics
- Experts are better at avoiding some errors like the gambler's fallacy, but when judgments start to require more intuition, they fall into essentially the same problems

- It's understandable that people make some classes of mistakes, because the relevant situations rarely arise (for example, comparing the probabilities of two particular sequences of coin flips), but phenomena like regression to the mean are experienced all the time and still go unrecognized
- Likewise, we don't usually write down predictions and then formally compare them with actual outcomes, so our predictive habits are unlikely to improve