Thinking, Fast and Slow. Daniel Kahneman. Book


Introduction

  1. Deals with when our judgements are incorrect
  2. Initial examination into this area with Amos Tversky showed that even statisticians had a poor intuitive grasp of statistics; seeing small sample sizes from experiments they, like everyone else, had too much confidence in the reliability of the results
  3. Research was done based on completely informal experiments on themselves – when did they fail to make an accurate prediction?  Often the public in general had the same deficiencies
  4. Gives the example of a description of a person, and then a question about their profession.  People will pick the answer that better fits with the typical person of that profession regardless of relative priors between the two professions
    1. <In this example though, the people have to be instructed that the person was selected uniformly at random either from the general population or from the population of people in those two professions to really show people are bad at math, but from what I recall in his other papers, even when this is the case people still make the same mistakes>
    2. Example of how a heuristic leads to incorrect decision making
  5. Also gives examples discussed in earlier work about the ease of recall of certain things leading to skewed interpretation of their frequency <for example, if you ask whether more words start with ‘k’ or have ‘k’ as the third letter, people will say the first, primarily because it’s easier to think of words with k as the first letter than as the third>
  6. In the 70s, social scientists generally agreed that people are generally rational and that they perform sound computations given data – when people diverged from rational choices, it was thought to be in cases where some emotion such as fear or anger was overriding normal thought.  Their article in Science flew in the face of accepted wisdom, outlining a number of cases where people do not produce correct answers, even in cases where they are explicitly given all the information they need to select the correct answer
  7. The big article after the one in Science is “Prospect Theory: An Analysis of Decision Under Risk”
  8. Also brings up the idea of how experts become good at things the rest of us are bad at.  Quotes Herbert Simon who studied chess masters: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer.  Intuition is nothing more and nothing less than recognition.”
  9. The title of the book deals with the idea that when we are faced with a situation where we identify our heuristics won’t help us, we transfer over to a slower deliberate way of thinking:
    1. “The spontaneous search for an intuitive solution sometimes fails – neither an expert solution nor a heuristic answer comes to mind.  In such cases we often find ourselves switching to a slower, more deliberate and effortful form of thinking.  This is the slow thinking of the title.  Fast thinking includes both variants of intuitive thought – the expert and the heuristic – as well as the entirely automatic mental activities of perception and memory, the operations that enable you to know there is a lamp on your desk or retrieve the name of the capital of Russia.”
  10. Book discusses a two system approach (fast and slow, assume that fast is “system 1” and slow is “system 2”).
    1. Associative memory is the core of system 1 and continually keeps an up-to-date interpretation of what is going on
  11. Remainder of the book goes on to study why we’re so bad at statistics (it requires the brain to integrate many different pieces of information, which system 1 is bad at).  Then discussion of why we are overconfident in our (sometimes poor) estimations.  And then a number of other things
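The base-rate point in item 4 can be made concrete with Bayes' rule. A minimal sketch in Python; the 20:1 farmer-to-librarian ratio and the 90%/10% description-fit rates are made-up illustrative numbers, not figures from the book:

```python
def posterior_librarian(prior_librarian, p_desc_given_lib, p_desc_given_farm):
    """Bayes' rule: P(librarian | description) for a two-profession world."""
    prior_farmer = 1.0 - prior_librarian
    evidence_for = prior_librarian * p_desc_given_lib
    evidence_against = prior_farmer * p_desc_given_farm
    return evidence_for / (evidence_for + evidence_against)

# Suppose farmers outnumber librarians 20:1, and a "shy and tidy"
# description fits 90% of librarians but only 10% of farmers.
p = posterior_librarian(prior_librarian=1 / 21,
                        p_desc_given_lib=0.9,
                        p_desc_given_farm=0.1)
print(round(p, 2))  # 0.31: "farmer" is still the more likely answer
```

Even a description that favors "librarian" 9:1 cannot overcome a 20:1 prior, which is exactly the arithmetic people discard when they ignore base rates.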

Chapter 1: The Characters of the Story

  1. Starts with an example photograph of an angry woman yelling – we know instantaneously when looking at the picture that she is angry – how?  It also doesn’t matter whether we intended to interpret the photo or not.  It happens automatically
  2. On the other hand, an equation 17 x 24 doesn’t (for most people) conjure up an immediate answer
  3. Interpreting the image is fast thinking, doing the math is slow thinking
  4. “The computation was not only an event in your mind; your body was also involved.  Your muscles tensed up, your blood pressure rose, and your heart rate increased.  Someone looking closely at your eyes while you tackled this problem would have seen your pupils dilate.  Your pupils contracted back to normal size as soon as you ended your work…”
  5. “When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do.  Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book.”
  6. On the other hand, solving 2+2 is a System 1 problem, as are other tasks that are solved (or at least where an answer is produced) instantly.  Finding a strong move in chess is a System 1 problem for chess masters – for nonmasters it’s a System 2 problem
  7. Both systems often interact to drive behavior
  8. System 2 operations include: focusing attention on one thing in a busy scene, walking faster than is natural for you, monitoring the appropriateness of your behavior in a social situation, and parking in a tight spot (valet parkers aside)
    1. “In all these situations you must pay attention, and you will perform less well, or not at all, if you are not ready or if your attention is directed inappropriately.”
    2. <Seems to me like behaving properly in a social setting is automatic, no?>
  9. “The often-used phrase ‘pay attention’ is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail.  It is the mark of effortful activities that they interfere with each other.”
  10. Gives the basketball/gorilla video as an example of how attention causes blindness to other stimuli
    1. “The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.”
  11. System 1 is running constantly, and most of the time System 2 is running “in a comfortable low-effort mode”
    1. “System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings.  If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions.  When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification.”
  12. “When System 1 runs into difficulty, it calls on System 2  to support more detailed and specific processing that may solve the problem of the moment.”
  13. “You can also feel a surge of conscious attention whenever you are surprised.  System 2 is activated when an event is detected that violates the model of the world System 1 maintains.”
    1. <This is a big idea in “On Intelligence.”  I remember this happening to me once when I opened up a drawer at my parents house after not living there for many years.  During my entire childhood the drawer was always difficult to open.  Before I went home that one time my dad decided to lubricate the mechanism, and when I pulled the drawer I was instantly shocked by how quickly it moved even though I hadn’t operated it in months.  The action of the drawer wasn’t something I was aware that I was aware of until that event.>
  14. “The division of labor between S1 and S2 is highly efficient: it minimizes effort and optimizes performance.”  It works well most of the time because of heuristics, but in particular instances it doesn’t know to move control over and will produce bad results.  “One further limitation of S1 is that it cannot be turned off.”
  15. Gives the example of an optical illusion (the Müller-Lyer lines), where systems 1 and 2 can be in conflict.  Two lines can look like different lengths, but when you measure them they turn out to be the same.
  16. Similarly, there are cognitive illusions.  Just like optical illusions, these can be extremely difficult to deal with: even after verifying that you are incorrect, you will still think about the problem incorrectly, because system 1 cannot be turned off
  17. “The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high.  The premise of this book is that it’s easier to recognize other people’s mistakes than our own.”
  18. Making statements about “S1” and “S2” is a sin amongst many economists and psychologists because it sounds like a claim about a homunculus
    1. He uses these statements as a description, not an explanation
  19. “S1 and S2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters… there is no one part of the brain that either of the systems would call home.”

Chapter 2: Attention and Effort

  1. “In the unlikely event of this book being made into a film, S2 would be a supporting character who believes herself to be the hero.”
  2. S2’s operations are effortful, and it is lazy, so many decisions S2 thinks it makes are actually made by S1
  3. There are, however, tasks that can only be accomplished by S2 because they require effort and self-control S1 is not capable of
  4. Gives a simple timed math task during which, he says, you can watch your own pupils dilate as you perform it
  5. The responses to mental effort are different from arousal <but are sometimes similar, such as the example given of pupils being dilated making a woman look more attractive>
  6. In fact, dilation varies second by second as related to the demands of the task being done at that moment
  7. In the hardest task the author has seen that subjects don’t simply give up on (the Add-3 task: looking at strings of digits and adding 3 to each digit independently), the pupil dilates by 50% and heart rate increases by 7 bpm.  If the task was made harder still, it was beyond cognitive abilities, and pupils stopped dilating or actually shrank.  They could figure out the second a subject gave up on the task based on this metric
  8.  During casual conversation, there is no dilation at all
  9. The impact of working on the most challenging tasks is that you become blind to everything else going on.  The gorilla experiment is an example of this

<Well WordPress nuked notes on part of chapter 2 and all of chapter 3… onto chapter 4>

Chapter 4: The Associative Machine

  1. If you are exposed to two words together, system 1 automatically will associate them and begin reactions to them.  The example given is “bananas vomit” – there will be an immediate and automatic negative response to vomit, which will manifest itself in physiologically measurable means, and this will also cause a temporary aversion to bananas.
  2. By being exposed to the two words, S1 makes as much sense as possible of the stimulus and links them causally
  3. The reaction of being exposed to these words is very similar to what would happen if the events were actually experienced “cognition is embodied; you think with your body, not only with your brain.”
  4. “Psychologists think of ideas as nodes in a vast network, called associative memory, in which each idea is linked to many others.”  These might be in terms of cause and effect, categories, or other means in which concepts are related
  5. In associative memory, the activation of an idea causes the activation of many other ideas at the same time, which is called priming
  6. Does not occur consciously
  7. The “Florida effect”, where just being exposed to words related to old people (even if the word “old” is never used) causes behavior to slow (in the study, participants walked more slowly afterward).  The participants didn’t report even being aware that words belonging to this category had been presented, but were still impacted by them
    1. In the original experiment, being exposed to words caused physical activity to change, a “ideomotor link”
    2. But the opposite is true as well – if you are told to do something at a particular pace (which happens to be slower than normal activity) you will be primed to recognize words related to the elderly
  8. Links in the network are bidirectional – making happy faces makes you feel happier
  9. Priming with themes of money causes people to be more selfish, but also to try and work harder on problems (increased individualism)
  10. People are often surprised by the impact of priming, because S2 thinks it’s in charge, but S1 is running the show most of the time

Chapter 5: Cognitive Ease

  1. When you are in a good mood, you think less critically, which means you expend less energy and therefore feel good – it’s a loop
  2. Being strained on the other hand leads to more vigilant decision making
  3. Things that are more familiar are easier to read or recognize.  Things that are easier to recognize feel like they are better known.
  4. Using more verbose language leads to statements being less credible
  5. Rhyming, and names that are simpler, also increase credibility – this is because S2 is lazy and wants to avoid work, so it will pick up on anything it can (whether relevant or not) to make decisions
  6. With the same logic, giving a logic question in bad print/on bad paper led to more correct answers than the same question in good print/paper.  This is because the good print/paper leads to less thought, as it is more comfortable.  The less comfortable setup leads to more thinking and activation of S2
  7. Cognitive ease is associated with good feelings
  8. The repeated presentation of an arbitrary stimulus leads to an affection for it, called the mere exposure effect
  9. This can even occur when the stimulus is so fast that the individual can’t consciously perceive being exposed to it
  10. This is explained from a biological perspective that there may be unknown dangers in novel places/circumstances, so it is best to be cautious in circumstances where they are present.  When we are repeatedly exposed to a stimulus that is first novel, and each time we are exposed nothing bad happens, we learn that stimulus is safe and it makes us feel comfortable
    1. Because this argument doesn’t only apply to humans, a test was designed where chicks still inside the egg were exposed to different tones.  After hatching, the tone they had been exposed to in-egg led to fewer distress calls
  11. Mood also impacts decision making.  In a task where three words with a common but hard-to-discern theme were presented, and where subjects were not given enough time to figure out exactly what the connection was (so they had to guess whether there was a connection at all or not), those in a good mood more than doubled in accuracy, and those in a bad mood were no better than chance
  12. “Mood evidently affects the operation of System 1: when we are uncomfortable and unhappy, we lose touch with our intuition… A happy mood loosens the control of S2 over performance: when in a good mood, people become more intuitive and more creative but also less vigilant and more prone to logical errors.”
  13. Research also shows that this is causal and not merely correlational
  14. “We have learned a great deal about the automatic workings of S1 in the last decades.  Much of what we now know would have sounded like science fiction thirty or forty years ago.”

Chapter 6: Norms, Surprises and Causes

  1. S1 is “a remarkably powerful computer, not fast by conventional standards, but able to represent the structure of our world by various types of associative links… The spreading of activation in the associative machine is automatic, but we (S2) have some ability to control the search of memory, and also to program it…”
  2. This chapter focuses mainly on S1
  3. S1 is supposed to constantly maintain a representation of the world
  4. Surprise reveals what we expected of the world
  5. Completely surprising events can cease to be surprising just from the happenstance of them occurring a couple of times – the brain will very quickly and aggressively try to establish a pattern to make sense of a situation.  Similar to what happens in memory, where events are retrieved in a way that makes more sense (introducing bias) than the actual event.
  6. It’s possible to pick up on some surprises that aren’t inherently startling extremely quickly.  When people hear an upper-class Englishman saying “I have a large tattoo on my back,” they can respond with surprise within 1/5 of a second of the onset of the word “tattoo”.
  7. “Finding such causal connections [finding the correct interpretation of a slightly ambiguous story] is part of understanding a story and is an automatic operation of S1.  S2, your conscious self, was offered the causal interpretation and accepted it.” <Although I expect that in other cases S2 can reconsider what S1 passes up?>
  8. Albert Michotte in 1945 published work on inference of causality in terms of motion – the work I did at Rutgers is in this same line of inquiry.  Also Heider and Simmel from 1944
  9. People often apply causal linking incorrectly – when statistics are involved, S2 can be taught to make the right computations (but even people with the training often get simple questions wrong on the first shot)

Chapter 7: A Machine for Jumping to Conclusions

  1. “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake acceptable, and if the jump saves much time and effort.”  Conversely, when guesses are likely to be wrong and the costs of mistakes are high, it is best for S2 to intervene
  2. Predictions by S1 are based on experience – both long term (learning) and recent (priming)
  3. “S1 does not keep track of alternatives that it rejects, or even of the fact that there were alternatives.  Conscious doubt is not in the repertoire of S1; it requires maintaining incompatible interpretations in the mind at the same time, which demands mental effort. Uncertainty and doubt are the domain of S2.”
  4. S1 is designed to believe, and S2 is designed to be critical.  If you occupy someone with a task and then present them with nonsense, they are more likely to say it’s valid than when S2 is not otherwise occupied
  5. Part of the reason we have confirmation bias is that it’s easier for us to think of things that agree with a statement than things that disagree (even though it’s logically correct to work in terms of negative examples)
  6. Halo effect is the tendency to like or dislike everything about someone/thing (if you like the president, you probably like his policies, but also how he looks and speaks)
  7. When we set up an environment in a way that reduces bias, it can make us uncomfortable, because we rely on bias for so much; it also makes us less confident, even though with bias reduced the predictions are actually better
    1. Gives personal example of grading exams. He started by grading each exam front to back, but noticed that if somebody did well on the first part he often gave the student the benefit of the doubt later (and vice versa).  By grading each question separately (and recording the grades in a way he wouldn’t see till he was done) he removed bias but reduced confidence in his decisions
  8. Individuals don’t really make independent predictions, but across a group of people the errors sometimes cancel.  The old guess-the-number-of-marbles-in-a-jar trick can generally be solved pretty accurately by averaging the guesses of a large group
  9. “An essential design feature of the associative machine is that it represents only activated ideas.  Information that is not retrieved (even unconsciously) from memory might as well not exist.  System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”
  10. “The measure of success for S1 is the coherence of the story it manages to create.  The amount and quality of the data on which the story is based are largely irrelevant.”
  11. Because S1 seeks coherence (whether or not the story is correct) and S2 is lazy, the result is a system that will quickly accept conclusions which are false but intuitive
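The marble-jar point in item 8 can be simulated. A sketch under assumed conditions (the jar size and the ±50% individual error model are arbitrary choices of mine, not from the book):

```python
import random

random.seed(0)
TRUE_COUNT = 850  # assumed number of marbles in the jar

# Each person's guess is badly off: the true count distorted by up to +/-50%.
guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # lands close to 850 because individual errors cancel
```

The averaging only works because the simulated errors are independent across people; when everyone shares the same bias (as in the chore-splitting example later in the book), the group average inherits it.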

<And now my notes from chapter 8 have disappeared.  There was important stuff there.>

Chapter 9: Answering an Easier Question

  1. In everyday life, we generally have answers at hand for everything <whether or not those answers are correct is another issue>
  2. If posed with a question that is hard to answer quickly, S1 will often replace that question with one that can be answered quickly
    1. Calls this substitution
  3. Heuristic and Eureka come from the same root <neat!>
  4. Substitution must happen because we often deal with ill-posed questions.  If we really tried to answer them we would get stuck, but usually some answer pops out
  5. Others (George Polya is mentioned) argue about heuristics as something deliberately implemented by S2.
    1. The heuristics discussed here are of a different sort: “they are a consequence of the mental shotgun, the imprecise control we have over targeting our responses to questions.”  This makes it possible to answer difficult questions without engaging S2
  6. S2 can reject the answer produced by S1, but this usually doesn’t happen
  7. If you ask people “how happy are you?” and then “how many dates did you go on”, there will be no relationship between the responses, but if the questions are reversed the responses correlate very highly.
    1. There was a substitution that was caused by priming

Part 2: Heuristics and Biases

Chapter 10: The Law of Small Numbers

  1. Even people trained in statistics often miss simple questions that have answers that are an artifact of small sample size
  2. Likewise, even trained statisticians think a very small number of samples will give an estimate which has a high likelihood of being (nearly) correct.  The actual number needed was almost always higher than the answer given by the statisticians – they call this “Belief in the Law of Small Numbers.”
    1. S1 does not doubt – it will accept any seemingly reasonable response.  S2’s job is to introduce doubt, but that is harder than simply accepting what is proposed by S1.  This phenomenon is an example of S2 not being vigilant
  3. Random processes can often generate patterns that do not look random to us
    1. The example of “being hot” while playing sports is just a natural outcome of a random process
  4. You are more likely to attribute something random to a nonrandom process than vice-versa
  5. It turns out that the best schools are small schools.  The places with lowest cancer rates are small rural towns.  It also happens that the worst schools are small and that the places with the highest cancer rates are small towns.  Small sample size effect, which is almost always overlooked.
  6. The law of small numbers is a symptom of a larger problem:
    1. We are more likely to pay attention to the content of information than to the quality of that information
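The small-schools observation in item 5 is easy to reproduce in simulation. In this sketch every student is drawn from the same score distribution (the mean, spread, and school sizes are my own arbitrary parameters), so no school is genuinely better or worse; the small schools still occupy both extremes:

```python
import random

random.seed(1)

def school_mean(n_students):
    """Average score of one school; all students share one distribution."""
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

small_schools = [school_mean(20) for _ in range(200)]
large_schools = [school_mean(500) for _ in range(200)]

# Small samples swing further from the true mean of 500 in both directions.
print(max(small_schools) > max(large_schools))  # the "best" school is small
print(min(small_schools) < min(large_schools))  # so is the "worst" one
```

The standard error of a school's mean shrinks with the square root of its size, so 20-student schools scatter roughly five times wider than 500-student schools around the same true value.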

Chapter 11: Anchors

  1. In an extension of the happiness/dating experiment in chapter 9, answering questions after looking at the spin of a roulette wheel was impacted by the number that came up
  2. This is called anchoring – it is the impact of seeing a value presented for something unknown before estimating that value
  3. If you are asked whether Gandhi was over 114 years old or under 35 when he died, your answer will be pulled toward whichever of those two values was presented
  4. There are two mechanisms <at least> that lead to anchoring.  One is part of s1 and the other s2
  5. One idea is that the estimate begins at the anchored value and then adjusts in the direction that would probably otherwise be chosen, except the adjustment doesn’t go far enough – you usually stop once you reach a value that seems reasonable, but “reasonable” is an entire region, so starting from a high or low anchor will bring you to the high or low boundary of that region, respectively
  6. Moving from the anchor is effortful – people who are tired don’t move as much
  7. The above account attributes anchoring to a weak s2
  8. The other perspective is that of priming which would be s1.  “S1 tries its best to construct a world in which the anchor is the true number.  This is one of the manifestations of associative coherence that I described in the first part of the book.”
  9. The impact of priming was borne out in experiments where certain words were more easily recognized – for example, asking if the average temperature in a country was low led to words like “skiing” being more easily recognized, and the opposite: asking if the average temperature was high led to words like “beach” being more easily recognized
  10. Even with experts anchoring has a strong impact – it explained about half of the variability in realtor estimates of house value, even though they pride themselves on not being subject to such influences
  11. Although using anchors in some cases can make sense – if you really have no idea about something, there may be useful information in a question – in other experiments the anchor was clearly shown to be random, but it still had impact
    1. This was even shown to be an impact on sentencing made by judges in hypothetical cases after a dice roll was shown – there the impact of the dice was about as strong as the impact of price on realtors
  12. “As you may have experienced when negotiating for the first time in a bazaar, the  initial anchor has a powerful effect.  My advice to students when I taught negotiations was that if you think the other side has made an outrageous proposal, you should not come back with an equally outrageous counteroffer, creating a gap that will be difficult to bridge in further negotiations.  Instead, you should make a scene, storm out, or threaten to do so, and make it clear – to yourself as well as to the other side – that you will not continue the negotiation with that number on the table.”
  13. It is also helpful to come up with concrete reasons why the anchor is unreasonable.  “In general, a strategy of ‘thinking the opposite’ may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produce these effects.”
  14. In the s2 cases, priming influences from s1 still have an impact, as s2 depends on the information yielded by s1, and priming will impact that process
  15. It’s also an artifact of the fact that we use information regardless of its quality – even random information may be used as data, though it is clearly garbage
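The adjust-from-an-anchor account in item 5 can be written as a toy procedure. The "plausible region" boundaries and the step size below are my own illustrative assumptions, not values from the book:

```python
def adjust_from_anchor(anchor, plausible_low, plausible_high, step=1):
    """Start at the anchor and adjust toward the plausible region,
    stopping at the first value that seems reasonable (the boundary)."""
    estimate = anchor
    while estimate < plausible_low:
        estimate += step   # adjust upward from a low anchor
    while estimate > plausible_high:
        estimate -= step   # adjust downward from a high anchor
    return estimate

# Say ages 60-90 feel "reasonable" for Gandhi's age at death:
print(adjust_from_anchor(114, 60, 90))  # high anchor stops at the high edge: 90
print(adjust_from_anchor(35, 60, 90))   # low anchor stops at the low edge: 60
```

The two anchors land 30 apart even though both searches target the same unknown quantity, which is the insufficient-adjustment signature the chapter describes.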

Chapter 12: The Science of Availability

  1. Availability heuristic being a judgement of number or frequency based on how easily examples come to mind.  This is an example of problem substitution – just trying to think up examples and deriving a decision from the number of examples is usually easier than solving the actual problem
  2. This impacts bias because some classes of items are easier to recall than others (which may occur because of a recent event that brings certain things to mind/primes them).  Also personal experiences carry more weight than just seeing some statistics on something, even though it is less reliable
  3. In an example of personal experience creating bias, spouses’ individual estimates of the share of chores they each do sum to well more than 100% (each person credits themselves with more than their spouse credits them; this also holds for bad things, as people more often acknowledge that they themselves are responsible for starting fights).
    1. Usually, everyone feels they are doing more than their fair share
  4. Both the number of items recalled and the ease by which they were recalled impact judgement
    1. If you are asked to think of many instances of something you probably can’t recall many of, you will produce a lower estimate of it than if you are asked just to think of a few
    2. This means you may be less confident about a choice if you are asked to produce a large number of arguments in support of it
  5. A professor at UCLA used this impact to bias his course evaluations, by asking for many (as opposed to few) examples of ways in which the course could be improved in evaluations
  6. If a distraction occurs while people are trying to think of something, the impact of the availability heuristic is diminished – we can at least recognize when we are having trouble with the method
  7. Here too, activation of s2 (in some cases because of an indication of the task’s greater significance) can further mitigate the influence of this heuristic
    1. If people feel powerful, or happy (or a number of other things) they will use s1 more

Chapter 13: Availability, Emotion, and Risk

  1. Availability (of recall) strongly influences estimates of how common things are.  For example, on average people think tornadoes kill more than asthma, although the latter is responsible for 20x the deaths.  When people die from tornadoes, you hear about it on the news, and it is usually emotionally charged
  2. Often the easy question “How do I feel about it?” is used to answer a more complex question “What do I think about it?”
    1. This is sometimes a good thing though, as “An inability to be guided by a ‘healthy fear’ of bad consequences is a disastrous flaw.”
  3. In studies, when people like something they drastically underweigh risks and overweigh benefits.  Opposite is true for things they don’t like
    1. After this people read an article about a piece of technology, saying it was good or bad (but saying nothing about the existence, or lack of, risks).  When people’s opinion about the goodness of a technology changed, their perception of risk associated with it changed as well
  4. “… as the psychologist Jonathan Haidt said in another context, ‘The emotional tail wags the rational dog.'”
  5. <Goes into public policy from here, which I am not interested in, and skipping>

Chapter 14: Tom W’s Specialty

  1. Goes back to the example that if you ask someone to guess the career of someone with no information they may go on base rates.  If you give them information that may not actually be helpful to make the decision they will almost entirely disregard priors.  <Previous reading says this is the case even if you explicitly give the base rate information to subjects, and they have a sophisticated knowledge of statistics>
  2. “… enhanced activation of s2 caused a significant improvement of predictive accuracy in the Tom W problem.”  The increased activation of s2 in this experiment was done simply by asking students to frown while performing the task
    1. “If frowning makes a difference, laziness [of s2 simply accepting s1 without being critical of the judgement] seems to be the proper explanation of base-rate neglect… s2 ‘knows’ the base rates are relevant even when they are not explicitly mentioned, but applies that knowledge only when it invests special effort in the task.”
  3. Even if you are told that the description is not trustworthy, just reading it will influence the decision.  A called out example of WYSIATI (what you see is all there is).  Overcoming this is usually pretty difficult

Chapter 15: Linda: Less is More

  1. The “best known and most controversial” of Kahneman & Tversky’s work
  2. Given a description of a liberal activist, people were more likely to say she was a “feminist bank teller” than simply a “bank teller”.  Of course, the probability of the latter is at least as great as that of the former, so that response doesn’t make sense.  This is true even if subjects are presented with both of those two options together (as opposed to one group getting access to one and the other group getting access to the other) among a set of other options.
    1. “… we had pitted logic against representativeness, and representativeness had won!”
  3. The percent of people who answered the question wrong in this manner was ~85% and was essentially unchanged even among a population in Stanford Business School who all had advanced coursework in prob/stats
  4. Again the rate of about ~85% occurred even when there were only those two options of “bank teller” and “feminist bank teller”
  5. The only population that got this question right (and even then only 65% answered correctly, as opposed to the 85% wrong elsewhere) was social science graduate students at Stanford
    1. In the version where many options were presented to this group though, they still got it wrong – only right when those two options were presented right next to each other
  6. “The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary.” 
  7. The claim is that s1 averages probabilities or values more easily than it sums them, so if you have 2 criteria and one seems very likely it raises the average likelihood of one or the other (although of course the real math doesn’t care about that).
    1. Other instances show cases where s1 averaging instead of summing leads to results that don’t make sense.  For example, people assign a higher value to a few rare baseball cards than to the same set of baseball cards mixed with other lower-quality cards
  8. This effect even shows up in purely statistical questions: about 2/3 of people prefer to bet on the sequence “THTHHH” over “HTHHH”, even though the first sequence is just the second with an extra “T” in front and is therefore strictly less probable
  9. Presenting problems in a certain way that stresses the statistical nature of the problem can counteract these effects
  10. “One conclusion, which is not new, is that s2 is not impressively alert.”
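The logic that representativeness beats here is just the conjunction rule, P(A and B) &lt;= P(A). A quick check of the sequence bet in point 8:

```python
from fractions import Fraction

# For a fair coin, every additional specified flip halves the probability,
# so "THTHHH" (six flips) can never beat "HTHHH" (five flips).
p = Fraction(1, 2)
p_long = p ** 6   # THTHHH
p_short = p ** 5  # HTHHH
assert p_long < p_short
print(p_long, p_short)  # 1/64 vs 1/32
```

The longer sequence tells a more "representative" story of randomness, but adding a condition can only remove probability, never add it.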

Chapter 16: Causes Trump Statistics

  1. Another example where people ignore the base rate when doing statistics (when all relevant aspects of the problem are presented and they only have to do the math)
  2. If you change the description slightly (instead of stating the base rates directly, you state base rates that are causally tied to the particular thing being asked about) people often then know how to use the information correctly, even though mathematically the two versions are the same.
    1. People do better with the second description because it fits into a causal story; the first version doesn’t
  3. They distinguish between what they call “statistical base rates” and “causal base rates”
    1. Statistical base rates are underweighted or completely ignored when information about a particular instance is presented, while causal base rates are used properly
  4. We think of categories in terms of stereotypes
  5. “The classic experiment I describe next … supports the uncomfortable conclusion that  teaching psychology is mostly a waste of time.”
  6. The experiment set up a situation where an actor was in distress; subjects knew others could hear the same request for help, but did not know whether anyone else actually responded.  In the end, 6 out of 15 didn’t respond at all, and 5 of 15 came out only long after the actor in distress said he was choking
    1. “The experiment shows that individuals feel relieved of responsibility when they know that others have heard the same request for help.”
  7. After this experiment, new subjects had the previous experiment explained to them and were asked to guess the likelihood that a given subject from the previous experiment would go help.  They always predicted the person would help.  This held even when they were told the base rates of helping in the previous experiment
  8. The reason Kahneman says this is discouraging about the prospect of teaching psychology is that people were presented with facts about human behavior and immediately disregarded them, so what’s the point of teaching psychology?
  9. If the information is presented differently – showing the exact experimental setup and some video of the experiment (without presenting the results at the end) – people will accurately predict the results; the finding is that to teach psychology, you must surprise people
    1. From the original publication of the findings “Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.”

Chapter 17: Regression to the Mean

  1. In terms of psychology, rewards for doing well are more effective than penalizing mistakes, but when he told this to IAF flight instructors they said this wasn’t true because after yelling at someone for goofing off, they usually did better next time, but when giving compliments for a maneuver well done, they usually did worse next time
    1. This is because of regression to the mean.  After a bad move you will probably be closer to average next time (an improvement), and after a good move you will also probably be closer to average next time (a regression)
  2. Likewise in golf, the person who is doing best in the first round will probably be closer to par by the end of the second round, as will the person who was doing worst the first day
  3. This also leads to things like the jinx of football players that are on Madden or whatever – after an exceptional year they will probably have a less exceptional year
  4. People often invent incorrect causal explanations for regression to the mean in terms of sports (nerves, etc) but it all plays out naturally from the statistics
  5. Same effects of regression to the mean occur in population statistics (two very tall parents will usually have a less tall child)
  6. “It took Francis Galton several years to figure out that correlation and regression are not two concepts — they are different perspectives on the same concept.  The general rule is straightforward but has surprising consequences: whenever the correlation between two scores is imperfect, there will be regression to the mean.”
    1. The following example is given “Highly intelligent women tend to marry men who are less intelligent than they are.”
    2. Again, people will invent stories as to why this is true, but it follows from statistics alone: a highly intelligent woman is simply more likely to encounter a man less intelligent than she is
    3. The equivalent, story-free statement is “The correlation between the intelligence scores of spouses is less than perfect.”
  7. “David Freedman used to say that if the topic of regression comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case.  Why is it so hard?  The main reason for the difficulty is a recurrent theme of this book: our mind is strongly biased toward causal explanations and does not deal well with ‘mere statistics.’  When our attention is called to an event, associative memory will look for its cause — more precisely, activation will automatically spread to any cause that is already stored in memory [although that cause will often be incorrect].”
  8. This is also why, when testing treatments for illnesses/disorders, you need to test against a control group rather than against the population at the beginning of treatment, as people who are severely impacted will tend to improve anyway
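Regression to the mean falls out of any score that mixes stable skill with independent luck; a toy simulation (my own illustration, not from the book):

```python
import random

random.seed(0)
# Toy model: each golfer has a fixed skill; each round's score is
# skill plus independent luck (higher = better here, for simplicity).
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
round1 = [s + random.gauss(0, 1) for s in skill]
round2 = [s + random.gauss(0, 1) for s in skill]

# The 100 best performers of round 1 regress toward the mean in round 2.
top = sorted(range(n), key=lambda i: round1[i], reverse=True)[:100]
avg1 = sum(round1[i] for i in top) / len(top)
avg2 = sum(round2[i] for i in top) / len(top)
print(f"round 1 avg of leaders: {avg1:.2f}, round 2 avg: {avg2:.2f}")
```

The round-1 leaders were both skilled and lucky; only the skill carries over, so their round-2 average slides back toward the population mean with no causal story needed.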

Chapter 18: Taming Intuitive Predictions

  1. With a great deal of practice, some people (such as chess masters) gain the ability to look at a hard problem and come up with an excellent solution quickly.  In other cases where a quick decision is needed, the problem is often substituted mentally with a simpler proxy problem that can be tractably solved
  2. For example, if presented with:
    1. The fact that someone read fluently at age 4, and then asked to predict what their GPA will be at the end of high school
  3. A simple technique is to consider what percentile that reading ability at age 4 puts the person in, and then map that same percentile onto GPAs
    1. This is called intensity matching, and is an s1 technique
  4. People often do this, and when projecting into the future they forget regression to the mean – GPA is more likely to be toward average percentile than reading ability at age 4.
    1. The lower the correlation is between the two items the more it should move toward the mean
    2. This correction for regression to the mean is a s2 process
    3. Because it’s an s2 process it’s expensive and often not worth doing; if your information isn’t very good, just shoot toward the mean
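The correction in points 3 and 4 can be sketched as a single shrinkage formula: move the intensity-matched guess back toward the baseline in proportion to the correlation. All numbers below (class mean GPA 3.0, intuitive guess 3.8, correlation 0.3) are hypothetical:

```python
def regressive_prediction(baseline, intuitive, correlation):
    """Shrink an intensity-matched (s1) prediction toward the baseline
    in proportion to how well the evidence correlates with the outcome."""
    return baseline + correlation * (intuitive - baseline)

# Precocious reading suggests a 3.8 GPA by intensity matching, but with
# an assumed correlation of only 0.3 the defensible prediction is lower.
print(regressive_prediction(3.0, 3.8, 0.3))  # most of the way back toward 3.0
```

With correlation 1 the intuition stands as-is; with correlation 0 the best prediction is simply the mean.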

Chapter 19: The Illusion of Understanding

  1. Narrative fallacy is the idea that we will use explanations that are simple if we can think of one, whether or not it is appropriate for use in any given scenario
  2. Halo effect – we use as explanations one particularly salient feature whether or not it is relevant (if someone is handsome, you are more likely to rate them as athletic, ugly is more likely to be rated unathletic)
  3. “The halo effect helps keep explanatory narratives simple and coherent by exaggerating the consistency of evaluations…”
  4. “At work here is the WYSIATI rule.  You cannot help dealing with the limited information you have as if it were all there is to know.  You build the best possible story from the information available to you and if it is a good story, you believe it.  Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.  Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.”
  5. Also part of the “hindsight is 20/20” effect / hindsight bias
  6. “A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed.  Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.”
    1. If you ask people about their opinion on a topic they aren’t really decided about, then present an argument pro or con, and then ask them what their original opinion was, they will say their original opinion was in the direction of the argument they heard, as opposed to what it originally was.  They often can’t believe that their original view was different from the new one
  7. Because of hindsight bias, “We are prone to blame decision makers [such as doctors] for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious only after the fact.”
    1. In terms of this they call it outcome bias
  8. CYA: “Because adherence to standard operating procedures is difficult to second-guess, decision makers who expect to have their decisions scrutinized with hindsight are driven to bureaucratic solutions–and to an extreme reluctance to take risks.”
  9. On the other hand, when unnecessary risk is taken and it works out well, individuals are given credit even if they don’t deserve it (consider winning at roulette).
  10. “A few lucky gambles can  crown a reckless leader with a halo of prescience and boldness.”
  11. “The sense-making machinery of s1 makes us see the world as more tidy, simple, predictable, and coherent than it really is.  The illusion that one has understood the past feeds the further illusion that one can predict and control the future.  These illusions are comforting.  They reduce the anxiety that we would experience if we allowed ourselves to fully acknowledge the uncertainties of existence.”
  12. “Stories of how businesses <or governments?> rise and fall strike a chord with readers by offering what the human mind needs: a simple message of triumph and failure that identifies clear causes and ignores the determinative power of luck and the inevitability of regression.  These stories induce and maintain an illusion of understanding, imparting lessons of little enduring value to readers who are all too eager to believe them.”

Chapter 20: The Illusion of Validity

  1. “Considering how little we know, the confidence we have in our beliefs is preposterous–and it is also essential.”
    1. <Indeed, if we tried to act with any sort of certainty in our daily lives we would spend all of our time gathering information as opposed to actually doing things.  In the real world, gross oversimplifications are a necessity>
  2. Even if people know in advance their predictions are likely to be poor (can be simply because of fundamental task difficulty / optimal Bayes error rate that is poor), it is still common to have a high degree of confidence in each prediction
    1. <He gives the example of how officers for the IDF are initially selected and rated, but this is similar to people’s belief in their ability to tell if someone is lying or not>
  3. This is called the illusion of validity
  4. An analysis by one of his students showed that when traders swapped one stock for another, they lost money on average (about 3.2%).  You would be better off flipping a coin
  5. Followup work showed that those who traded the most frequently did worse and vice versa
  6. “Typically, at least two out of every three mutual funds underperform the overall market in any given year.”
    1. Likewise, those that are successful one year generally do not repeat success next year – luck and not skill
  7. “In highly efficient markets, however, educated guesses are no more accurate than blind guesses.”
  8. “Our message to executives was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill.”  In the end, there was zero correlation in performance between years for individual traders.  “We all went on calmly with our dinner, and I have no doubt that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before.”
  9. “Facts that challenge such basic assumptions–and thereby threaten people’s livelihood and self-esteem–are simply not absorbed. The mind does not digest them.”
  10. “Cognitive illusions can be more stubborn than visual illusions… When asked about the length of the lines, you will report your informed belief, not the illusion that you continue to see.  In contrast, when my colleagues and I in the army learned that our leadership assessment tests had low validity, we accepted that fact intellectually, but it had no impact on either our feelings or our subsequent actions.”
  11. “Finally, the illusions of validity and skill are supported by a powerful professional culture.  We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers.”
  12. A similar very large study was performed on political pundits, and again they underperformed chance in making political predictions
    1. “In other words, people who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys…”
    2. Although those with more experience tend to do slightly better than those with less experience, their results when weighted by their confidence are actually poorer due to overconfidence
  13. “‘We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,’ Tetlock writes. ‘In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals–distinguished political scientists, area specialists, economists, and so on–are any better than journalists or attentive readers of the New York Times.’  The more famous the forecaster, Tetlock discovered, the more flamboyant the forecasts.”
    1. <The last point in particular goes back to a particular bias discussed earlier.  If you make crazy predictions, you will be right very rarely, but that is when everyone else is wrong.  People will focus on that one correct prediction because it stands out, even if it is just luck.>
  14. Experts also were less likely to admit error, and when forced to were more likely to give excuses or convoluted explanations as to how they were actually right
  15. The point here isn’t only that experts are often bad at what they do – it is also that the world is often unpredictable and so even those that devote their life to making predictions rarely do better than chance.
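The zero year-to-year correlation is what a pure-luck model predicts: last year's winners repeat at roughly the chance rate. A quick simulation of that null model (illustrative, not Kahneman's data):

```python
import random

random.seed(1)
# If fund performance is pure luck, two "years" of returns are
# independent draws, and above-median funds repeat ~half the time.
n = 10_000
year1 = [random.gauss(0, 1) for _ in range(n)]
year2 = [random.gauss(0, 1) for _ in range(n)]

median1 = sorted(year1)[n // 2]
median2 = sorted(year2)[n // 2]
winners = [i for i in range(n) if year1[i] > median1]
repeat = sum(1 for i in winners if year2[i] > median2) / len(winners)
print(f"fraction of winners that repeat: {repeat:.2f}")  # near 0.50
```

A repeat rate near 50% is exactly what "rewarding luck as if it were skill" looks like in the data.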

Chapter 21: Intuitions Vs. Formulas

  1. Simple regression on a small number of variables often produced better predictions of school grades than trained clinical counselors who had access to a much larger set of features.  The same held for predicting parole violations and pilot-training outcomes
    1. This has been replicated in about 200 subsequent papers in a huge number of areas.  In the worst case, simple regression was merely equally bad
    2. The unifying characteristic is that the domains are “low-validity environments,” or those that are hard and have a high optimal Bayes error rate
  2. “Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions <so the data is spread too thin>.” <It would make sense to test this by giving exactly the same features to both and seeing what comes out.  I expect that machines would actually do worse than people when the number of features gets really big, and would do better than people when the number of features is small>
  3. People are also very inconsistent when making decisions – given the same data twice the predictions are often very different
    1. This is a really big issue – in a study, radiologists produced different evaluations of the same image 20% of the time
    2. In another study, it was shown that these inconsistencies crop up even if a reevaluation is done only minutes apart
  4. “The widespread inconsistency is probably due to the extreme context dependency of s1… Because you have little direct knowledge of what goes on in your mind, you will never know that you might have made a different judgement or reached a different decision under very slightly different circumstances… When predictability is poor… inconsistency is destructive of any predictive reliability.”
  5. <This is crazy> “… the experts who evaluate the quality of immature wine to predict its future have a source of information that almost certainly makes things worse rather than better: they can taste the wine.”
  6. Some results show that even approaches more primitive than linear regression (such as an unweighted combination of features) can be better in some cases due to overfitting <this is again getting back to the situation where people also do poorly though; when problem dimension is high>
  7. Common sense can be a good form of feature selection
  8. Although Meehl argued that it is immoral to allow people to make life-and-death medical decisions in cases where machines can do a better job, psychologically the cause of an error is important.  Was it an algorithmic misdiagnosis or a person’s?
    1. <Same issue with autonomous cars – even if they can drive more safely than people, it will take a long time before they are common>
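The "unweighted combination of features" in point 6 is what Robyn Dawes called an improper linear model: standardize each predictor and sum with equal weights, skipping fitted coefficients entirely. A minimal sketch with made-up inputs:

```python
# Improper linear model: z-score each feature, then sum with equal weights.
def zscores(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

def unit_weight_score(feature_columns):
    cols = [zscores(col) for col in feature_columns]
    return [sum(vals) for vals in zip(*cols)]

# Two hypothetical predictors for five cases (values invented).
a = [3, 5, 2, 8, 6]
b = [1, 4, 2, 7, 5]
scores = unit_weight_score([a, b])
print(scores)  # case at index 3 is highest on both predictors, so it ranks first
```

Because there are no fitted weights, there is nothing to overfit; common sense enters only through which features you include.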

Chapter 22: Expert Intuition: When Can we Trust it?

  1. In the field of Natural Decision Making (NDM), they study how experts make their decisions
    1. People in this field were very critical of Kahneman’s work
  2. <Ah,> Sources of Power by Klein <in my reading list> analyzes how experts develop skills, such as the ability of chess masters to immediately evaluate board positions
  3. Klein and Kahneman ended up doing a paper to try and iron out their relative positions in a constructive adversarial manner
    1. The goal was to figure out when you can trust experts, vs when they produce garbage
  4. Klein studied fireground commanders, who lead firefighting teams.  When faced with a rapidly changing and chaotic situation, they immediately produce an excellent response to fighting the fire, often without even consciously thinking of a second option.  They would then consciously mentally simulate their plan, and make minor revisions if necessary. In the cases where some major problem arose during mental simulation, then a second different option would be considered
  5. Klein calls this the recognition-primed decision model (RPD)
  6. RPD involves both s1 and s2
    1. The plan is generated in s1
    2. Then the “rollout” of the plan is in s2
  7. Herbert Simon said: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer.  Intuition is nothing more and nothing less than recognition.”
  8. Klein felt that experts could be trusted while Kahneman did not – the following are needed in order to have accurate judgements and accurate levels of confidence
    1. Environment is regular enough to be predictable
    2. Ability to learn these regularities through practice
    3. <Honestly, I feel like these points are very hand-wavy.  Examples are given of experienced firefighters having a 6th sense of when a building is about to collapse.  I feel like 1: there is little regularity from fighting fire in one building to another 2: How often are firefighters even in a building that is about to collapse, such that they can learn this through practice?>

Chapter 23: The Outside View

  1. The internal view is belief of how something works based on a subjective intuition.
  2. The outside view is based on statistics – to make an estimate of something you don’t just ask someone what they think; you find similar cases and look at the distribution over results
  3. The planning fallacy is the phenomenon of drastically underestimating time and resource costs when planning.  In particular this fallacy describes cases where
    1. Estimates are close to best-case estimates
    2. Estimates could be improved by looking for statistics from similar cases.
  4. Bent Flyvbjerg, a planning expert, says “The prevalent tendency to underweight or ignore distributional information is perhaps the major source of error in forecasting.  Planners should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available.”
  5. Using such information to enhance prediction accuracy is called reference class forecasting
    1. Basically you use the average and then move that up or down a little based on whatever relevant exceptions you have
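Reference class forecasting in code form: start from the distribution of outcomes in similar cases (the outside view), then adjust modestly for case-specific information. The project durations below are invented:

```python
# Hypothetical completion times (years) of similar past projects.
similar_projects = sorted([7, 8, 9, 10, 11, 12, 15])
n = len(similar_projects)

baseline_mean = sum(similar_projects) / n
baseline_median = similar_projects[n // 2]

print(f"outside view: mean {baseline_mean:.1f}y, median {baseline_median}y")
# The inside-view plan ("we'll be done in 2 years") should be pulled
# toward these numbers, not the other way around.
```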

Chapter 24: The Engine of Capitalism

  1. The planning fallacy arises because we tend to have biases that are overly optimistic
  2. Optimism, however is healthy both for physical and mental well-being
  3. “Optimistic individuals play a disproportionate role in shaping our lives.  Their decisions make a difference; they are the inventors, the entrepreneurs, the political and military leaders–not average people.  They got to where they are by seeking challenges and taking risks.”  Luck, of course, is a big factor in the success of these individuals as well
  4. This means that often the people shaping other peoples lives are risk-seeking
  5. Small business owners think the base-rate of success for small businesses is about 60%, but the actual number is about 35%
  6. Looking at inventors, they tend to be optimistic.  When their inventions were evaluated, those who had very accurate evaluations predicting failure continued on and doubled down their investment about half the time (and lost it).
    1. “Overall, the return on private invention was small, ‘lower than the return on private equity and on high-risk securities.’  More generally, the financial benefits of self-employment are mediocre: given the same qualifications, people achieve higher average returns by selling their skills to employers than by setting out on their own.”
  7. The large majority of people evaluate themselves as above average in many desirable traits.  “90% of drivers believe they are better than average.”
  8. In capitalism, this leads to bad investments and business decisions because people overestimate their abilities, ignore why others failed, ignore negative events that are likely to arise, and ignore what they don’t know, leaving information that should be discovered first undiscovered
  9. Professors at Duke asked CFOs of large corporations to predict returns in the S&P over the next year.  It turns out that the correlation between what was predicted and what happened was not just very low but actually negative!
    1. Also when asked to give bounds on how much the S&P would change, the bounds turned out to be much narrower than they should be – the actual value went outside their predicted bounds 3x as often as it should based on the bounds that were asked for
  10. “Overconfidence is another manifestation of WYSIATI”
  11. “… optimism is highly valued, socially and in the market [if the above CFOs gave accurate estimates in a boardroom to the above problem, they would probably be fired]; people and firms reward the providers of dangerously misleading information more than they reward truth tellers.  One of the lessons of the financial crisis that led to the Great Recession is that there are periods in which competition, among experts and among organizations, creates powerful forces that favor a collective blindness to risk and uncertainty.”
  12. Unfortunately, this problem also arises in medicine; people value doctors that are more certain even if they shouldn’t be: one autopsy study showed that doctors who had been completely certain of a diagnosis were wrong about 40% of the time
  13. Studies that attempted to train people to be more conservative and less overconfident in their predictions have generally been unsuccessful
    1. “…overconfidence is a direct consequence of features of s1 that can be tamed–but not vanquished.  The main obstacle is that subjective confidence is determined by the coherence of the story one has constructed, not by the quality and amount of the information that supports it.”
  14. In order to mitigate these effects Klein proposes the “premortem”: tell everyone involved in making a decision to imagine they are a year in the future and the project was a disaster.  Each person then comes up with a story to explain what went wrong.

Part 4: Choices

Chapter 25: Bernoulli’s Errors

  1. Starts by describing how economists traditionally consider utility theory, in which people are assumed perfectly logical – which psychologists know isn’t true
  2. This work led to the paper on prospect theory, which describes how people violate these assumptions
  3. The followup paper in Science was about how framing impacts decision making
  4. Bernoulli noted that:
    1. People appreciate money relative to how much money they have
    2. People will accept more certain payoffs of less expected value (risk averse)
  5. <His point was that because of diminishing returns in how much increasing amounts of wealth is actually appreciated (its utility) individuals end up being risk averse because they discount the high value outcome ?>
  6.  “Most impressive, his analysis of risk attitudes in terms of preferences for wealth has stood the test of time: it is still current in economic analysis almost 300 years later.”
  7. <Immediately next> “The longevity of the theory is all the more remarkable because it is seriously flawed.”  The problem is it assumes everything is based on current wealth.  It ignores that if two people have the same wealth, but one just got there by having it increase by 10x, and the other one lost 9/10 of his previous wealth they will feel differently about that amount
    1. The change is also important.
    2. “This reference dependence is ubiquitous in sensation and perception.  The same sound will be experienced as very loud or quite faint, depending on whether it was preceded by a whisper or a roar.”
  8. “The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long.  I can explain it only by a weakness of the scholarly mind that I have often observed in myself.  I call it theory-induced blindness…”
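Bernoulli's two observations in point 4 can be made concrete with logarithmic utility (a standard textbook choice for diminishing marginal utility; the wealth figures are made up). Diminishing utility alone makes a risky gamble worth less than its expected value:

```python
import math

# A 50/50 gamble between ending wealth of 1M and 7M.
expected_value = 0.5 * 1_000_000 + 0.5 * 7_000_000  # 4M

# With u(w) = log(w), compare expected utility to a sure payment.
expected_utility = 0.5 * math.log(1_000_000) + 0.5 * math.log(7_000_000)
certainty_equivalent = math.exp(expected_utility)  # sure wealth with same utility

print(round(expected_value), round(certainty_equivalent))
assert certainty_equivalent < expected_value  # risk aversion falls out
```

The certainty equivalent here is the geometric mean, roughly 2.65M, so a sure payment well below the 4M expected value is preferred. The flaw noted in point 7 is that this valuation depends only on final wealth, not on the change from a reference point.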

Chapter 26: Prospect Theory

  1. Because they were outsiders, they were able to see problems in Bernoulli’s approach simply because something else seemed natural to them
  2. Instead of considering wealth, they considered changes in wealth.
  3. They noted that people are risk-averse in gains, but risk-seeking in losses
  4. This was noted previously, but theory-induced blindness caused people to not fully appreciate the phenomenon
  5. Prospect theory is more complex than utility theory because it requires a reference point, which utility theory doesn’t use (so it isn’t simply a generalization of the approach)
  6. Relevant operating characteristics of s1:
    1. Everything is relative to where you start
    2. There is diminishing returns
    3. Loss aversion
  7. There are some limitations of prospect theory – can’t cover every interesting facet of behavior
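The three operating characteristics in point 6 map directly onto the prospect-theory value function. The sketch below uses parameter estimates from Tversky and Kahneman's later (1992) cumulative prospect theory paper (alpha = beta = 0.88, lambda = 2.25), which are not given in this chapter:

```python
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25  # 1992 estimates

def value(x, reference=0.0):
    """Prospect-theory value of an outcome relative to a reference point:
    concave for gains, convex for losses, and steeper for losses."""
    d = x - reference  # everything is relative to where you start
    if d >= 0:
        return d ** ALPHA           # diminishing sensitivity to gains
    return -LAMBDA * (-d) ** BETA   # loss aversion: losses loom larger

# A loss of 100 hurts more than a gain of 100 pleases:
print(value(100), value(-100))
assert abs(value(-100)) > value(100)
```

The reference point, diminishing returns, and loss aversion from point 6 each correspond to one piece of the function: the subtraction, the exponents below 1, and the lambda multiplier.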

Chapter 27: The Endowment Effect

  1. Prospect theory says that  even when the utility of a change is zero, preference is to remain in the current situation due to loss aversion (even-odds is felt as uneven because the loss is more significant)
  2. The endowment effect describes the gap that usually exists between what you would pay to buy something and what you would accept to sell it.  Standard economic theory says you would immediately sell at any price above what you were willing to pay
  3. Prospect theory describes this by the additional pain for selling and the pleasure of obtaining – the change of state that occurs when you get the good changes your relative valuation of it
  4. This was a big component in the development of behavioral economics
  5. We also value things differently depending on whether we merely have them or use them (use, or more specifically use up, is valued more highly)
  6. “Evidence from brain imaging confirms the difference.  Selling goods that one would normally use activates regions of the brain that are associated with disgust and pain.  Buying also activates these areas, but only when the prices exceed the exchange value.  Brain recordings also indicate that buying at especially low prices is a pleasurable event.”
  7. “The fundamental ideas of prospect theory are that reference points exist and that losses loom larger than corresponding gains.”
  8. “No endowment effect is expected when owners view their goods as carriers of value for future exchanges [for example the way we view money – just a means to get something else], a widespread attitude in routine commerce and in financial markets.”
  9. In an experiment where traders were given one of two goods of equal value and then later asked if they wanted to swap, only 18% of the inexperienced traders agreed to swap while 48% of the experienced traders did
    1. The experienced traders were basically exactly where standard utility theory would put them, while inexperienced traders were in line with prospect theory
    2. How long they held on to the good before a trade was proposed was also a determiner of their willingness to trade (more willing when good was held for short period)
  10. Poor people also show decisions like experienced traders, but not necessarily because they are able to reframe the problem objectively; for people in poverty decisions are often between two different losses relative to their reference point, so loss aversion weighs both choices equally
  11. Culture also has strong impact on results of these tests, even between US and UK.

Chapter 28: Bad Events

  1. How does loss aversion fit into the way s1 and s2 interact
  2. Images of the eyes of a frightened person shown for 2/100 of a second and then masked by visual noise (so short that it is not consciously perceptible) lead to heightened activity in the amygdala (where fear is dealt with)
    1. “The information about the threat probably traveled via a superfast channel that feeds directly into a part of the brain that processes emotions, bypassing the visual cortex that supports the conscious experience of ‘seeing.’ The same circuit also causes schematic angry faces (a potential threat) to be processed faster and more efficiently than schematic happy faces.”
    2. Also a single angry face will pop out of a sea of happy faces, but the opposite is not true
    3. The ability to very quickly isolate a threat in a complex scene is an evolutionary advantage.  There doesn’t seem to be a parallel mechanism for seizing on positive things
  3. These are s1 responses
  4. s1 responds similarly even when seeing a threatening word such as “war”; we react to words much as we do to the real things they represent
  5. Negative trumps positive; a roach in a bowl of cherries makes it disgusting, but a cherry in a bowl of roaches doesn’t make it any less disgusting <perhaps even more so in a sort of macabre way – at least that’s my intuition>
  6. Bad stereotypes are easier to develop than positive ones
  7. Successful marriages depend more on avoiding negative than on achieving positive
  8. “We all know a friendship that may take years to develop can be ruined by a single action.”
  9. Because of these things, we are loss averse.  We feel the negative more strongly than the positive, so it is weighed much more heavily
  10. Negotiations are difficult because in serious situations everyone has to give something up to gain something.  Because losses are felt more heavily, even if both sides of the deal are fair, both parties will feel that they have lost out
  11. This phenomenon of loss sensitivity also occurs in the animal world.  An animal defending its territory (protecting itself from loss) is much more successful at defending it than the attacker is at obtaining it
  12. Loss aversion causes us to be conservative and to seek to maintain the status quo as much as possible
  13. Sensitivity to the reference point makes people very sensitive to price gouging
    1. “A basic rule of fairness, we found, is that the exploitation of market power to impose losses on others is unacceptable.”
    2. The rule is relaxed only when the organization itself is also threatened; it can be aggressive about profit-seeking or cost-saving if it is really in trouble
  14. “More recent research has supported the observations of reference-dependent fairness and has also shown that fairness concerns are economically significant… Employers who violate rules of fairness are punished by reduced productivity, and merchants who follow unfair pricing policies can expect to lose sales.  People who learned from a new catalog that the merchant was now charging less for a product that they had recently bought at a higher price reduced their future purchases from that supplier by 15%… The losses far exceeded the gains from the increased purchases produced by the lower prices in the new catalog.”
  15. People are also happy to retaliate even if they weren’t directly involved.  In an fMRI study, joining in on the punishment of someone who wronged someone else led to activation of the brain’s pleasure centers
    1. “It appears that maintaining the social order and the rules of fairness in this fashion is its own reward.  Altruistic punishment could well be the glue that holds societies together.”
    2. <On the other hand, we have seen from the old prison experiment how easy it is to manipulate people into harming others; if people were always so motivated to maintain fairness, it should be harder to get them to persecute others without cause>

Chapter 29: The Fourfold Pattern

  1. Weighing of the qualities/features of something is done automatically by s1
    1. “Your judgement of your son-in-law may depend more or less on how rich or handsome or reliable he is.”
  2. Sometimes establishing these weights is deliberate, but that must be done on top of s1
  3. Gambling is a nice thing to study because you can simply declare what the weights (probabilities) of the different options are
  4. When people are asked which change in the probability of winning a million dollars is most significant (0->5%, 50->55%, or 95->100%), they put more weight on the first and last options.  The first opens the possibility of something happening that was impossible before (the possibility effect), and the last makes it a sure bet (the certainty effect); the middle just makes something a bit more likely
    1. Things also work similarly for the possibility of bad outcomes
    2. “Improbable outcomes are overweighted–this is the possibility effect.  Outcomes that are almost certain are underweighted relative to actual certainty.”  Because of this, standard utility theory doesn’t accurately predict what people do
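The possibility and certainty effects can be sketched with the probability-weighting function Kahneman and Tversky later fit to choice data in their 1992 cumulative prospect theory paper; the formula and the fitted parameter γ ≈ 0.61 are from that paper, not from this chapter, so this is an illustration rather than the book’s own example:

```python
# Sketch of the Tversky-Kahneman (1992) probability weighting function for gains:
#   w(p) = p^g / (p^g + (1-p)^g)^(1/g), with fitted g ~ 0.61.
# Parameter value is from their 1992 paper; an illustration, not this chapter.

def w(p, g=0.61):
    """Decision weight assigned to an outcome that occurs with probability p."""
    if p in (0.0, 1.0):
        return p  # impossibility and certainty are weighted exactly
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

print(round(w(0.05), 2))            # ~0.13: a 5% chance feels bigger (possibility effect)
print(round(w(0.55) - w(0.50), 2))  # ~0.03: a mid-range +5% barely registers
print(round(w(0.95), 2))            # ~0.79: a 95% chance feels smaller (certainty effect)
```

Strict proportionality would mean w(p) = p everywhere; the overweighting at 5% and underweighting at 95% are exactly the deviations the chapter describes.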
  5. “The plot thickens, however, because there is a powerful argument that a decision maker who wishes to be rational must conform to the expectation principle… [von Neumann and Morgenstern]… proved that any weighing of uncertain outcomes that is not strictly proportional to probability leads to inconsistencies and other disasters.”
  6. In 1952 there was a conference of the world’s leading economists, and a question was posed that showed that standard utility theory (which they advocated) didn’t reflect even their own preferences, let alone those of common people.  When this was pointed out, the audience simply ignored it
    1. “As often happens when a theory that has been widely adopted and found useful is challenged, they noted the problem as an anomaly and continued using expected utility theory as if nothing had happened.”
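The 1952 question was Maurice Allais’s famous problem.  A quick sketch (using the classic Allais gamble amounts, supplied here for illustration) shows why the popular answers are inconsistent with any expected-utility ranking: the two pairs of gambles differ only by a common consequence, so their expected-utility differences are algebraically identical no matter what utility function is used.

```python
# Allais's two choice problems (the classic illustrative amounts, in $M):
#   Problem 1: A = $1M for sure           vs  B = 10% $5M, 89% $1M, 1% nothing
#   Problem 2: C = 11% $1M, 89% nothing   vs  D = 10% $5M, 90% nothing
# Most people prefer A to B but D to C.  Expected utility forbids that pattern:
# the pairs differ only by a common consequence (an 89% chance of $1M vs of $0),
# so for ANY utility function u, EU(A)-EU(B) equals EU(C)-EU(D) exactly.
import random

A = [(1.00, 1)]
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

def eu(gamble, u):
    """Expected utility of a gamble, given utility values u for each outcome."""
    return sum(p * u[outcome] for p, outcome in gamble)

random.seed(1)
for _ in range(1000):
    low, high = sorted(random.uniform(0, 1) for _ in range(2))
    u = {0: 0.0, 1: low, 5: high}  # any increasing utility over $0 < $1M < $5M
    assert abs((eu(A, u) - eu(B, u)) - (eu(C, u) - eu(D, u))) < 1e-9

print("preferring A to B but D to C is inconsistent with every utility function")
```

So anyone with the common preference pattern violates the expectation principle, which is exactly the anomaly the economists chose to ignore.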
  7. Risk seeking in terms of potential loss and risk averse in terms of potential gain
    1. “Many unfortunate human situations unfold in the top right cell [corresponding to being risk seeking in the face of loss].  This is where people who face very bad options take desperate gambles, accepting a high probability of making things worse in exchange for a small hope of avoiding a large loss. Risk taking of this kind often turns manageable failures into disasters.”
  8. In terms of litigation, prospect theory says the way weighing is skewed puts defendants in a relatively strong position (relative to what utility theory predicts) in the case where the prosecution has a very strong case.  The tables turn to favor the prosecution when they have a weak case.  This is borne out in practice.
    1. This means that conducting frivolous litigation turns out to be more rewarding than it should be

Chapter 30: Rare Events

  1. Terrorism works because even though the actual risks of being harmed in an attack (even in Israel in the early 2000s) are minimal, attacks paint a vivid picture that causes people to factor them in more heavily than they should.  Even though s2 may understand what the actual risks are, s1 overvalues the risk
  2. “The psychology of high-prize lotteries is similar to the psychology of terrorism… In both cases, the actual probability is inconsequential; only possibility matters.”
  3. “Emotion and vividness influence fluency, availability, and judgements of probability–and thus account for our excessive response to the few rare events that we do not ignore.”
  4. Studies show:
    1. People overestimate the probabilities of unlikely events (as an example, if you ask a group of people the likelihood that each of the best 8 basketball teams will win the championship, the total probability comes out to 240% – they even bet at odds that didn’t make sense because of this)
    2. People overweight unlikely events in their decisions
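Probabilities summing to 240% are incoherent, and a bookmaker can exploit them: selling a $1-payout ticket on each team at the bettor’s own stated probability guarantees a profit, since exactly one team wins.  A toy sketch (the individual team probabilities are invented; only the 240% total is from the text):

```python
# Eight fans' stated win probabilities, one per team, summing to 2.4 (240%).
# The individual numbers are invented for illustration; the 240% total is
# the figure reported in the study described above.
stated = [0.45, 0.40, 0.35, 0.30, 0.30, 0.25, 0.20, 0.15]
assert abs(sum(stated) - 2.40) < 1e-9

# Sell one $1-payout ticket per team, each priced at its stated probability.
revenue = sum(stated)  # bettors pay $2.40 in total
payout = 1.0           # exactly one team wins, so exactly one ticket pays $1
print(f"guaranteed bookmaker profit: ${revenue - payout:.2f}")  # $1.40, whichever team wins
```

Coherent probabilities over mutually exclusive, exhaustive outcomes must sum to 100%; any excess is pure arbitrage for the other side of the bet.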
  5. “Although overestimation and overweighting are distinct phenomena, the same psychological mechanisms are involved in both: focused attention, confirmation bias, and cognitive ease.”
  6. When something emotional (such as an electric shock, but could also be something pleasant) occurs with some likelihood, people ignore actual probabilities of the event even more (in some sense people are just sensitive to the possibility of it happening or not)
    1. In fact, the degree to which people (somewhat) accurately make decisions based on monetary outcomes is the exception to the rule.  In most cases people really don’t make decisions based on probabilities.  Indeed, if the outcome is a good and not money, people don’t pay attention to the value of the good at all, even if they are told about it <I suppose this makes sense – normal people don’t put everything on craigslist.  Either they use it or toss it.>
  7. When decisions are made based on actual experience of outcomes (like a bandit task in the lab), people are very accurate in the way they use probability estimates
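The contrast with decisions from description can be sketched as a toy bandit task: an agent that simply tracks empirical win frequencies converges on the true probabilities it experiences.  The setup below (arm probabilities, trial counts) is my illustration, not the actual lab design:

```python
# Toy two-armed bandit: estimating arm payoff probabilities from experience.
# True probabilities and trial counts are invented for illustration.
import random
random.seed(0)

true_p = [0.3, 0.7]  # hidden payoff probability of each arm
counts = [0, 0]
wins = [0, 0]

for trial in range(10000):
    arm = trial % 2  # sample both arms evenly
    counts[arm] += 1
    wins[arm] += random.random() < true_p[arm]

estimates = [wins[a] / counts[a] for a in range(2)]
print([round(e, 2) for e in estimates])  # close to the true [0.3, 0.7]
```

Experienced frequencies anchor the estimates, which is consistent with the point above that probability use is far more accurate when outcomes are learned by sampling than when they are merely described.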

Skipping rest – book recalled from library.
