Sources of Power. Gary Klein. Book, 1999.


This book essentially takes the position contrasting that of Kahneman, and is discussed in a chapter of his book.  More specifically, Kahneman focuses on how people generally make poor or irrational decisions, while Klein focuses on how humans who are experts in a particular domain achieve that expertise.

Overall, I would say the book is quite interesting, but if I had to choose between the results of Kahneman and Klein (if they disagreed on a particular point), I would have to go with Kahneman, simply because his methods seem more objective; the claims here are (somewhat necessarily, by nature) anecdotal and subjective in how they can be evaluated.  The book is also a bit more hand-wavy.  Still, I think there is a lot of goodness here.

Chapter 1: Chronicling the Strengths Used in Making Difficult Decisions

  1. While a good deal of research shows how people are really terrible at many things (even when they are supposed to be experts, as described by many cases with Kahneman) this book focuses on real-world (as opposed to laboratory) domains where people achieve amazing performance
  2. “Shopping in a supermarket does not seem like an impressive skill until you contrast an experienced American shopper to a recent immigrant from Russia.”
    1. <See, shopping is interesting!>
  3. Gives an example of how firefighters/EMTs operate.  In one case they sped over to a house where a man had fallen and put his arm through a window, severing an artery.  The person who evaluated the scene said that the man had lost 2 units of blood, and that if he lost 4 he would be dead.  Based on where a bandage had been applied, he could discern which artery was severed.  “Next, he might examine whether there are other injuries, maybe neck injuries, which might prevent him from moving the victim.  But he doesn’t bother with any more examination.  He can see the man is minutes from death, so there is no time to worry about anything else.”  He makes other important snap decisions, such as having his men apply a device that keeps blood pressure up while they were already moving in the ambulance, to save time.  From the initial alarm to getting the man to the hospital, only 10 minutes elapsed.
    1. In the example, the lieutenant “… handled many decision points yet spent little time on any one of them.  He drew on his experience to know just what to do.”  But how does that happen?
  4. Conventional sources of power “… include deductive logical thinking, analysis of probabilities, and statistical methods.”  But those that are useful in natural settings include “… intuition, mental simulation, metaphor, and storytelling.”
    1. Intuition “… enables us to size up a situation quickly.”
    2. Simulation lets us do roll-outs in our head
    3. Metaphor allows us to generalize context from one situation to another in some way
    4. Storytelling allows us to transfer our experience to others.
  5. This book is concerned with naturalistic decision making, which involves “time pressure, high stakes, experienced decision makers, inadequate information …, ill-defined goals, poorly defined procedures, cue learning, context (e.g. higher-level goals, stress), dynamic conditions, and team coordination (…).”
    1. With regard to time pressure, they estimate that 80% of decisions made by fireground commanders are made in less than a minute; many take only a handful of seconds.  They also study engineers who have tight deadlines; even though those deadlines are far in the future, they too face time pressure, but in a different sense than firefighters, compared to whom “… they are almost on vacation.”
    2. In terms of high stakes, mistakes when firefighting can cost people their lives.  In finance, fortunes can be lost on a single decision.
    3. Here they consider experts – the firefighters they study have an average of 23 years of experience. “In contrast, in most laboratory studies, experience is considered a complicating factor.”
    4. In terms of unclear goals, from the firefighting example, it’s not always clear whether you should try to save the building or simply spend your efforts ensuring the fire doesn’t spread.  Should you risk sending firefighters into a building if you don’t know whether anyone is inside?

Chapter 2: Learning From Firefighters

  1. The initial hypothesis was that firefighters quickly pruned decision making to select between two options.  It seems that in most cases they didn’t even select between two; one option simply emerged.  One firefighter said “I don’t make decisions … I don’t remember when I’ve ever made a decision.”
    1. While creating options and selecting between them is something people often do (for example, when selecting a job), there is simply no time to consider options in time-critical scenarios
    2. <In Kahneman, I recall that the process is simply generating a single solution, simulating it mentally and then revising it/discarding it if (severe) problems arose when doing the rollout>
  2. In terms of the firefighters, their experience seemed to blend together – they didn’t try to find a “nearest neighbor” mapping from the current crisis to a specific event in their experience, though in some cases they would recall particular occurrences from a previous fire while fighting the current one

Chapter 3: The Recognition-Primed Decision Model

  1. As mentioned, firefighters just seem to know what to do – even when it turns out the plan they were executing has failed, they will immediately summon another solution to deal with the scenario
    1. This conflicted with their hypothesis that people always choose between (at least) two decisions
  2. “The commanders’ secret was their experience let them see a situation, even a nonroutine one, as an example of a prototype, so they knew the typical course of action right away.”  This is what they call the recognition-primed decision model
  3. Sometimes though, there is nothing at all to map to because the situation is really unique and they simply have to invent a solution
  4. As mentioned before, when someone needs to take a while to consider options, it is generally by thinking something up, deciding whether it’s good or not (through rollouts), and then, if not, thinking of something else, as opposed to a direct comparison between options
    1. This is called the singular evaluation approach
    2. This is basically the same idea as Herbert Simon’s satisficing – especially in time-critical situations you don’t have time to find the best option, so just find one that will do
    3. Experts will generally come up with a workable solution on the first shot, so they rarely have to consider more than one option
  5. They thought initially that it would be a characteristic of amateurs that they would jump to some option and that experts would carefully deliberate, but the actual data indicated the opposite
  6. “… there are times for deliberating about options.  Usually these are times when experience is inadequate and logical thinking is a substitute for recognizing a situation as typical… Deliberating about options makes a lot of sense for novices, who have to think their way through a decision.”
  7. Out of all the cases studied, 127 of 156 decisions were recognitional decisions / singular evaluations
    1. Most of the comparative evaluations came from situations where the decision makers were novices or were somehow outclassed by the situation
  8. When creating a plan it is necessary to identify “… what types of goals make sense (so that priorities are set), which cues are important (so there is not an overload of information), what to expect next (so they can prepare themselves and notice surprises), and the typical ways of responding in a given situation.  By recognizing a situation as typical they also recognize a course of action likely to succeed.  The recognition of goals, cues, expectancies, and actions is part of what it means to recognize a situation.  That is, the decision makers do not start with the goals or expectancies and figure out the nature of the situation.”
  9. The 156 decisions they recorded corresponded to the difficult decisions while undertaking the tasks; the simple/routine stuff was left out
  10. The RPD model stands in contrast to the traditionally accepted models of two-option comparison, and to work by Janis and Mann saying that people generally try to avoid decision making because it is difficult, but that when they do decide, they use a very heavyweight process to come up with an answer; the approach here is quite the opposite.
    1. “Janis and Mann probably did not intend this advice for time-pressured situations, but the RPD model predominates even when time is sufficient for comparative evaluations.  Yet in one form or another, Janis and Mann’s prescriptive advice is held up as an ideal of rationality and finds its way into most courses on cognitive development.”
    2. The comparative approach is more appropriate when dealing with novices, or when working in teams, where it provides a way to pool knowledge, reach agreement, and produce a process everyone can agree on
  11. The implication of the chapter is that you don’t make someone an expert by having them exhaustively enumerate options and consider their values.  In fact, when you do that you “… run the risk of slowing the development of skills.”  What works best is experience, and training in which trainees decide what they would do at a pace similar to what they would face in the real task.
    1. “The design of the scenarios is critical, since the goal is to show many common cases to facilitate a recognition of typicality along with different types of rare cases so trainees will be prepared for these as well.”
    2. “The emphasis is on being poised to act rather than being paralyzed until all the evaluations have been completed.”
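
The singular evaluation / satisficing loop described in this chapter can be sketched as a short program.  This is my own illustrative rendering, not Klein's; the function names, the toy ladder example, and the acceptability test are all invented for the sketch:

```python
def singular_evaluation(options, simulate, acceptable):
    """Satisficing: consider candidate actions one at a time, in the order
    experience suggests them, and commit to the first one whose mental
    rollout looks workable -- no side-by-side comparison of options."""
    for option in options:
        outcome = simulate(option)      # the mental rollout
        if acceptable(outcome):
            return option               # good enough -> act immediately
        # otherwise discard (or modify) it and let the next option emerge
    return None  # nothing recognized: the situation calls for inventing a solution

# Toy example: pick the first ladder long enough to reach a 7 m window.
ladders = [3, 5, 8, 12]  # lengths in metres, in the order they come to mind
choice = singular_evaluation(ladders, simulate=lambda l: l,
                             acceptable=lambda h: h >= 7)
# choice is 8, not 12: the first workable option wins, not the best one
```

Note that the loop never ranks the remaining options against each other, which is exactly the difference from the rational-choice model the chapter argues against.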

Chapter 4: The Power of Intuition

  1. Intuition is difficult to study because it’s difficult to describe and people often don’t even know when they are using it
  2. A study showed that the brain basically makes a decision before the individual is even aware that it happened
  3. In one case a firefighter lieutenant saved his crew by deciding to pull his men out of a building with a fire that reacted strangely to initial attempts to put it out but otherwise did not seem particularly dangerous (the building collapsed shortly after the men left).  The firefighter claimed that he had used ESP, but on drilling down it turned out that he had been spooked by the unusual circumstances and pulled the men out to figure out what was happening.
    1. After they (Klein and the firefighter) worked through the experience “I think he was proud to realize how his experience had come into play.  Even so, he was a little shaken since he had come to depend on his sixth sense to get him through difficult situations and it was unnerving for him to realize that he might never have had ESP.”
    2. “… he did not seem to be aware of how he was using his experience because he was not doing it consciously or deliberately.  He did not realize there were other ways [aside from ESP] he could have sized the situation up.”
  4. Here intuition can be considered recognizing something without knowing how or if recognition is happening
  5. The perspective here is that intuition comes from experience
  6. In the above example, the firefighter’s experience didn’t give him facts from memory, but it did change the way he saw and reacted to the situation.  Additionally:
    1. He was drawing on things that didn’t exist in his memory (this fire didn’t react in a way he had experienced before, so there was nothing concrete to think of)
    2. He wasn’t making decisions based on particular events but his aggregate experience and expectations of what should happen
  7. In its simplest instantiation, RPD is a model of intuition
  8. Work by others “… shows that people do worse at some decision tasks when they are asked to perform analyses of the reasons for their preferences or to evaluate all the attributes of the choices.”
  9. Of course, intuition isn’t infallible
  10. In the case where intuition is wrong, comparing the way events actually unfold to the way they were expected to allows us to figure out when we make mistakes
  11. There is another example of how a radar operator in the Gulf War was able to discern that a radar blip was an incoming missile and not an incoming friendly aircraft – the distinction was extremely subtle and took many people a long time to figure out, but this operator had the correct intuition, even though he was unsure why
  12. Another example is how nurses decide whether newborns have an infection – in some cases, by the time an antibiotic starts to take effect the infection is already too far along.  Nurses were often able to correctly identify when babies were infected, but were unable to identify what rules they used.  Careful investigation of their methods, and case studies of their decisions, revealed the cues they relied on.  Half of the rules eventually discerned were new to the medical literature, and a number of the cues are the opposite of what they would be for adults.  Ultimately, their decisions were usually based on the combination of a number of subtle cues
  13. The best way to get people to improve their performance is simply to have them experience increasingly complex scenarios.  Compiling case studies is also very good (especially if someone can’t get access to sufficient experience, or obtaining it would be very dangerous).  Simulations that are crafted with care can be even better: “A good simulation can sometimes provide more training than direct experience.  A good simulation lets you stop the action, back up and see what went on, and cram many trials together so a person can develop a sense of typicality.”

Chapter 5: The Power of Mental Simulation

  1. “During a visit to the National Fire Academy we met with one of the senior developers of training programs.  In the middle of the meeting, the man stood up, walked over to the door, and closed it.  Then in a hushed voice he said, ‘To be a good fireground commander, you need to have a rich fantasy life.'”
    1. What he meant was that it’s important to be able to imagine what scenarios could have led to the current situation, and to be able to imagine how the situation can evolve going forward.
  2. By fantasy, he meant ability to imagine and simulate – to be able to do rollouts
  3. Mentions a paper by Kahneman and Tversky on the simulation heuristic: a person builds a simulation to explain how something may happen; if the simulation requires too many unlikely events, the outcome is judged implausible
  4. Mental simulations tend to be extremely simple, relying only on a “… few factors–rarely more than three.  It would be like designing a machine that had only three moving parts.  Perhaps the limits of our working memory had to be taken into account.”
    1. Usually limited to about 6 transitions, perhaps also a function of the limit of working memory
  5. 3 parts and 6 transitions are basically the limits we have to consciously work in
  6. Expertise in the area allows for more powerful modeling; several transitions can be compressed into one
    1. Basically more experience allows for more accurate, higher level abstractions that would leave novices bogged down
  7. Being able to work like this is critical for those who deal with code <Indeed, I think programming requires more use of a large working memory than any other task I’ve done, including symbolic math>
  8. “Considering all these factors, the job of building a mental simulation no longer seems easy.  The person assembling the mental simulation needs to have a lot of familiarity with the task and needs to be able to think at the right level of abstraction.  If the simulation is too detailed, it can chew up memory space.  If it is too abstract, it does not provide much help.”
  9. They then went on to study what leads to failure of mental simulation (they previously gave an example where multiple agents weren’t able to communicate, and disaster ultimately occurred because, for each party, getting into the correct frame of mind seemed too implausible)
  10. He gives an example of an economist who extremely accurately predicted what would happen to Poland under basically pure capitalism after the fall of communism.  The predictions came down to monitoring 3 variables: unemployment, inflation, and foreign exchange
    1. <It’s worth mentioning that Kahneman reports that studying political scientists and pundits in aggregate reveals they are quite poor at making predictions as a whole.  Maybe this is a small sample size issue, or maybe the real experts are the only ones that are actually good at making predictions and got washed out in the large sample Kahneman took.  It may be the latter, because even this individual’s students who spent time in Eastern Europe, and another political science professor who spent time in Poland (but wasn’t an expert in the area), weren’t able to make simulations at all>
  11. “…without a sufficient amount of expertise and background knowledge, it may be difficult or impossible to build a mental simulation.”
  12. Mental simulation serves to explain the past and predict the future.  In either case we work from the current state and then work backwards or forwards
  13. Once a plan is constructed, it is evaluated according to:
    1. Coherence: does it make sense?
    2. Applicability: will it accomplish what I want?
    3. Completeness: does it do too much or too little?
  14. When doing a rollout, you may realize there are transitions that rely on data you don’t know much about (that is, not that the transition is unlikely, but that confidence in the model is low)
  15. Experts can use intuition to predict if a strategy is effective, even without studying each particular step
  16. One problem with mental simulations is that sometimes we can get stuck on one and be unwilling to discard it, even in the face of significant counter-evidence
    1. Perrow calls these de minimus explanations; they simply try to minimize inconsistency. “The operator forms an explanation and then proceeds to explain away the disconfirming evidence.”
  17. Naturally when pressed for time we may not find errors
  18. Another problem is that in some cases people will look for something wrong with a plan, find one thing, and then consider the evaluation done, while there may be other problems with the plan
  19. Usually we can catch when a plan starts to seem infeasible, but on occasion patching it repeatedly by small amounts can lead to something that won’t work, but also seems ok
    1. Catching yourself in the error is called snap-back
    2. Being led along by small incremental patches in this way is called the garden path fallacy
  20. He also discusses the idea of doing a pre-mortem (imagine the plan went wrong; now explain why).  Kahneman also discusses this idea, though I don’t recall whether it was in terms of his discussion of Klein
    1. This works well because by design people should not just say a plan is fine as they are instructed to assume that it was a failure and then work backwards from there (as opposed to their natural tendency to say it worked well)
    2. “It takes less than ten minutes to get the people to imagine the failure  and its most likely causes.  The discussion that follows can go on for an hour.”
    3. These are useful not only because they can more successfully reveal flaws; they also can enable more effective contingency planning
  21. In talking about how planning at Shell went wrong: “This example shows how mental simulations can gain force when made explicit; the executives responded more favorably to decision scenarios than to forecasts based on statistics and error probabilities.”
    1. Kahneman would describe this as a system 1 / wysiati error
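
The limits on mental simulation from items 3–5 (the simulation heuristic, plus the roughly-three-parts/six-transitions working-memory bound) can be rendered as a toy check.  The thresholds below are my own illustrative assumptions, not numbers from the book:

```python
def rollout_plausible(transitions, max_transitions=6, max_unlikely=1,
                      unlikely_below=0.3):
    """A mental simulation is a short chain of transitions, each with a rough
    subjective likelihood.  Reject the chain if it is too long to hold in
    working memory, or if it leans on too many unlikely events (the
    Kahneman/Tversky simulation heuristic)."""
    if len(transitions) > max_transitions:   # ~6-transition working-memory limit
        return False
    unlikely = sum(1 for p in transitions if p < unlikely_below)
    return unlikely <= max_unlikely

# A chain of ordinary events is plausible; one hinging on two long shots is not.
rollout_plausible([0.9, 0.8, 0.7])   # True
rollout_plausible([0.9, 0.2, 0.1])   # False: two unlikely steps
```

The de minimus failure mode from item 16 would amount to editing the probabilities upward after the fact instead of letting the check reject the chain.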

Chapter 6: The Vincennes Shootdown

  1. The name is derived from the USS Vincennes; the chapter concerns decisions made by its commanding officer
  2. In one case he correctly decided not to shoot down enemy fighter planes that he correctly deduced were simply trying to provoke him and didn’t really intend to attack.
  3. In the other case he incorrectly shot down an Iranian passenger jet, incorrectly taking it as an enemy plane
  4. The rest of the chapter will deal with this case study
  5. In this case, the missile was fired 9 seconds after the plane was recognized as potentially dangerous
  6. That day, the ship was attacked by other smaller ships, and its helicopter was fired on.  There were plenty of other reasons why the CO would consider the plane hostile
  7. Unfortunately, 2 key pieces of information were in error: that the plane had begun a descent, and that it was using an enemy beacon.
  8. Another ship in the area (which seems to have had less sophisticated tools for dealing with aircraft) correctly figured the plane was a passenger plane
  9. “Once the Vincennes’ crew became convinced that the track belonged to an F-14, that assumption colored the way they treated it and thought about it.”
    1. Context has a strong influence on decision making, especially when the data is highly ambiguous
  10. The interpretation started out the wrong way because an operator picked up an enemy military beacon – due to operator error – so that one mistake led to more mistakes
  11. US Navy analysis found “After this report of Mode II [enemy beacon], [a crew member] appear[ed] to have distorted data flow in an unconscious attempt to make available evidence fit a preconceived scenario (‘Scenario fulfillment’).”
  12. Other data shows that regardless of the flawed data they had, even the good data would lead one to believe that the plane was hostile; most of the analyses suffer from hindsight bias
  13. “If… the Vincennes had not fired and had been attacked by an F-14, the decision researchers would have still claimed that it was a clear case of bias, except this time the bias would have been to ignore the base rates [one base rate would have it at 98.7% chance of being hostile], to ignore the expectancies.  No one can win.”
  14. Basically there were many issues that caused errors that were outside the control of the sailors.
  15. The point they make for this chapter is that behavior followed “… the same pattern: the use of mental simulation to evaluate and rule out possible explanations.”

Chapter 7: Mental Simulation and Decision Making

  1. “Mental simulation shows up in at least three places in the RPD model: diagnosing to form situation awareness, generating expectancies to help verify situation awareness, and evaluating a course of action.”
  2. Situation awareness is making sense of a situation from clues, which is especially useful when we find ourselves in an atypical situation; an example of this is the way a mechanic or doctor makes a diagnosis
  3. “Situation awareness can be formed rapidly, through intuitive matching of features, or deliberately, through mental simulation.”  May be from mapping from the present to the past experience, or to choose between a number of candidate options
  4. Expectancies are developed by doing rollouts.  With more experience these become more exact.  “The greater the violations and the more effort it takes to explain away conflicting evidence, the less confident the decision maker feels about the mental simulation and diagnosis.”
  5. We generally don’t compare options because we are satisficing, not optimizing: “We think of grand masters as rational and analytical.  When de Groot asked them to think aloud while finding the best move in a chess problem, they relied on mental simulation to evaluate promising courses of action.  In de Groot’s published records, only five cases out of forty games, … show the grand masters comparing the strengths and weaknesses of one option to another.  The rest of the time they were rejecting moves or figuring out their consequences.”
  6. This is to say it’s not that we never do comparisons – when choosing a car to buy, for example, we often do
    1. There are a number of strategies people use to do such comparisons (which generally simplify the problem), such as just picking the option that is the best in a single dimension, or iteratively thresholding items from most important to least important dimension
    2. Also more common among novices
    3. Which one is used depends on a number of criteria, such as the amount of time allowed, importance of the decision, whether the decision needs to be justified, etc…
    4. In other cases, however, this option selection may be made according to criteria that are not easily describable.  That is again the case with chess players; they do an iterative deepening based on criteria that are tough to describe
  7. When there is time pressure, decisions are more likely to be made according to RPD method as opposed to comparative method
    1. In these situations, “Virtually no time was spent in any comparisons of options.  In fact, the bulk of time was spent in situation assessment rather than alternative generation…”
  8. Significance of the RPD model:
    1. Seems to describe what people actually do most often
    2. Explains how people can use experience to make decisions
    3. Demonstrates that people can make effective decisions w/o using a rational choice strategy
  9. Unlike more formal systems of decision making, RPD is naturalistic.  When other schemes are taught, they seem to just reduce performance
  10. RPD says that to get better at something, experience is key, but there are factors that emerge among experts:
    1. Practice is good – it should be undertaken such that the experience obtained in each episode of practice has a goal and evaluation criteria
    2. Have lots of experience
    3. Get accurate feedback that is diagnostic (without feedback, accumulated experience may not be useful)
    4. Get more use of past experiences by reviewing them from time to time to try and learn new lessons (for example, chess players don’t have much time to deliberate over earlier decisions during a match; they do post-mortems)
  11. “The strategies provide a concept that is consistent with principles of adult learning in which the learner is assumed to be motivated, and the emphasis is on granting autonomy and ownership to the learner rather than having the trainers maintain tight control.”
  12. Inasmuch as a task can be broken down into individual subcomponents, it’s useful to identify those components and practice each independently
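
One of the simplifying comparison strategies from item 6 – iteratively thresholding from the most to the least important dimension – can be sketched as code.  This is my own lexicographic-elimination sketch; the function names and the toy car data are invented for illustration:

```python
def lexicographic_choice(options, dimensions):
    """Compare options one dimension at a time, most important dimension
    first, keeping only the options tied for best on it; stop as soon as
    a single option remains."""
    candidates = list(options)
    for score in dimensions:                 # dimensions ordered by importance
        best = max(score(o) for o in candidates)
        candidates = [o for o in candidates if score(o) == best]
        if len(candidates) == 1:
            break
    return candidates[0]

# Toy car choice: fuel economy matters most, then (lower) price.
cars = [
    {"name": "A", "mpg": 40, "price": 20000},
    {"name": "B", "mpg": 40, "price": 18000},
    {"name": "C", "mpg": 35, "price": 15000},
]
pick = lexicographic_choice(cars, [lambda c: c["mpg"], lambda c: -c["price"]])
# A and B tie on mpg; B wins on price
```

Note that most options are never examined on most dimensions, which is what makes strategies like this cheap enough to use under time pressure.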

Chapter 8: The Power to Spot Leverage Points

  1. A leverage point is a place where a small amount of effort can lead to a large change
  2. A case study is given where a doctor relied on past experience that was in many ways different from a present challenge, but used that experience to develop a solution to a complex and time-critical problem
    1. The trick was that the two problems had the same leverage point; what worked in one worked in the other
  3. “Skillful problem solving is impressive because after the fact, the solution seems obvious, yet we know that, without any guidance, most people would miss the answer.  They would not even know that an answer was possible.”
  4. Their concept of leverage points first arose in chess, where players would often seek a particular situation that would give them an advantage, such as being in a position to attack the opponent’s queen
  5. It can also be other sorts of strategic decisions, such as IBM investing in the System/360 (which they say cost more than the Manhattan Project)
  6. “Leverage points are just possibilities–pressure points that might lead to something useful, or might go nowhere.  Expertise may be valuable in noticing these leverage points.”
  7. “Leverage points provide fragmentary action sequences, kernel ideas, and procedures for formulating a position.  Experts seem to have a larger stock of procedures that they can think of to use as starting points in building a new plan or strategy… Novices, in contrast, are often at a loss about where to begin.”
  8. A leverage point that can work against you is called a choke point
  9. “Once we come up with leverage points, we need to fill in the remaining details.”

Chapter 9: Nonlinear Aspects of Problem Solving

  1. “The concept of leverage points opens the way to think about problem solving as a constructive process.  It is constructive in the sense that solutions can be built up from the leverage points and that the very nature of the goal can be clarified while the problem solver is trying to develop a solution.”
  2. This approach can be traced back to Karl Duncker of the Gestalt school.  “Rather than treating thought as calculating ways of manipulating symbols, the Gestaltists viewed thought as learning to see better, using skills such as pattern recognition.”
  3. “To solve ill-defined problems, we need to add to our understanding of the goal [or decide what it would even be] at the same time we generate and evaluate courses of action to accomplish it.  When we use mental simulation to evaluate the course of action and find it inadequate, we learn more about the goal we are pursuing.”  Failures also lead to new understanding
  4. They mean nonlinear in that there is not necessarily one smooth transition from start to finish while attacking a problem.  It may require multiple iterations of deepening understanding, planning, simulating, and revising, and the order in which these steps occur can vary.
  5. Part of problem solving tells us how long we expect a solution to take
  6. “We have to balance between looking for ways to reach goals and looking for opportunities that will reshape the goals.”
  7. Standard research on decision making focuses on well-defined problems, as this makes many types of analyses possible.  Many problems we face in the real world, however, are poorly-defined.  This makes the first step, of defining the goal,  difficult.  If you wait for a well-defined problem, you can’t really do much of anything
  8. Goes on to how AI approaches to problem solving aren’t what we do <not really taking notes on this>.  They also work in problem spaces we don’t always work in (well defined, clear goal)
    1. The main point is that AI approaches are similar to the analytical approaches RPD stands in opposition to, and if we are usually doing RPD (at least at a conscious level), then the way classical AI solves problems is probably not what we do
  9. Example of the Apollo 13 mission

<Ok, out of time on this book – think I got the gist.>
