Last year I read Daniel Kahneman’s “Thinking, Fast and Slow”. It took me a long time – definitely reading slow, for me – but I think that was down to his style rather than the book’s content. I read it because two people from very different backgrounds recommended it within the space of a week, and despite being somewhat hard work, bits of it have stuck: they keep recurring in my thoughts.
So I thought I’d share some of those, and recommend it, too. (I haven’t looked at the book for the last six months, and I am deliberately writing from memory. So please don’t take these examples as gospel, and before quoting them, please look to Kahneman’s original text!)
Kahneman’s work can be considered an academic counterweight to Malcolm Gladwell’s “Blink”. Gladwell set out, I think, to suggest that we should trust our intuition (albeit that many of the examples he wrote about seemed to be based on what happens when intuition goes wrong – policemen shooting innocent men, for instance).
Kahneman, a prolifically able psychologist (and Nobel Prize winner in economics, for his work on behavioural economics), sets out to describe how the mind works, describing the unconscious, instinctive, intuitive brain – his “system 1” – and the conscious, analytical brain – “system 2”. System 1 is much faster and cheaper to run than system 2, and this is why for most things we are happy to let system 1 get on with it. His book is full of fascinating stories that illustrate how system 1 can lead us to make some very counter-intuitive decisions, often at his own expense.
I started the book very sceptical. Despite all the evidence Kahneman provides, what he describes just didn’t sound like me. I’m analytical, rational, sensible. But he also describes how just about everyone thinks that, too. And left to its own devices, system 1 seems to get us into several bad habits.
For instance, it makes us bad at estimating things, particularly our own (and others’) expertise. Kahneman tells a story of how he was part of a team writing a new curriculum for a psychology course. After several months, when they thought they were making good progress, he asked another member of the team, who had a lot of experience of the process, how long it should take. The answer was something like “a good team will take a couple of years”; and when asked whether this was a good team, the answer was a resounding “no”! This was a team made up of very rational people – psychologists and educationalists – who frankly should have stopped right there and seen what they could change to achieve a better result. But instead, despite the insight they had received, they ploughed on as if nothing had changed. When Kahneman left the project several years later, it still hadn’t been completed.
In another situation, he describes undertaking leadership assessments for the Israeli army. He understandably decided to validate the process, to see whether the assessments predicted future success as a leader in the army. They didn’t. The predictions were no better than chance. And yet Kahneman continued his work assessing candidates, despite knowing that it was a complete waste of time.
His work in behavioural economics led to Kahneman working with some stockbrokers. He looked at the firm’s remuneration and bonus structure. Analysing individuals’ results, he showed that success was random, and hence that the large bonuses paid for results were completely unwarranted. He told the board, producing his evidence. The board, of course, did nothing, because their whole belief system (and the firm’s culture) was based on rewarding success. No one accepted his evidence; they – the experts – knew better than the statistics.
Another story that really stuck with me is how bad system 1 is at assessing memories. It recalls only the last experience of something, rather than the totality of that experience. So if you’ve been listening to a piece of music on vinyl, for instance, and it ends with a scratch, you remember the scratch and not the forty minutes of pleasure that came before it. In an experiment to test this, subjects preferred an extended period of pain that ended in a reduction of pain over a much shorter period of pain that ended suddenly. System 1 remembers the pain at the end rather than the totality of the pain. The lessons here for anyone designing any process involving customers are obvious. Make it end with a smile!
I think these four simple stories illustrate how irrational even seemingly rational, analytical people can be. This is painful – these are people like me – but it is a valuable lesson, too.
I think the best lesson is to stop and think. This brings the conscious, rational system 2 to the fore. It is harder work, and slower, than letting system 1 determine our actions, and maybe not always appropriate. But it also leads to better, more mindful outcomes. (For instance, it may well be why people who keep “gratitude lists” report being happier – because they are bringing their conscious mind to bear, rather than letting system 1 remember only those last painful moments. There seem to be real benefits to keeping a journal or diary: it helps us bring a deliberate, reflective dimension to our otherwise irrational, intuitive minds.)