Daniel Kahneman is widely regarded as the most influential psychologist alive. He won the Nobel Prize in Economics (2002) for his work on judgment and decision-making under uncertainty, much of it done jointly with his late collaborator Amos Tversky. He is the author of the bestselling books Thinking, Fast and Slow and Noise: A Flaw in Human Judgment (written with Olivier Sibony and Cass Sunstein).
JOSEPH WALKER: Daniel Kahneman, welcome to the podcast.
DANIEL KAHNEMAN: Pleasure to be here.
WALKER: Danny, there are many qualities of yours I admire, but perhaps the quality I admire most is your intellectual honesty, and a couple of moments exemplify this for me. First was your response to the replication crisis with respect to priming. Obviously, there was a quite famous and emphatic chapter in Thinking, Fast and Slow, and there's this blog post where you really graciously and humbly retract that chapter.
And then more recently, there's this incredible lecture you did on the topic of adversarial collaboration for Edge.org, and reading it, I was just stunned by how intellectually honest you were. Let me quote a couple of passages from the essay. First, referring to priming, you say, "It turns out that I only changed my mind about the evidence. My view of how the mind works didn't change at all. The evidence is gone, but the beliefs are still standing. Indeed, I cannot think of a single important opinion that I've changed as a result of losing my faith in the studies of behavioural priming, although they seemed quite important to me at the time."
And then later you go on to make the general point that, "To a good first approximation, people simply don't change their minds about anything that matters." I guess my first question is: I find it hard to fathom that you can be simultaneously so self-aware and also, as you admit, just like the rest of us, not good at changing your mind when challenged. Have you gotten any better at changing your mind as you've gotten older?
KAHNEMAN: No. I think I'm actually known for changing my mind. This is one of the traits that all my collaborators complain about, because I keep changing my mind. But I keep changing my mind about small things. What I discovered, in part while preparing that talk on adversarial collaboration, is that there are things on which I just won't change my mind. Some of these I've believed since I was 17 or 18, so they're certainly not going to change now.
WALKER: And what are some of those beliefs?
KAHNEMAN: Well, they're tastes more than beliefs. There is a kind of psychology I like and a kind of psychology I don't like. There are methods that appeal to me and methods that I find sort of repugnant. Many tastes like that. Among the competing psychological theories of the 20th century, there was a holistic gestalt theory and then there was a behaviouristic theory, to which I attributed a sort of false precision. Since I was 18, I had a very clear preference for the holistic over the falsely precise. I've kept that taste all my life. And it's just a taste. It's not any better than the other taste. It's just my taste.
WALKER: I actually had a question around the topic of tastes and that was: Should psychologists worry less about how descriptively accurate their models are and more about adopting positions that are stronger and starker than what they might actually believe, in order to contribute to an intellectual dialectic?
KAHNEMAN: Among matters of taste, there is a distinction between people who prefer to be precisely wrong and people who prefer to be approximately right. I'm on the side of those who'd rather be approximately right. I was married to my late wife, Anne Treisman, who was an eminent psychologist, and she very clearly was sticking her neck out all the time, theoretically, taking extreme positions. Often she was right, sometimes she was wrong, but she defended those positions and found ways of defending them. Whereas I'm sometimes not very easy to refute because I'm fairly vague, but I think I'm approximately right a lot of the time.
WALKER: I see. Who's the most intellectually honest person that you've met or interacted with in your life?
KAHNEMAN: That's a hard one. I think most of the people I've interacted with have assumed they were intellectually honest. All of us are not honest in more or less the same ways — we’re defensive. I find that a difficult question because it's not a trait.
The default is to be honest. I can't think of people I've interacted with whom I consider dishonest. There are a few, and I won't name them.
WALKER: The Undoing Project has my favourite ending of any nonfiction book. In fact, I think I teared up when I was reading it. I was watching an interview with Michael Lewis, and he said that story — of you waiting by the phone, and when it didn't ring, finally allowing yourself to think about what it would be like to win the Nobel Prize, what you would do, and how you would do for Amos what he had never done for you, or never had the chance to do for you — you only told him that story, like, seven years into your interaction with him?
KAHNEMAN: I don't remember when I told him that story. It was pretty straightforward, the story of waiting for the phone call. It was actually quite amusing. I don't remember what he wrote about it. The true story is that I did know that this was coming up. There had been an audition for the Nobel Prize: there is sometimes a sort of workshop with the Nobel committee, where clearly they're sizing you up. That had happened the year before. I knew that either I was going to get it or, very likely, I just wasn't going to get it. So we were waiting by the phone, because you know when it's going to happen. The phone didn't ring for a long time. My wife went to exercise and I went to write a letter (I still remember it was a reference letter for somebody), and then the phone rang. They take elaborate precautions so that you believe it's not a prank.
I walked into my wife who was exercising, and I told her, “I got it.” And she said, “You got what?” That was the beginning of a very exciting day.
WALKER: Was there anything important that the book missed?
KAHNEMAN: You mean Thinking, Fast and Slow?
WALKER: The Undoing Project.
KAHNEMAN: Oh, The Undoing Project. Well, The Undoing Project, it’s not fiction, it's non-fiction, but the characters are drawn to be quite extreme. There are quite a few things where I would have written that differently.
WALKER: In what specific ways?
KAHNEMAN: Well, there is an incident at the very end of the book, when Amos, who had been my closest friend, who was like a brother to me, of course… We had been for each other, I think, the most important person in each other's life, because we had done so much to change each other's life. We were having a conversation — that must have been a couple of days before he died, about three days. He said, “I wanted you to know that of anybody I've known, you are the one who caused me the most pain.” And I answered without hesitating: “Ditto.” The same. Michael couldn't bring himself to write that. He softened it, although I had told him “ditto.” I was quite annoyed with him, because the ditto was… that expressed our interaction, of course: Amos expected me to say ditto, and we went on and talked as if nothing had happened.
WALKER: I see.
KAHNEMAN: That's the one thing, actually, that I felt Michael shouldn't have done.
WALKER: When you say that was characteristic of your interaction with Amos, is that like an Israeli thing or was that special about your interaction?
KAHNEMAN: It's an Israeli thing, but we were really very close and very open with each other. It didn't come as a huge shock when he told me what he told me, and I'm sure it didn't shock him to hear my answer. It was the kind of interaction we had.
WALKER: I have some questions I really want to ask you about the concept of great partnerships, and speaking here about great partnerships, like world-class partnerships, as opposed to merely good partnerships: Watson and Crick, Lennon and McCartney, Amos and Danny. In a strange way, I almost feel jealous of your partnership with Amos. I hope that I can find that at some point in my life.
KAHNEMAN: Yeah, I think you're right to be jealous. It's an extraordinarily fortunate thing when it happens.
WALKER: Yeah. I want to ask whether we can systematise the formation and maintenance of world-class partnerships or whether, on the other hand, there's just something kind of mysterious and ineffable and unpredictable about them.
KAHNEMAN: Well, I mean, it's clearly unpredictable, and I'm not sure that it's the same everywhere, although quite possibly it's true for the better ones. The mechanism in my interaction with Amos, I think what happened was that very often he understood me better than I understood myself. There is a stage in creative thinking when you say things that later turn out to be important, but you don't yet understand what you've said. You have a glimmer. And he would immediately see through the fog of what I was saying much more clearly than I did. And that is an intense joy, and it also really allows a kind of creativity that a single person doesn't have.
WALKER: Was that the key way in which you were complementary? Were there any other ways?
KAHNEMAN: We were complementary in many ways. We had different styles. I was better at intuition, I think; he was better at precision, and that was very clear. At the same time, I could understand his precision and he could appreciate my intuition, and I had a lot of precision and he had a good intuition. But we were… to some extent, we were different people, although we could complete each other's sentences.
WALKER: Have you read Montaigne's essay ‘On friendship’?
KAHNEMAN: I must have done it when I was a child, but I wouldn't remember.
WALKER: There's this lovely passage where he talks about his best friend. They only became friends as adults. His best friend's name was Étienne de La Boétie.
KAHNEMAN: Oh, Étienne de La Boétie. Oh, yes.
WALKER: Yeah, he's obviously famous in his own right. There's this lovely line where Montaigne is trying to articulate what made their chemistry so special. He says, “I feel that it cannot be expressed except by replying: ‘Because it was him; because it was me.’” That reminded me of your partnership with Amos.
KAHNEMAN: Beautiful. That is beautiful. Indeed, there is something that feels unique about the interaction, but at the same time, it was fairly clear while it was happening why it worked. And clearly we were better as a pair than either of us was alone. We did good work individually, separately, but the work we did together is clearly one step beyond: a combination of amused creativity and a fair amount of precision. That combination really came from the interaction.
WALKER: That leads me to my next question, and that is: are pairs the fundamental creative unit? So, all else being equal, would it actually be better just to have two people working on a problem or a new idea than, say, three or four?
KAHNEMAN: I think it would be very unlikely — it would be very difficult — to imagine a threesome interacting in that particular way. I'd never thought about it that way. I'm inclined to agree that this particular kind of interaction, where you build on each other and you improve each other in the interaction, feels like a pair interacting.
WALKER: There's this really cool book. I actually have a copy there, which maybe I'll give to you at the end, called Powers of Two, about this idea. I was also reflecting on it in the context of… So, as I told you before we started recording, I'm interviewing Katalin Karikó tomorrow in Philadelphia, and she actually did her work in partnership with a guy called Drew Weissman at the University of Pennsylvania. They worked together intensely for almost a decade, but only as a pair. That was because they couldn't get grant funding to support more researchers joining their team. But I think if you reflect on it, it probably turned out that that was a good thing for their research.
KAHNEMAN: Yeah. I did quite a bit of work without Amos, but I always had the feeling that if I had done it with him, it would have been better.
WALKER: Should researchers, should startups, think more about, where possible, creating teams of two as opposed to adding more people to a problem?
KAHNEMAN: I'm not sure that teams can be created by somebody else. Teams have to develop, and pairs have to develop. But as a unit, taking two people: that I think may be a good idea. You may want a team that consists of several pairs because for many projects, two isn't large enough.
WALKER: There are some things that can't be done by two. Sure. I'd like to talk about rationality. In my view, and obviously the view of many others, your work with Amos is a knockout blow to the idea that von Neumann and Morgenstern’s theory could be a description of real human behaviour. So Homo Economicus is clearly descriptively inadequate. Is it also inadequate as a norm? Or how has your thinking on the correct normative model of rationality changed over time?
KAHNEMAN: Well, it's important to see what… Consistency of beliefs and preferences, which is the essence of rationality in that model — it's important to see what it implies. It's not the same thing as reasoning correctly, that is, saying two things that are consistent with each other in the same conversation. It's that the whole system of your beliefs and preferences, taken together, makes up a consistent system. And that is psychologically a non-starter, simply because our beliefs and our preferences are so context dependent, and the context is so specific and momentary, that this type of consistency is not conceivable. And being inconceivable, it's not a very useful norm either. Put it this way: there were many attempts to create a looser model of rationality that would accommodate certain paradoxes of choice, and we never believed in that.
We never thought that there would be an alternative, more tolerant model of rationality that would be usefully descriptive. So that never tempted us.
WALKER: Interesting. Okay, so do you have any hunches as to what a better normative model of rationality would be?
KAHNEMAN: No. I mean, I don't use the word; I prefer to avoid it. For me, rationality is a technical term. It is rationality in the von Neumann-Morgenstern decision theory, or in economics. And that's it. Otherwise, I think I would ask of people that they be reasonable, because “rational”: that word is taken, so far as I'm concerned, and it's taken, in a very precise way, by something that is descriptively a non-starter.
WALKER: Right. Without putting words in your mouth, does that imply that the rationality versus irrationality debate is just not very useful?
KAHNEMAN: Well, you know, it's been very productive. There are debates that will never be resolved, but they're exciting. It sounds like an important issue to debate whether man is rational, humans are rational or not. It sounds like a worthwhile enterprise, and a lot of good stuff came out of that. Our work, to a very large extent, came out of taking a stance against a technical definition of rationality. Some debates can be productive without any hope of resolving them. I think the rationality debate belongs to that class.
WALKER: I guess it's all about the dialectic.
WALKER: I want to ask some questions about an evolutionary approach to biases and heuristics. Are you familiar with Coren Apicella’s experiment on the endowment effect among the Hadza?
KAHNEMAN: I probably saw it. You'd have to remind me; at this stage, I don't store experimental results as well as I used to.
WALKER: No worries. The Hadza are one of the last hunter-gatherer societies on Earth; they live in northern Tanzania. In the experiment, participants are randomly given one of two coloured lighters that they use to light campfires, and then they're given the opportunity to exchange the lighter for one of a different colour. In similar experiments on Western populations, as you well know, because you've done some of the most famous ones, about 10% of people, give or take, trade whatever object they're given. But the Hadza in this experiment traded about half of the time, 50% of the time, which is what you'd expect from perfectly rational traders. So there was no endowment effect, although there was some endowment effect for Hadza living in more market-integrated camps. And so my question is: to what extent are biases and heuristics the products of culture rather than biology?
KAHNEMAN: Well, that separation of culture and biology is tenuous. I mean, they clearly are in interaction. You can clearly overcome a lot of biological tendencies through culture. I mean, we do not act naturally, you and I, in this situation; our interaction is conditioned by culture. I can readily see that in certain cultures you might have a norm of exchange where the polite thing is to exchange and not to hold on to what you have, even if people's natural tendency is to hold on. And I think that's true of babies: when you try to snatch something from a baby, there will be a reaction; the baby hangs on. In a certain way, I think people don't like losing things that are under their control. I do think it's very likely that there is an asymmetry between the importance of grabbing something that you don't have and the importance of holding on to something you do have.
That's how I think of the endowment effect. I don't think of it as a law of nature. I mean, clearly it's possible to overcome culturally.
WALKER: I see. The cultural norms are kind of overriding the biological programming.
KAHNEMAN: There are some instances of trading among animals, but it's not very common. The primary typical response, animal response, is to hang on to what you have.
WALKER: Should evolution be the unifying theoretical framework behind the heuristics and biases research program?
KAHNEMAN: There have been attempts along those lines: to say that if you assume that we have evolved to be as good as we can be, then if we have biases, the biases must be functional. I don't much see the point of that, because I think of biases of judgement, and the heuristics that lead to them, as a side effect of a kind of mental operation that in general works very well. It's an inevitable side effect of the way that we do things. I wouldn't segregate the biases and the flaws as a separate thing that you need a separate mechanism to explain. There is a mechanism that mostly explains behaviour that is quite functional, but under predictable conditions it leads to predictable errors.
WALKER: But if some cultural norms can override our biological programming, and earlier, when you were talking about the distinction between culture and biology not being so clear, were you maybe gesturing at dual inheritance theory and gene-culture coevolution?
KAHNEMAN: Well, I was, but I must say that kind of thinking has never been part of my thinking. I have never found it particularly useful to the kind of thing that I was doing. It has sometimes been used to defend rationality: the claim that people are ecologically rational, that they're adapted to their environment. This may or may not be the case. That's not the way I think about it. That's one of those matters of taste that we were talking about.
WALKER: I guess I think more of Joe Henrich's research than Gigerenzer’s here.
KAHNEMAN: Well, they don't exactly have the same position, but if you start from the point of view that what people do must be good, otherwise they wouldn't be doing it, that can lead you to some productive research, and I think it has led Gigerenzer in some productive directions. Henrich's emphasis on culture, again, is extremely compelling, but it doesn't account for everything. I think you can exaggerate the extent to which everything is culturally changeable. There is a difference, for example, between preferences (we were talking about the endowment effect earlier) and judgement and the heuristics of judgement. Preferences are fairly straightforward: you want one thing or you want another. Judgement involves an issue of complexity and of truth, of understanding reality the way it is, and it sometimes demands a level of complexity that people don't have.
So those are very different issues. Whether you can change preferences through culture, that's one thing. Whether you can improve people's judgements through culture much beyond where educated people are today, that, I think, is very doubtful, simply because culture is not going to change the limits of our attention, and it is not necessarily going to change the fact that there are limits to our computational ability. Those constraints impose limitations, I think, on how much can be accomplished or improved by thinking about culture, or by viewing every flaw as a cultural fact. I think many flaws in our reasoning are responses to the fact that our brain is limited.
WALKER: So speaking of improving people's judgments, do you predict that as AI systems are developed and adopted, they will reduce the effect of biases? Do you think that they'll kind of consistently reduce the effect of biases or will some biases be impacted more than others?
KAHNEMAN: Well, I think anybody who tries to predict how the AI story will develop… There is a saying in Hebrew that prophecy was given to fools. Really, forecasting the developments of AI makes very little sense. One thing that we can be fairly sure of is that collaboration between humans and AI doing the same thing, like a diagnostician with an AI diagnostic tool, which is an ideal that many people have in mind for the future of human-AI interaction: I think that is very unstable. That is likely to be unstable, because if you have a human and an AI operating at approximately the same level, the AI is going to be better than the human in very short order, simply because the ability of AI to learn from experience is enormously larger, and because you can have different agents: artificial intelligences all report to and teach each other; they all learn from each other's experiences.
So this is something humans cannot match. Anything that we predict about how humans are going to control AI, I wouldn't venture to go there.
WALKER: So I actually have some questions about prediction, prophecy and forecasting. I want to ask you about reference class forecasting, and maybe you can explain what that is. My question is, how do you go about defining the correct reference class? Because if you were trying to make a personal forecast, ideally the best reference class would contain people identical to you, but then obviously the sample size is just one. So how do you choose the scope of the reference class?
KAHNEMAN: Well, first let's define our terms, what the reference class is. I don't know a better way of doing this than telling the origin story of that idea in my experience. Approximately 50 years ago, I was engaged in writing a textbook with a bunch of people at Hebrew University, a textbook for high school teaching of judgement and decision making. We were doing quite well; we thought we were making good progress. It occurred to me one day to ask the group how long it would take us to finish our job. There's a correct way of asking those questions: you have to be very specific and define exactly what you mean. In this case I said, “Handing in a completed textbook to the Ministry of Education — when will that happen?” Another thing I did correctly: I asked everybody to answer independently, to write their answer on a slip of paper, and we all did. Our estimates were all between a year and a half and two and a half years. But one of us was an expert on curriculum, and I asked him, “You know about other groups that are doing what we are doing. How did they fare? Can you imagine them at the stage that we are at? How long did it take them to submit their book?” He thought for a while (in my story he blushed), and he stammered and said, “You know, in the first place, they didn't all have a book at the end. About 40%, I would say, never finished. And those that finished… I can't think of any that finished in less than eight years — seven, eight years. Not many persisted more than ten.”
Now, it's very clear when you have that story, that you have the same individual with two completely different views of the problem. And one is thinking about the problem as you normally do — thinking only of your problem. And the other is thinking of the problem as an instance of a class of similar problems.
In the context of planning, this is called reference class forecasting. That is, you find projects that are similar and you do the statistics of those projects, and it's absolutely clear. It was evident to us at the time, but idiotically, I didn't act on it. The correct answer was that we were 40% likely not to succeed at all. Because I also asked the curriculum expert, “When you compare us to the others, how do we compare?” He said, “We are slightly below average.” So the chances of success were clearly very limited. So that's reference class forecasting. Now, how do you pick a reference class? In this case it was pretty obvious: we were engaged in creating a new curriculum. In other cases, when you are predicting the sales of a book or the success of a film, what is the reference class?
So if it's a director and he's had several films, is the reference class his films, or similar films, the same genre, or whatever? There isn't a single answer. You were asking how you choose a reference class; my advice would be… And today I'm not the expert on that; the expert is Bent Flyvbjerg at Oxford. I think what he would probably tell you is: “Pick more than one reference class to which this problem belongs. Look at the statistics of all of them, and if they are discrepant, you need to do some more thinking. If they all tend to agree, then you've probably got it more or less right.”
WALKER: In making predictions about the future, the reference class could also be — I mean, you could think of it as like the prior probability in a Bayesian formula. Is that like an inappropriate tool in a context of radical uncertainty?
KAHNEMAN: Well, I don't know what you mean by radical uncertainty.
WALKER: So, a context where you don't know what all the possible outcomes are, let alone have the ability to attach probabilities to them.
KAHNEMAN: Then I don't understand your question.
WALKER: Okay, maybe let me try and explain it another way. Are you familiar with Jimmie Savage's distinction between small worlds and large worlds?
WALKER: And so small worlds, in simple terms, are worlds where you can look before you leap; in large worlds, you have to cross the bridge when you come to it. So I guess quintessentially large worlds would be things like choosing a romantic partner, or the macroeconomy, or the chances of war between China and the US in two decades. Is reference class forecasting a category error in those contexts?
KAHNEMAN: Well, there are experiments on that type of forecasting. Phil Tetlock and Barbara Mellers have those experiments where you ask people questions with considerable uncertainty of the type of what's going to happen. Now, when you're looking at the distant future, people succeed so little that it's hardly worth talking about. When you're talking about the intermediate, relatively short term predictions, some people are quite good at it probabilistically. These people quite often do look for reference classes, and they do look for more than one. This is part of the standard procedure of super-forecasters. There is a good way of doing it, there is a better way. There's no good way of forecasting that will give you a very high degree of success in complex problems, but you can do better than others.
WALKER: Let me ask you about super-forecasting. As you alluded to, Phil Tetlock's research suggests that up to a horizon of about six months, you seem to be able to help people make better forecasts. Beyond that, as you said, Danny, the future is just shrouded in the mists of uncertainty. Presumably that time horizon of roughly six months isn't etched into the laws of the universe. So do you predict that it'll shrink, say, to like, two or three months or whatever, if productivity growth picks up for a sustained period of time and society becomes more dynamic? In other words, should we be short Phil Tetlock's ideas as innovation or complexity increases?
KAHNEMAN: That's a very interesting question. What you remind me of is the claim for which there seems to be a lot of evidence that, at least in the domain of technology, change is exponential. So it's becoming more and more rapid. It's clear that as things are becoming more and more rapid, the ability to look forward and to make predictions about what's going to happen diminishes. I mean, there are certain kinds of problems where you can be pretty sure there is progress and you can extrapolate. But, in more complex prediction questions, at a high rate of change, you really have no business, I think, forecasting.
WALKER: So I want to ask you about bubbles, and my question is how you weigh the relative importance of cognitive biases like the representativeness heuristic — which has had a big impact on behavioural finance because it provides a natural account of extrapolation — versus social biases and things like conformity, herding, mimetic desire.
KAHNEMAN: Well, I wouldn't know how to answer this question. I mean, clearly both are important. Clearly, you could get bubbles from either one of these alone, and very likely both of them are operating. There is a strong tendency for people to look where other people are looking and to go where other people are going. This is the herd tendency, and it clearly exists and it's clearly powerful. It's also the case that people extrapolate much too easily: they see trends, and it's not that they expect them to last forever, but they expect them to last longer than they are actually likely to last. That almost defines a bubble. Both of these could explain bubbles by themselves, both are probably operating, and as for weighing their relative importance, I wouldn't know how to do that.
WALKER: Right. And maybe the stories that tap into and reinforce the extrapolative tendencies spread socially as well.
KAHNEMAN: Yes, clearly. I mean, again, the distinction is not clear. Why is everybody running and how did that begin? And it's not an accident. It is something that people have in common to begin with.
WALKER: Okay, so because I'm an Australian, I'm really interested in the link between national culture and innovation, and specifically between an egalitarian national culture and innovation. What's interesting to me is that you've lived in both the United States and Israel. The United States is relatively inegalitarian but obviously incredibly innovative — you know, the home of Silicon Valley. And Israel is famously egalitarian — a culture of debate and criticism, where people aren't always so respectful of elders or people in positions of authority — but it's also super innovative; it's famously the start-up nation. Firstly, do you agree with my characterisation of the cultural differences between the two nations? And what is the link between egalitarianism and innovation?
KAHNEMAN: Well, what you can definitely say, I think, is that a culture where people are intimidated, a culture of intimidation, of fear, of extreme conformity, is unlikely to be optimally innovative — although you find a lot of innovation in high-conformity cultures.
I wouldn't define the difference between the United States and Israel in terms of egalitarian or non-egalitarian. If it's in terms of questioning authority, there's a lot of questioning of authority in the United States as well, though there's probably more of it in Israel. You question everything. You certainly question each other more, you push each other more in Israel than you do in the United States. And to some extent, when you look at creativity in Israel, you think, “Oh, yes, this is Israeli creativity,” in the sense that these are people who… The fact that other people haven't been successful at doing something just doesn't intimidate them. They think they're better, and if they try to do it, they're going to do it. There is that kind of arrogance which drives a lot of innovation, saying, “Oh, sure, I can do it, it's a piece of cake.” That is in the spirit of the… I think it's more Israeli than it is American. But it's not an essential condition for creativity. It's a type of creativity. When you look at it, you say, “Oh, they're creative because they are like that.” But you can be creative in more than one way. Creativity doesn't line up with arrogance.
WALKER: Yeah, I see. You mentioned lack of respect for authority being important. We could potentially distinguish two types of authority. Like, there's authority in terms of elders and tradition, but then there's impersonal authority, governments and institutions. In Israel, is there a lack of respect for both types of authority?
KAHNEMAN: I don't think that they question institutions more than in many other countries. It's more at the individual level. I mean, these days you're seeing a lot of ferment in Israel, but…
WALKER: Yeah, because in Australia there's not a lot of conformity. We're a highly individualistic, WEIRD society. But there is a lot of obedience to impersonal authority, quite similar to Germany in that respect. I feel like that is somehow connected to extreme versions of egalitarianism, which people often call Tall Poppy Syndrome. I'm speculating here.
KAHNEMAN: Yeah, I don't have speculations on that issue.
WALKER: Fair enough. Okay, so let me ask you some questions about Noise. In Noise, you, Cass and Olivier anticipate seven major objections to noise reduction strategies. I want to get your reaction to a possible 8th objection. So there's this book, I'm not sure whether you've heard of it, called Seeing Like a State by James Scott.
KAHNEMAN: Yeah, I read this, actually.
WALKER: Oh, wow, okay, awesome. So, in the book, as you know, Danny, he talks about legibility, and one of the key ingredients for authoritarianism is a highly legible state: a state where things are well organised and indexed, which allows governments, and possibly even totalitarian powers, to better exert their control. One example of this that he discusses in the book is Holocaust survival rates. He discusses some evidence that the greater the legibility of the state, the worse it was for the Jews. So in the Netherlands, according to Scott, one reason the Jewish survival rate was low was that the Netherlands had very accurate census records. And so, I guess the potential objection here is that noise reduction strategies increase legibility and open societies and countries up to possible exploitation by people with totalitarian ambitions. I'm conscious that it comes across as a very paranoid objection, but I just wanted to get your reaction to it.
KAHNEMAN: Well, you're thinking bigger than I do. When I think of noise as a phenomenon, I think of it within a particular system where there is variability of opinions that really shouldn't exist and that is costly or damaging or that doesn't serve a purpose. Saying that you want certain kinds of judgments to be shared, that you want to reduce noise in, say, sentencing by judges… Those are narrow, specific objectives. I don't go as far as saying that if you control or reduce noise in some specific cases — because noise is always in a specific system, the way that we define it — those problems will follow. That's thinking very big indeed, to think that noise reduction is going to cause those problems. We're only at the first stage of people recognising that noise is a serious problem. Before noise reduction becomes a serious societal problem, we've got a long way to go.
WALKER: Maybe that's an objection for a few decades. All right, I'll save it.
KAHNEMAN: A few decades of considerable success in noise reduction efforts, which I do not foresee.
WALKER: Right. Why are you pessimistic about noise reduction efforts?
KAHNEMAN: Well, I'm pessimistic about everything. Noise reduction efforts are quite costly. They're costly when you have individuals doing things and following their intuition; they have a feeling that they're expressing themselves, a feeling of individuality and so on. And by emphasising that you want people to reach similar judgments, you're doing something, potentially, that people will resist. People don't like to admit that there is noise. And the very existence of variability is surprising. The essential thing about noise as I see it, the insight to me, was that each of us is in a bubble. I think I see the world as it is; I see it as I do because that's the way it is. We have what the late psychologist Lee Ross called naive realism. We see the world the way it is. And if I see the world the way it is, I expect you to see it in precisely the same way as I do.
That turns out not to be the case. It turns out that the variability among people in how they see complex things is much bigger than any of them can see, because each of them feels that they're seeing reality the way it is. That, to me, is the interesting problem of noise.
WALKER: So, we were talking earlier about the difficulty, if not impossibility, of forecasting the distant future. I want to try and tie that into this discussion of noise. Let me try to do this. In the book, you argue that in any organisation, in any specific context, there may actually be an optimal level of noise. And you write that, “Whenever the costs of noise reduction exceed its benefits, it should not be pursued.” I guess that raises an interesting question as to how we cope with uncertainty where it might be hard to quantify costs and benefits. So, say, in an evolutionary system like entrepreneurship and startups or science or the common law — where there's benefit to noise because it generates variation which then can be selected — it's difficult, if not impossible, to know ex ante which variations will prove to be the most successful.
To give a concrete example of this: maybe you want to improve academia, so take the awarding of academic grants. Maybe you want to introduce a rule to reduce noise in the judgments of who gets grants, a rule that says, “You should award grants to researchers with lots of citations or whose ideas seem promising according to some other metric.” But it's just really hard to know which ideas will turn out to be important. Doesn't this just collapse back into a debate about how to quantify uncertainty?
KAHNEMAN: Well, in granting in particular, there are systems where a certain level of unpredictability is important. Scientific grants are a good example of that in the sense that we don't know what we don't know and some randomness could potentially be useful. At the same time, a lot of randomness makes the system radically unfair. So, finding a balance — and it may not be a matter of… The question is whether currently things are biased one way or the other way, whether there's too much noise or not enough. I think there is too much noise in granting, but I agree that if you eliminated noise completely, if you had rigid rules about what gets granted, then society in the long run would lose quite a bit.
WALKER: Yeah, I guess my question is maybe more specific than that. It's just: how tractable is a cost-benefit analysis when you're dealing with uncertainty, if that's the framework for judging the optimal level of noise?
KAHNEMAN: I haven't thought much about this problem, so I don't have crystallised thoughts. The question is whether there is any sensible way of quantifying the costs and the benefits in a system like that. I don't know enough about how one would quantify success and how one would define the goals of the system. So I wouldn't know how to do a cost-benefit analysis on noise reduction.
WALKER: I have just a few miscellaneous questions and then a final question. These are high-variance questions; some of them might provoke interesting answers, some of them maybe not. So: you and I are similar in that we both finished high school at the age of 17. Do you think, on average, boys should actually finish a year later than normal rather than a year earlier?
KAHNEMAN: I've heard success stories both ways, and… it just reminds me that this may be dependent on culture and on time. When I grew up, rushing to adulthood was the norm. You were rushing to adulthood, you were rushing to financial independence. You had to take responsibility for your own life. And I look at my grandchildren, and they have all the time in the world. And I think they are blessed, because they feel protected, and that gives them time and they feel safe. I think it's quite wonderful. I don't completely understand how they can be so patient, because I wasn't at their age.
WALKER: So, as you know, Nassim Taleb argues that we underestimate tail risks. Does that contradict prospect theory?
KAHNEMAN: Well, no, I would say. In prospect theory, you overweight low probabilities, which is one way of compensating. Now, what Nassim says, and correctly, is, “You can't tell — you really cannot estimate those tail probabilities.” And in general, it turns out it's not so much the probabilities, it's the consequences. The product of the probabilities and the consequences turns out to be huge with tail events.
Prospect theory doesn't deal with those — with uncertainty about the outcomes. So what Nassim describes, as I understand it, is that you get those huge outcomes occasionally, very rarely, and they make an enormous difference. This is defined out of existence in prospect theory, which deals with specific probabilities and so forth. So prospect theory is not a realistic description of how one would think in Nassim Taleb's world, and certainly not a description of how one should think in Nassim Taleb's world.
WALKER: I see. Does that diminish the descriptive validity of prospect theory?
KAHNEMAN: I don't think of prospect theory as primarily descriptive. I think of it as a bunch of ideas. It's quite interesting when you look at the way formal theories like prospect theory play out. They are valuable for one or two ideas that travel well and get completely detached from the rest of the theory. So, loss aversion is an idea; overweighting low probabilities is an idea; thinking of reference points and changes rather than final states, those are ideas. It turns out that in order to state those ideas in a way that will influence thinking, you've got to pass a test: you've got to develop a formal theory that will impress mathematicians, that shows you know what you're doing. Constructing a theory — so far as I'm concerned, this is very iconoclastic, what I'm saying now — constructing a theory like prospect theory is a test of competence. Once you demonstrate competence, what makes the theory important is whether there are valuable ideas that can be detached from it completely. So it's not that the theory is valid; some ideas are more or less useful, and that's the way I think about it.
WALKER: I see. Are there any subfields or results in psychology that have weathered the replication crisis so far but you think are very vulnerable?
KAHNEMAN: No, I can't think right now of any area. You know, the thing that is most striking about the replicability crisis is how the field has responded. And it's extraordinary. I mean, the improvement and the tightening of standards that have occurred in the past ten years — and it's exactly ten years since the crisis began. The way psychology is done, scientific psychology, has really changed top to bottom. It's a different field. And that's what's impressive to me. The field as a whole is much less vulnerable, I think, than it was to those kinds of mistakes.
WALKER: That's good to hear. Okay, so, my final question: you famously left the happiness literature. You kind of realised that people are very confused when they talk about happiness, and it just wasn't a particularly tractable problem to work on. Have you learned anything else about happiness and the experiencing and remembering selves since abandoning that project? What have you learned about the good life since then?
KAHNEMAN: I haven't completely abandoned that project. In fact, the latest paper I've done is an adversarial collaboration on happiness. I had a particular idea which turned out to be wrong, and that's what happened. I had the idea that you want to measure emotional experience and that what people think about their life is not all that important. And I thought that this was a normative way to… This was a way of, maybe, redoing the happiness literature. And then I realised that the basic flaw in this is that people basically don't want to be happy. This is not what they really want. They really want to be satisfied with their life; they want to have a good story about their life. And at the same time, clearly, the quality of experience is relevant. But I didn't know how to go on from there, and I was not impressed by the measurements that were available.
There was a lot of talk, about 20 years ago, of measuring well-being, and there has been a lot of improvement, but it has not been along the lines that I was thinking of then. I mean, I wanted to measure experience. In fact, what has taken hold is a definition of well-being in terms of, like, satisfaction. There's a lot of progress in that field, especially in the UK; there are some very interesting things happening. And this was one area where probably my pessimism was exaggerated. Better things have happened than I would have imagined 20 years ago.
WALKER: Right, so how specifically have your views changed since then?
KAHNEMAN: Well, as I said before, they haven't changed all that much. I mean, I'm still interested in experience and I'm still interested in emotions, but what is happening is an actual movement towards having happiness as a criterion for social policy. I can see this developing; it's beginning. The key figure in this, I think, is somebody who's not as well known as he should be in the US, and that's my friend Lord Richard Layard. He is really the driving force behind the movement, especially in the UK, towards giving happiness measurement a role in policy that it hasn't had: using happiness for cost-benefit analysis. So there are exciting ideas. There's a book by him and a colleague coming out within the next couple of months, which I expect will have a lot of impact.
WALKER: Awesome. I'll look out for it.
WALKER: Danny, thank you so much. It's been an honour.
KAHNEMAN: It's been a pleasure.
WALKER: Thank you.
KAHNEMAN: You're a very good interviewer.