Bayesianism, the doctrine that it's always rational to represent our beliefs in terms of probabilities, dominates the intellectual world, from decision theory to the philosophy of science. But does it make sense to quantify our beliefs about such ineffable things as scientific theories or the future? And what separates empty prophecy from legitimate prediction?
David Deutsch is a British physicist at the University of Oxford, and is widely regarded as the father of quantum computing. He is the author of The Fabric of Reality (1997) and The Beginning of Infinity (2011).
[05:40] JOE WALKER: David Deutsch, welcome to the podcast.
DAVID DEUTSCH: Thanks for having me.
WALKER: I want to focus in particular today on prediction and the types of predictions that we can make legitimately… I'd like to begin with a bit of context; I was hoping you could tell me about Charles Parkin and how he led you to Karl Popper.
DEUTSCH: Ah, Charlie Parkin. Yes. Well, he was my tutor, as they call it in Cambridge, which is misleading because tutors do anything except teach. In Oxford, they used to call them ‘moral tutors’, but even that is misleading. It's basically a member of the college whose job it is to connect with the particular student that they're assigned to and be on their side. Like if they have some problem with the bureaucracy or with the university or with the college, if you are in trouble, or just if you need something, you would go to the tutor. And there was a thing where you were supposed to go and see your tutor at the beginning and end of each term to kind of check in with them, and they check that you are not on drugs and stuff like that.
And I never had anything to say to my tutor, and he was a historian, which was then – though not now – pretty far from my interests. On one occasion I had written this essay just for myself because I'd read Bertrand Russell's History of Western Philosophy and another book. And I was really taken with this idea of philosophy of science and I thought, “I want to be a physicist and I want to do it right.” So I wrote this article about how it's important to do induction right, as it says in Bertrand Russell. And I sent this to Charlie Parkin so that we would have something to talk about when I went to see him.
And he said, and I remember this very well, “Hmm, induction. Hasn't that been proved wrong by that Popper chappy?” Well, I'd only heard of Popper once before, which was from my mathematics teacher in school, but I'd never followed anything up. And so I said, “Oh, well I didn't know.” He said, “Induction is old hat now.” So I thought, well, if it's old hat, I need to know what the new hat is. So I went out and bought a Popper book and that was the beginning of a complete change of course in my philosophical life.
WALKER: And I believe you actually met Popper, right? Can you tell me what that was like?
DEUTSCH: Yes. I met Popper once, unless you count one of his lectures that I went to where I didn't meet him personally. So I only met him personally once. I went to his home with my boss, Bryce DeWitt, who also wanted to meet Popper. We went to his home and we mainly wanted to tell him that although his philosophy was amazing, groundbreaking… he'd got quantum theory completely wrong and he hadn't even understood what the problem was, let alone what the solution was. And the solution was Everett's multiverse interpretation or so-called interpretation – it's Everettian quantum theory as we now prefer to call it. And he just rejected that out of hand, again for rather silly reasons.
We discussed many things with him, but among other things we got round to discussing Everett, and we explained to him what the problem really was and what he'd got wrong. And he listened incredibly carefully, asked all the right questions. We'd been told he was intellectually very arrogant and would shout down opposition and so on. None of that; we found him incredibly, you know, Popperian in his attitude to ideas. And at the end he said, “Well, I've got a book in print now and I'm going to have to change one of the chapters. I'm gonna have to make a radical change to something.” I forget what it was. And we thought, “Wow, we've kind of succeeded.” But then when that book came out – and I forget actually which book it was; he wrote several that mentioned quantum theory – it had none of that in it. It was all back to his original view. So I guess he must have changed his mind back: in the heat of the moment he'd thought we had a point, but then thought, “No. No, they don't have a point.” So that was my one and only meeting with Popper.
WALKER: Did he show any engagement with the idea?
DEUTSCH: Complete engagement, yes.
WALKER: No, in the book that he subsequently published.
DEUTSCH: Oh no, no. As far as I remember anyway, the only mentions are asides and they were rather disparaging.
WALKER: Right. So why was encountering Popper such a pivotal moment in your intellectual development?
DEUTSCH: It's hard to express exactly why, but for me personally, psychologically, my idea of philosophy and the philosophy of science and what philosophy is for and what it can do, and so on – when I was in school, and at first as an undergraduate – was really the everyday view of philosophy. And when I read Russell, it was again the common-sense view. Because induction is common sense. If we see the sun rise every day, then we think it's going to rise the following day as well. It’s that kind of thing. But if we see it not rise, then we know that there's something wrong with our theory that it'll rise every day. And that's what I thought; that's what Russell thought. And then when I read Popper, I saw that not only was that wrong, but it took philosophy itself to a whole new level. It's like: this is serious thinking about what the truth of the matter really is. And so it was the seriousness of Popper which first got me, rather than the content. And it took me, actually, several years. I've been trying to think back from time to time: how long did it take from the time when I would say “Yes, I'm a Popperian” to the time when I actually got it? I think it was about four years and several more Popper books.
WALKER: When you say the “seriousness of Popper”, what makes someone serious in that respect?
DEUTSCH: Well, it's again very hard to describe in words, but it's following ideas through and insisting that things make sense. So the trouble with the theory of induction is that if you follow through any strand of “How can this be?”, you end up with a problem, namely that it doesn't make sense. The philosophers call this the problem of induction. The original problem of induction was: we see all these instances of things like sunrises and we infer that the sun is going to rise again, but that inference is not logically valid. Logic had been developed to quite a high degree already in antiquity, by Aristotle and others. And even Aristotle realised that this kind of inference is just not a valid inference; it doesn't follow.
And then people tried various ideas. “Okay. Maybe it's not logically valid, but there's another form of reasoning, which you can call inductive reasoning. And that somehow makes sense.” And every attempt to make that make sense didn't work either. Then I realised that this whole problem is a misconception because it's just not true that the future is like the past. There's one thing about the future – and this may come into prediction if you want to talk about that later – is that it's not like the past, it's never like the past. And then the inductivist might say, “Well, yeah, but it's like the past in some ways. And it's different from the past in other ways. So it's approximately like the past.” None of that works.
What Popper did was he said, “Okay, what are the assumptions that lead to this so-called problem, i.e. this refutation of the whole theory? What are the assumptions behind it, one or more of which must be false?” And so he took seriously that there is something wrong not just with our theory of scientific knowledge, but with our theory of what knowledge is. Knowledge had traditionally been thought of – again, since antiquity – as what was later called “justified true belief”. So knowledge is a kind of belief, and it's true, and it is justified. You can also modify that by saying it's a form of belief that is mostly true, or probably true, or probably justified, or partially justified – everything's been tried along those lines. Popper realised that the problem of induction actually implies that there's no such thing as justified knowledge in the first place, and that we do not need knowledge to be justified in order to use it.
There is no process of justifying a theory. So theories, according to Popper, are always conjecture, and thinking about theories is always criticism. It's never a justificatory process. It's always a critical process. As David Miller says, a theory doesn't need to have any special credentials to be allowed into science. It's a conjecture. Any conjecture is allowed into science, but once it's into science, it's then criticised. And when it's criticised successfully, it is dropped. That's where Popper begins. Then he has to answer all the questions – “What's the rational reason for acting on theories, and so on?” But once you've got the idea that you don't need justification, everything eventually falls into place, and it falls into place in a structure that makes sense – and it makes sense of science and even beyond science.
WALKER: So his solution to the problem of induction was to sort of reframe it and say that justification isn't needed in the first instance… For people wanting to read a good summary of his solution to the problem of induction, would you agree that the first chapter of his book Objective Knowledge is probably the best place to go?
DEUTSCH: Yes, many people say that. I think where you should start with Popper rather depends on where you're coming from. Because what I've described is his philosophy of science narrowly conceived, but he had a very broad attack on different areas of philosophy – basically all the same thing. It's all the idea of starting with problems, starting with existing theories and criticising them, rather than seeking justifications for theories. Some people come to Popper via his political philosophy, though. He denied being a political philosopher, but he was, and he was the greatest so far. Actually, back in Cambridge, it was very hard to find Popper books in bookshops at the time.
And there was no internet. So actually the first one I read was The Open Society and Its Enemies: Volume Two. That's the only one I could find at first. Then I found Volume One and read that too. The only direct connection that had with the philosophy of science was in the underlying approach. And that's what attracted me. And that's why I looked further: whenever I went into a bookshop, I first looked for Popper books in the philosophy section. Very rarely found one.
But yeah, I think Objective Knowledge is a good place to start, or Conjectures and Refutations also other people find interesting. But it, as I say, depends where you're coming from.
WALKER: Why do you think his books were so conspicuously absent from the shelves of Cambridge bookshops?
DEUTSCH: I don’t know. There is a mystery about Popper’s reception in the academic world, which I don't know about. That's the history of ideas, which I'm not an expert on. I'm more interested in ideas than in the history of ideas. Though sometimes the one is needed for understanding the other. I know that Popper had great difficulty, especially with Oxford and Cambridge, but with the academic world generally. And he would never have come to England, as I understand it, if it hadn't been for Hayek, who was a professor at the LSE, causing the LSE to create a professorship just for Popper. I think it was called professor of scientific method, something like that. His lectures famously began – and you can find his first few lectures on the internet – the first one begins something like, “I want to warn you that, although I am called the Professor of Scientific Method, and I'm the only one in the British Empire, there is no such subject, there is no such thing as scientific method,” and so on. And he goes on brilliantly from there.
WALKER: That's great. I've heard you recommend The Myth of The Framework as the best of his books, maybe not for beginners, but certainly for people who've already read a few of his books. Why do you recommend The Myth of The Framework in particular?
DEUTSCH: Yeah, well, it's not the book. Many of his books are collections of essays or lectures or whatever, and The Myth of The Framework, the book, is such a collection. And what I always recommend is the particular essay within that book called ‘The Myth of The Framework’. So the book is named after that one essay or that one chapter. I think it's brilliant because it reaches out beyond philosophy of science, philosophy of politics, to just a general attack on – I don’t know what you would call it – relativism, including postmodernism, and all sorts of bad ideas about ideas. The actual myth of the framework that he criticises is that for two people to make progress in a discussion, it is important that they have an area of agreement and that they locate that and then work out from there to create agreement. Now, Popper attacks that idea from all directions.
First of all, he says, discussions can be valuable, and usually are, even if you never reach agreement. And this I think is a crucial idea of Popper’s because, again, this idea that the objective of a discussion is to reach agreement is authoritarian. The idea is that you are creating together a kind of authority, a kind of uniformity. Whereas, in fact, all we have is conjectures and we are going to be wrong in various ways. And we are never going to arrive at the final truth because there are always improvements to be made. And when you have a public controversy – like, people say debates in parliament are useless because nobody ever changes their mind... Well, first of all, people do sometimes change their mind, but that is not the point.
The point is that by having a debate, you don't improve your agreement necessarily with the other side, but you improve your understanding of the other side. If you are right, you improve your own arguments so as to be better. If you think about real life, about people changing their minds about things, you can very rarely remember a case where somebody has changed their mind during a debate. And yet, if you look at the big picture, if you look at opinion polls about “Would you live next to a person of a different race?” A generation ago like 20% of people would, and now 95% of people would. And in that time you can't find anybody who says, “Oh, right now, I've changed my mind about that.” Or hardly anybody.
What has happened is that they changed their view on a larger scale and a deeper scale, including, in the first instance, the type of reasons that they give themselves for their ideas. I can't find this quote, but there's a marvellous quote by some moral philosopher saying the reason we need moral philosophy is that people change the reasons for their behaviour before they change their behaviour. You justify it in a different way. Once you are justifying it in a different and better way, even though you are justifying the same view and the same behaviour as before, you are already changing. And eventually that will lead to you changing your actual behaviour in that little way we were talking about, but you never see that because it happens as a result of a deeper shift. So this in practice is what happens. And in theory it was first understood, I think, by Popper and expressed in that essay.
WALKER: That quote you shared from the moral philosopher also reminds me of that great quote by John Stuart Mill: “He who knows only his own side of the case knows little of that.”
[28:48] WALKER: How do you define Bayesianism and why in your view is Bayesianism a form of inductivism?
DEUTSCH: The word ‘Bayesianism’ is used for a variety of things, a whole spectrum of things at one end of which I have no quarrel with whatsoever and at the other end of which I think is just plain inductivism. So at the good end, Bayesianism is just a word for using conditional probabilities correctly. So if you find that your milkman was born in the same small village as you, and you are wondering what kind of a coincidence that is, and so on, you've got to look at the conditional probabilities, rather than the absolute probabilities. So there isn't just one chance in so many million, but there's a smaller chance.
“Against what background of population are you taking this estimate of the chance and so on?” So if you're not careful, you can end up concluding that your milkman is actually stalking you. And that's because you've used probability wrongly. So that is one end of the spectrum, which I have no quarrel with whatsoever. At the other end of the spectrum, a thing which is called Bayesianism is what I prefer to call ‘Bayesian epistemology’, because it's the epistemology that's wrong, not Bayes’ theorem. Bayes’ theorem is true enough. But Bayesian epistemology is just the name of a mistake. It's a species of inductivism and currently the most popular species. But the idea of Bayesian epistemology is that, first of all, it completely swallows the justified true belief theory of knowledge.
So it's saying, “How do we increase our knowledge? Well, we increase our knowledge whenever we increase our credence for true theories.” Credence is belief. Belief is, according to Bayesian epistemology, measured by a measure that is basically a probability. In fact, all probabilities are supposed to be these beliefs, which is another mistake, but never mind that for a moment. So the idea of science and of thinking generally in Bayesian epistemology is that we are trying to increase our credence for true beliefs and decrease our credence for false beliefs. And so they use Bayes’ theorem to show that when you encounter a true instance of a general theory and you use Bayes’ theorem to calculate the new probability of that theory, the new credence for that theory, it has gone up. And so the basic plan of Bayesian epistemology is that that is how credences go up.
And the way they go down is if you find a counterexample. So credences of theories go up when you find a confirming instance and down when you find a disconfirming instance. And that just is inductivism. It's inductivism with a particular measure of how strongly you believe a theory and with a particular kind of framework for how you justify theories: you justify theories by finding confirming instances. So that is a mistake because if theories had probabilities – which they don’t – then the probability of a theory (‘probability’ or ‘credence’; in this philosophy they’re identical, they’re synonymous)... if you find a confirming instance, the reason your credence goes up is because some of the theories that were previously consistent with the evidence are now ruled out.
And so there's a deductive part of the theory whose credence goes up. But the instances never imply the theory. So you want to ask: “The part of the theory that's not implied logically by the evidence – why does our credence for that go up?” Well, unfortunately it goes down. And that's the thing that Popper and Miller proved in the 1980s. A colleague and I have been trying to write a paper about this for several years to explain why this is so in more understandable terms. Unfortunately, Popper and Miller's two papers on this are very condensed and mathematical. They use kind of a special terminology that they made up in order to prove this. So the paper hasn't been taken on board, and we would like it to be taken on board. But we haven't yet managed to solve the problem, which evidently they didn't, of how to present this.
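The updating scheme Deutsch describes – credence rising on a confirming instance only because rival hypotheses that fit the evidence poorly are ruled out or penalised – can be sketched with a toy calculation. The hypotheses, priors, and likelihoods below are invented for illustration; this is not the Popper–Miller construction itself, just Bayes’ theorem applied over rivals:

```python
# Toy Bayesian update over rival hypotheses (illustrative numbers only).
# A confirming instance raises the credence of the universal theory solely
# because hypotheses that assign the evidence low probability lose mass.

def bayes_update(priors, likelihoods):
    """Return posterior credences P(h | E) given priors P(h) and likelihoods P(E | h)."""
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: v / total for h, v in unnormalised.items()}

# Three hypothetical rivals about swan colour:
priors = {"all_white": 0.2, "mostly_white": 0.5, "rarely_white": 0.3}
# Evidence E: one observed swan is white.
likelihoods = {"all_white": 1.0, "mostly_white": 0.9, "rarely_white": 0.1}

posteriors = bayes_update(priors, likelihoods)
# The universal theory's credence rises above its prior of 0.2, driven
# entirely by the elimination step, not by any support for the theory's
# unobserved content -- which is exactly the point at issue.
```

On these numbers, the credence for “all_white” rises from 0.2 to roughly 0.29. The Popper–Miller argument mentioned above concerns why such a rise cannot be read as inductive support for the part of the theory that goes beyond the evidence.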
WALKER: I'm curious, are you aware of some of the analogous critiques made of Bayesian decision theory by people like Ken Binmore?
DEUTSCH: No. I'm not aware of having any quarrel with Bayesian decision theory – unless this is referring to its ambiguity, that you never know, rather like the Duhem–Quine ambiguity in scientific reasoning. If that's what you are referring to, then I do know about it, but I haven't specifically read about it.
WALKER: No. I think the best place to start would be Binmore's book Rational Decisions. In the book he takes ‘Bayesianism’ to mean the doctrine that Bayesian decision theory is always rational. And he builds on Leonard ‘Jimmie’ Savage's distinction between small and large worlds. So small worlds are worlds where you can “look before you leap”; large worlds are worlds where you have to “cross that bridge when you come to it”, so to speak. Savage argued – although macroeconomists seem to have forgotten this – that Bayesian decision theory is only applicable, only sensible, in small worlds.
DEUTSCH: Do you mean worlds where there's a finite number of things that you have propositions about, therefore, when you find that one of them is true, you've actually made inroads into the whole set, whereas for infinite things you never make any inroad into the whole set?
WALKER: Exactly. Archetypal examples of large worlds are high finance, the macroeconomy, et cetera. And it just doesn't make sense to apply Bayesian decision theory in large worlds. So Binmore has this long kind of rant against Bayesian decision theory. But he argues that Bayesians are acting as if they've solved the problem of scientific induction, even if they don't explicitly acknowledge that.
DEUTSCH: I agree that they are, and I agree that that's an error.
WALKER: So why is the future of civilisation unpredictable in principle?
DEUTSCH: Because it's going to be affected by future knowledge and future knowledge is unpredictable. The example I give in my book is that if you'd been trying to predict the future of energy production in 1900, you wouldn't have included nuclear energy because radioactivity had only just been discovered in 1900 and it wasn't known that it could be used to produce energy. And there was no way of predicting that nuclear energy was going to be discovered, because if you had predicted that, that would be equivalent to already predicting it in 1900. So that's a logical contradiction. You can't know knowledge that you don't know. Now, suppose that you'd been magically told that there would be nuclear energy, then you might have predicted that, “Okay, carbon dioxide is not going to build up in the atmosphere because by the middle of the 20th century we're going to have nuclear power and we will have much less use for fossil fuels. So, there won't be global warming.”
And so you could ‘predict’ that there won't be global warming on the basis of the best knowledge known in 1900. But it was not known in 1900 that in the mid-20th century there would be an environmental movement that would stigmatise nuclear energy. At each stage, you don't know what the future of knowledge will be. These examples illustrate that knowledge can be erroneous as well. That's another thing that Popper took seriously – false theories also contain knowledge. And so this is a nice example to show that it's impossible to know the future that's going to be affected by knowledge.
Now, which parts of the future are going to be affected by knowledge or not? Well, that's also unknowable. So we predict the orbit of Mars, but our prediction is only gonna be correct if nothing intervenes. So human knowledge could intervene. We might create the knowledge to shift Mars, and we might want to shift Mars for some reason or another. And in the next hundred years or thousand years or million years, we might want to do that. And whether we do it or not depends on the knowledge we create, and that knowledge – not just scientific knowledge, but also all other forms of knowledge: moral knowledge, political knowledge, aesthetic knowledge – all those might affect the orbit of Mars in the future. But that doesn't mean that it's completely useless to try and predict anything, because we have explanations.
What I’ve just said [about Mars] is not just a conditional prediction. It's also an explanation, because I've been saying that it would need some extreme changes in the human condition for human knowledge to affect Mars. Whereas if you talk about human knowledge affecting, let's say, the atmosphere, then our best explanatory theories already say that human knowledge is affecting the atmosphere now and will affect it more in the future. Now that might be false. Both those things might be false. But every particular theory about how it can be false is subject to criticism and has failed criticism. So our best explanatory theories about the future say that the atmosphere is being affected and will be affected more in the future. And we can therefore conclude that, given what we want from the atmosphere, we would do best to create the knowledge to make it change in the way we want.
WALKER: So is the key point of differentiation between legitimate prediction and illegitimate prophecy that legitimate predictions rely on good explanations?
DEUTSCH: Exactly. But the legitimate predictions are not justified knowledge. They are conjectures just like everything else. It's just that their rivals have failed criticism – which doesn't mean they're false. They have just failed criticism. And the rational way of proceeding is to proceed according to the best explanation.
WALKER: And they are, of course, subject to disappointment.
WALKER: So for most of our species' history, knowledge was sparse and grew slowly, if at all. Does this mean that people could have made better predictions about the near futures of their societies than we can of ours? For example, would it have been easier for someone in Ming China to make predictions about the future of their civilisation than it is for a 21st-century American to make predictions about the future of the United States? And how could people in the past have known ex ante that their seemingly static circumstances endowed them with this predictive power, without extrapolating their circumstances forward?
DEUTSCH: It's not just any old prediction. The assumption was that nothing would change. Now sooner or later, that assumption is going to be proved false. In almost all cases, it was proved false by the destruction of that society, that civilisation. Almost all civilisations that have ever existed have been static until they were destroyed either by nature or by other humans. So whether you call that reliability of prediction or not, it is a matter of taste. If you're in an empire that in fact is going to last for 400 years and you are at the 200-year mark and you predict, “Well, nothing's gonna ever change.” Then you can predict that the kind of diseases that you have now are still going to be experienced by your great-great-grandchildren. Until you are wrong. Until it comes to the 400-year mark and your great-great-grandchildren are going to have it much, much worse than you, or much, much better than you. So it depends how you calibrate predictability. In a static society, you can make predictions conditional on the survival of the static society, whether or not you know that you are making them conditionally. In a dynamic society, it's the other way around: given that your society is going to survive, the future is opaque.
WALKER: So to come back to good explanations, there are some quotes in The Beginning of Infinity that could potentially be misconstrued as prophecies. So I just wanted to give you the opportunity to clarify them. For example, on page 455 of the paperback, you talk about how humans will achieve immortality within the next few lifetimes. And I imagine that someone might pounce on that and say, “Ah, David's contradicting himself. That's a prophecy.” But I suppose your retort would be that there’s a good explanation underlying that claim. Is that fair to say?
DEUTSCH: I can't remember the wording. It's perfectly possible that I worded it in a way that was either ambiguous or plain wrong. So if I said, “Humans are going to solve this problem within X years,” then that is a prophecy and I shouldn't have put it that way. Now, if I said, “I expect this to happen,” then that technically escapes the criticism of prophecy, but it depends on the context. If “I expect” can be taken in context to mean “I predict”, then it is a mistake and I shouldn't have said it. But if it's a description of my personal conjecture of what's going to happen, then it's accurate. It is based on an explanation, but the explanation is in kind of a negative form. At present, I see nothing in our existing theories of biology that suggests there's a law of physics saying that the human lifespan has to have a particular finite limit.
We know that there are organisms that don't have that limit, like most microorganisms and so on. And the processes that we know of that kill people are all of the form: something goes wrong physically, which we can see, and which we could undo if we had the knowledge. So it might not be true. It might be that there's some deep reason yet to be discovered why humans can never be immortal, where by “immortal” I mean that ageing won't kill us (but something else might kill us). If that comment in my book said that nothing is known that mandates that, then it's not prophecy. But if I accidentally phrased it as a prophecy, then I'm wrong. Popperians shouldn't be so embarrassed about being wrong, as many people are.
WALKER: That's very honourable of you. One of my favourite genres, David, is old books about the future. I like reading about how people in the past thought about the future. And I sort of collect these books, mainly to remind myself not to prophesy. But some of the ones I have sitting on my bookshelf downstairs are Toward the Year 2018, which was published in 1968. There's Lester Thurow’s book Head to Head – he was an MIT political scientist and it was a 1993 book that envisioned Japan and Europe as America's great economic rivals in the 21st century, scarcely making mention of China. There's Servan-Schreiber's book The American Challenge, which envisioned American growth continuing very aggressively into the 21st century. There is Kahn and Wiener's book The Year 2000, which speculated on all of these future technologies that we would have. There is, of course, Ehrlich’s The Population Bomb. There's Limits to Growth. There's a book called The Coming War with Japan by Friedman and LeBard. But I'm curious, do you have any of your own favourite examples of failed predictions or doomsaying books that turned out to be false?
DEUTSCH: Well, some of those I have actually read, or at least seen. Others are completely new to me; I hadn't heard of many of them. I was recently rereading Twenty Thousand Leagues Under the Seas.
WALKER: I’m not familiar with it.
DEUTSCH: Oh, Jules Verne. Science fiction book written about 1870, something like that. And it's amazing the things he gets utterly wrong and the things he gets amazingly correct. Electric light, submarines – he gets those right. And some of the things he gets very wrong. It wasn't known at the time whether Antarctica was land or frozen sea, like the Arctic region. So in this book, they go in a submarine and they try to find the South Pole, to see if it can be reached under the ice. And he gives a wonderful argument for why there must be a continent there and not just ice. He says near the South Pole there are far more icebergs than there are near the North Pole.
And icebergs can only form from glaciers, which are on land. The glaciers in the Northern hemisphere form in places like Greenland and so on, but not in the Arctic itself. That's why there are fewer of them. And therefore we must expect there to be an Antarctic landmass. And I thought, that's just so typical of explanatory reasoning. There's no induction there. There's no “We found continents everywhere else, therefore there should be one in Antarctica as well.” It's an explanatory theory. It's explaining the phenomenon of icebergs by something that absolutely isn't icebergs. It's a landmass in the middle of the Southern Ocean that’s never been discovered. That's brilliant. And I don't know whether it's even true. I haven't looked it up.
I don’t know whether there are more icebergs in the Southern hemisphere. I guess it must be true by the same argument. My best guess is that it is true. So you asked for examples of predictions or prophecies that turned out to be false. Well, I'm more impressed by the ones that turned out to be true. On the same trip, he predicts that in the future hunting whales will be made illegal. And this is 1870 or something like that. I think one thing that's happened is that between the 19th and 20th century, speculative fiction, science fiction, turned pessimistic. The 19th century was more optimistic, and when people speculated about the future they were more likely to be wrong by overestimating progress than by underestimating it.
In the 20th century, there was a sort of congealing of the intellectual climate into a very rigid pessimism, so that a prediction or prophecy could only be taken seriously if it was negative. There’s another book I thought you might mention – I don’t know exactly when it was written, but I remember it coming out – called Will the Soviet Union Survive Until 1984? And this was in the ‘70s. And his answer was no, or rather it will survive until approximately 1984, then it will collapse. And I remember this being vilified basically on the grounds that it was sort of arrogant to assume that the West has it all right and there's nothing viable in the Soviet system, and this is just arrogance on our part, and so on. It's not true. The book was just giving explanatory arguments about why this edifice could not survive. And he was only five years out. It turned out to be correct. But he at the time was vilified.
WALKER: You mentioned that switch from optimistic to pessimistic visions of the future, where you’ve got in the first half of the 20th century Isaac Asimov speculating about how wonderful the future could be and then by the second half of the 20th century you’ve got movies like Terminator. And you mentioned that that shift may have been caused by this congealing intellectual environment around pessimism. I'm not sure if I'm offering an alternative explanation or adding to what you've said, but what do you think of the idea that economic and productivity growth began to slow for whatever reason or reasons around, say, 1972/73. And prior to that, where we had this amazing period of growth, people were kind of extrapolating that into the future. And so it made sense to be talking about flying cars only a few decades away because people had just gone from almost nothing to electricity, telephones, radios, flight. But when that growth started to slow, the pessimism kind of kicked in. Does that make sense to you?
DEUTSCH: I think that happened, but I don't think – how can I put it? – I don't think there's an inexorable evolution in this thing. I think it happened because of specific mistakes that got embedded in systems, particularly the academic world and governmental bureaucracy, and from there into the wider society. So that, again, the idea that there's something wrong with aspiring to make radical improvements, that this is hubris or that this is dangerous, or that this will inevitably have side effects – this idea is very widespread, various versions of this idea are very widespread. And they have caused a slowdown in various areas. Of course not in all areas. You've only got to look at computers to see that rapid improvement happened during that entire so-called period of stagnation. But in many areas, improvement drastically declined. Declined not to zero. We still have improvement all the time. But we don't have rapid improvement. We don't have rapid game-changers happening anymore. And I think that's completely unnecessary and could be turned around if people change their attitude.
WALKER: That's a very optimistic interpretation. I would like to believe that that is true, that there's just something wrong in the culture, some kind of mental problem that we can sort of tweak and get ourselves back on track. That would be much better than thinking that we'd somehow picked all the low hanging fruits or something like that.
DEUTSCH: Right. That is the epitome of the wrong theory. I'm a physicist and so I can judge that in the context of physics. People have said, and I think the prevailing view is, that the reason fundamental physics progress has slowed down is that we've picked all the low-hanging fruit. But that's not true. There's more low-hanging fruit in sight than there ever was before. It's just that picking it is stigmatised.
WALKER: That speculative fiction book from the 1870s you mentioned. Do you remember the author's explanation for why he thought whale hunting would be outlawed in the future?
DEUTSCH: No. I mean, he says something like that whales have large brains, something like that. I forget. But he was not at all opposed to hunting sea creatures or land creatures. There's a scene where they encounter these other predators who are going to prey on the whales, and they literally make mincemeat of them. And he describes this in gleeful tones. So it's not that he's against hunting; he's against whale hunting in particular.
WALKER: Interesting. So for you, a good explanation is an explanation that is hard to vary while still explaining what it purports to explain. But hard to vary by what standard?
DEUTSCH: Ultimately the standard is that the conjectures that have been put forward, or which are on the horizon of being put forward, have been refuted – and what's more, refuted in such a way that it's not just that the particular theories have been refuted; their underlying assumptions have been argued away. (When I say ‘refuted’, I mean in this context ‘argued away’ – shown that nothing like that could happen.) Because if we were to find somehow a theory with that property (for example, that in reality the earth is flat and it just looks round because light travels not in straight lines, or something like that), then that would spoil all sorts of other explanations, which those theories (flat earth theories) do not address, and it looks as though they can’t address them.
Now, just because it looks as though they can't address them, doesn't mean they can't. But we can't switch to a theory that isn't a good explanation, because a theory that isn't a good explanation is obviously false. It’s like: “I'm not gonna step into the path of moving traffic on the motorway, because it looks as though I'd be mashed by the next car that's coming along.” Now, it's no good saying, “Well, you might be wrong.” Yes, of course I might be wrong. It might all be a hologram and everything. But it's not rational to make decisions on the basis of what might be true. It's rational to make decisions on the basis of what looks as though it's true in the sense that the contrary theory looks false.
WALKER: A few more questions about prediction. In his book The Precipice, the Australian philosopher Toby Ord takes a Bayesian approach to quantifying existential risks to humanity. He adds up the chance of various existential catastrophes before us in the next 100 years and reaches a rough overall estimate: the chance of an existential catastrophe befalling humanity in the next 100 years is one in six. He stresses that it's not a precise estimate, but he thinks it's the right order of magnitude. And the one in six estimate takes into account our responses to escalating risks. Question for you, David: should we use base rates like that to estimate the probability of existential risks and help prioritise which ones we address?
DEUTSCH: Basically, absolutely not. We should not. But I have to qualify that by saying that in some cases the probabilities can be known, because they are the result of good explanations. For example, we can calculate the probability that an asteroid from the asteroid belt will hit the earth in the next thousand years or something. Unfortunately, we don't know the probability that an asteroid from somewhere else – from the Oort cloud, or from somewhere outside the plane of the ecliptic, or from elsewhere in the galaxy, or from another galaxy – will hit us. So we don't know any of those probabilities; there's no way of estimating them. So there is no way of using Bayesian reasoning to address them. I should also say another caveat: because of the grip that Bayesian epistemology has on the intellectual world at the moment, people often phrase good arguments in Bayesian terms in order to give them the appearance of being strong arguments. Whereas in fact, they're already strong arguments. They don't need Bayesianism to justify them. And so what you tend to get is a mixture of good arguments, disguised as Bayesian epistemology, with bad arguments that actually use Bayesian epistemology.
Toby Ord's book – I haven't read it all, but it definitely makes this mistake of Bayesianism in both senses. That is, a lot of the book is good argument and good proposals, but some of it is just lost behind the mist of prophecy.
WALKER: Nick Bostrom's vulnerable world hypothesis is the hypothesis that there's some level of technology at which civilisation almost certainly gets destroyed unless quite extraordinary and historically unprecedented degrees of preventive policing or global governance are implemented. So in simple terms: the cost of destructive technologies falls, and we have a diverse set of motivations in the population, like there'll always be a few crazy or malevolent people, and so it's a near inevitability that those people use those destructive technologies to destroy civilisation. Are you familiar with Bostrom's vulnerable world hypothesis?
DEUTSCH: Yes, and I disagree with both the conclusion and the argument. Though again, Bostrom is a wonderful writer and a lot of the things he says are very true. If you've read his letter from the future to the present, that's the most uplifting thing I've ever read, I think, and highly optimistic. But I think the argument about technology and the dangers of technology is just wrong. So he has this analogy of the urn from which one takes white beads and black beads, and these are technological discoveries. And the white ones are the ones that are beneficial and the black ones are the ones that destroy us, and sooner or later we're gonna hit a black one, unless we take drastic steps to make sure that, first of all, we take them out more rarely and, second, that we examine them very closely before actually deploying them.
I think this is a recipe for totalitarianism, but even worse, it's precisely a recipe for civilisation to be destroyed. Because a civilisation is only going to be rescued by rapid growth of knowledge, whether or not we take totalitarian, draconian steps to try to rein it in. In terms of the analogy, the mistake is that every time we take out a white bead, we reduce the number of black beads. So the probability calculation that's implicit in that metaphor is a mistake. We become more resilient the more we know – especially fundamental knowledge, because fundamental knowledge can protect us from things that we don't yet know about, unlike specifically directed knowledge, which has less of that tendency.
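Deutsch's objection to the urn metaphor is quantitative, so it can be checked with a toy calculation (the bead counts below are arbitrary, chosen purely for illustration): in Bostrom's reading the black beads stay in the urn, while in Deutsch's reading each white bead drawn also removes a black one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def survive_fixed(white, black, draws):
    """Bostrom's reading: black beads stay in the urn.

    Returns the probability that `draws` successive draws
    (without replacement) are all white."""
    if draws == 0:
        return 1.0
    if white == 0:
        return 1.0 if black == 0 else 0.0
    return white / (white + black) * survive_fixed(white - 1, black, draws - 1)

@lru_cache(maxsize=None)
def survive_growing_knowledge(white, black, draws):
    """Deutsch's reading: each white bead drawn (beneficial knowledge)
    also removes one black bead (a danger neutralised)."""
    if draws == 0:
        return 1.0
    if white == 0:
        return 1.0 if black == 0 else 0.0
    return (white / (white + black)
            * survive_growing_knowledge(white - 1, max(black - 1, 0), draws - 1))

# Same urn, same number of draws: survival under growing knowledge
# dominates the fixed-urn case, and once the black beads are gone
# the remaining draws are safe with certainty.
print(survive_fixed(100, 10, 50))
print(survive_growing_knowledge(100, 10, 50))
```

The point of the sketch is only that the two readings of the metaphor imply different probability calculations, which is the mistake Deutsch identifies.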
Secondly, it's not true that technology has made us more vulnerable, because our species (depending on how you count species), Homo sapiens sapiens, is one of six or eight or maybe more species that had the capacity to create explanatory knowledge. We know that because things like campfires require explanatory knowledge to make. We know it from the evolution of language – language must have been in use before the structures in our throat that are adapted to producing language evolved. It must have been the language use that made them evolve; it couldn't possibly have happened the other way around. So we know that all these other species existed in the past and were capable of what we are, namely creating new explanatory knowledge, and they're all extinct except us.
And we know that our species almost went extinct at least once, but probably more than once, in our past as well. If we come nearer: every civilisation before the one we call the West, the technological civilisation, was also destroyed – some of them after 4,000 years, which I think is the longest any civilisation has ever survived (depending on where you draw the line) between its creation and the destruction of the knowledge that kept it going. That's nothing compared with the lifetime of a species, and all those species and all those civilisations have been destroyed as well. And all those species and all those civilisations could have been saved with just a little more knowledge – from our perspective, a little more knowledge about hygiene to prevent plagues, or about farming and irrigation and that kind of thing to prevent being destroyed by climate change, and so on.
So a small amount of knowledge would've saved them. On the other hand, not a single civilisation was destroyed through creating too much scientific knowledge. That's never happened. So, if you're going to be Bayesian or if you're gonna pull these beads out of a hat, then even by that standard we should be pulling them out faster, not slower now. And as for the idea that a small number of people could destroy civilisation, well, yes. But that's not the right measure. We have a small number of people who could work on things that could destroy civilisation, but we have a large number of people that could be working on countermeasures. Now, it could be argued, and I think there's a very good argument for this, that we are not doing enough of that – that is, we are not doing enough to counteract artificially caused pandemics, for example, or, as Carl Sagan put it, artificially caused meteor strikes.
And he said we shouldn't be developing the technology for fending off asteroids or comets, because it could also be used to destroy us. Now, I think that's a mistake. We should be developing that technology, and we should be developing it faster. Again, this is something that's actually going in the right direction, because, say, 20 years ago, the idea of an asteroid defence system was ridiculed. It was literally rejected just by being ridiculed. Whereas in that time we have set up a rudimentary asteroid detection system, which can detect asteroids, and there has also been research into how to fend them off when we do detect them. Now, that will not fend off asteroids coming from an unexpected direction, outside the plane of the ecliptic, or coming faster than we expect. We could be vulnerable to those just because we don't have a fleet of nuclear-powered spaceships, and we're going to be kicking ourselves if one of those heads towards us and we don't have the nuclear-powered spaceships, and it's going to take us more than – whatever it is – a year to build them. I don't know what we could do if our lives depended on it.
I guess we could do, as we did with the present pandemic, things that were thought impossible previously. But there's a level of things that we couldn't do. And if not building the fleet itself, we should at least be creating the knowledge to make the fleet of nuclear-powered spaceships, and all sorts of other things. And as I said, the most important knowledge in this respect is fundamental knowledge. And fundamental knowledge is created by things like fundamental science. Fundamental science has been held back by the phenomenon we discussed before: the academic world has been trapped in a sort of Sargasso Sea of bad assumptions, which have de-emphasised fundamental research in favour of incremental research. And the science funding system doesn't work. And the peer review system doesn't work. And it all goes together to increase the number of scientists doing things that won't save civilisation and reduce the number working on things that will.
WALKER: I wanted to ask you this before, but what practical steps would you take to improve the incentives for fundamental research over incremental research?
DEUTSCH: Well, I'm not an expert on science funding, and the reason for that is, in part, that I have tried to get away from the entire academic system from funding down to academic politics. I don't have an official position at any university, and I'm not paid by anybody to do research. I write my books and I am an honorary member of various things in Oxford University, but not a paid member. So I don't know how things are going. I only know the complaints that my colleagues raise and they're all the same complaints – that funding is highly bureaucratic at the moment. So if you have an idea for some research you want to do, you've got to submit it to a committee.
The committee consists of 20 people, none of whom are experts in fields close to yours – or even if they are, they've got a vested interest in not doing that kind of research but in directing it towards their kind of research, which is only natural. Now, even when I was a student, it used to be that research funding was not done in that way. Research funding – I don't know how the higher levels of it worked – was directed towards individual senior researchers, and they dispersed the funds and chose their own graduate students and postdocs to do the research that they thought was important. And those of them who thought that fundamental research was important didn't have particular projects that they wanted. They were looking for young people who had ideas to do fundamental research.
That was certainly true of the bosses that I had when I was a student and when I was a postdoc. Now that doesn't seem to exist anymore. Now, it's the scientific department that has its priorities and which tells the professors what to do. We had some ridiculous situations a couple of years ago. One example was we wanted to hire a postdoc to work on foundations of constructor theory. And the reason we wanted to hire him is that he was the only person in the world who had proved a particular theorem and we wanted him to use his techniques. Anyway, it was impossible to hire him because we had to advertise his position and then make a case to the relevant committee. And what did they know about it?
On the form, there wasn't a box called “constructor theory”, because constructor theory doesn't exist yet. The whole point was that we are trying to create this new field. And the reason it doesn't exist yet is that it may not exist at all. It's a fundamental conjecture. But it's a fundamental conjecture that is thought worthwhile by me and several other senior people. That should be enough to fund something. And the same thing has happened with graduate students. It's ridiculous by the way that, at least in the parts of the funding system that I see, if you are a young person wanting to do research on a particular thing in a particular department, you've got to apply for the funding in one place, typically the government or some giant charity or something like that, and to the department in a different place. So it can happen, like in the example I gave, that the department really wants you but there's no funding.
Sometimes the senior people can arrange a weird arrangement where you're funded for one thing, but you’re really going to do another thing. And I think in general, people who apply for grants nowadays are playing a game. You're gaming the system: you're trying to tick as many of the boxes as you can, and you're trying to pretend that your research is directed towards those boxes. Whereas, actually, it only satisfies the boxes incidentally, and it's really directed towards something else that couldn't get funding. I shouldn't go on and on about this, because this is only a tiny facet of the overall problem. People like Michael Nielsen and Patrick Collison have investigated the problem at a deeper level. They have much more sympathy with the low-hanging-fruit theory than it deserves, but at least they've gone into the problem in a broader sense than I have. So, you know, you should ask them.
WALKER: Yeah, there's also a great book by Donald Braben called Scientific Freedom. Have you come across that?
WALKER: I recommend that to people as well. I think Stripe Press – Patrick Collison's publishing house – recently republished it. But yeah, I'm just getting frustrated listening to you. It just feels like we're self-sabotaging as a species.
DEUTSCH: Yes. And of course the bad guys have no such restrictions.
WALKER: Right. There's an asymmetry.
DEUTSCH: Yeah. But really the natural asymmetry is the other way around. The good guys have a natural advantage in this game, but not if we hogtie ourselves.
WALKER: Exactly. A couple more questions on prediction and probability. Are you familiar with Phil Tetlock's research on forecasting?
WALKER: Okay. I'll skip over that then. Do you have any explanations as to why frequencies in certain situations can be approximated by probabilities?
DEUTSCH: Yes. It's because there is an underlying physical process for which, if we have good explanations of that process, we can use frequencies to predict probabilities. But not otherwise. And the usual case is otherwise. That is, the usual case is that the frequencies are misleading, especially when something important depends on it.
WALKER: Right. Why do you think it took people so long to come up with probability theory? Humans were gambling long before Cardano, the maths isn't particularly difficult…
DEUTSCH: Oh, I think the maths is quite difficult. I mean, if you go back to Cardano, it was only a couple of centuries before that that Europe switched from Roman numerals and began using letters for algebra. If you look at how people like Kepler expressed theories, it was amazingly cumbersome. But people like Galileo were taken down by the system as it was then, as it were. So as Bronowski says, after Galileo scientific research in Southern Europe came to a dead stop, but it continued in Northern Europe. And then we had Leibniz and Newton and Descartes and so on. They had their problems with authorities, but they were relatively free to pursue ideas, and they had fundamental ideas.
I don't think it's at all surprising that probability theory wasn't invented earlier. When probability theory was first invented by Cardano and Pascal and those people, it wasn't misused. It wasn't used in the same way as today. It was, I think, understood by all concerned that this was a theory of how to make a profit playing games, where the randomising process was always part of the explanation – of why, say, the fact that three aces have already been dealt changes the probability of the fourth ace. That is, it was based on a physical understanding of a situation in which a randomising process approximated probabilities. Nobody would have tried to use it for predicting things like whether there's going to be another continent in an uninhabited, unexplored part of the world. They wouldn't have done that, because they would only have been expecting probability to have this narrow range of uses.
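The three-aces example amounts to a one-line conditional-probability calculation (sketched here for illustration): before any cards are dealt, the chance that the top card of a standard 52-card deck is an ace is 4/52 = 1/13, but once three aces are out, only one ace remains among 49 cards.

```python
from fractions import Fraction

def prob_next_is_ace(cards_dealt: int, aces_dealt: int) -> Fraction:
    """Chance the next card off a standard 52-card deck is an ace,
    given how many cards and how many of the aces have been dealt."""
    return Fraction(4 - aces_dealt, 52 - cards_dealt)

print(prob_next_is_ace(0, 0))  # 1/13 before anything is dealt
print(prob_next_is_ace(3, 3))  # 1/49 after three aces are out
```

The calculation depends entirely on the physical setup – a well-shuffled deck with known composition – which is Deutsch's point about the narrow, physically grounded range of uses probability originally had.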
Again, I'm not a historian of ideas, but I think it was only in roughly the middle of the 20th century when people like Jaynes started advocating a much broader... No, no it was before that, there was already the beginnings of it in the 19th century… But anyway, a much broader interpretation of probability – a subjective interpretation of probability. By the way, I haven't mentioned this, these bad interpretations of probability are all subjective, including Bayesian epistemology. They're all thinking that probability is not an attribute of the pack of cards. It's an attribute of how we think about the pack of cards. And that's a terrible mistake, which Popper attacks as well. I mean, he's against subjective interpretations of anything except psychology itself.
WALKER: Speaking of subjective probabilities – and I know you are more interested in the ideas themselves than the history of ideas, but just as an aside – people often go back to Frank Ramsey when thinking about the birth of subjective probability. But I was recently reading Popper's Objective Knowledge – for people who have the book, it's page 79, the second essay, ‘Two Faces of Common Sense’ – and there's a footnote of Popper’s: “The theory is often ascribed to Frank Ramsey, but it can be found in Kant.” And I got a bit excited when I read that. I do get a bit sidetracked indulging in the history of ideas. I agree with you that I find it useful to the extent that it helps you understand the ideas themselves, and the debate between different ideas, even if only as a memory device, I suppose. And I went back to Kant's Critique of Pure Reason and sure enough, in there he talks about using bets to quantify subjective probabilities.
DEUTSCH: Oh, really? I didn't know that.
WALKER: All the way back to Kant, yeah.
DEUTSCH: It's not surprising, I guess, because he was really into subjective interpretation of knowledge in general.
WALKER: Yeah. I was like, “Wow, that's a great pickup by Karl.” So the last question I wanted to ask you, David, was really something I just started thinking while we've been talking and that is, I wonder whether you see the cultural malaise that's been afflicting science, and the careerism and incrementalism, as being at all a cause of the continued and perhaps increasing popularity of Bayesian epistemology? Because I guess with Bayesian epistemology, you can kind of keep tinkering with existing theories rather than coming up with fundamentally new ones…
DEUTSCH: So sociology is another kind of thing that I'm not particularly interested in. I don't want to psychologise or second guess why people make the mistakes that they do. I would rather think that Bayesian epistemology is just one facet of a much larger thing. So it's not that Bayesianism has caused all the trouble in the world, it's that all the trouble in the world has caused Bayesian epistemology. However, it is striking that in Bayesian epistemology it's all about increasing the authority of a theory, which in the big picture is all about increasing authority, which means “let's follow the science”, as recently people have been saying about the pandemic and so on, as if science had some authority, had a moral authority or a finality or an indisputableness about it.
And at the same time, Bayesian epistemology undervalues criticism. Everything is focused in Bayesian epistemology on increasing our credence for something. And, okay, we have a refutation that reduces it to zero. So it's a kind of structureless conception of how theories can fail. According to that theory, they fail all at once, when they are refuted by experiment. Whereas in reality, in the Popperian conception, science consists entirely of criticism – or rather of conjecture, which is a thing that we don't know how to model (theories don't have a source other than conjecture) – and the whole rich content of scientific reasoning comes in criticism, a small part of which is inventing experiments and doing them. But most criticism is structural criticism of the theory qua explanation, and most theories are rejected for being bad explanations rather than actually refuted.
And even when there is an apparent refutation, we don't take it seriously unless there's an explanation for it. The example I gave in my book is the fuss that was made when some people thought that they'd found neutrinos that travelled faster than light. And they were thinking, “Oh, relativity is refuted,” and so on. And actually, the explanation was that there was a faulty connector in some of their electronics. And that was it. That was the whole explanation for the neutrinos appearing to travel faster than light. Now this is the Duhem–Quine critique of science in general. It does not apply to Popperian epistemology, but it does apply to Bayesianism, because in Bayesianism you never know whether your credence for the integrity of the experiment should be reduced or your credence for the theory should be reduced. Bayesian epistemology doesn't give a criterion for which of those to choose. Nor does Popperian epistemology, but Popperian epistemology has an alternative account of what you should be doing, namely trying to find explanations. And then when you've found the explanations, it's not that your probability or your credence for them changes; it's that their rivals become bad explanations.
WALKER: And that’s how we make progress.
WALKER: David Deutsch, thanks so much for your time.
DEUTSCH: It's been fun. Nice chatting.