Francis Fukuyama — AGI and the Recommencement of History

Francis Fukuyama is a Stanford political scientist and the author of (among many other works) The End of History and the Last Man—arguably the most influential work in political science of the past half-century.

If “History” is driven by technology, how does Fukuyama now view biotech and AI—and their potential to usher in a new, post-human history?

These are difficult questions, but I wanted to ask Frank about topics that are both important and (at least for AI) on which he has spoken little until now.

We also get a sneak peek at his forthcoming book and discuss his ideas on bureaucracies, delegation, and state capacity.



Sponsors

  • Fundrise Innovation Fund: check out their portfolio for yourself and open an account today at fundrise.com/joe. (Carefully consider the investment material before investing, including objectives, risks, charges, and expenses. This and other information can be found in the Innovation Fund’s prospectus at Fundrise.com/Innovation. This is a paid sponsorship.)
  • 80,000 Hours: a non-profit that helps people find fulfilling careers that do good. To explore their free, in-depth resources and career guide, head to 80000hours.org/joewalker.

Transcript

JOSEPH WALKER: Well, today it’s my huge honour to be speaking with Francis Fukuyama. He truly needs no introduction, so I won’t in fact introduce him. Frank, welcome to the podcast.

FRANCIS FUKUYAMA: Well, thanks very much, Joe.

WALKER: I’ve been thinking about what are some topics I can discuss with you that are both important and which you haven’t written or spoken much about. And that’s been a challenge because you’re so prolific. But it seems to me that the topic I’d most like to discuss is this question of artificial general intelligence and the recommencement of History. And in doing that, I’d mostly like to draw on one of your lesser-known books, Our Posthuman Future.

But before we get to AI, some questions on biotech. Our Posthuman Future was obviously mostly concerned with biotech, especially genetic engineering. And as I understand it, the reason for your concern was that if biotech can alter the substrate of our human nature, then that will have downstream consequences for the political order and for liberal democracy. Because if liberal democracy is the system that most completely satisfies human nature, then if we change that human nature, liberal democracy could be undermined and we might be ushered into a posthuman history.

So the first question I wanted to ask you was: it’s been about 25 years since you wrote this book. I’m just curious whether biotech generally and genetic engineering specifically have played out in the ways that you expected they would?

FUKUYAMA: Well, in the 1990s, I was leading a study group in Washington on the impact of new technologies on politics. And we looked both at information technology and at biotechnology. The Internet had only just been privatised around that time, and social media was still 15 years in the future. And I thought at that point that biotech was likely to be more consequential. I think that may be true in the long run, but certainly the Internet has turned out to be a much more disruptive force than I imagined at the time. But I think that both of them are going to provide fairly large challenges.

I think the one coming from biotech in a way is more fundamental.

Because if you can really alter human nature, as opposed to just altering human behaviour, I think that’s going to affect things like our understanding of rights, human rights, because it may put into contestation what a human being actually is and where the boundary lies between humans and non-human beings.

WALKER: So we’ve had only, I think, three CRISPR babies so far. There was Lulu and Nana in 2018, and then Amy in 2019, all in China. Were you expecting more genetically engineered humans by this point?

FUKUYAMA: I don’t know that I was expecting more. I do think, though, that it’s going to happen. I think the barriers to doing this kind of human experimentation are still fairly high. But it just seems to me such an obvious path for somebody with a lot of money and ambition. 

And you already have a lot of Silicon Valley tech billionaires pouring a lot of money into life extension. It’s different from genetic engineering, but I think it’s also going to have huge consequences for human societies.

WALKER: I had Laura Deming on the podcast last week. She founded the Longevity Fund, and she’s now working on her own cryopreservation startup.

FUKUYAMA: Yeah, it’s funny. This is one area where I part company with almost everybody, because I think that life extension is a bad idea. It’s something I think is personally desirable because nobody wants to die. But socially it’s going to be a disaster if people start routinely living to very advanced ages, because there’s actually a good evolutionary reason why people die. If you didn’t have generational turnover, you’d never have social change. And I think that’s really kind of the future we’re facing.

WALKER: Right. Yeah. If I remember correctly, that’s the main kind of negative externality that you highlight in the book as far as longevity science is concerned. It’s that our childhood experiences shape our worldview in a very durable sense. And so you have these generational effects. And if you have a certain generation living much longer, kind of like a Joe Biden on steroids, so to speak – maybe literally, I guess – then they will entrench that particular worldview and society will become less dynamic.

FUKUYAMA: Yeah, I mean, we’ve already had this with individual leaders like Castro or Francisco Franco who lived way past what should have been the end of their political lives. But if you have a whole generation of people that simply don’t go away and don’t get replaced, I just think it’s going to be very hard for human society to advance. There’s the old joke about economists: that the field advances one funeral at a time. Sometimes you really do need an entire generation to be replaced by another one before you open yourself up to new social and political possibilities.

WALKER: But I wanted to push back. I guess you’re focusing on one specific negative possibility. But if you did a more holistic kind of cost–benefit analysis, maybe it would come out in favour of longevity.

FUKUYAMA: Well, it’s hard to see what that is. I guess you could say that as people get older, they accumulate human capital and it’s better not to have to start over. But again, I just think that a lot of that human capital becomes rigid and out of touch with the changing environment. 

I certainly feel that things around me are very different from when I was young and I keep wondering whether a lot of my attitudes are simply reflective of the period I was born in. So I really do think that…

But the trouble is that nobody wants to die so there’s no political support for passing away earlier rather than later.

WALKER: Right. So which current biotech do you view as most likely to drive transhumanism? Is it CRISPR-Cas9 or…?

FUKUYAMA: Well, the thing is that with that type of heritable gene editing, you actually affect not just the individual in question but all of that individual’s descendants. So it’s considerably more consequential than something that simply affects the behaviour of a living individual and that will die with that individual.

And that’s what got me started on thinking about this, because I really do believe that human rights are ultimately embedded in human nature, and if you can change that human nature you’re going to change the nature of rights.

WALKER: Most critiques of The End of History focused on other perceived weaknesses. But, correct me if I’m wrong, I think you always viewed the true weakness of the end of History thesis as modern natural science’s ability to alter our fundamental human nature. Because, as we said at the outset of this chat, if you alter our fundamental human nature, then whatever the highest and best political order looks like could be different to liberal democracy. 

I’m curious, do you view liberal democracy as an end in itself? Is it intrinsically good?

FUKUYAMA: Well, I think we need to unpack a few of those things. 

I didn’t say it was just genetic engineering. I think that technology in general has a big effect on the viability of different political systems. So in the 19th century, the rise of industrialism tended to concentrate power, because you needed large-scale industries and mass production, and that tended to fortify more centralised government. And the thought behind the Internet originally was that it would spread information, and therefore power, outwards, and therefore would be democratising.

In a way, it was all too successful at that. And so what it’s done is actually destroyed the basis of common empirical knowledge. I think that’s one of the big problems that democracies are facing: that there are no longer authoritative sources of basic factual information.

So that’s not genetic engineering; that’s a consequence of technology that I think a lot of people fail to recognise. And, in fact, it’s hard to imagine how democracy really works if people simply don’t agree on certain empirical facts and hold them in common in the society they’re living in.

So my statement was not that you couldn’t have an end of history if you had genetic engineering. My argument was that technology in general was the driver of history. And unless you imagine some kind of technological stasis, you wouldn’t have a stasis in political forms. 

And I think we’re already seeing that with the developments in information technology.

WALKER: The technology that could continue driving history forward didn’t have to be technology that alters human nature?

FUKUYAMA: No, I mean, all forms of technology have big social consequences. So that was the statement: that you can’t really have an end of history unless you have an end of technological development.

WALKER: Yeah. Okay, so I have some questions about natural rights. In Our Posthuman Future, you argue that human dignity is grounded in something you call ‘Factor X’, which is this kind of emergent, complex bundle of uniquely human traits. On that basis, would you have denied rights to, say, Denisovans or Neanderthals?

FUKUYAMA: Well, that poses a real problem because I think they would generally be recognised to not be human beings. And it depends on which of those aspects of Factor X you take most seriously. For example, would you allow one of these proto-humanoids to vote? You may say that we understand that they feel pain, they feel emotions, if you rip their babies from the mother’s arms it’s a terrible tragedy. And so you want to protect their rights in that respect. But do they have the intelligence and the capability of actually making political choices of the sort that we expect a democratic population to make?

We don’t allow adolescents or children to vote, because we feel that their mental capabilities are really not sufficiently developed. And if you have a proto-human race that basically doesn’t develop past the age of seven, I think you could make a very strong argument that they shouldn’t have the full set of rights that human beings have.

WALKER: So maybe they have something like Factor X−N, or Factor Y, or whatever you want to call it.

FUKUYAMA: Yeah.

WALKER: And so that would imply that it’s possible to have more than one set of natural rights at the same time.

FUKUYAMA: Well, that’s the problem that I saw with genetic engineering. Aldous Huxley talked about this already in Brave New World. You had Alphas and Betas and then the Epsilons at the bottom, and they had been deliberately engineered basically to be slaves. They didn’t have the full set of human capabilities and therefore people felt free to exploit them. And I think that you could imagine getting there in a number of different ways.

It could be that you deliberately engineer a kind of subhuman race. I think that’s not that likely. What’s more likely to happen is that elites will start separating themselves not just in terms of social status and background and education, but also genetically.

It’s actually interesting: if you look back historically, there were actually biological differences, or heritable biological differences, between social classes. Poor people, because of bad nutrition in the Middle Ages, were shorter and less mentally developed than aristocrats were. And so in a sense we’ve already experienced some version of that. And I think the simple physical differences between aristocrats and common people reinforced the belief in the need for political class differences.

And if you could actually get to that same result through biotechnology, I think you’d also have a call for different classes of rights.

WALKER: So to dwell on this a little longer, let me give you my understanding, and then you can tell me whether I’ve got it correctly. My understanding is that you weren’t concerned so much with the concept of a posthuman per se, as much as with an uneven, transhumanist transitional period. So if you could flip a switch and upgrade every human in the world into the same kind of posthuman, that would be less bad than a transhumanist rollout.

FUKUYAMA: There are a lot of different dangers mixed up in this. And I would say that probably one of the most powerful ones is simply unanticipated consequences. The current human emotional makeup is the result of hundreds of thousands of years of evolutionary experience, and we have the kinds of characteristics and faculties we do because that’s proved to be a kind of winning combination in terms of the survival of the human species. And if you deliberately try to manage that process, it just seems to me very likely that you’re going to get consequences that no one ever thought of. And dealing with those is then going to be very difficult. So that’s one category of problem.

Another is, substantively, what would you want human beings to do? Live longer, be smarter?

People would probably pick intelligence as the first category that they’d want to monkey with. But, again, that’s going to have consequences, as we were saying, for things like rights and political participation. 

So there are many ways in which this could end up affecting human societies. 

I actually think that we’re already in something of a crisis in terms of life extension. When I was on the Bioethics Council, we spent our last year talking about, essentially, gerontology. Past some point in your mid-eighties, roughly half of all people have some chronic degenerative disease. And it means that a lot of your population is actually going to live a good 10 or 20 years beyond the point at which they are fully capable human beings.

And that’s an economic cost that we’re now grappling with, but it’s likely to get bigger as time goes on. And I guess the way I thought about this was that ideally what you would like in a human lifespan, assuming that we do die, is that all of your faculties would kind of shut down at the same time. And I think it’s very unlikely that we’ll actually achieve life extension in which that happens. Certain faculties are going to shut down well before other faculties. And so you’ll have a significant portion of the population living with some form of disability.

And we don’t really like to think about that. It gets into these questions of rights. But even as we speak, somebody with severe Alzheimer’s doesn’t have the rights of a younger adult who has all their faculties. They can’t drive. They can’t make independent decisions in the way that a fully formed adult would. And these are all consequences of the life extension that we’ve already achieved as a result of our existing biomedical technology.

And so this is why it’s not a single thing I worry about. I worry actually about a lot of different consequences that we really haven’t thought through.

WALKER: Yeah. A few different threads to pick up on there. 

For longevity, the worst-case scenario is that people’s cognitive faculties tend to shut down before their other bodily faculties. It’s just not obvious to me that that’s the direction in which longevity science is driving. I don’t know enough about it.

FUKUYAMA: Well, it’s already driven to that point. So the question is: could you reverse Alzheimer’s or Parkinson’s or any of these degenerative diseases? I suspect that will eventually happen, but there could be other things that start shutting down that we’re not even aware of, so …

WALKER: Yeah. On the fiscal consequences of ageing …

FUKUYAMA: Well, we’re already in a big social security crisis. 

WALKER: Yeah, but maybe we’ll be getting AGI just in time to kind of rescue us from those.

FUKUYAMA: Well, maybe.

WALKER: Maybe. We’ll come to that. 

Briefly, back to genetic engineering. Do you view assortative mating as being on a continuum with genetic engineering or qualitatively different?

FUKUYAMA: Well, it’s qualitatively different in that the agency is exercised in different ways. So assortative mating is simply done because you meet a partner that you really like and, because you’re [of] similar social backgrounds and so forth, you end up marrying them and having children. Whereas genetic engineering is under much more direct control and can be used deliberately for social purposes. Like, if I graduate from Stanford and marry another Stanford graduate, I’m not thinking to myself, deliberately, ‘we’re trying to create a race of super-smart tech entrepreneurs’. You’re just kind of following your instincts. But the problem with genetic engineering is that it can be done deliberately with a clear social purpose in mind.

WALKER: If transhumanism does continue to progress and we do enter a world in which there are different sets of overlapping but sometimes competing natural rights, how do you adjudicate disagreements between those sets of rights? Do you then need to kind of fall back to a utilitarian framework, or how do you think about that?

FUKUYAMA: I think that it’s hard to say because it would depend exactly on how these different categories of human-like creatures turned out.

It also depends really on what you mean by utilitarian. The main charge against utilitarianism is that it doesn’t take seriously human moral agency – and the human dignity that depends on that agency – as a basis for rights and for defining who a human being is. It simply is a kind of calculus of pain and pleasure. And I think that if you actually did develop human beings with different moral capacities you would rethink rights.

Just to take another possible future scenario: one thing that seems to me a likely target of genetic engineering is something like compliance. All societies want human beings to be more compliant, to follow rules and not cause trouble. But I suspect there are good evolutionary reasons why people take risks and don’t want to follow rules, because otherwise you just live in a regimented society with no personal freedom and therefore no innovation, no risk-taking and so forth. And so do you really want to breed a willingness to take risks out of the population and replace it with a tendency to comply with rules and authority?

It’s that kind of thing that worries me. Previously we had lots of ways of trying to make people compliant. We put them in labour camps and we gave them agitprop and tried to educate them in certain ways. That really, in the end, didn’t work, because human nature itself resisted these kinds of attempts to shape behaviour. But maybe in the future we’ll have much more powerful tools.

The other area that we haven’t mentioned yet is neuropharmacology. You can produce behaviour change really directly by using drugs. And that’s something that we’re kind of in the midst of a crisis over right now. It’s not heritable, so your children don’t necessarily inherit those characteristics. But it also is a way of potentially making people more compliant or conforming with certain social rules that certain people prefer. And, again, I think that politically that can be very problematic.

WALKER: If we do have these different creatures inhabiting the Earth, with different sets of natural rights, are there any obvious ways in which liberal democracy becomes less suitable as the political order for accommodating those different rights?

FUKUYAMA: Well, yeah, obviously. I mean, both liberalism and democracy are based on a premise of human equality. And obviously if people accept the fact that there are different categories of human beings, you’re not going to have that.

WALKER: Okay, that makes sense. All right, some questions about artificial intelligence. 

So artificial intelligences have the potential to become posthumans. They might be made of silicon rather than carbon, but in a cultural sense they’ll be our descendants. When Our Posthuman Future was published in 2002 we were in an AI winter, and the prospect of human-level AI still seemed like pure science fiction. But since the book was published, the AI spring has well and truly arrived and it’s now strikingly plausible that we could have artificial general intelligence by the end of the decade. 

Before I ask you some different questions about what this could mean, first I’m just curious how you’re generally thinking about the concept of artificial general intelligence, artificial superintelligence. Are these coherent ideas to you? Do you think they’re likely to arrive soon? Just generally, how are you thinking about these questions?

FUKUYAMA: Well, the first thought is that I don’t like speculating about what the future is going to look like, because my analogy is that it’s sort of like asking Thomas Edison, ‘What are the consequences of electricity?’ What would he have foreseen about all the uses of electricity over the next hundred years? Probably almost zero.

And the one thing I’m convinced of is that – unlike blockchain or Bitcoin or crypto, which I think are a kind of useless technology – general-purpose AI is really, really big and is going to have huge consequences. It’s just very hard to know at this point exactly what direction it’s going to move us in. So that’s my first observation.

That said, I do think that the speed of change is going to be great. That’s what everybody around here seems to think. The capabilities are going to develop very rapidly, and that’s usually not good, because social institutions have always adapted to new technology in the past, but there’s always a lag. And I think the lag is going to be even bigger this time, because the technology is going to move that much more quickly.

WALKER: If we do get to a posthuman future, do you think that will be more likely to be brought about by biotech or by artificial intelligence?

FUKUYAMA: It’s hard to know, and it could be the combination of the two. I think that there’s going to probably be this gradual merger of computers and human brains that are going to operate in rather similar ways. But, again, this is one of those areas I don’t want to speculate too much about.

WALKER: Yeah, I understand. One of my worries going into this interview was that I would be inviting you to speculate too much. And as much as you dislike that kind of pointless speculation, I’m very sympathetic to you on that. So I’m going to try and ask more specific questions that maybe rely on conditional predictions.

FUKUYAMA: Yeah, that’s fine.

WALKER: Let me see. Presumably it would take a lot for you to be willing to grant Factor X to artificial intelligences, right? If Factor X is a complex, emergent bundle of uniquely human traits… Presumably, you can tell me, but maybe the most important of those traits is something like consciousness. But it would take a lot before you were willing to say that an AI had Factor X.

FUKUYAMA: Yeah, Factor X is a bundle of different characteristics which point in somewhat different directions. So, for example, I have a dog. A lot of people have dogs. I suspect that dogs have some form of consciousness. They imagine things. My dog dreams all the time. And so obviously she’s living in this mental world inside her own brain. And I think the reason that people like dogs as pets is that they’re so obviously emotional and they have very human-like emotions. They make eye contact with you, they’re happy to see you, they have preferences, they get angry at certain things. All of this I don’t think is simply anthropomorphism.

That’s the other thing that we haven’t discussed. We’ve been talking about whether you could breed humans that are less than fully human. But the other thing is: are we going to realise that animals are actually much closer to human beings than we recognise?

There is an animal rights movement, which I think doesn’t have a clear philosophical basis. But I think what we may come to understand is that actually many of those parts of Factor X that we thought were unique to human beings actually are not. And that there are many animals that actually come close to that.

What I think about my dog all the time – my wife is firmly of this opinion – is that they’re sort of like a three- or four-year-old. They have all the emotions and emotional intelligence of a three- or four-year-old, but you wouldn’t want them to vote. Basic human uses of intelligence like language are beyond a dog, but they also can suffer and they probably feel things and have some degree of consciousness. And so that’s one reason why people really don’t want to eat dogs or use them in a completely utilitarian way, because we do attribute certain of those human Factor X characteristics.

And I think that you’re probably going to have more choices like that. Creatures that have some degree of human characteristics.

One of the early ideas in artificial intelligence was the Turing test. I’ve always thought this is a ridiculous test. I mean, it basically says that if the external behaviour of an AI is not distinguishable from that of a human being, then it is a human being. And I never understood why anyone thinks this is the correct way to do it.

If that’s the case, we’ve already got artificial human beings. Chatbots, in many respects, are not distinguishable from a human interlocutor. And I think what most people would think of the AI as missing is something like consciousness and the whole emotional suite of reactions that human beings have. So a chatbot can replicate emotions – it can say ‘thank you’ or ‘please’ or ‘don’t do that’ – but you don’t get the feeling that that’s based on an actual emotional perception.

This is what kind of annoyed me about computer scientists, people like Marvin Minsky: they really do believe that the human brain is just a wet computer, and that when the computer gets to be the same scale as the human brain, it’s going to develop consciousness and emotions and all this stuff.

That seems to me one of the biggest unproven assumptions that there is. We don’t know what the origin of consciousness is.

WALKER: Another way to approach this problem in terms of the sphere of politics is just to say that even if it is kind of like John Searle’s Chinese room and there’s no subjective experience happening in the AI, if the AI is able to convince people that it is conscious, on some level that’s all that really matters to politics. I mean, one straw in the wind: a couple of years ago we had that Google engineer, Blake Lemoine, who was convinced that one of Google’s models was conscious. And that was a very early model. But you can imagine, years in the future, when these models are much better and much more persuasive, that even if they aren’t truly conscious, they still might be able to demand and then successfully obtain political rights.

FUKUYAMA: Well, yeah, maybe. Maybe.

WALKER: Would AIs need to be thymotic before you were willing to grant them liberal rights?

FUKUYAMA: Well, I guess it depends what you mean by that. Like, can they feel anger? I would say that you could certainly program them in such a way that they could replicate angry behaviour, but that’s not the same as actually saying that they’re feeling anger and that’s what’s motivating them to act in certain ways.

WALKER: Right.

FUKUYAMA: So, again, it’s that Turing test problem – that you actually don’t know what’s going on on the inside of these machines, even though the behaviour is really indistinguishable from that of a real human being.

WALKER: Yeah.

FUKUYAMA: A lot of my thinking about this was actually shaped by a friend of mine who wrote this book, Nonzero.

WALKER: Robert Wright.

FUKUYAMA: Robert Wright. In that book, he has this very interesting discussion about consciousness and what it means to be a human being. And he said that there’s a philosophical question that nobody has really answered, which is: why do we have subjective feelings at all? 

He makes this point, for example: why do we feel pain and fear pain? You put your hand over an open flame, it hurts, and you withdraw your hand. But you could program a robot to do exactly that, right? You have a heat sensor, and the heat sensor says, oh, this is a temperature that’s too high for my hand to survive, so I’m going to pull the hand away – without actually having to have this internal emotional state of pain that makes you draw away. And his argument was that it’s not clear why those subjective emotional states exist.

Again, if all you’re interested in is the external behaviour of the being, they don’t have to. You can program creatures that will respond to all sorts of things as if they had these internal states.

And I think that’s kind of crucial for believing that an AI is actually a human being – some awareness of the fact that they actually have this kind of internal subjective feeling. And I have no idea how you’d know that. I’m certain that you can get to the point where they can pretend to have those. But whether they actually would or not is, I think, still an open question.

WALKER: If you had a society of non-thymotic, Spock-like AIs, as a first approximation what would the best political order for them look like? Would it be something like market-oriented authoritarianism?

FUKUYAMA: Yeah, that’s the trouble with a lot of these tech billionaires. It’s characteristic of a certain kind of intelligence that a lot of them have – a lot of mathematicians and people who are very good at a certain kind of reasoning have it too. They feel that that’s the most important human characteristic. I mean, all these guys – Peter Thiel and Marc Andreessen and Elon Musk – are all edging towards a belief in a kind of technocratic aristocracy: that there are just certain human beings that are smarter and better at doing things than other human beings, and that they should have some kind of intrinsic right to rule other people.

And so some of them have actually become overtly anti-democratic – they think you really ought to delegate decision-making to this kind of superior class of individuals. And I think that that is not good for democracy as we understand it today.

WALKER: But if the society was just composed of synthetic AIs and they were non-thymotic, so we’re taking that as an assumption.

FUKUYAMA: Well, that’s never going to happen.

WALKER: Okay, something more realistic then. So imagine the next few decades unfold and the scaling laws continue to hold for LLMs, AI progress continues, we get better and better models, those models start to become agentic, and there are now sort of millions of Blake Lemoines in the world who are convinced that these artificial intelligences are conscious and that they do deserve political rights. In that kind of scenario, are there any general intuitions you have or predictions that you’re comfortable making about what the political order starts to look like?

FUKUYAMA: No, I’m not comfortable making any of those predictions. I just think it’s so hard to imagine.

WALKER: Do you think we should even be thinking about this or is it kind of pointless?

FUKUYAMA: You should think about it. I think that the big issue is going to be one of power, right? Are you actually going to delegate real power to these machines to actually make decisions that have big consequences for living human beings?

We already delegate decision-making power to a lot of computers, in terms of processing information and telling us what’s going on, having sensors that feed back information to us. But are you actually going to delegate to them the power to make life-and-death decisions that will directly affect other human beings?

I suspect we probably will, and we’ll get there at some point. But that, I think, is going to pose a much sharper problem for society.

WALKER: Say we had a superintelligence today and it wasn’t public knowledge yet. I don’t know. Sam Altman gives you, Frank Fukuyama, a kind of preview into OpenAI’s new superintelligence. And you ask it whether it could come up with a better political order than liberal democracy for today’s world. How likely is it that you think it would be able to do that?

FUKUYAMA: Well, I just don’t think that it would be likely to get that right. It might iterate enough that over time it could work its way towards something that would help. But the thing is – and I think this is a very common mistake that mathematically minded people have about the nature of intelligence – political intelligence is very different from mathematical intelligence because it’s completely contextual.

WALKER: Right.

FUKUYAMA: To really be intelligent about politics and the way things are going to work out in the political world, you have to have a lot of knowledge about the environment. And this is something I teach my students. We basically teach comparative politics. Things that are doable in China are not doable in India. And in fact they may be doable in certain parts of India, but not in other parts. Some states may be able to get away with certain things and others not. And how it affects different classes of people, how it’s affected by traditions and culture and this sort of thing is all part of what political intelligence has to draw on. 

And it also gets down to this lived experience. I think lived experience is used wrongly in many cases to say that there are certain experiences that are so unique that if you haven’t actually had them you don’t have a right to even talk about them. But I do think that the best political leaders are ones that have certain lived experiences that allow them to empathise with people or understand pitfalls in the way that people are thinking or acting. And for a computer to actually extract that from its environment would, it seems to me, be very difficult: to give proper weightings to all of these experiences and then put them together in a way that would actually produce a certain order.

And then the other problem is that nobody’s going to want to give up power. So supposing the computer comes back and says, well, actually I think you ought to delegate power to smart machines or smart oligarchs. How are people going to take that?

WALKER: Yeah. So it’s not possible to kind of reason your way to a political order a priori?

FUKUYAMA: No. I mean, I actually believe that evolution is the way that most things came about, that you have a lot of trial and error. Certain things work and other things don’t. And that’s how we got to be human beings the way we are now. And I think that’s also the way any future political system is going to evolve.

WALKER: Yeah. And there might be a certain kind of logic to the mechanisms, but you can’t really predict how they’ll unfold.

FUKUYAMA: Right.

WALKER: So there’s an interesting passage in Our Posthuman Future where you talk about how Asian cultures might be more permissive of biotech developments; there are fewer inhibitions on biotech in most Asian cultures. The reason is that many Asian cultures lack a transcendental religious tradition like Christianity, and so there isn’t a dichotomy between humans and non-humans; there’s more of a continuum. We see many examples of this in Chinese practices in the past around organ harvesting of prisoners, eugenics, even in the …

FUKUYAMA: Abortion is much more common. Even infanticide in some cases.

WALKER: The fact that the three CRISPR babies emerged from China.

FUKUYAMA: Yeah. I mean, most Asian cultures don’t have anything like Factor X, a concept that there’s some core set of human characteristics that sharply distinguishes human from non-human. That has some good consequences. In both Daoism and Shinto, for example, they have a belief that spirits inhabit all sorts of things. They inhabit desks and chairs and temples and computer chips, so the spiritual world really extends to basically all material objects in the world. And it means also that in those cultures you actually have more respect for the non-human world or it’s less obviously there to be exploited than it would be in a Judeo-Christian culture where there’s a special creation of man and a sharp distinction between human and non-human.

So that’s why I think there are just going to be fewer inhibitions about this kind of biotech in Asia than in the West.

WALKER: Does that also imply that if we do get powerful agentic AIs, Asian cultures are more likely to be the first, or Asian countries are more likely to be the first countries to grant them some form of rights?

FUKUYAMA: Well, maybe. It’s possible. Who knows?

WALKER: Have you seen the 2023 film The Creator?

FUKUYAMA: No.

WALKER: Okay. It’s a really good film. It’s probably the most compelling depiction of this scenario. So I think it’s set in the year 2055. Superintelligent AI has detonated a nuclear weapon over Los Angeles, and the Western world rallies together to annihilate and exterminate the artificial intelligences. And then this bloc called New Asia, which is basically composed of all the Asian countries, kind of offers safe haven to the AIs.

Anyway, I realised that there was this kind of connection to your point about the human–non-human continuum in Asian cultures. It’s a good film. 

Okay, a couple of questions about AI, China and the end of history. And now we can kind of bring our horizons a little closer in and just think about the next, maybe, five to 10 years.

If we think about large language models and the way they’re being used currently, in terms of how authoritarian regimes might make use of them or be affected by them: on the one hand, you have concerns that AI will help authoritarian regimes entrench their power by providing them with tools of propaganda or surveillance. There’s this other view that’s been gaining currency recently – I think our mutual friend Tyler Cowen has been writing about this – which is that LLMs are imbued with Western, but specifically American, ways of thinking in very subtle ways, and that this is going to represent a vector into China and a kind of victory of American soft power. Because even the best Chinese models – like, apparently, DeepSeek – are largely based on OpenAI’s models.

So I’m curious how you think about this and whether it’s more likely that AI will advance or set back the cause of liberal democracy in China.

FUKUYAMA: Okay, well, that’s precisely the kind of question that I can’t answer.

WALKER: Is there a way we could break it into smaller chunks?

FUKUYAMA: Well, I mean, look, these AIs are trained on certain bodies of writing and presumably they will pick up cultural habits that are embodied in particular literatures and so forth. And so I would imagine that if there’s a Western bias to existing models, it won’t be the case with Chinese models when they train them on Chinese material. 

So I’m not too worried – or I guess worried is the wrong word – I’m not hopeful that we will undermine China by these hidden biases in our AI models that we’re exporting.

WALKER: Okay. Some questions about the future of work and its thymotic origins. So first some questions about megalothymia and then some questions about isothymia.

A lot of reasonable people predict that by the end of the century, as a result of AI advances, we might have machines that can perfectly substitute for human labour and might even make human labour redundant. And then obviously you might have a world in which people are relying on something like universal basic income, and they have all of their material needs met, but humans are no longer doing anything economically valuable.

Assume that world does arrive. It strikes me that one of the virtues of liberal democracy, which you’ve written about, is that it provides outlets for megalothymia. And perhaps the most important outlet is entrepreneurship. That’s for two reasons: first, megalothymotic individuals generate wealth for society; but, second, it keeps them out of potentially disruptive activities in the realms of politics and the military.

I’m curious, if humans no longer do any of the economically valuable work in society, what do the new megalothymotic outlets look like?

FUKUYAMA: Well, look, we’re already living in that world you described. I mean, you have people like Donald Trump and Elon Musk who are intervening in politics because of their megalothymia. And so we’re already seeing the terrible consequences of that.

I think in terms of work, this is why universal basic income will never take off. People don’t just have material needs for resources to stay alive and pursue their hobbies. They really feel that their dignity comes from work. And, in fact, there’s significant resistance to being on the government dole precisely because people are proud. They say, ‘I am a worker. I do things that are useful. My salary reflects my use to society. And if you’re just paying me for existing, that doesn’t make me feel good as a human being.’ That’s a pretty universal kind of reaction, and that’s why I think universal basic income is just never going to be a tenable idea.

WALKER: Just as a complete sidebar on Trump and Musk, I’m curious, given their mutual megalothymia, how you make sense of the kind of equilibrium they’ve managed to reach in their personal relationship.

FUKUYAMA: Well, I don’t think it’s an equilibrium. I’ve always thought that Trump is going to drop Musk the moment that he becomes a political liability, and that’s probably going to happen sooner rather than later.

WALKER: Okay. I was at … I don’t think I’m breaching any Chatham House rules here, but I can always edit this out if I am … But I was at a dinner in San Francisco recently, and one of the attendees had been at some kind of fundraising event at Mar-a-Lago in the last month or so. And apparently Trump has this trick where he goes around before the dinner and asks each of the guests, ‘Who do you think is more successful, me or Elon?’

FUKUYAMA: Oh, yeah?

WALKER: Someone made the mistake of saying, ‘Well, Mr President, I think you’re more successful in politics, but Elon’s more successful in business.’ And that person was not invited back.

FUKUYAMA: Yeah, yeah.

WALKER: Okay. So politics might start to become a more important outlet for megalothymia again in a world in which humans do less work.

FUKUYAMA: Yeah. Well, I don’t think we’re going to get to that point. But the nature of work is definitely going to change. It’s going to be less onerous and more mental and so forth.

WALKER: In the second edition of Kojeve’s Introduction to the Reading of Hegel, there’s this footnote where he talks about the trip he took to Japan in 1959. And he thought of Japanese society as, in some sense, being at the end of history, because after Shogun Hideyoshi in the 16th century Japan basically hadn’t suffered any civil wars or invasions of the homeland islands. And he reflected on the rituals and traditions of the aristocratic class and viewed them as engaging in a form of pure snobbery, where they engaged in these kinds of elaborate formal activities like flower arranging and Noh theatre. And he viewed that as kind of an outlet for their megalothymia.

That would seem to suggest that, in a world in which humans have become economically redundant, those kinds of activities take on greater importance. Like maybe everyone’s just trying to climb Mount Everest or working on very elaborate projects. It doesn’t seem like a very plausible vision of …

FUKUYAMA: Well, but I think a lot of that has already arrived. I mean, how many people and how many billions of dollars are involved in the video game industry?

WALKER: I see.

FUKUYAMA: I mean, we’re already creating these artificial worlds that have really no consequences for human beings except that they’re an outlet for people’s thymos and ambitions and so forth.

WALKER: These kind of Robert Nozick experience machines – that’s only going to continue?

FUKUYAMA: Yeah. I mean, I think that’s one of the problems in our politics: a significant part of the American population lives in this fantasy online world where reality doesn’t really intrude very much. You have as many lives as you want and you never have to pay consequences for risks and so forth.

WALKER: Some people have held up the lifestyles of the landed gentry in the early modern era as a model of what people’s lives might look like if or when artificial intelligence makes people economically redundant. And I’m curious what you make of that, because I can actually see a kind of disanalogy there: the reason those aristocratic lifestyles worked was that those people were ‘masters’ in the Hegelian sense. And so they had that sense of recognition, and they were able to engage in aristocratic activities and not do much economic work. It doesn’t seem like you could apply that same model to a human future in which people weren’t themselves masters.

FUKUYAMA: Yeah, maybe. Again, I just resist this premise that you’re going to get to this point where people don’t do economically useful work and that they then have time to do other things. Because, first of all, human desire does not stop expanding, right? I mean, what makes a billionaire get up in the morning? If you’ve got a billion dollars, you could just kind of lie in bed all day, you could fantasise, you could play video games, right? But they’re all out there doing stuff. And I think that the reason is that there’s no level of material wealth at which human beings say, ‘okay, that’s enough, I’ve got everything, I’m not going to do anything anymore.’ It just doesn’t exist.

WALKER: Yeah. I think in this world it makes sense that people would still be doing projects, broadly construed, but perhaps not work that generates an income, if AIs can be doing it much cheaper.

FUKUYAMA: Well, again, we’re already living in that kind of a world. Not that AIs are taking over, but think about a lot of the products that are sold today that are really not in the least bit necessary for any kind of human life – people are still involved in making them. And I just think that human desires really don’t have any particular limit. Once you reach a certain level of material wealth, you’re still going to pile on more objectives and desires that presume you’re already at that level, but you still want more. I just don’t see how…

What I do think is going to happen is that a lot of activity will go into things that are not traditionally thought of as producing material goods. But you can only drive so many cars, eat so many chocolate cookies.

WALKER: Right. Okay, some random questions to finish off. So Hegel thought that there would still be wars at the end of history, but Kojeve thought that there wouldn’t be. How do you make sense of Kojeve’s view?

FUKUYAMA: Well, I think that Hegel is probably more correct. I mean, if you take thymos seriously … In fact, I think I said in one of the last chapters of The End of History that there’s nothing like the risk of violent death in a military struggle to make people feel fully human. And I think that’s going to continue to be the case. I think that a lot of the political turmoil that we see around us now is really driven by that kind of desire. People want struggle for its own sake. They want risk and danger. And if their lives are so contented and peaceful that they don’t have it, they’ll create it for themselves.

So why are all these kids at Stanford and Columbia and Harvard and other places camping out on behalf of the Palestinians? Right? I mean, why do they care about the Palestinians? What they want is to be seen as people that are struggling for justice, because that’s what a noble human being does. And I think that desire is really not going to go away. And that’s why the ultimate struggle for justice is really one where you actually do risk your life. And I think that’s the sense that Hegel had about why war wasn’t going to disappear.

WALKER: Yeah, but how do you make sense of Kojeve’s view?

FUKUYAMA: I don’t know.

WALKER: Right.

FUKUYAMA: I don't know.

WALKER: Yeah, interesting. So what more would have to happen for the singular example of China to falsify the end of history thesis?

FUKUYAMA: Well, I guess if in another 50 years they were the leading power in the way that the United States was, and everybody wanted to emulate them, then I would say there’s no …  the model, the liberal democratic model … And we’re halfway there, I must say. It’s not just China’s success, it’s our failure, the failure of our democracy to actually produce kind of reasonable outcomes.

WALKER: Do you see any signs of the democratic recession reversing?

FUKUYAMA: I wouldn’t say that I see signs of it reversing. I think it’s always possible that people can still exercise agency and make different choices. And so every election that goes by can actually go in very different directions. So I think it’s important for people to remember that and the fact that they can reverse this democratic decline if they struggle for it.

WALKER: So I don’t know, but I imagine you’re one of the most misunderstood living public intellectuals, and that would pertain mostly to the end of history thesis. I think people misunderstand it in two basic ways. First, they think that you were predicting that liberal democracy would spread to every corner of the globe over the short term. And second, and obviously more egregiously, some people misinterpreted you as saying that there would be an end to history in the sense of significant events. I imagine you probably get emails every day telling you how you were wrong, and you probably get the same kinds of questions every time you do a public event or a lecture. If that premise is correct, I’m curious what that experience has been like for you and what you’ve learnt from it.

FUKUYAMA: Well, I’ve largely learnt to shut it out. And you’re right, it never goes away. I guess my feeling is that there are enough people that have actually read my books. In fact, there was a meme going around on Bluesky with a little checklist of things you had to do to basically … one of them was apologise to Francis Fukuyama for never having read his book. And so I think there are people that actually did read the book and kind of understand that it’s a little bit more complicated.

WALKER: Has it changed the way in which you go about being a public intellectual?

FUKUYAMA: Well, not particularly. I think that one of the big pitfalls of certain public intellectuals is that they have a big success, they get this big dopamine hit early on in their careers and they then constantly want to replicate it. And so they are forced to then take positions that are more and more extreme and ridiculous.

I think Dinesh D’Souza, this right-wing commentator, is like that. He was the editor of The Dartmouth Review when he was still in college, and he made a name for himself by staking out these very conservative positions at a very liberal college. And he’s been trying to replicate that feeling ever since, by saying things that are yet more outrageous and yet more right-wing than the last thing he said.

And I just think that’s a trap that I never wanted to fall into. I’m perfectly happy to be regarded as boring because I’m taking actually a reasonable position rather than trying to shock people. I’m not trying to replicate the excitement everybody felt when the original End of History was published, because it’ll just never happen. And I’m not going to try to get there.

WALKER: Is there any other advice you’d have for public intellectuals who want sustainable careers? Because you say that you didn’t want to force yourself to replicate the success and the dopamine hit of The End of History. But your later books, like The Origins of Political Order and Political Order and Political Decay, are brilliant, in some sense as significant as The End of History, if not as well known. So what other advice would you have for public intellectuals who want to be sustainable?

FUKUYAMA: Well, if you want to be a public intellectual, you have to begin by being an intellectual, meaning that you actually have to think about things and you have to do research and take information and process that information and then write about it. It’s interesting. When I published Trust, my second book, one of the reviewers said something that I thought was right and sort of revealing. He said that, yeah, this is a pretty good book. Most people that have a big hit like The End of History, that’s all they do in their careers. They don’t go on to write a second book that’s interesting.

And I think that for me, actually the success of The End of History was liberating in the sense that I could actually write about whatever I wanted at that point. And I could write a serious book, about a very different topic, and people would still pay attention to it. Not trying to replicate the success of the first book, but the first book actually freed me to be able to write about whatever I wanted. And I think that’s been a great advantage. I never had to get tenure.

I actually don’t like the idea of tenure because I think that it forces younger academics to toe the line in terms of the kind of intellectual risks and the issues that they study, because it really narrows you to a very small subdiscipline within your bigger discipline. And I think I’ve tried to avoid that, but yet do things that are intellectually serious.

WALKER: Final question. What, if any, books are you working on at the moment?

FUKUYAMA: Well, I’ve just written a little bit of an autobiographical memoir. There are actually threads that run through a lot of my writing that might not be obvious but that connect different books I’ve written. One of them is something we’ve talked about already, which is the idea of thymos – it starts in The End of History and continues through my most recent book, about liberalism [Liberalism and Its Discontents]. The other is bureaucracy: why I actually spend a lot of time worrying about the state and the nature of the state, and how that’s related to a bunch of different ideas I’ve had over the course of my career.

So, for example, I’ve got a chapter in the autobiography on delegation, because at a certain point I began to realise that delegation within a hierarchy is one of the most difficult and most central questions in management, in public affairs, in law. And it’s something we’re still fighting about, right? Republicans believe that we’ve delegated too much power to the state, and people on the left think we haven’t delegated enough.

So there are a lot of things like that that aren’t obvious to a lot of people. So, anyhow, this book is going to try to tie those threads together in a more comprehensive way.

WALKER: When will this be published?

FUKUYAMA: I have a contract. I’m going to have to revise it, but probably in the next year or so.

WALKER: Okay, so your stuff on delegation – that theme isn’t readily apparent or organised in your existing published body of work.

FUKUYAMA: Right.

WALKER: Could you share your most interesting takes on delegation?

FUKUYAMA: Well, I’ll start with an anecdote, which is why I really started thinking seriously about this. In the late 1990s, we were in the midst of the first dotcom boom, and the whole of Silicon Valley was arguing in favour of flat organisation. They were very opposed to hierarchies of various sorts. And there was a feeling back then, in this very libertarian moment, that everything could be organised on the basis of horizontal coordination. The idea was that the Internet was going to reduce the transaction costs involved in this kind of coordination, and nobody would actually have to listen to a boss in the future. And that is not true. You actually need hierarchy, because you can’t coordinate everything on a horizontal basis.

So in any event, this being the zeitgeist in the 1990s, I was still working at the RAND Corporation, and the last study I ever wrote for them was called ‘The “Virtual Corporation” and Army Organisation’. Because it seemed to me and a colleague of mine, Abe Shulsky, that the army is the quintessential hierarchical organisation. And here was Silicon Valley organising itself in a much flatter way, without these hierarchies. Could the army learn something from Silicon Valley?

So we went around – this was sponsored by the Training and Doctrine Command in the army – to a lot of different military bases and talked to a lot of officers. And we realised that Silicon Valley didn’t actually have anything to teach the army, because the army understood this already. After Vietnam they had done a lot of soul-searching about why that war went so badly, and they began to change their doctrine. They borrowed a lot of it from the Wehrmacht, from German military practice. There is a tradition in the German army called Auftragstaktik, which is basically a doctrine about delegation. It says that if you’re going to be a successful military organisation, the senior leaders, the generals, have to give only the broadest strategic direction, and you have to delegate the maximum amount of authority to the lowest possible command level. Because in a war, the people who actually know what’s happening are the second lieutenants on the ground who are trying to assault this village, not the general 100 kilometres back at headquarters.

And then I began to realise that in corporate organisation that’s true as well. The Toyota just-in-time manufacturing system: every worker on the assembly line had a cord and if they saw a production problem or a defect, they pulled the cord and stopped the entire assembly line. If you think about what that means, you’re delegating the ability to stop the entire output of the factory to every single individual low-level factory worker. And that requires trust, but it also requires this huge amount of delegation.

And it’s for the same reason the army was delegating authority: it’s the lowest levels of the organisation that actually know what’s really going on.

WALKER: Right, so it’s like a Hayekian point.

FUKUYAMA: Yeah. The article that I always have my students read is one Hayek wrote in 1945 called ‘The Use of Knowledge in Society’. He said that in any economy, 99% of the useful information is local in nature. It’s not something that’s known centrally; it’s known by you in your particular local context. And that’s why he said a market economy is going to work better than a centrally planned one: because price setting in a market economy is based on local buyers and sellers who haggle and set prices, and thereby allocate resources efficiently. And so all of a sudden I said to myself, yeah, this really is important: in any organisation you need the hierarchy, because you need the generals to set the broad targets, but most dysfunctional organisations are ones that don’t delegate enough authority.

And the army really fixed itself. I mean, the US army has become the best fighting force in the world. The IDF in Israel had a similar kind of doctrine and that’s one of the reasons that they got so good at warfare. That’s why the Ukrainians have been beating the Russians, because they absorbed a lot of this American doctrine about delegation, basically.

That’s where this all started. The only thing I wrote systematically about delegation was actually that RAND study on army organisation. But it shows up in other things that I’ve written.

So lately I’ve been taking on the whole stupid DOGE effort, this ridiculous effort of Elon Musk’s to combat waste, fraud and abuse in the government. He’s so wrong about so many things. He repeats this conservative mantra that the bureaucracy has too much autonomy, that it makes all sorts of decisions that are left-wing, out of touch with the American people and outside the control of democratically elected leaders. And that’s 180 degrees wrong. The problem with the bureaucracy in this country, and in most other countries, is that it is too controlled by the political authorities. There are too many rules that bureaucrats feel they have to follow. If you want to fix the bureaucracy and make it more efficient, as Elon Musk claims he wants to, you have to delegate more authority to bureaucrats. You have to let them use their judgement. You don’t try to control them through thousands of pages of detailed rules and regulations for buying an office desk or a computer or something like that.

So I think this delegation issue plays out in contemporary American politics, as well as in military affairs and as well as in factory organisation. All sorts of places.

WALKER: Right. That’s super interesting. Yeah. For me, as an Australian outsider looking in, the DOGE effort is very much symptomatic of the kind of Lockean American political culture that doesn’t trust government and wants to place more strictures around it. Starving the beast, so to speak.

FUKUYAMA: Yeah. And as a result, they kind of get the opposite of what they intended. 

WALKER: Because the bureaucracy becomes so risk-averse that it breaks down the feedback loop between policy design and policy implementation.

FUKUYAMA: Exactly, yeah. And that’s what I teach my students here in this policy program that I run.

WALKER: Right. I’m not sure whether you’ve quantified this concept of delegation, but is there a correlation between bureaucracies that delegate more effectively and state capacity?

FUKUYAMA: Yes. I just won an award last year, a kind of lifetime achievement award in public administration. This is a field that Americans really don’t pay any attention to, because they don’t like bureaucracies. But I think one of the reasons I won the award was that I published an article back in 2013 that asked exactly that question: what’s the appropriate amount of authority to delegate in a bureaucracy? And I said it’s determined by the capacity of the people to whom you’re delegating. So the staff of the Federal Reserve are all PhD economists, and you can safely delegate a lot of authority to them. Whereas the TSA, the Transportation Security Administration, is full of high school graduates, and you’re not going to delegate a lot of authority to them to make complex judgements about ‘Does this person look like a terrorist?’ or ‘Am I going to stop this person?’ You just give them simple rules to follow. So that’s how it plays out, the relationship between capacity and delegation.

WALKER: It’s super interesting. I’m excited to read more about this in the memoir. 

Frank, it’s been an honour. Thank you so much for joining me.

FUKUYAMA: Yeah. Well, thank you for talking to me.