Notes on 'Our Posthuman Future', by Francis Fukuyama
My notes on: Fukuyama, Francis. (2002). Our Posthuman Future.
[This note may contain errors and inaccuracies. It was mostly written just for myself, in preparation for a podcast conversation with Francis Fukuyama.]
In Our Posthuman Future, Francis Fukuyama examines the moral and political consequences of four biotech developments:
- Greater scientific knowledge about the genetic causes of human behaviour and intelligence
- Neuropharmacology
- Longevity science
- Genetic engineering
Of greatest concern to Fukuyama is the fourth development: genetic engineering on a species level.
Fukuyama is concerned about genetic engineering specifically and biotechnology generally for the following reason. Biotechnology has the potential to change our human nature and thereby usher us into a “posthuman” stage of history. Since human nature shapes and constrains the horizons of political order, biotechnology has the potential to undermine liberal democracy and re-open the question of what the highest and best political system looks like.
Below are five broad questions the book raised for me.

(1) What does transhumanism imply for liberal democracy?
It’s helpful to briefly recapitulate Fukuyama’s thesis from his book The End of History and the Last Man (1992). Fukuyama uses the term “History” in the Hegelian-Marxist sense. That is, History is a single and coherent evolutionary process, taking into account all peoples in all eras. History can “end” in the sense that the countries of the world converge on a political system that somehow represents the highest and best form of human social and political evolution.
Judging this system would require a “transhistorical standard”, otherwise one could be fooled into thinking a period of passing stability was History’s true endpoint. For Fukuyama, as for Hegel, an understanding of human nature provides the basis for this standard. History could end if we found a system that was, in Alexandre Kojève’s words, “completely satisfying” to humanity’s deepest longings. For Fukuyama, this system is (probably) liberal democracy.
The unfolding of History, according to Fukuyama, is not a random process, like throwing darts at a dartboard. Rather, it obeys a certain logic, even if it’s not deterministic. Two motors give History its logic.
The first motor is that of economic modernisation. This motor kicked into gear in the early modern era, with the invention of the scientific method. It gives History a progressive direction, since know-how and scientific understanding are cumulative—we can’t easily unlearn what we’ve already discovered. Military competition forces countries to jump on the scientific bandwagon, and economic self-interest propels it forward too.
The economic motor of history corresponds to “desire” and “reason” in Plato’s tripartite soul.
But it’s insufficient for explaining why countries choose democracy. After all, the French and American revolutions occurred when neither country had industrialised and just as industrialisation was beginning in Britain.
Enter the second motor of History: the “struggle for recognition”.
Plato called the third part of the human soul “thymos”, which translates to something like “spiritedness”. Thymos drives our desire for recognition and is associated with emotions like anger, shame and pride. For Hegel, recognition was the deepest and most consequential human need.
The two motors of History are not like twin jet engines. My metaphor for thinking about them is a two-stage rocket: you need both motors to reach orbit, but they work in sequence. Economic modernisation often comes first and makes the choosing of liberal democracy more likely, especially via education and the raising of people’s consciousness, but it doesn’t necessitate liberal democracy. The thymotic motor provides the missing link between liberal economics and liberal politics.
In contrast to the “master-slave” relationship that defined aristocratic societies, liberal democracy provides equal recognition through granting rights to citizens. In satisfying people’s desire for recognition, liberal democracy is therefore “completely satisfying”—or at least better than any alternative political order so far conceived.
Neither Fukuyama nor Hegel predicted that liberal democracy would sweep every corner of the globe in the short term. There would still be detours and backward steps, sometimes with catastrophic consequences, as in the twentieth century. But the possibilities of political order would always be limited by human nature. Socialism and fascism ultimately failed because they were trying to pound the square peg of human nature into the round hole of social planning.
Okay, back to biotech.
While most critiques of The End of History have focused on other perceived weaknesses, Fukuyama always regarded the “true weakness” of his thesis as the radical uncertainty of modern natural science. Here he is in a 1999 essay for The National Interest:
"History cannot come to an end as long as modern natural science has no end; and we are on the brink of new developments in science that will, in essence, abolish what Alexandre Kojève called "mankind as such.""
In particular, modern natural science has the potential to alter the very substrate of human nature.
If liberal democracy is downstream of human nature, then changing human nature—for example, via genetic engineering—could have consequences for the fitness of the liberal democratic order. If it did, History wouldn’t restart so much as a new kind of history would commence: a posthuman history.
If the goodness of liberal democracy derives from human nature, it’s not clear to me why preserving liberal democracy should be an end in itself.
Does Fukuyama think that liberal democracy is intrinsically good? Or does he merely view it as instrumentally good? I think these questions are partly cleared up by Fukuyama's notion of rights.
(2) Can there be more than one set of natural rights?
(i) Natural rights
Rights are the very basis of the liberal democratic order. As mentioned above, rights are how the state provides its citizens with all-important “recognition”.
But where do rights come from?
In principle, rights derive from three possible sources:
- God (divine rights)
- Nature (natural rights)
- People, that is, law and social custom (positivistic rights)
Let’s briefly consider each source.
Fukuyama skips over divine rights for the same reasons recognised by Locke: “it is extremely difficult to achieve political consensus on issues involving religion.” (p111)
He then considers "positivistic rights". This approach says that rights are granted by law or custom—essentially they are whatever people say they are.
But the flaw in this approach, Fukuyama says, is that "there are no positive rights that are also universal." (p113) The result of accepting positive rights would be cultural relativism.
So the weakness of the positivistic approach necessitates an effort to resurrect a concept of natural rights.
But such an effort immediately runs into a problem: isn’t the idea of natural rights (i.e. that human rights can be based on human nature) a kind of naturalistic fallacy?
Fukuyama distinguishes two strands within this “naturalistic fallacy” critique:
- The is/ought distinction: a statement of moral obligation cannot be derived from an empirical observation about the natural world.
- Even if we could derive an "ought" from an "is", the natural world is often ugly, amoral or indeed immoral.
To address the first criticism, Fukuyama argues (following Alasdair MacIntyre) that the "is" and the "ought" are bridged by concepts like "wanting, needing, desiring, pleasure, happiness, health". (See pages 115-125 for a longer discussion.)
To the second criticism, Fukuyama answers that, "while there is no simple translation of human nature into human rights, the passage from one to the other is ultimately mediated by the rational discussion of human ends—that is, by philosophy." (p125) Philosophy, according to Fukuyama, thus allows us to establish a hierarchy of rights and to rule out certain kinds of political order, like tyranny, as unjust.
(ii) Factor X
Fukuyama distinguishes (a) contingent and accidental characteristics like one's race, gender, culture, talents, wealth from (b) a human essence shared by all people. He calls this human essence "Factor X".
Factor X is implied by the demand for equal recognition that has been the dominant passion of modernity: if all humans are equal in dignity, then they must all share some quality X.
Factor X is an emergent, complex bundle of human characteristics:
“Factor X cannot be reduced to the possession of moral choice, or reason, or language, or sociability, or sentience, or emotions, or consciousness, or any other quality that has been put forth as a ground for human dignity. It is all of these qualities coming together in a human whole that make up Factor X. Every member of the human species possesses a genetic endowment that allows him or her to become a whole human being, an endowment that distinguishes a human in essence from other types of creatures." (p171)
If Factor X maps to some irreducible package of human traits, then definitionally it would seem that only humans can possess it.
On that basis, would Fukuyama have denied natural rights to Denisovans or Neanderthals? Or would they have received separate sets of natural rights rooted in, say, a Factor X-n or a Factor Y?
Similarly, will various gradations of transhumans have different sets of perhaps overlapping rights?
These questions aren’t “gotchas” directed at Fukuyama—in fact, they’re precisely what he worries about.
(3) Does it matter if there are different sets of natural rights?
Possibly the key passage in the entire book appears on page 155:
"As usual, the philosopher Friedrich Nietzsche was much more clear-eyed than anyone else in understanding the consequences of modern natural science and the abandonment of the concept of human dignity. Nietzsche had the great insight to see that, on the one hand, once the clear red line around the whole of humanity could no longer be drawn, the way would be paved for a return to a much more hierarchical ordering of society. If there is a continuum of gradations between human and nonhuman, there is a continuum within the type human as well. This would inevitably mean the liberation of the strong from the constraints that a belief in either God or Nature had placed on them. On the other hand, it would lead the rest of mankind to demand health and safety as the only possible goods, since all the higher goals that had once been set for them were now debunked. In the words of Nietzsche's Zarathustra, "One has one's little pleasure for the day and one's little pleasure for the night: but one has a regard for health. 'We have invented happiness,' say the last men, and they blink." Indeed, both the return of hierarchy and the egalitarian demand for health, safety, and relief of suffering might all go hand in hand if the rulers of the future could provide the masses with enough of the "little poisons" they demanded."
In this passage, Fukuyama takes a swipe at utilitarians like Peter Singer who abandon the concept of human dignity in pursuit of relieving suffering and who would, in his view, ruin egalitarianism in the process.
The posthuman world Fukuyama worries about is one inhabited by different species of posthuman “creatures” and in which there are different sets of rights as a result. It is a world that is “far more hierarchical and competitive than the one that currently exists, and full of social conflict”. (p218)
This criticism can be seen as resting upon a (reasonable) claim that transhumanist technologies will diffuse unevenly. While an “ontological leap” from nonhuman to human might have occurred in the evolutionary process (p161), evolution plays out over a sufficiently large timescale as to render human nature “stable”. Technology, on the other hand, can be deployed far more quickly, and the deployment of technologies like gene editing will be shaped by which families and which countries can afford them.
If there is more than one set of rights, what is the relationship or hierarchy between the sets? Should we not nest them within a broader utilitarian framework to adjudicate between them when they're in conflict?
And, to return to my very first question, is liberal democracy still the right political order for accommodating multiple sets of rights? Fukuyama would answer that it obviously is not, since liberal democracy is predicated on equal rights. But is there no way to recover some version of liberal democracy in this scenario?
(4) Will AIs have rights? And what would a human/AI political order look like?
Our Posthuman Future was published in 2002, when we were in an AI winter and the prospect of human-level AI still seemed like pure science fiction. Fukuyama doesn’t contemplate synthetic “posthumans”; his only concern is how biotech (especially germline therapy) might be used to enhance biological humans.
Since the book was published, we’ve seen only three CRISPR-Cas9 babies, all from China: Lulu and Nana in 2018 and Amy in 2019. (More on the interesting question of China below.) The science is impressive and consequential, but we’re still a long way from genetic engineering being practised at any significant scale across the population as a whole. Indeed, there is a de facto global clinical moratorium on CRISPR babies.
Meanwhile, the AI spring has well and truly arrived, and the possibility of artificial general intelligence (AGI) by the end of the decade is strikingly plausible.
Artificial intelligences have the potential to become a kind of posthuman. They may be made of silicon rather than carbon, but they will be our descendants culturally speaking.
AI receives a brief discussion in Our Posthuman Future in a subsection on consciousness, in particular from pages 167-168 (emphasis added):
"[M]any of the researchers in the field of artificial intelligence sidestep the question of consciousness by in effect changing the subject. They assume that the brain is simply a highly complex type of organic computer that can be identified by its external characteristics. The well-known Turing test asserts that if a machine can perform a cognitive task such as carrying on a conversation in a way that from the outside is indistinguishable from similar activities carried out by a human being, then it is indistinguishable on the inside as well. Why this should be an adequate test of human mentality is a mystery, for the machine will obviously not have any subjective awareness of what it is doing, or feelings about its activities. This doesn't prevent such authors as Hans Moravec and Ray Kurzweil from predicting that machines, once they reach a requisite level of complexity, will possess human attributes like consciousness as well. If they are right, this will have important consequences for our notions of human dignity, because it will have been conclusively proven that human beings are essentially nothing more than complicated machines that can be made out of silicon and transistors as easily as carbon and neurons.
The likelihood that this will happen seems very remote, however, not so much because machines will never duplicate human intelligence—I suspect they will probably be able to come very close in this regard—but rather because it is impossible to see how they will come to acquire human emotions. It is the stuff of science fiction for an android, robot, or computer to suddenly start experiencing emotions like fear, hope, even sexual desire, but no one has come remotely close to positing how this might come about. The problem is not simply that, like the rest of consciousness, no one understands what emotions are ontologically; no one understands why they came to exist in human biology."
Fukuyama seems to think that consciousness is a very important quality within the Factor X bundle. Is it the most important? On page 169, he seems to indicate that it is:
"[I]t is the distinctive human gamut of emotions that produces human purposes, goals, objectives, wants, needs, desires, fears, aversions, and the like and hence is the source of human values. While many would list human reason and human moral choice as the most important unique human characteristics that give our species dignity, I would argue that possession of the full human emotional gamut is at least as important, if not more so."
What would it take to ascribe consciousness to AIs? And would that be enough to give them Factor X, given that Factor X is a complex, emergent whole that includes other traits?
Will it even matter if powerful AIs don’t really have Factor X? Won’t some people impute Factor X to AIs regardless—and isn’t that all that would matter politically, in order for liberal rights to be extended to AIs?
Could AIs be conscious but not thymotic—Spock-like? What would the correct political order look like in that case?
(5) Will Asian cultures be the first to recognise AI rights?
In a later chapter that discusses the political control of biotechnology, Fukuyama says the following on Asian cultures:
"There is a continuum of views in the world today concerning the ethicality of certain types of biotechnology and particularly genetic manipulation. At the most restrictive end of this continuum are Germany and other countries in continental Europe that, for historical reasons already mentioned, have been very reluctant to move too far down this road. Continental Europe has also been home to the world's strongest environmental movements, which as a whole have been quite hostile to biotechnology in its various forms.
At the other end of the spectrum are a number of countries in Asia, which for historical and cultural reasons have not been nearly as concerned with the ethical dimension of biotechnology. Much of Asia, for example, lacks religion per se as it is understood in the West—that is, as a system of revealed belief that originates from a transcendental deity. The dominant ethical system in China, Confucianism, lacks any concept of God; folk religions like Taoism and Shinto are animistic and invest both animals and inanimate objects with spiritual qualities; and Buddhism conflates human and natural creation into a single seamless cosmos. Asian traditions such as Buddhism, Taoism, and Shinto tend not to make as sharp an ethical distinction between mankind and the rest of natural creation as does Christianity. That these traditions perceive a continuity between human and nonhuman nature has allowed them to be, as Frans de Waal points out, more sympathetic to nonhuman animals. But it also implies a somewhat lower degree of regard for the sanctity of human life. Consequently, practices such as abortion and infanticide (particularly female infanticide) have been widespread in many parts of Asia. The Chinese government has permitted practices abhorrent in the West, such as the harvesting of organs from executed prisoners, and passed a eugenics law as recently as 1995." (p192)
And then this:
"If there is any region of the world that is likely to opt out of an emerging consensus on the regulation of biotechnology, it is Asia. A number of Asian countries either are not democracies or lack strong domestic constituencies opposed to certain types of biotechnology on moral grounds. Asian countries like Singapore and South Korea have the research infrastructure to compete in biomedicine, and strong economic incentives to gain market share in biotechnology at the expense of Europe and North America. In the future, biotechnology may become an important fracture line in world politics." (p193)
If AIs become sufficiently powerful, will Asian cultures be the first to recognise their rights? (The 2023 film The Creator offers a compelling depiction of a scenario like this.)
If this happened in China, would it set back the march of liberal democracy (by undermining the basis for equal recognition)? Or would it perhaps hasten the end of History in that country?