
Stephen Wolfram — Constructing the Computational Paradigm (#148)


Stephen Wolfram is a physicist, computer scientist and businessman. He is the founder and CEO of Wolfram Research, the creator of Mathematica and Wolfram Alpha, and the author of A New Kind of Science.

Transcript

JOE WALKER: Stephen Wolfram, welcome to the podcast.

STEPHEN WOLFRAM: Thank you.

WALKER: Stephen, I'd like to start with business and biographical stuff, and then we'll wend our way into computational science as well as its implications for history, technology and artificial intelligence.

So you're one of those rare figures who's both a brilliant scientist and a brilliant entrepreneur. And kind of like Galileo, you've both made important discoveries and created the tools necessary for making those discoveries (your version of the telescope, of course, being Mathematica and Wolfram Language). Do you view your scientific ability and your entrepreneurial ability as largely separate, or is there some common underlying factor or factors? Because not many great scientists are also great entrepreneurs and vice versa. So what is your fundamental theory of Stephen Wolfram?

WOLFRAM: Well, thinking about things and trying to understand the principles of them is something that has proven very valuable to me both in science and in life in general, and in business and so on. And so it always surprises me that people who think deeply in one area tend to not keep the thinking apparatus engaged when they're confronted with some other area. And I suppose if I have any useful skill in this, it's to keep the thinking apparatus engaged when confronted with practical problems in the world, as well as when confronted with theoretical questions in science and so on.

Mostly I see the kinds of things I do in trying to understand strategy in science and strategy in business as very much the same kind of thing.

Maybe I have one attribute that is a little bit different, which is that I'm interested in people — which is something quite useful if you're going to run companies for a long time. Because otherwise the people just drive you crazy.

But if you are actually interested in people and find the development of people to be a satisfying thing in its own right, that's something that is relevant on the business side, less relevant on the scientific side.

WALKER: Perhaps a third attribute of yours I might add to the mix is optimism.

WOLFRAM: Well, yes. Right. There's a lot that one doesn't see from the inside, so to speak. And I think it is true that when one embarks, as I have done many times in my life, on large projects, very ambitious projects, I don't see them as large or ambitious from the inside. I just see them as a thing I can do next. I don't see them as risky. I just see them as things that can be done. And yes, from the outside, it will look like lots of risk taking, lots of outrageous optimism. From the inside, it's just like "that's the path to go next".

I think for me, it's often one has optimism, but one also says, "What could possibly go wrong?" And having had experience of sort of the things that happen and so on, it is useful to me as a kind of backstop to optimism to always be also thinking about what could possibly go wrong.

And it actually probably fuels the optimism because by the time you realise the worst thing that could go wrong is this and it's not so bad, then it makes one more emboldened to go forward and try and do that next thing that seemed impossible.

WALKER: I read this anecdote about how you learned the word "yes" before you'd learned the word "no" — and that felt kind of representative of the optimism.

WOLFRAM: Yes, that's something my parents kind of would trot out from time to time as an explanation for my sort of later activities.

[6:29] WALKER: That's great. So typically the earliest one would get a PhD is the age of 25. You got yours at the age of 20. So somewhere in your education you compressed five years before the age of 20. How much of that is accounted for just by raw talent? And how much was some hack you learned that other people with less horsepower could adopt as well?

WOLFRAM: Interesting question. First hack was you can learn things just by reading books. That's very old fashioned; these days it would be going to the web or something. But the idea that if you want to learn something, you just go read books about it, you don't have to sit in a class and be told about it, so to speak, that was perhaps hack number one.

Hack number two was: you can invent your own questions. When you're trying to learn about something, yes, there are exercises in the back of the book, but there are things that you might wonder about and, by golly, you can go off and explore those things. And often if you'd asked me, "Can you actually answer this question? Is this an answerable question?" I would have said, "I don't know, but I'm going to try and do it."

And somebody else might have said, "You can't go and ask that question. You're a 14-year-old kid and that's a question that nobody's asked before and that's not a thing one could do as a 14-year-old kid." But I didn't really know that. And so I got into the habit of if I have a question that I'm curious about, I will try and figure out the answer, whether it's something that would be in the back of the book, whether it's something that's been asked before or not.

So for me, those were two important hacks.

Another one is trying to get to the point where you truly understand things. There's a level of understanding that is perfectly sufficient to get an A in the class. (Well, when I was in school, they didn't quite do grades like that, but same idea.)

But can one really get to the point where one can explain the thing to oneself and feel like one really understands it? That was a thing that I progressively really found very satisfying and got increasingly into. And once you really understand it, sort of from the foundations, it's much easier to build up a tall tower than if you kind of roughly know what's going on, but it's all a bit on sand, so to speak.

So those are perhaps three things that I figured out.

Now, I never saw myself as having that much raw talent. In retrospect, I went to top schools in England, and they ranked kids in class, and I was often the top kid and so on. So in retrospect, yes, I was, at least by the ranking systems of the time, a top operative, so to speak.

But that was not my self image. I mean, it was just like, I do the things I find interesting. Perhaps it was a good thing that I didn't say, "Oh, I'm the top kid, and therefore I can do this and that." I was just like, "I can do these things, and they're fun, and that's what I'm going to do."

For me, there's a certain drive to do things and to do things that I think are interesting regardless of the ambient feedback of "yes, that's a good thing to do, or no, that isn't a good thing to do". I suppose I'm perhaps obstinate in that respect, in that there are things I want to do and I'm going to try and figure out how to do them. And that's been a trait that I suppose I've had all my life.

I know when I was a kid, I would do projects, I would get very excited about some particular thing, and I would go and explore that thing, and then I would get to the point I wanted to get to in that thing, and I would move on to the next thing. And I'm a little bit shocked to realise that I kind of still do that now, half a century later.

WALKER: If you hadn't developed an interest in computation back in the early '80s, would Mathematica, or something like Mathematica, have been developed or how long would it have taken? So how contingent was that on you?

WOLFRAM: Well, I think that being able to have a computational assistant for doing mathematical kinds of things — there were already sort of experimental systems; I built my first system back in 1979 for doing this kind of thing — that was a thing that was bubbling forward.

The part that I think is probably more contingent is the principled structure of this kind of symbolic programming idea, the idea that you can represent things in the world in terms of symbolic expressions and transformations for symbolic expressions and so on. I think those things, in retrospect I realise, were more singular and more specific than I might have expected. I mean, in a sense when I set myself this problem back in 1979 of, "Okay, I'm going to try and build this broad computational system. What should its foundations be?"

And at the time I'd spent time doing natural science and physics and so on, and my model for how to think about that was it's like you go try to find the atoms, the quarks, what are the fundamental components from which you can build up computation? And I went back and looked at mathematical logic and understood those kinds of things and tried to learn from that, "How can I find the right primitives for thinking about computation?"

As it turned out, I was either lucky or something, that I got a pretty good idea about what those primitives should be. And I'm not sure that would have been quite something that would have happened the same way.

The precursors of that date back to things like the idea of combinators from 1920, which had existed in the world and been ignored for a really long time. That's probably a particular thing.
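The combinator idea is easy to make concrete. As a minimal sketch (plain Python closures, chosen here for illustration rather than Wolfram Language), Schönfinkel's S and K combinators from 1920 are by themselves enough to build other functions, including the identity:

```python
# Schönfinkel's S and K combinators (1920), written as curried Python
# closures. An illustrative sketch, not any particular library's API.
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)
K = lambda x: lambda y: x                     # K x y = x

# The identity function needs no new primitive: I = S K K,
# since S K K x = K x (K x) = x.
I = S(K)(K)

assert I(42) == 42                  # identity built purely from S and K
assert K("keep")("drop") == "keep"  # K returns its first argument
```

That two primitives suffice to express arbitrary computation is exactly the "find the right atoms" question Wolfram describes setting himself in 1979.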

The other thing is the — you might call it ambition or vision — to say, "We're going to try and describe the whole world computationally." That was a thing that I steadily got into. And that was a thing that I think was not really something other people had in mind and have had in mind in the intervening years. I think it's something where perhaps it's just too big a project. It's like, can you really conceptualise a project that big? You mentioned optimism. That's probably a necessary trait if one's going to imagine that one can try to do a project that's kind of that grand, that big. When would people have decided that one could do something that big? It's not clear that happens for a really long time.

The thing that I've been interested in very recent times, looking at LLMs and AI and so on, and realising that they're showing us that there's kind of a semantic grammar of language, there's ways that language is put together to have meaning and so on. And I'm realising, well, Aristotle did a little bit on this back 2000 years ago and managed to come up with logic — and that was a pretty good idea. And we could have come up with sort of more general formalisations of the world anytime in the last couple of thousand years, but nobody got around to doing it. And I've done little pieces of that — maybe not so little, but done pieces of that — and hopefully we'll get to do more of that. But it's sort of shocking that in 2000 years, although it was something that could have been thought about, people just didn't get oriented to think about it.

And it's something where I suppose I've been fortunate in my life that I've worked on a lot of things that were things I wanted to work on which were not quite in the mainstream of what people were thinking about, and they worked out pretty well. And so that means that the next time I'm thinking, "Well, I'm going to think about something that's sort of outside the mainstream," I kind of think it's going to go okay. After you've done a few steps of that, you kind of feel that, yes, you feel a bit empowered to say, "Yeah, I'm thinking about it, I think it makes sense to me, I'm really going to do something with this," rather than, "Look, how can it possibly make sense? Look at all these other people who say it doesn't make sense or say that isn't the direction things should go in."

So I was lucky because I started doing science when I was pretty young, and I was in an area that was very active at the time — particle physics — and I was able to make a little bit of progress. And that gave me good, positive, personal feedback to get emboldened to try and do bigger, more difficult, further outside the box kinds of things.

[16:03] WALKER: Andy Matuschak, a previous guest of the podcast, and Michael Nielsen have this article called 'How Can We Develop Transformative Tools for Thought?' — "tools for thought" being tools for augmenting human intelligence. Examples include writing, language, computers, music, Mathematica. And in the essay they assert a general principle that good tools for thought arise mostly as a byproduct of doing original work on serious problems. Tools for thought tend either to be created by the people doing that work or people working very closely to them.

Just out of curiosity — and I assume you probably agree with that principle — can you think of any historical counterexamples, where someone has actually set out primarily to create a new tool for thought without being connected with an original problem?

WOLFRAM: Well, this kind of goes along with when entrepreneurs ask me how should they invent the product for their company, and the first thing I say is, "Invent a product you actually want; it's hard to invent the product for the imaginary consumer that isn't like you." And so in my own efforts, certainly the things I built as tools, I'm typically user number one. I'm the persona that I most want the tool to be able to serve.

I would say that when it comes to tools for thinking about things and the extent to which they are disembodied from... there are things where people invent abstract ideas that don't have application to the world. I mean, a famous example in mathematics is transfinite numbers which were invented, they're interesting, they have all kinds of structure, and it's been 100 and something years, and every so often I say, "Finally, I'm going to find a use for transfinite numbers." And it doesn't usually work out.

Another thing to understand is if you look at the progress of science, there are often experimental tools that get created — whether it's telescopes, microscopes, whatever. I think that the invention of the telescope — how that was plugged into things one would think about, it wasn't really. It was invented as a piece of invention for practical uses. And then the fact that it turned out to be this thing that unlocked the discoveries of the moons of Jupiter — it came after the creation of the tool.

But in terms of ways that people have of thinking about things, a big example that you mentioned is language which is our apparatus for taking the thoughts that swirl around in our brains and packaging them in such a way that we can communicate them elsewhere and even play them back to ourselves. And I think that's something which, by its very nature, emerges from the thoughts that are happening inside.

I suppose another example of this would be when it comes to things like artificial languages, where people say, "Let's invent a language that will lead us to think in certain ways."

I'm thinking through historical examples here.

There are definitely, in science, there's definitely plenty of things where the experimental tool has been invented independent of people thinking about how it will be used. Just as a matter of "well, this is the next thing we can measure" type thing, without kind of thinking, "Well, if we measure this, then it will fit into our whole framework of thinking about things."

In terms of the history of tools of thought at a more abstract level, there are not so many. I mean, you listed off many of the major ones. It's sort of interesting, if you take mathematics as an example, which is in a sense an organising tool for thinking about things: what was mathematics invented for? What were the ideas of numbers and things like that invented for?

They were invented for the practical running of cities in ancient Babylon. They were invented as a way of abstracting life to the point where it could be organised to be governed and so on.

But things like numbers, and sort of the early times of mathematics, were not invented, I think, so much as a way of extending our ability to think about things; they were invented as a sort of practical tool for taking things which were going on anyway and making them kind of more, I don't know, governable and organised or something.

So perhaps that's an example of a place where the notion of this abstraction kind of happened for very practical reasons. Now, that's why by the time we get to 1687 and Isaac Newton and his Principia Mathematica, its full title is, in English, Mathematical Principles of Natural Philosophy. So in his time, he was the one who got to make this connection between this already-built tool of thought, in a sense — of mathematics — and, in his case, things in the natural world.

So it's a good prompt for thinking about how one imagines the history of intellectual development for our species. And it's always a thing where, as we fill in a certain amount of abstraction, a certain set of principles, we get to put another level on the kind of tower of intellectual things that we can think about. Each new kind of paradigm that we invent lets us build a bit taller so we can potentially get to the next paradigm.

WALKER: So you raised the more general claim about the history of ideas, namely that technology often precedes science.

WOLFRAM: Yes.

WALKER: I'm going to take that as an opportunity for a quick digression, and then I'll come back tools for thought.

[23:23] So if it is indeed true that technology often precedes science, and in fact, in A New Kind of Science, you raise the question, "Well, why wasn't the computational paradigm stumbled upon earlier?" And the answer you give is that the technology of computing that had coalesced by the time you were looking at these problems was an important enabling factor for two reasons. Firstly, there were certain experiments that could only be done with that contemporaneous technology. And secondly, being exposed to practical computing helped you to develop your intuition about computational science.

So if that's true, does it worry you that some technology currently inconceivable to us could in future provide a basis for an even more fundamental kind of science?

WOLFRAM: Well, I'm not sure it worries me. I think that seems kind of exciting. One of the things I've come to realise from studying recent things about fundamental physics is we perceive the universe the way we perceive the universe to be, because of who we are. That is, our sensory apparatus for perceiving the world is what gives us the laws of physics that we have. So we talk about space and time and so on, and the fact that we imagine that there is a notion of a state of things in space at successive moments in time is a consequence of the fact that as we look around we see 100 metres away or something, and the time it takes light to come from 100 metres away to us is really short compared to the time it takes us to realise what we saw.

That's why we kind of imagine what happens in space everywhere at successive moments in time. We might be built differently. We might be a different physical size relative to the speed of light and so on, and we would have a different view of how the universe is put together. So I think that the way that we have of understanding science, understanding the universe, is deeply dependent on the way we are as perceivers of the universe. And as we advance, maybe have more sensory apparatus, we build more tools that allow us to sense aspects of the universe we couldn't sense before, we necessarily will start to think differently about how the universe works. And I think it's kind of a thing that goes hand in hand — both the way that we kind of expand our existence, and the things that we can perceive about how the universe works, these are going to sort of expand together.

Whether it was the telescope, the microscope, the electronic amplifier. These all led to different views of what existed in our universe that we were simply unaware of before that. And I think it is likely, in fact certain, in fact necessarily the case, that as we extend our sensory domain we will end up sampling aspects of what I call the ruliad — this kind of limit of all possible computational processes, this kind of universe of all possible universes. We'll inevitably sample more of that.

That there can be sort of different pieces of science, different pieces of the story of how the universe works, that we will get to — I find that inevitably the case.

Now have we reached the bottom of the whole thing? With the ruliad and with all these ideas about fundamental physics, are we at the end of that particular path of understanding what it's all made of?

I kind of think yes. I think we got to the bottom. Now there's a long way from the bottom to where we are, and there are undoubtedly many kinds of science that one could expect to build that live in that intervening layer between what's at the bottom... What's at the bottom is both deeply abstract and in some sense it's necessary that it works that way — but it also doesn't tell us that much. That it tells us that that is the foundation is interesting. I think it's great. But also, to be able to say things about what we could possibly sense in the world, there's layers of what we have to figure out to know that.

One of the things that comes out of this idea of computational irreducibility is this realisation that there's an infinite number of pockets of computational reducibility, an infinite number of places where we don't just have to say, "Oh, we just have to wait for the computation to take its course to know what's going to happen," where we can say, "We discovered something. We know how to jump ahead in this particular case." There's an infinite collection of those places where we can discover something that allows us to jump ahead, that allows us to make an invention, that allows us to make a new kind of scientific law or something.
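One way to see the contrast, sketched here in Python for illustration (Wolfram's own work uses cellular automata in Wolfram Language): rule 90, an elementary cellular automaton, is linear, pure XOR, so it contains exactly such a pocket of reducibility. After 2^n steps, each cell is just the XOR of the two cells 2^n away at step 0, so you can jump ahead without simulating the intermediate steps. Rule 30, by contrast, has no known shortcut of this kind.

```python
def ca_step(cells, rule):
    """One step of an elementary cellular automaton (zero boundaries)."""
    n = len(cells)
    def nbhd(i):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        return (left << 2) | (cells[i] << 1) | right
    return [(rule >> nbhd(i)) & 1 for i in range(n)]

def evolve(cells, rule, steps):
    """Run the automaton step by step -- the irreducible way."""
    for _ in range(steps):
        cells = ca_step(cells, rule)
    return cells

def rule90_jump_pow2(cells, t):
    """Jump t = 2^n steps of rule 90 in one pass: a pocket of reducibility.
    Because rule 90 is linear (left XOR right), the state after 2^n steps
    is the initial state XORed with itself shifted 2^n cells each way."""
    n = len(cells)
    return [(cells[i - t] if i - t >= 0 else 0) ^
            (cells[i + t] if i + t < n else 0) for i in range(n)]

init = [0] * 64
init[32] = 1  # single seed cell

# The shortcut agrees with step-by-step simulation: we "jumped ahead".
assert evolve(init, 90, 8) == rule90_jump_pow2(init, 8)

# For rule 30, no analogous formula is known: to get step 8, run all 8 steps.
row = evolve(init, 30, 8)
```

The jump works for any power-of-two step count because a linear rule composes with itself by doubling the shift; that rule 30 admits no comparable closed form, as far as anyone knows, is the irreducibility being described.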

That's the place where there's an endless frontier of things to do, and that's a place where there will undoubtedly be kinds of science that are developed by looking at different kinds of pockets of reducibility than the ones we have seen so far.

Maybe I'm wrong, but I think we, for better or worse, hit the bottom in terms of understanding what the ultimate machine code of how things are put together is. And in a sense, as I say, it's a very abstract, general, inevitable kind of structure. But the real richness of our experience comes in the layers that exist above that.

[29:45] WALKER: So coming back to tools for thought, we were talking about how when one is designing such tools, it's important to have some kind of tangible contact with the problem that the tool is designed to solve. And one of the things I find interesting is that Mathematica's functionality has expanded over the years into domains where you don't actually have domain expertise. For example, you bundle libraries with detailed primitives for earth science modelling. I was curious what incites projects like that, and how is geological domain expertise imported into Wolfram Research?

WOLFRAM: Well, one of the things that's been great about my job and my life, so to speak, is that I am sort of forced to have some kind of fairly deep understanding of a very broad range of areas. You asked me about life hacks that have let me do the interesting stuff I've been able to do. One of them is I've been forced to understand at a foundational level a very broad range of areas, because what I've discovered is that if you're trying to do language design, trying to make the best tool for people to be able to do different kinds of things, the way you have to do that is by drilling down to get to the primitives of what has to be done in that area. And that requires that you have a deep understanding of that area.

Within the company we have a very eclectic collection of people with lots of different backgrounds and we always have this internal database about who knows what, where people talk about the different things they know about. And so, okay, we need somebody who knows about geology. Alright, let's go to the "who knows what" database; there's probably somebody who knows about geology.

But beyond that, we've been lucky enough to have a very broad spectrum of top research people around the world use our tools. And so it's always been an interesting thing when we need to know about some very specialised thing, it's like, "Well, who's the world expert in this?" It's often very satisfying to discover that they've been longtime users of our technology. But then we contact them and say, "Hey, can you help us understand this?"

I have noticed — particularly in building Wolfram Alpha, which has particularly wide reach in terms of the different domains that we're dealing with — that one of the things about setting up computational knowledge there has been that unless there's an expert involved in the process, you'll never get it right, because there's always that extra little "Oh, but everybody in this area knows this."

It's kind of like you see things that happen in the world where like in the tech industry or something, people will be saying, "Oh, can you believe this or that happened," or, "Can you believe this company turned out to be a sham," or whatever else. It's kind of like, "Look, I'm in this industry, everybody in this industry has certain intuition about what's going on and kind of knows how this works." But if you're outside of that world, it's kind of difficult to develop that intuition.

One of the things I think I've gotten better at over the years is, first of all, knowing that this is a thing in an area: that there's some kind of intuition, some way of thinking in that area, and that I don't know it if it's something I've never been exposed to. And I've kind of learnt that you have to sort of feel your way around, talking to people in that area, trying to get a feeling for how people think in that area. And usually you can get to be able to do that, but you have to realise that it's a thing you have to do, and it's not self-evident how the area works, even if you know sort of the core facts of the area.

[33:31] WALKER: That's interesting. So Wolfram Research was founded in 1987. It's been a private company ever since. What are the factors that have gone into the decision to remain private? Because I think you toyed with the idea of taking it public back in '91?

WOLFRAM: Yes, that's right. Yes. So, look, people sometimes say everybody has a boss. But I don't.

And that's great, because it means that I can get to do things where I take responsibility for what I do, and often it works out and that's great, but sometimes it doesn't. And I think that the sort of freedom to do what one thinks one should do, rather than having a responsibility to other people to say, "Hey, look, you put all your money into this..." I would feel, in that case, a responsibility to the folks who put all the money in, or the public or whoever else it is, to not lose their money or whatever. It's been very nice to have the freedom to just be able to do the things that I think we should do.

It's a complicated thing because our company is about 800 people right now, and that is a size that I kind of like. I think maybe we could expand to maybe twice that. If you say, "Well, would you like a company that has 50,000 employees?" The answer is, "Not particularly." That's a ship that's a lot harder to turn.

If you have a company that has only 50 employees, that has the problem that there's a lot of single points of failure, there's a lot of things where there just isn't a structure that lets you get certain kinds of things done. And also, as the thing gets bigger, the thing I notice is it's like, okay, we could have a big sort of tentacle that does this or that thing, which I don't really know about and I don't really care about. And it's like, okay, that could be a thing, we could do that. And it's necessary for the practicalities of the world that you have things that are commercially successful, and sometimes those involve pieces that you don't personally that much care about.

But for me, I view the company as a sort of machine for turning ideas that I have into real things. And there's a certain ergonomic aspect of a certain kind of character and size of company that works well for that. And having something where a lot of pieces of it, I don't really know how they work and what they're doing, it's like, well, you can do that. It might be a commercially viable thing to do. But it's not something that intellectually and personally I find as satisfying.

Another thing that tends to happen is there are always these trends. People say, "Oh, yeah, you're a successful tech company. You should go public. You should do this, you should do that." There's some trend about how it should work.

My own point of view has been that I try to think about what makes sense and I try to do what makes sense, and it often isn't what the trend is. People say, "That's really stupid, everybody's doing this, you should be doing that." It's like, "Well..." I just try and do the things that I do. And that's worked out pretty well for me, and that's given me sort of an attitude that I should just do the things I think I should do, rather than following the "Go public" or "Do an ICO" (these days, or a few years ago) or make up tokens or do something. They're just all these different trends. And I suppose at some level I've been a very simple-minded and conservative business person, because we just make a product that people find useful, they buy it, and that allows us to go on and make new products and improve the thing we have.

For me personally, the greatest satisfaction comes from making a great thing. There are people I know and respect where the thing they most want to do is make the most money. I don't particularly care about that. I will always choose the door that says "do the more interesting thing". Of course, one has to be practical and one only gets to go on doing interesting things if one has a viable commercial enterprise. But for me, the goal is to do the interesting things, and that's the value function that I'm applying to the things that I do.

WALKER: Where do you think the threshold is in terms of headcount, when the ship gets too difficult to turn?

WOLFRAM: That's an interesting question.

WALKER: I guess maybe it depends on the network structure of the company as well.

WOLFRAM: A little bit. It depends on what you're doing, because there are some things that just require a lot of people. What I've done in our company is automate everything.

In other words, our company, if you look at the technology we're producing, should be 10,000 people: in terms of technology produced per unit time, it should take at least 10,000 people to be able to do that. But it isn't, because the product that we make is something that automates the making of things. And we very much applied that ourselves, and that's been why it's been possible. So in a sense, our company is full of great people and some great AIs, in effect, that let us make things and leverage a smaller number of people to be able to do those kinds of things.

I would say that the size that we're at I can pretty much know everything that we're doing at some level. If you say, "What's the list of all the projects in the company?" Okay, it's a sort of joke at our company that there are more projects than there are people in our company. But that's some number of hundreds of projects. And I can have some idea what's going on, on all of those things.

If you get to a structure where there are actually 5000 projects going on, then that's not something where a CEO can kind of really keep all of that in mind. And that, I think, becomes a more difficult — a different — kind of enterprise to manage.

I think it also depends on what the culture of the company is; that's an important aspect of these things. For our company, it's been interesting. It's had multiple phases. The company has been around for 36 years now, but it's had various phases. I mean, at the beginning it was all about Mathematica, developing Mathematica — very successful product right out of the starting gate. And then I went off and spent a decade working on basic science. I mean, I was still CEOing the company, but my priority at the time was: keep the company stable, I want to go off and do this basic science. The company kind of grew up very nicely during that period of time — as in, it went from a company that was probably not so well organised to a company that was quite well organised, even if it wasn't as innovative during that period of time.

Then I came back in 2002 from that, and I'm like, "Okay, now we have to really push to innovate." And by that point, the company had a pretty good stable structure. It took some effort to say, "Okay, now we're going to innovate." People were saying, "Why are we doing this? We have a good business going. We're doing the things that we've already been doing." But it took some force of will to turn the company into something where there could be innovation. And then what's developed very nicely is that people recognise that we do new things, and people recognise that the new things usually work out. And so, for example, when LLMs came on the scene, I very quickly said, "We're going to work seriously on this." And it happens to dovetail very beautifully with the technology we've spent so many decades developing, but I didn't have a lot of pushback. It wasn't like people saying, "Oh, why are we going to do this?" Et cetera, et cetera, et cetera.

I try to have a company culture in which people do think for themselves. And so I definitely get pushback, when people say, "That argument doesn't make sense." It's kind of been amusing with virtual reality and augmented reality: back a decade ago, I was like, "We should be doing this." But some of the people at the company who'd been around in the early '90s said, "You said that in the early '90s, and it turned out to be totally silly at that time." And now we're just about to see whether people take it seriously again. I would say it's kind of a mixed bag. I'm not sure how seriously I take it right now either.

But developing this kind of culture where people have anticipation of innovation, and anticipation that things change, that's important. How much that can scale to how many people, I don't really know. And what tends to happen is you both have to have people think for themselves, but you have to have some commonality of purpose and mission so that it isn't just a bunch of fiefdoms in silos doing all kinds of different things that don't fit together. And I think there's some kind of ratio of the force of will of the CEO versus the extent of independent thinking in different parts of the company. And I don't know whether we've optimised that but at least it's a thing which feels like it's working fairly well.

[43:31] WALKER: So you've been a remote CEO since '91 — and indeed much of the company is distributed. How do you think about the trade-offs involved in remote work? Because a lot of people stress the importance of physical proximity for fostering the exchange of ideas — the proverbial water cooler conversation.

WOLFRAM: Yes, it's funny, because people adapt to lots of different kinds of things in their lives, in the world and so on. I think that companies do the same thing. That is, our company has just adapted to the idea that it is distributed, and people get comfortable with brainstorming on Zoom or whatever. And that's happened for a long time. When I really knew that we'd turned the corner, years ago now, was when people were working in the same office and you'd realise that they're actually talking to each other on their computers even though they're just down the hall. And why are they doing that? Well, because it's more convenient: they can share the screen more easily, it's an easier way to take notes, it's less distracting, et cetera, et cetera. People get used to these kinds of things.

Now, the dynamics of in-person versus kind of remote... There are certain kinds of conversations I do find it more useful to have in person: they're mostly personal conversations, really.

When it's, "This is a set of ideas. They're kind of impersonal. It's all about ideas," it's okay, it works pretty well in my experience, remotely. And by the way, it has the tremendous advantage for us that there are people distributed around the world in completely different kind of personal settings, cultural settings, et cetera, and I notice that there are times when I think we have a better view of things because we do have that kind of diversity of environment, for the people. If everybody was kind of like, "We're all living in the same town, we're all kind of seeing the same things," it brings less ideas to the table. So I think that's been a really worthwhile thing.

But sometimes when it comes to understanding people, which is something that occasionally is really valuable to do, the in-person thing is often useful. I mean, it's like: what can you get from email versus a phone call versus actually seeing people in person? Now, every year we have an experiment in this, I suppose: we have a summer school for grown-ups and a summer research program for high school students and so on, which is in fact just starting tomorrow for this year. And that's an in-person thing for altogether about 150 people or so. It's an interesting dynamic, a different dynamic. I think it's a great way to get to know people. In the three weeks of the summer school, one can get to know people much better for the fact that one is actually running into them in person.

It would take longer to get the same level of "Oh, I really understand something about this person" if it was done remotely, in some more attenuated way. But if I look at all the things we've invented at the company, have they been invented in in-person conversations, even when those happen? Not really. What is difficult is getting to the point where you really can have a brainstorming-type conversation with people. And for some people that's more convenient when they're in person, and they're not used to it when it's a remote thing. But people get used to that. And for me, for our company, I suppose, there are certainly people where I find it easier to kind of expose ideas talking to them than other people.

And there's sort of an environment, a cultural environment, one sets up where it's kind of easier to expose ideas than otherwise. I mean, one of the dynamics for us in recent times is that for our software design activities, we livestream a bunch of these things, and that's a whole other interesting dynamic that's worked out really well. The process of inventing software design and so on, which I've always found very interesting, gets a certain extra gravitas from the fact that we're recording it and people can watch it. It's kind of like the process means something as well as the end result. And actually I think that's helped us have a better process, and a better feeling that we're accountable for the process as well as for the end result. It's something that I've found quite helpful.

WALKER: The other valuable aspect of those livestreams — and I'll link to them; there's an amazing library of them, they're incredible, along with a lot of your other meetings around the Physics Project and whatnot — is that, from the standpoint of the general public, those kinds of recordings facilitate the communication of tacit knowledge.

WOLFRAM: Look, I think we're the only group that has either the chutzpah or the stupidity.

WALKER: To work in public.

WOLFRAM: Yeah, right. But I have to say, whether it's the humans or the AIs that pick up on this and learn how to think about things, I think this process of seeing thinking happen is very useful for people. I know that at our summer school I tend to do a live experiment for people. I actually just figured out this morning what my live experiment will be this time. And what's useful about that is people get to see we don't know what's going to happen, we're puttering away and then things usually go horribly wrong and then usually eventually it comes together in some way. The fact that you can see that happen and you can see the missteps that get made and so on, and you can kind of get a sense of sort of an intuition for how the rhythm of such a project works, that's an important thing.

Too much of, for example, education ends up just being, "Here's the way it is," not, "Well, you too can think about it." I mentioned earlier what I was describing as an educational hack: that you can go and explore things that haven't been explored before. This idea that you can actually be in the process of thinking about things, not just, "Here's the answer, let me tell it to you," type thing.

WALKER: Okay, so of the four large projects you've done in your life — Mathematica, A New Kind of Science, Wolfram Alpha and The Physics Project — I'm going to assume that A New Kind of Science was the most difficult, correct?

WOLFRAM: It was the most personal. I had some research assistants and things, but it was really a very individual project. And most of these other ones, there are teams involved, there are other people involved. It's one of these things where the question for a project is always, "If I don't do something today, does that mean nothing happens on this project today?" And by the time there's hundreds of people working on some software development thing, even if I do nothing today, the machine is going to keep moving forward.

[51:44] WALKER: I have a bunch of specific questions about A New Kind of Science. Firstly, I want to talk about the book from the perspective of treating it as a project. Secondly, its impact. And then, thirdly, the content of its claims. But let's start with it as a project, because I think it's one of the most ambitious, inspiring intellectual projects I'm aware of.

Okay, a bunch of questions on this. So when you were standing on the precipice of the project in '91, did you have any idea it would take you more than ten years to complete?

WOLFRAM: No. I wouldn't have done it if I did.

My original concept of the project — and this is often how these projects work — is I had worked on simple programs, cellular automata and things, in the 1980s. I'd been pretty pleased with how that had worked out. I kind of thought there was what I would now describe as a new kind of science to build that really focused on complexity as the thing to understand. And I tried to get that started in the mid-1980s, and I tried to do that not only as an intellectual matter but as an organisational matter as well.

It was kind of frustrating. It went really slowly. I was 26 years old or whatever; I didn't understand that the world moves more slowly than you can possibly imagine.

I went to my Plan B of: build my own environment, my own tools, and then dive in and do it myself.

I thought when I started A New Kind of Science I was mostly going to summarise what I had done in the 1980s in a well-packaged way. But I thought, "I'd better go and actually make sure I understand the foundations of this better. And there are some obvious questions to ask. Let me go ask them. Now that I have tools that let me ask these questions, I can go ask them."

The first couple of years, I was really studying programs other than cellular automata — Turing machines, register machines, all these different kinds of things — and what really happened with them. I found it was quite quick work, actually. If the book had been just that exploration of what I would now call ruliology (the study of simple rules and what they do), then I would have been done by 1993.

But what then happened was I was like, "Well, there's low-hanging fruit to be picked in how this applies to different areas." Maybe I started at the bottom branch of the tree, but I quickly found there's much more fruit going all the way up the tree, so to speak, and just discovered a lot more than I expected to discover. I felt this almost obligation to figure this stuff out within this context.

Now, I also knew perfectly well that producing one sort of high-impact thing was going to be much more economical with my life than writing 500 papers about lots of different small pieces. I knew that having a matrix for the things I was discovering, a single place to put them, was a lot more efficient. That was a conscious realisation: I'm not going to write endless papers which won't fit together, so that somebody has to come back years later and say, "Oh, look — all these things fit together."

I also think that the process of writing the book, it's like: I want to understand the science, how do I know that I understand it? Well, I try and really write it in as minimal a way as possible. That was my internal mechanism for getting to the place where I wanted to get to intellectually.

WALKER: Correct me if I'm wrong, but you set out the table of contents at the beginning of the project.

WOLFRAM: I did.

WALKER: Did you worry that would somehow make you too intellectually rigid?

WOLFRAM: I didn't think about that, because I thought this is an 18-month project and I know what's going to be in it.

WALKER: And it was the same table by the end, right?

WOLFRAM: Pretty much. Pretty much. I'm sure I have all the data. I just wrote this thing recently; because it was the 20th anniversary, I wrote this, as it turned out, very long and elaborate piece about the making of A New Kind of Science.

The thing that really happened was the table didn't broaden, it just deepened. So in a sense, what I was covering was the main areas of intellectual formalisation, whether it's in science, physics, biology, whatever else, mathematics. The table of contents didn't really expand.

Now, something I left out of the book was the technological implications of all of this, and I made the conscious decision that I wasn't going to do that as part of this project: I didn't know how it was going to work out, but it was a separate piece. And I certainly did start thinking about that while I was doing the science of the project, and then said I'm not going to do that.

One of the things, for me at least, is that I have many ideas. And one of the things I've learnt is that one of the very frustrating things that can happen is you have ideas but you can do nothing with them. Because it's like, yes, it's a good idea, but to implement that idea, you need this whole structure in the world that I don't happen to have. And so I tend to, as a self-preservation move, try and constrain the ideas that I think about to be ones in which I have some kind of matrix for delivering those ideas. And A New Kind of Science was a great matrix for presenting certain kinds of ideas.

So for example, right now, if I decided one day I really wanted to study some really cool aspect of register machines, well, I could do that, it might be fun, but I really don't have a great matrix into which to put those results. So I'll tend not to do that right now. I'll tend to do things for which I have some sort of delivery mechanism, because otherwise it's just frustrating. You just build up these things that are sort of free-floating, disconnected, "oh, I can't even remember that I did that"-type things.

One of the things very nice about the A New Kind of Science book is that I refer to it all the time and it's like, "Yeah, I think I understood that once — let me go look at the note in NKS."

WALKER: Like on a daily basis?

WOLFRAM: These days, yes.

I mean, it depends a little bit whether I'm doing science or technology. But yes, all the time. A thing for me that's important about the things I write is that I refer to them.

Particularly in recent times, in all the things I write, any picture is click-to-copy. So there's a picture with all these things going on, and you click it and you get some Wolfram Language code; you paste it into a notebook and it runs and makes that same picture. That's a very powerful thing. At our summer school, that's a thing people are using all the time to build on stuff one's already done. And it's actually been a long-running project to get click-to-copy code for everything in A New Kind of Science. It's slowly getting done.

But I refer to it all the time, because it's very convenient to have a condensation of a large chunk of things one's thought about. Between that and Wolfram Language, that's a pretty good chunk of things I know about (and now stuff for the Physics Project). But having that be organised is really nice. If it was scattered across a zillion academic papers, I would always be like, "I don't know where I talked about that".

[59:50] WALKER: How good are you generally at predicting how long projects will take and how many resources they'll require?

WOLFRAM: I don't know. Locally, pretty decent. But what tends to happen is it's a question of just: how well do you want to do this project?

It's funny: at our company there's a chap, who is actually still with the company but sort of semi-retired, who joined the company very early in its history and had had the experience of doing project management for building billion-dollar freeways. So it's an area where you'd better not get it wrong.

Anyway, he came into our company and said, "I'm going to be able to tell you how long it's going to take to do every one of these projects that you think you're going to do." Okay? So I said, "I don't believe you." He said, "It'll take me six months, but then I'll really be able to predict this pretty well." And he was right. He could predict it.

WALKER: Really?

WOLFRAM: Yes.

WALKER: Can you share what he does?

WOLFRAM: I'm not sure. I think it's a bunch of judgement.

But here's the terrible thing. Then he'd say about something, "This is going to take us two years. Let's tell the team it's going to take two years." Okay: if you tell the team it's going to take two years, it doesn't take two years anymore; it takes longer.

And so we had this big argument about: we know how long these projects are going to take, should we tell people how long they're going to take?

And the answer was, in the end, no.

WALKER: Interesting.

WOLFRAM: It's not useful. It is sometimes useful from a management point of view to know. But even sometimes from a management point of view, for the kinds of things we're doing, which are one-of-a-kind projects that have never been done before, often you don't really want to know, because the optimism, the vision: that's all necessary.

I suppose I've been wrong in both directions. Like the Physics Project: I had no idea that would happen as quickly as it did. That was something where I thought we'd be picking away at little pieces for a decade or two. And it turned out we got a whole collection of breakthroughs very quickly.

And I think I have more of a feeling now for the arc of intellectual history, of how long things take to kind of get absorbed in the world — and it's just shockingly long. I mean, it's depressingly long. Human life is finite. I perfectly well know that lots of things I've invented won't be absorbed until long after I'm no longer around. The timescales are 100 years, more.

It's kind of satisfying to say, "I can see what the future is like." That's cool. It's also a little frustrating because to me, one of the things, particularly as I've gotten older, that I really get a kick out of is you invent ideas, you invent things. And it's just really nice to see people get satisfaction, fulfilment, excitement out of absorbing those ideas. I mean, the ego thing of, "Oh yeah, they got my idea" for me is less important than, "It's so cool to see these people get excited about this." It's kind of like you gave them a gift, and they enjoy it.

That makes it a pity that the fruition is going to come 100 years from now. It would be just pleasant to be able to see a bunch of those things.

But one of the things I would say — in technology prediction it happens as well — is I think I have a really excellent record of predicting what will happen but not when it will happen. And a classic example (my wife reminds me about this example from time to time) is from back in the early '90s, when we were modifying an existing house and had this place where we'd really like to put a television, but it's only four inches deep. And I'm like, "Don't worry, there are going to be flat-screen televisions." This was the beginning of the '90s, right?

Well, of course there were flat screen televisions in the end, but it took another 15 years. Why was I wrong? Well, I had seen flat screen televisions. I knew the technology of them.

What was wrong was something very subtle, which was the yield. When you make a semiconductor device, it's like you're making all these transistors and some of them don't work properly. And when you're doing that in a memory chip or something, you can route around that and it's all very straightforward. When you're doing that on a great big television, if there are some pixels that don't work, you really notice that. And so what happened was, yes, you could make these things and one in a thousand would have all those pixels working properly. But that's not good enough to have a commercially viable flat screen television. So it took a long time for those yields to get better to the point where you could have consumer flat screen televisions. That was really hard to predict.
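The yield arithmetic behind this is worth making concrete. If each subpixel fails independently with some probability, the chance that an entire panel comes out defect-free shrinks exponentially with pixel count. A rough sketch in Python (the defect rates and panel size here are illustrative assumptions, not figures from the conversation):

```python
# Illustrative sketch of the display-yield problem: even a tiny
# per-subpixel defect rate makes a fully working large panel rare.
# Defect rates and panel size below are assumed for illustration.

def panel_yield(subpixels: int, defect_rate: float) -> float:
    """Probability that every subpixel works, assuming independent defects."""
    return (1.0 - defect_rate) ** subpixels

# An early-'90s-class 1024x768 panel with RGB subpixels:
n = 1024 * 768 * 3  # roughly 2.36 million subpixels

for rate in (1e-6, 1e-7, 1e-8):
    print(f"defect rate {rate:.0e}: fraction of perfect panels = {panel_yield(n, rate):.1%}")
```

At a per-subpixel defect rate of 10^-6, fewer than one panel in ten comes out perfect; the rate has to fall by an order of magnitude or two before consumer-scale yields appear, which is one way to see why "the technology exists" and "it's commercially viable" were fifteen years apart.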

Perhaps if I'd really known semiconductors better and really thought through "it's really going to matter if there's one defect here" and so on, I could have figured that out. But it was much easier to say, "This is how it's going to end up," than to say when it's going to happen.

Like, I'm sure one day there will be general-purpose robotics that works well and that will be the ChatGPT moment for many kinds of mechanical tasks. When will that happen? I have no idea. That it will happen I am quite sure of. You could say things about molecular computing — I'm sure they'll happen. Things about sort of medicine and life sciences — I'm sure they'll happen. I don't know when.

It's really hard to predict when. Take the Physics Project, for example. Good question: when would that happen? I had thought for a while that there were ideas that should converge into what became our Physics Project. The fact that it happened in 2020, and not in 2150 or something, is not obvious. As I look at the Physics Project, one of the things that is a very strange feeling for me is to look at all the things that could have been different that would have made that project never happen. That project was a very remarkable collection of almost-coincidences that aligned a lot of things to make it happen. Now, the fact that the project ended up being easier than I expected was also completely unpredictable, to me at least.

But I think this point that you can't know when it will happen... It's like, "Okay, we're going to get a fundamental theory of physics." Descartes thought we were going to get a fundamental theory of physics within 100 years of his time. Turns out he was wrong.

But to know that it will happen is a different thing from knowing when it will happen, and sometimes when it will happen depends on the personal circumstances of particular individuals. For example, our company happened to have done really well in the time heading into the Physics Project, so I felt I could take more time to do that — and lots of silly details like that. That makes it even harder to predict when things will happen.

And in terms of, you know, how long a project will take, there are projects where it's kind of like you know you can do it. If you say, "Write an exposition of this or that thing," like, I know I wrote an exposition of ChatGPT, I knew roughly how long that would take to do. It's an "I know I can do it" type thing.

There are other things where if you say, "Can you figure out something that's never been figured out before?" No, I don't know how long it's going to take.

[1:08:35] WALKER: Do you feel like you've gotten better at project management over time? I feel like it's one of the big underrated skillsets in the world.

WOLFRAM: Yeah. I mean, what does it take to manage a project? I mean, there's managing a project that's just you, and there's managing a project that has lots of other people in it as well.

The first step is, can you assemble the right team to do the project? And one of the things I always think is that a role of management is you've got projects, you've got people — there are these complicated puzzle pieces. How do you fit them together? And do you have your arms well enough around the project to know what it's going to take? And do you understand the people well enough to know how will this person perform doing these things for this project? So that's the first step. And yes, I think I've gotten significantly better at that.

Because it's really straightforward: I just have more experience and I just know, "I've seen a person like that before. I've seen a project like that before." I have this lexicon. It's helpful to me that there are a lot of people at our company who've worked with me for a very long time, and so something will come up and they'll say, "Oh yeah, remember that situation in 1995 where we had something like this happen?" Everybody has this kind of common view of "Well, this plays out this way." It's always interesting: a lot of bright people come into our company, and there's a certain pattern to the young, eager folk who come in. Some do fantastically and some blow themselves up in some way or another. People know there's a certain pattern to that.

And the fact that there's a group of people who've all seen this is helpful, and it's often very hard to predict the details of what will happen. But yeah, I've definitely gotten better at that.

At our company, we have a pretty serious project management operation, actually started by the same guy I mentioned who was estimating times for projects. He built this kind of structure for doing project management. And there's a certain set of expectations for project managers. I think one of the things that's important is that project managers have to understand their project. They don't have to be able to do every technical detail, but they have to understand the functional structure of the project. And if they don't, it's not going to work. And they have to be able to fill in the things which the people in the trenches, so to speak, don't see far enough ahead to notice: "Oh, this piece has to fit together with this other piece."

The thing you always notice in projects — I've done a lot of big projects, often quite intense ones, where it's like "we've got to deliver this by this time" — is that you'll have people who are great at doing their particular silo. But the role of the overall manager ends up being: "This silo is great, that silo is great, but who's got the stuff in the middle?" And both of them say, "We're doing our job!" You often have to push really hard to get them to do the stuff that's in the middle.

A thing that really helps me in my efforts at management is that I rarely manage anything where I couldn't do it myself if I really wanted to. I do not envy people who manage things which they couldn't do themselves, people who are, for example, non-technical CEOs of tech companies. That's a tough business. Because for me, if I'm in some meeting and people are saying, "Oh, it's impossible. X is impossible," it's like, "Explain it to me." People at my company know me pretty well by this point, and sometimes newer people will try to explain it to me in very baby terms, and it's like, "No, just tell me the actual story. And if I don't understand what some word means, I'll ask you what it means." And then it gets very technical very quickly.

It's very nice, actually, because I used to think that me diving into sort of these very deep technical details would be dispiriting to the teams that were working on this. Because, like, "Look, the CEO could just jump in and parachute in and just do our job." I thought that would be bad for people to feel that way. Actually, quite the opposite. It's: "Hey, it's cool, the CEO actually understands what we do and has some appreciation for what we do. And by the way, okay, we didn't manage to figure this out, and he did manage to figure this out." It's like, "Well, we learnt something from that," and it's actually a good dynamic. It's not what I expected.

It is interesting to me that, oh I don't know, things like debugging complex software problems, I am always a little bit disappointed that I am better at that than one might think I would be. But it is two things: it's experience and it's keeping the thinking apparatus engaged (and it's also perhaps knowing some tools). It's a very common thing: some problem in some server thing and this, that, and the other. First of all, it's experience: "Did you look at this and this?" Maybe yes, maybe no. It's like, "Well, we can't tell what's going on. There's 100,000 log messages that are coming out." It's like, "Okay, did you write a program to analyse those log messages?" "Well, no, we looked at log messages." "Well, no," you sit down, you write a little piece of Wolfram Language code: "Hey, I'm going to do it right here." And then, "Oh, well, now we can look at the 100,000 messages and we realise there are five of them that tell us what's going on. But we'd never have noticed that if we were just doing it by hand." You end up making use of a lot of stuff from other areas to apply to this.
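The log-triage trick he describes — write a program over the logs rather than reading them — is easy to sketch in any language. A minimal version in Python, with a made-up log format and a simple bucketing heuristic (the anecdote itself used Wolfram Language):

```python
# Bucket log lines by a crude message signature so the handful of rare
# lines stand out from the 100,000 routine ones. The log format and the
# signature heuristic here are assumptions for illustration.
from collections import Counter

def rare_messages(lines, max_count=5):
    """Return lines whose message signature occurs at most max_count times."""
    # Signature = first four words, so variable tails (ids, timestamps)
    # don't split one message type into thousands of buckets.
    def sig(line):
        return " ".join(line.split()[:4])
    counts = Counter(sig(l) for l in lines)
    return [l for l in lines if counts[sig(l)] <= max_count]

logs = [f"INFO request served ok id={i}" for i in range(100_000)]
logs += ["ERROR disk controller reset unexpectedly"] * 5
print(rare_messages(logs))  # the five ERROR lines surface immediately
```

The point of the sketch is the workflow, not the heuristic: a few lines of code reduce a hundred thousand messages to the handful that matter, which is exactly the kind of thing that is invisible when "looking at the logs" by hand.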

But this method of management where you do understand at some level the things that are going on is — again, that relates also to things like company size and so on — can you be at the point where that's going on?

And I know that for our company, there have been areas which for years I never really understood, like our transaction processing systems. I never paid attention to those and they were kind of crummy, actually. And then finally, about five years ago, I got fed up because things were just too crazy. And I said, "We're going to build our own ERP transaction processing system in our own language. We're just going to build it from scratch." Which we've done. And it's a wonderful thing. We've learnt a lot from doing that, and we've managed to build something that's very good for us. It'll probably spin off as a separate company selling that to other people, too. But for the things I didn't understand, I was shocked at how crummy they actually were.

It's a lesson. Part of the dynamic that happens in companies is things the CEO doesn't care about, people don't put as much effort into. And so I suppose it's the "inspecting the troops" theory of things, even though that function...it isn't really that important that you check out the swords or something, but the fact that you bothered to do it is important. That's a dynamic that I certainly see. And that's a reason why it's pretty nice to be able to parachute into the details of projects and so on, because it very much communicates that, yes, you care about this stuff even though you're not spending all your time doing it. It's not like you say, "Oh, yeah, they're those guys doing DevOps or something. I don't care about DevOps."

WALKER: I've been coming around to this idea that micromanagement is underrated. But back to A New Kind of Science and the process of writing it. So you famously worked in solitude for ten years. Did that reclusive period run against your nature, or are you comfortable being a lone Wolf(ram)?

WOLFRAM: Oh, I'm a gregarious person. I like people. I like learning things from people, but I'm probably not a big small talk, just hang out with people kind of person. To be fair, if you look at the ages of my children, three of them were born during the time that I was working on A New Kind of Science. So I can't claim I was a...

WALKER: Monk.

WOLFRAM: Yeah, right. And I was also running a company. So again, I wasn't completely isolated in that respect. But in terms of the process of doing the intellectual work, it was not a collective process. I mean, I had some research assistants I delegated some particular things to, but it was very much of a solo activity.

Now, in the early time of working on the project I did occasionally talk to people about it, and it was a disaster, because what happened was people would say, "Oh yeah, that thing is interesting. What about this question? What about that question?"

And then I'd think, "Well, I should think about that, I guess." And then I'd waste several days thinking about such and such a question and I'd say, "I don't really need that."

WALKER: Which they may have even suggested kind of flippantly in the first place.

WOLFRAM: Perhaps, but even if they have expertise and it was well-intentioned, in order to get a project of that magnitude done, you have to just say, "I've got a plan; I'm going to execute my plan." The distraction of other people's input and so on, I really didn't want it. I learnt early on in the project that if I have that input it will not get done anytime soon. And so it was much better to just close things off.

And there are several points. I mean, first of all, the act of writing things and being honest in what one's writing is for me a very strong driver of: do you know what you're talking about? For many people it's like, "Well, let me chat with other people," and sometimes I find that useful for myself — to just chat with other people to know that I know what I'm talking about.

I mean, in the last few years I've been doing a lot of live-streaming and answering questions from people out in the world about things. That process has actually been quite helpful to me: I set up the camera, I'm going to be yakking for the next hour, answering a bunch of questions, and it gets me to think about a bunch of things. And this process of self-explanation, I find to be at least as valuable, if not more so, than the actual interaction back and forth with people. So that was one dynamic.

Another dynamic was I'm writing code. The code doesn't lie. It does what it does. And for me it's like, "Do I understand this? What does it actually do?" It's not like I need somebody to tell me, "Oh, that's wrong." I'm finding that out for myself because the code doesn't work, or whatever else.

So it didn't need some of the things people think it would need, like "Oh, the socialisation will be useful." It didn't need that, and it was actively a negative because it was distracting from staying on target.

[1:21:33] WALKER: Let me put an idea to you. In the general notes in the book you write about how it's crucial to be able to try out new ideas and experiment quickly. So with this idea of the importance of speed in science in mind, could you have benefited from a close collaborator in the Hardy-and-Ramanujan, Watson-and-Crick sense? I guess I have a hypothesis that pairs in science can accelerate the progress of a field in a way that a solo researcher can't and a group of three or more can't, because the pair can bounce ideas off each other.

WOLFRAM: Possibly. I mean, I don't know.

WALKER: I guess the trick is finding a partner.

WOLFRAM: That's right. I mean, in the Physics Project I had a couple of people (Jonathan Gorard, Max Piskunov) who worked on the early part of that project. Particularly Jonathan's been a good person who's carried forward a lot of things. I think the fact that that project got done as efficiently as it did was certainly greatly helped by those guys being around.

It's probably a terrible statement about myself, you know: I haven't had that many successful collaborations in my life. I mean, I've been happily married for 30 years or something — that's I suppose one successful kind of thing like that. Although I think my wife would say — I would say — "We never collaborate on actual projects." It's like she wants to build a house, go build the house, I'm not going to be involved.

But in any case, it's a thing where when I was younger, when I was a late teenager, whatever, doing physics and so on, I did collaborate with people and I had some great collaborators.

But I would say that a lot of the dynamic was more social and more motivational for me than it was necessary — I mean they certainly contributed plenty of things — from a pure technical execution point of view. I don't disagree that if you find the right collaborator at the right time, it's cool. And sometimes there are times when it happens for a while and then it doesn't happen anymore.

I would say that the ones you mentioned — I mean Watson and Crick, I happen to know both of those people, not terribly well, but I have a little bit more personal view of that. But if you take Hardy and Ramanujan, I think it wouldn't be fair to say that was so much of a collaboration. I mean, I think Ramanujan was an experimental mathematician who Hardy never really understood, and I think it was more Hardy as distribution channel and kind of socialiser to the world, so to speak, and Ramanujan as kind of a person just pulling mathematics out of the experimental mind.

WALKER: Yeah interesting. I got that impression when I read your essay on Ramanujan.

WOLFRAM: Right. As I say, it's great if you can have two people moving things forward rather than one. On the other hand, finding that second person where there's a perfect fit is very challenging, and although I have known and worked with many terrific people, the number of times in my life where that dynamic has really developed is very small. For the Physics Project I was lucky that Jonathan read the NKS book when he was in junior high school or something. So it's somebody where there's an intellectual alignment that was not of my making. It was kind of a thing that had independently happened.

But when you're building something new and it's like nobody's done something like that before and can you find the other person who also believes that thing is worth doing — that's a difficult thing. I think it's great if it works.

In business, for example, in my company right now, I've been the CEO from the beginning. I've never really had a business partner, to my detriment. I've been lucky enough to have lots of great people I've worked with, but I wouldn't say I've ever really had... Maybe now I have some hope of having aligned that, but we'll see. But being able to say, "Look, I want to do the intellectual stuff, somebody else be the business partner" type thing. And perhaps I have been both lucky and unlucky that I am competent enough at running a business that it isn't an absolute disaster not to have somebody else in there doing it. But on the other hand, I consider myself pretty good on the R&D innovation side.

I always rate myself as kind of mediocre on the running a business side. But the truth is, probably from the outside I'm much better at that than I think I am. Partly because for me most of the things that have to be done are just pure common sense. It's just: keep the thinking apparatus engaged, it'll be okay. And I know because I've advised a lot of people who have lots of tech startups and so on, I know that my "it's just common sense" thing isn't really quite right. I've been super useful as an advisor to lots of companies where people say, "Wow, you can figure all this stuff out. We couldn't figure out what to do and you can figure it out." But to me internally it's like, "Look, that stuff is pretty obvious."

Whereas a lot of things I do in science and so on, I don't think they're obvious. I think they require intellectual heavy lifting to do them. Does that mean that I'm saying that business is easier than science? I don't think it necessarily is. It's just that I don't take seriously whatever skills I might have or thinking capability I might have on the business side.

WALKER: Do you have any unique comments on the Watson and Crick partnership?

WOLFRAM: Don't think so, don't think so.

[1:29:00] WALKER: Okay. So it strikes me that A New Kind of Science as a project would almost be inconceivable to pull off within the context of academia, which is kind of a sad thought. What accounts for the incrementalism in academia?

WOLFRAM: It's big. Academia is big. In any field, when it's small, it's not as incremental. It's when it gets big, it gets necessarily institutionalised. By the time you have 20,000 people in a field, it's got to have structure. It's got to be, well, which people do you fund? Which people go in the departments? Who sets the curriculum? All this kind of thing. When it's an emerging field and there's only five people working on it, you don't need that kind of structure. And indeed, those are the times when you see the fastest progress — when some new thing emerges, it's a small number of people, it's quite entrepreneurial, some of what gets done is probably nonsense, but some of it is great and not incremental.

I think academia as a whole, the fact that it is so big is the thing that holds it back and forces it to have this really conservative — they would hate to use that term in the context of academia — but it is; it's a conservative view of what makes sense to do.

And all these different fields, they develop their value systems. Their value systems get deeply locked in, because it's the funding cycle, the publication cycle, all this kind of thing. That's how that works.

I see people who want to be more entrepreneurial. Can you be intellectually entrepreneurial and be an academic? The answer is there's only a certain amount of entrepreneurism that works. If you want to be more entrepreneurial, if you're lucky enough to be...

In a sense, this happened to me. I mean, I got to the point where I was a respectable academic, in a good kind of position, and I got to that point when I was pretty young, and so it was like, "Okay, now I can do whatever the heck I want, and now I can do things that aren't particularly incremental." Again, I was lucky because I worked in particle physics, which was having its golden age in the late 1970s. And that was a time when, in a sense, there was low-hanging fruit to be picked. Incremental progress was big because the field was in this very active phase. Once one had made some reasonable incremental progress, people could say, "Oh yeah, that person knows what they're doing and so they can be a physics professor or whatever," and then one could go off and do other kinds of things.

But it's rare that people end up with that kind of platform. And it's very common that they've gone through this tunnel for 15 years or 20 years, and by that point they can't really escape from that very narrow thing that they were doing.

But I think the number one thing is academia is big, and that means it has structure — and has structure that holds back the spiky stuff that gets to be really innovative. And I think that is almost a case of be careful what you wish for. As I think about some fields of science that I've been interested in moving forward, like this area of ruliology and so on, I think: what's that going to look like? I'm going to build a structure for doing ruliology, and then the really cool stuff — it will have a definite direction. And that's a particular area which has a nice feature, as some other areas have had, where just doing more stuff is useful.

So like 130 years ago or something, people doing chemistry: "Let's go study all these different chemical compounds." It just was useful to build this giant encyclopaedia of what was true about all those things.

So similarly with ruliology. There are times when incrementalism in science is useful because you need a bunch of incrementalism to build this encyclopaedia that you need to be able to make the next big conceptual leap. And I think that's not a bad thing.

The other point is that people only understand things at a certain rate. If there were major new paradigms in science being invented every year, people would find that utterly disorienting, nobody would keep track of it. It would just be a mess. In order to socialise ideas, it can't be too fast.

WALKER: Titration.

WOLFRAM: Yeah, right.

WALKER: Titrate the paradigms.

WOLFRAM: Yes, yes.

[1:34:02] WALKER: It raises the question of where in the world truly original research should be done. If it's not in universities, then, I mean, what have you got left? Corporate monopolies, or more exotic research institutions like the Institute for Advanced Study or All Souls at Oxford. Do we need new social and economic structures to support original research? Have you thought about this? Do you have any suggestions?

WOLFRAM: Yes, I have thought about this. I don't have a great answer.

WALKER: Interesting.

WOLFRAM: The Institute for Advanced Study, where I worked at one point, is a good example of a bad example in some ways.

I worked there at a time when Oppenheimer had been the director a decade and a half earlier. He was very much a people person; he picked a lot of very interesting people. And by the time I was there, many of his best bets had departed, leaving the people he had bet on who weren't such good bets, as it turned out.

And then there's this very strange dynamic of somebody who was in their late twenties, and it's like, "Okay, now you're set for life. Just think." Turns out that doesn't work out that well for most people. So that isn't a great solution. You might think it would be a really good solution, let's just anoint these various people — "You go think about whatever you want to think about". That turns out not to work very well. Turns out people in this disembodied "just think"-type setting, it's just a hard human situation to be in.

I think I've been lucky in that, doing things like running companies, the driver of the practicality of the world is actually a very useful driver for just stirring things up, getting one to really think. For example, the fact that I have been able to strategically decide what to do in science a bunch of times, the fact that I think seriously about science strategy — that's because I've thought about strategy all the time, every day, running companies and building products and things like that; it's all about strategy.

If you ask the typical person who's gone and studied science and got a PhD or whatever else, you say, "Did you learn about the strategy for figuring out what questions to ask?", they'll probably look at you and say, "Nobody ever talked about that. That wasn't part of the thing." But that's one of the features that you get by being out in the world that forces you to think about things at a more strategic level.

Now, this question of how should basic science be done? Very interesting question.

I mean, one of my little exercises for myself is imagine you're Isaac Newton, 1687, you're inventing calculus, and you think there's going to be $5 trillion worth of value generated by calculus over the next 300 years. What do you do about it?

And you say, is there a way to take basic science — which often is the thing from which trickles down lots of things that are very significant in the world — is there a way to take that future trickle-down and apply it now to get more basic science done? And then how do you avoid the trap that if you make too much of that, it gets institutionalised?

It's kind of like when people talk about entrepreneurism and they say, "We're going to have a class about entrepreneurism; and we're going to teach everybody to be an entrepreneur, we're going to teach everybody to be an innovator." It doesn't really work that way, because by the time you have a formula for innovation it's a self-answering, not-going-to-work type of thing.

We recently started this little Wolfram Institute effort. I would say I consider the jury is still out on how that is best set up.

So my history in this is back in 1986, I started a thing called Center for Complex Systems Research, which was an effort to make a basic research direction about complexity. I was very disappointed with what happened there in the sense that I brought in a bunch of people I thought were quite good. They have turned out to have had good careers. But then it's like, "What's my role in running this? Well, I'm the guy who gets to raise the money? Well, I'm not really interested in that." And so I went off and started my company.

But for me, I saw that as being a bunch of feral cats going off and doing their thing, and there wasn't much role for management there.

Now, most universities don't have strong management of "you should be concentrating on this" — to their detriment often. Because I see people who go through an academic career, they get tenure, all this kind of thing, and it's like, why did nobody tell this person: "Just think about the strategy of what you're doing." The basic thing that you would do in a company where you're managing some person or group of people, you'd say, "You should think about what you're trying to get to, where are you trying to go." And nobody does that at universities. It's an unmanaged setting. When I was a professor type, that was kind of cool to be in an unmanaged setting. But I don't think it's always good for the people involved.

One model of doing things is you have the person, like me, who has a definite set of "I want to do these things and these things" — it's kind of what I've done with the company — and then you get the best support you can for being able to take those ideas you have and turn them into real things in the world and really work things out.

But no, I'm very curious. In the time when NFTs were big, it's like, could you tokenise the idea of basic science? Couldn't really figure that out.

I figured out one thing — I don't really like where this is going, but it's interesting. Basic science, it's like, you're not going to make patents... What is the thing that is the protectable value in basic science? And it usually tends to be guild-like know-how. There'll be a certain set of people that know about this particular kind of thing.

WALKER: Tacit knowledge.

WOLFRAM: Yes. If you look at who knows about "X", it's the students of this person and the grandstudents of that person, and so on.

I was thinking about this a few months ago, and I realised that one of the things that I've done is that in many different fields I've ended up being somebody who was not part of the guild, who showed up in the field and did something. It's terrible that it took me decades to realise this, but for people in the field it's quite disorienting to see somebody who they might know about from some other setting, but it's like: "You're not part of our guild, and now you're coming in and doing something."

Sometimes it's easier if you're coming from the outside, because you guys have all been off in this corner and by the way there's this great big thing over here.

But the fact is the situation is much more typically: there's a kind of a guild, there's a group of people that has this, as you're saying, tacit knowledge about how things work. They have this intuition that they collectively develop. And that thing is sort of a thing; it's not a thing that gets monetised, for example, particularly. The only way it gets monetised is by the education process (insofar as education is a business) of "come and learn about the ways of our guild" type thing.

Is there a way to take that and have it feed into the earlier years of the basic research — to take what will be the subsequent development of the guild, the guild that eventually drives the economic value, and feed it back into those early years?

Take an area like machine learning. There were people who were working on neural nets. There were people working — many of them I know — in that area for years. It wasn't an economically interesting area. I mean, these were academics, but they were lone people with weird backgrounds, off doing particular things and justifying their work on the basis of, "Oh, it's connected to neuroscience." Or, "Oh, it's connected to computer science," or whatever. Even though really they had a more specific vision.

And then suddenly it becomes a very economically valuable area. And then that guild, in that particular case — mostly that guild has done quite well. Actually, I can think of one example of a very good friend of mine who I don't think cares that much, but hasn't been part of the commercial development of these kinds of things. But for many of those individual people and their students, and grandstudents even, that's worked out quite well.

But this question of how should this be done, how should you set up environments where people can be successful, is a very challenging thing.

Sometimes it's even: is this person a good person to bet on? That's often very difficult. That's the problem when you're doing companies and you're doing venture capital or something. That's the problem you have with that. It's really hard.

In the intellectual domain, same kind of issue. I myself find it very interesting to mentor folks who are the high talent, maybe unusual kinds of people. And sometimes I do feel like there are many settings in which I'll run across people and I know enough to be able to say, "Hey, I think this person has something really interesting going for them."

Or I'll know enough to say, "I think this person is just full of it, and this person's a fraud." And I think I do a lot better than the average bear on that particular thing. And sometimes I'll be too optimistic and I'll get it wrong.

But it is sometimes shocking that you'll see people where it's like, I'm pretty sure that person has some really interesting intellectual thing going on but the world doesn't recognise that. The world just says, "You're a hopeless, whatever it is," and it is a little frustrating. What do you do in that situation? And I try to do some mentoring. But sometimes they're like, "Where am I going to get a job?" And it's like, "Well, I don't really know."

WALKER: There needs to be some kind of mechanism for putting the equivalent of a call option on that person.

WOLFRAM: Yes, right. People try to invent some schemes like this which really don't work very well from a human point of view. It is a shame that there aren't... Even at the level of philanthropy, I don't think people feel very good about this "just bet on this random person" type thing.

The MacArthur Foundation is an outfit that bets on random people — except I think for the last several decades they have been really betting on people who are already sure bets. And they gave me some money in the very first cohort of these things back in 1981. And it was interesting getting to know that foundation and the whole history of "how did somebody decide to just make random bets on people?"

The interesting question I've asked them from time to time is whether they think I was a good bet for them or not. Because, for example, I'm one of the very few people from everybody they've ever funded who has been a significant commercial operative and generated significant assets at a financial level.

But it was interesting how that even came to be. I mean, this guy John MacArthur ran an insurance company. And I asked people, "Does anybody know what John MacArthur wanted?" And people would say, "No, nobody really knew what he wanted." And then he died and left all this money, and he had this corporate lawyer who was a very crusty corporate lawyer who was just like, "I need to figure out what to do with this."

And he went and asked a bunch of people, and somebody suggested this MacArthur Fellows program. And this guy — I met him several times — you'd never have thought this was a great innovator of philanthropy. It was just a very "I'm going to do my job, I'm a crusty corporate lawyer" type person. That was where this came from. And he got some advice from different people who suggested, "Oh, this might be an interesting thing to do," but it came from a slightly random place.

Even at that level of betting-on-people philanthropy, there's not a lot of that going on. And I don't even know if that's the right thing. You take the Institute for Advanced Study case where you say to somebody, "Okay, you're 22 years old. We're going to bet on you doing something great, and here you are set for the rest of your life."

One of the things I often notice is (I often refer to it as) the negative value of money. It has many individual negative values. But one of these things is, "Okay, you're set. You don't really have to do anything." It's like "go off and hang out for the rest of your life" type thing. That doesn't usually end well.

Sometimes it does. Occasionally, somebody will say, "Well, by golly, I got interested in this thing, and I'm going to become what always used to be called a gentleman scientist. And I'm going to go figure out amazing things." Occasionally in history that's worked, but that's exceptional.

I know a small number of independent scientists, and it's an interesting crowd. I suppose I'm one of that crowd in some sense. It's a terrible thing because usually people say, "I'm going to be an independent scientist. I'm going to make money doing this thing. I'm going to start this company, and then I'm going to go off and I'm going to do intellectual stuff." They almost never go back to the intellectual stuff. Even though they have the means — they could just hang out and do that — they don't end up doing it.

WALKER: Why?

WOLFRAM: Because they get used to a mode of life where it's probably for many people... if you're in the CEOing role and there's a kind of a rhythm to doing that, and then it's like, "Okay, you're on your own now, just go invent something in science," it's a pretty gruelling kind of transition. Because you've been CEOing, you're working with a whole bunch of other people, they provide momentum, et cetera, and then, oops, you're sitting on your own. Now you've got to figure something out on your own. It's not an easy...

I've been fortunate in that I interspersed these kinds of activities. So for me, it's kind of like when I'm in the "Okay, I'm just sitting and figuring out something by myself" type mode, it's not, "Oh, I've just spent the last 20 years running a company and having momentum from other people."

WALKER: Just as a final comment on this section of the conversation, it's kind of funny to think that as the CEO of a company, you probably have more time for basic scientific research than most university professors, who have to deal with applying for grants, sitting on committees, teaching students.

WOLFRAM: Yes, it is a funny thing.

WALKER: It's a perverse situation.

WOLFRAM: Well, yes and no. I mean, look, one point is that because I get to be my own boss, so to speak, I get to decide what I delegate. I suspect if I put more effort into this or that thing, the company will be more successful in this or that area. But I decide as a personal matter, I'm going to be a little bit irresponsible. I'm not going to do as much as I could in that direction because I want to spend time doing basic science. Yes, I find that ironic.

There are a number of extenuating circumstances. One, you get to decide what you can delegate. Two, many people, if they were academics, for example, if they were presented with what I do for a living every day, they would be like, "Oh my God, how are you going to decide these things?"

People who've been academics, who come to join our company — and it's a very common experience — we'll say we're going to have this meeting, we're going to decide this or that thing. And they're like, "You can't do that. I mean, you can't just decide this in an hour. This is a whole process. We'd have a committee, and it would take six months or something." And it's like, "Well, no, we're just going to decide it. And hopefully we'll get it right 90% of the time or something, and that's okay. And it only took an hour, and it didn't take six months." And I think it's one of these things where it is a question of the cultural rhythm of things.

It really helps me that I've been doing this a while, and so a lot of things that I might agonise about, it's like, "Eh, I pretty much know what to do." It'll take two minutes. I don't have to agonise about it. I don't have to ask a bunch of people. Let's just do this. Sometimes it's wrong, but it certainly saves a bunch of time.

One of the things I find particularly ironic in today's world: college professors, university professors, are busy. I think high school students may be the busiest people, at least in the US. The elitish high school students. "I've got an activity every 15 minutes."

WALKER: Yeah, the extracurriculars.

WOLFRAM: Yes, right.

WALKER: Stephen, I want to be respectful of your time, but I also have—

WOLFRAM: Gazillions of questions.

WALKER: Yeah. And I'm really enjoying this, and I figure we'll only do this once. Are you okay if we keep going?

WOLFRAM: Keep going, keep going. Actually, you know what? I'm going to take a very short food break. So I'm going to crunch for a little while here. Do have another water here. You're asking very interesting questions. This is good.

[1:55:19] WALKER: Thank you. So you sort of implicitly touched on the question of how to identify talent. Let me ask a couple of questions about this. One is, how many potential Ramanujans do you think go undetected in the world today?

WOLFRAM: Interesting question. Quite a few. But it depends what you mean by potential Ramanujans. I'm sort of an optimist about people, and so I think everybody's born with lots of interesting capabilities.

Do those capabilities happen to be usable at this time in history? In other words, you could be somebody who would be a great programmer, but if you lived in the 1400s, you're out of luck, there aren't any computers. Or you could be a great discoverer of the source of the Nile and live in the 21st century when you can find it on whatever satellite map. At any given time, there are certain kinds of things that are possible to do in the world, and there are lots of interesting capabilities people have.

To become a Ramanujan, you have to have a certain degree of development. I mean, Ramanujan went to perfectly decent schools and learnt math and so on, and had he not done that he might have been great at basket-weaving or something, but one would never have known that he would have had the capability to be great at doing math. So I think there's some history dependence.

But I do think that there's surely a huge amount of untapped great talent in the world.

And how does it go untapped? Sometimes it goes untapped through the best education. People go to these terrific schools and they are fed lots of great content, and they're so busy doing all that stuff, and they get put into a track where they wind up working for a big consulting firm or something like this.

They were pushed onto that track by the very momentum of all of the wonderful education that they were getting, and they could have been a great innovative thinker who invented something really new and different in the world, but instead they were on this particular track.

I have to say it's very recently become a little bit of a pet peeve of mine (which perhaps is an unfair one), but I look at the finance industry and I know many people in that industry who are really great intellects, I mean, really smart people, good thinking skills, even good strategic skills. And it's like, at the end of the day, they've run a company, they've made billions of dollars, and it seems very unsatisfying, at least to me. I mean, maybe that's why I don't do that.

And it feels like, in the world today, there's a great pull, because there are things that the financial elite can get that are distinct from what other people can get. And so there's a great motivator to be in that kind of financial elite.

But I think it's something where it's kind of a shame that high-talent people get pulled into this activity that — and maybe I'm just not seeing it correctly — to me seems like it's a waste, perhaps for those people and for the world, of that talent.

But that's on one end of the people who have all of the access to terrific education and this kind of thing and then get pulled along by it into something which is ultimately not a particularly moving-the-world-forward-in-a-creative-way kind of activity.

And then the other side of it is people who just don't have access to that and if only somebody taught them Wolfram Language or something, or taught them something which allows them to have a tool to explore or whatever, then they would go places.

Here's where I have a hard time understanding some things, which is imagine you go talk to people in the rural US or in other places and you say, "Look, there are all these cool things you could do. Look at the tech industry, it's a really cool thing. Everybody could be a tech entrepreneur." And then you realise, look, these people just don't care. It's like somebody comes to me and says, "You could be in show business." And it's like great, I don't care.

I've done these surveys of kids, actually, of what would you like to have achieved in your life? Like make X amount of money, take a one-way trip to Mars, write a great novel or something. And people pick very different things. My guess is that after about the age of twelve, whatever they pick won't change that much. That people have a certain intrinsic value system that comes from who knows where, and that doesn't shift.

I said I was interested in people. I am interested in people. I'm interested enough that I recently did a 50-year virtual reunion of my elementary school class. So that was interesting to see. What happens to people in 50 years? And I think that what you see is that people somehow don't change. But sometimes the world provides people certain kinds of opportunities and niches; what allowed them to get there was already present originally, but different circumstances expose different aspects of how those people interact with the world.

But anyway, one of the things that I don't really understand very well is you're saying, okay, there are all these kids, for example, who might be great tech entrepreneurs, let's say, and should you go and be like a missionary, basically; go to all these places and say, "Look, there's this great thing. It's tech entrepreneurism." And at first people say, "We don't care."

Is that the right thing to be doing? And I think my main conclusion is that the thing to say to people is: here's this thing that exists; if you care, that's great. That's sort of a good thing to do. But on the other hand, before you've had some level of development in that direction, it's hard to even form the thought that you might care.

WALKER: You don't have enough context.

WOLFRAM: Right. You ask: is there lots of high talent out there that has not been realised? My guess is absolutely.

And my guess is that even in developed countries where there's all kinds of educational programs and testing people and this and that and the other, my guess is that there are a very large number of people where were you to align their lives in a different way they would end up being great contributors of this or that kind.

How one achieves that, how one makes this a more efficient world and market, I don't know. I've put some reasonable amount of effort into this, of putting out feelers for kids of that type. And sometimes there are things which, again, not being embedded in that world, it's difficult for me to understand.

Sometimes there are kids who have fewer resources just at a purely practical level. You say, "Okay, why don't you join the Zoom call?" "Oh, well, I don't have a computer that has Internet." So you're out of luck there. Some of these things that perhaps for me, in my particular walk of life, are not things I think about, and they might not think about some of the things that are issues for me. But it's hard to get into that.

The rhythm of my life tends to be there are things I get interested in and I try and do them. And sometimes I start off doing them as a hobby and then eventually they get more serious. This one of identifying talent, particularly in kids, is one that for years I've been a hobbyist, thinking I should do something. (I had this idea I was going to have a thing probably called the Trajectory Project.)

Another issue is kids who just don't know what's out there to do in the world, where you tell them, "Boy, did you know about software quality assurance?" "No, I never heard of that." Or did you know about this or that kind of thing, where it turns out it's a really good fit for them. All they tend to know about is the subjects they studied in school, which is a very narrow slice of the things that can be done in the world, and by the way a slice that was mostly set 100-and-something years ago.

Somebody like me bouncing around the world, I know some of the things that are coming and so can you even communicate those things?

Okay, who's going to be an AI psychologist? There will be AI psychologists. And some kid out there has exactly the right mindset, exactly the right skills, and is going to be a great AI psychologist.

WALKER: Very empathetic towards machines.

WOLFRAM: Yeah. Well, also just getting an intuition for what is this large language model doing? How do I write a prompt that will convince it to do this? How do I get inside? Yes, an empathy for machines, basically.

How do you even tell some kid somewhere? How do you even communicate to them there is this thing. And they might say, "I don't care." Or they might say, "Wow, that's really interesting." Because that's one of the other issues, is that a lot of kids — and I think it's worse in more elite education, actually, because I've done this thing; I actually was even doing it with a group of kids just yesterday, middle school kids — saying, what do you want to do when you're grown up? And a large fraction, particularly these days, will just say, "I don't know, I'm just going to go with the flow of my education process, and somehow it will land me in the place where I should be."

It's a terrible thing about giving advice, because somehow it's always entwined with the particular choices the person giving the advice made for themselves. And for me, the fact that I thought I knew what I wanted to do with myself by the time I was 10, 11 years old was tremendously useful. Maybe it wouldn't be to other people. Now, as it turned out, the thing I thought I wanted to do with myself was be a physicist, which I was by the time I was 20, which was a good thing that happened quickly. And then I was like, "Well, actually, I want to do more things." And now I've come back to being a physicist many years later. At some level, I don't know whether I count as a physicist; a new kind of physicist, at least.

WALKER: So Ramanujan famously wrote a then-equivalent of a cold email to Hardy, in the form of a letter with a bunch of formulas. Do you get many cold emails like that?

WOLFRAM: Yes, many.

[2:09:05] WALKER: Interesting. Have you developed any heuristics for determining whether someone is an outlier talent?

WOLFRAM: Yeah, it's an interesting question.

I do look at [cold emails], and I have occasionally found people who sent pretty strange emails.

The thing I get a lot is people with theories of physics and things like this. Those are very disappointing to me, usually, because the most common pattern is it's kind of high school level knowledge of physics and then "I've got a theory of everything".

And the problem is a lot of things happen in physics in the 20th century. If you don't know about any of those things, it's really hard to have a good physics theory. And so you can kind of see right off the bat this is not going to work.

What's frustrating to me, and I've never figured it out, is there's quite a lot of energy out there in this kind of area.

What should this be channelled to? Now, I have to say I tend to try to channel people towards things like ruliology, studying simple programs, which is an activity where there is a much shorter tower to climb to get to the point where you can do useful, original research.

Because in physics, if you really don't know anything about 20th century physics, it's really tough. You can say, "Well, I can understand things in terms of electromagnetism from the 19th century." Well, that's not going to work. We already know that's not going to work. It's more abstract, more elaborate than that.

But in the case of studying simple programs, there's much more low-hanging fruit to be picked.
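To make that concrete, a "simple program" of the kind ruliology studies can be explored in a few lines. Here is a minimal illustrative sketch in Python; it is not Wolfram's code (his explorations were done in Wolfram Language, where the built-in `CellularAutomaton` function does this directly), just an example of rule 30, one of the elementary cellular automata he studied:

```python
# An elementary cellular automaton: each cell becomes 0 or 1 based on
# itself and its two neighbours, according to an 8-bit rule number.

def step(cells, rule=30):
    """Apply one step of an elementary cellular automaton to a row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up the rule bit
    return out

def run(width=31, steps=15, rule=30):
    """Start from a single black cell and collect the evolution."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        history.append(step(history[-1], rule))
    return history

for row in run():
    print("".join("#" if c else "." for c in row))
```

Even at this scale, rule 30 already produces the irregular, hard-to-predict pattern that makes "low-hanging fruit" of this kind of study: vary the rule number and genuinely unexplored behaviour appears.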

I would say that Hardy...When you look at the formulas...I do know a few people who — one in particular I'm thinking of, who has for years sent me Ramanujan-like formulas. This person really is very smart and is a misfit a little bit in the world as it is. Actually, as he's getting older now I've been trying to persuade him to collect all the stuff that he's produced because even though, individually, it's just like "this fact, this fact, this fact," it's a pretty interesting corpus of work. But certainly on an individual basis, it's very weird and a "how does this fit into anything" kind of thing.

Okay, the heuristic is really this: you look at what's there, and there are details, and sometimes things let themselves down in the details. As in, if you want to know, has somebody been a professional physicist or something, and you talk to them about something and there's some standard term in physics that they use wrong, or they say it wrong, or whatever else — and you kind of know, okay, that person never was really in that particular world. There are little details like that.

Now, sometimes [details] reveal things about knowledge. I'm not sure they reveal things about ability. I mean, that's a different thing.

Another challenge is somebody has a brilliant set of ideas, but they absolutely cannot explain what they're talking about. Where does that leave one? Because I've seen that a bunch of times, too, of where I'm pretty sure you can extract by pulling hard enough — you pull long enough on the thread, there's something really interesting there. But my gosh, I run out of patience long before I can get the thing at the other end of the thread.

WALKER: Yeah. And then the risk is that you simply penalise them for being inarticulate.

WOLFRAM: Well, yes, but I think the other risk is — which has happened to me plenty of times — you pull on the thread, it's incomprehensible what's coming out, and in the end whatever came out was actually just something I put in. It's like the person is just generating something completely incomprehensible and I'm just imposing my own ideas on this. And sometimes I say, "Hey, that's a really good idea." And I have no idea whether that was there in what I was working with, or whether I just came up with that and it was completely independent.

This is one of the challenges, because people have different ways of thinking about things. Some of the most interesting innovations come from different ways of thinking about things. But if they're too different, you don't understand them.

One of the things that I've put a lot of effort into is being able to explain things in a way that other people can understand. But part of the motivation to me for that is it helps me to understand them. In other words, if it was purely a service to other people to explain them, I'm not sure how well I would do at that. But because I find it really useful for me to be able to understand things that way, that's why I end up putting so much emphasis on it.

WALKER: Yeah, interesting. I wonder whether people somehow feel comfortable reaching out to you because of the unique path that you've beaten in your life.

WOLFRAM: I don't know. I have at times thought I must have received the complete set of different theories of physics and so on and so on.

But then somebody sent me this thing where they've cataloged these things and they're like 20,000 of them. I haven't counted, but I suspect I'm in the thousands, but not up to 20,000 yet.

I get very interesting cold emails and sometimes they really turn into good things for me.

I have a couple of mechanisms there. So one is we do these summer schools every year. I say, "You want to interact with us? Come to our summer school." So I'm sure, I haven't looked actually at the list, but I'm sure this year there are several people who are coming where they sent me a cold email, we said, "Come to our summer school," and they're coming.

And then we'll interact with them and learn about what their story is. They'll learn about our story, and whatever happens will happen. That's one thing.

The other thing is that I have to say that I'm almost obnoxious at saying, "If you're talking about something that has formalised content, show it to me in Wolfram Language." You show it to me in words, or in some random piece of C code or something: I'm not going to look at it. Because if you show me a piece of Wolfram Language code, I can run it; not only that, I can also read it quickly, and by looking at the texture of what's been done, it's very easy for me to make an assessment: does this make sense? And again, that's a pretty good dynamic and filter.

Now, no doubt there are people who say, "I can't be bothered to do this." Well, my attitude towards these things is: you provide a path, and if they don't take the path, well, then that's not my problem.

But the thing that I haven't figured out: there are some categories of people who contact us — another one is artists who make artworks of various kinds based on science and things that I've done — and sometimes they're really nice, and I don't quite know what to say. We recently made a collection of some of these artworks, which I thought was helpful. But where do you go next with something like that?

Again, I was talking about the matrix that one creates for oneself. If it's like, "I want to do stuff related to your products and your company," okay, fine, we've got a business development team; there's a mechanism for making something happen there.

My staff are always horrified at how diligent we are at actually responding to all these random emails. I mean if it isn't outrageous in some way, we'll usually try to respond, even if it's mostly saying, "Package what you're saying in a way that we can better understand it."

I would say that over the years that's been a good thing to do, because we've come into contact with a lot of interesting folk that way. It's always funny what you can learn. There are these strange corners of the world.

Most academics I know, for example, will never respond to these messages. Never. I think, one, I feel some responsibility to respond; and two, it's in my self-interest, because occasionally something really interesting will come out of it.

[2:19:31] WALKER: Yeah, for sure. Okay, so I'd like to turn to the impact of A New Kind of Science. We've spoken about how paradigms get absorbed, or how new ideas get absorbed — the rate at which they're absorbed. Have you found any patterns studying the history of ideas?

WOLFRAM: It's slower than you can possibly imagine. On the ground, it's slower than you can possibly imagine. In the hindsight of history, it looks fast.

So take the idea that one uses programs instead of equations to describe the world. People will say, "Oh, yeah, as soon as there were computers able to do those kinds of things, that was an immediate thing." Which it wasn't, on the ground. On the ground, it was a large part of my life.

But in hindsight, it will look like that happened quickly.

Another thing is (for example, with NKS), if you look at different fields, fields with low self-esteem absorb more quickly than fields with high self-esteem — and the self-esteem of fields goes up and down.

There are fields, like art, where everybody always wants new ideas; fields which feed off new ideas. What I noticed with the NKS book is that a lot of the softer sciences that hadn't had a formal framework of any kind were like, "Wow, these are models we can use and this is great." Whereas an area like physics says, "We've got our models, we're happy, we've got our equations, it's all good, we don't need anything else."

At the time when the NKS book came out, physics was in a high self-esteem moment, thinking, "We've got string theory, we're going to nail everything in just a short while." Which didn't happen. But that meant it was a field particularly resistant to outside ideas. Bizarre for me, because I was well-integrated into that field.

And in fact, the greatest irony was people saying, "We don't need any of this new stuff. The only new thing we need is the thing you built, which is this tool that we now all use." That was one of the really amusing ironies of the whole thing.

Now, with our Physics Project, 20 years later — quite a different situation. Fundamental physics is not a high self-esteem field. The string theory thing worked its way through. It didn't nail it. And it's [now] got good receptivity to new ideas, I would say.

When you look at the arrival of a new thing...I've been involved in a few new things in my life and one of the things I'm always curious about is who's going to jump onto this bandwagon? And sometimes you'd say, "It'll be the young people."

It's not true.

It's a distribution of ages, distribution of stages of career. And what happens is there are people going around the world and a new thing comes up and that resonates with them and then that's the thing they pursue.

Now, what also tends to happen: you wait 20 years and you ask, what happened to those people who jumped into this new area? My observation is about half of them are still in that area, and the other half have moved on to two other new areas. So in other words, for some people the newness is the driver, and for other people it's the specific content, where they realise that this is a thing that resonates with them.

The other thing that's complicated about new areas is how much flakiness you allow. So, for example, you have a new area, and people start saying, "This area is going places, I'm going to use its banner," and start doing something that seems, to a person of academic sensibility, really flaky, kind of nonsense-y.

But that's a tricky thing because sometimes it is flaky and nonsense-y.

But sometimes it's just what it looks like as people are trying to come to terms with some new set of ideas, and you have to not throw out all the marginal stuff. But you don't want so much marginal stuff that the whole field gets covered with it and it overwhelms and kills the field, as has happened with some fields. So it's a tricky thing.

And by the way, one of the things that happens is the ideas that at first seem outrageous and shocking and how can this possibly be true, you wait a few decades and people are like, "Oh, that's obvious." It's kind of charming that way.

What's always interesting to me, when you are interested in foundations of a field and the originators of the field are still alive, you go talk to them, you say, "Hey, what about this foundation?" They say, "Well, we're not quite sure about that," and, "Maybe there's a better way to do it," et cetera. They're still very flexible.

Then you go five academic generations later. You talk to the people in the field. You say, "What about this foundational thing?" And they say, "Oh, that's just the way it is." There's no possibility.

And by the way, in a bunch of things I've done and things I've encouraged other people to do, it turns out by the time you're five academic generations later, it is the case that one of the foundations is, or some of the foundations are, wonky. And if you go attack those foundations, you can sometimes make huge progress, because nobody who's actually in the field is ever going to look back down at those things. They're all up at the top of the tower.

WALKER: And you say "five generations" deliberately? That is a number that's emerged?

WOLFRAM: So things like physics are, relative to the stuff that happened a century ago, at five academic generations. It might be partly: are the people who originated the field still alive? Are they still influencing what's happening?

WALKER: The Max Planck, "science advances one funeral at a time", thing.

WOLFRAM: Yes, but this is the inverse of that.

WALKER: Oh, I see.

WOLFRAM: This is to say, when those people are still alive, they're still flexible about the field that they created.

It is true that once people are locked in ("I learnt this field, this is what I do, this is my career"), they'll often never change, even when overwhelming evidence shows them that this just wasn't the right direction.

I don't blame them at some level because it's a very wrenching thing to say, "I've been doing this for 30 years now. I got laid off from my field, basically, and now I'm going to try and find some other profession." It's not surprising that people try and hang on to the things that they were doing. It's not a thing calculated to lead to the greatest innovation.

As careers have gotten longer (because we all happily live longer), you might have thought that the timescales for change in the modern world (where everything moves so quickly) would have gotten smaller. But I don't think that's true. Because once people have locked into "this is the way we do it," they can be doing that for a very large number of decades at this point.

[2:28:02] WALKER: Right. Okay, so you chose to introduce the computational paradigm via a book. Why not create some kind of new canonical medium for a new kind of science?

WOLFRAM: Oh, I wanted to do that. I mean, the concept of computational essays, where you can have computational language which you can read and understand alongside English text or whatever, that is a great thing. And that will be the future. It's just that it's not there yet.

We built the technology for that 35 years ago, and people have used it. But it's been painfully slow to see that come into practice. And I think the reason is, for academics, you write a paper, you make some claim — it's just a bunch of words. You just have to make the words say what they say once. If you've got a piece of computational language code there, and you say, "This code shows 'this'," then there's a higher bar. The code can actually run and you can see does it actually do that?

It takes more work. It's more valuable to the people who created it, and to the community, to have this thing that actually runs, but it's more work. And the academic enterprise has not particularly rewarded that work so far.
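The difference in the "bar" is easy to illustrate. In a computational essay, a prose claim travels with code the reader can actually run, so the claim can be checked rather than taken on faith. A hypothetical miniature in Python (a real computational essay would be a Wolfram Notebook mixing text and Wolfram Language):

```python
# Prose claim: "the sum of the first n odd numbers is n squared."
# In a paper, the words just sit there. In a computational essay, the
# claim ships with code the reader can execute, so it must actually hold.

def sum_of_first_odds(n):
    """Sum 1 + 3 + 5 + ... over the first n odd numbers."""
    return sum(2 * k + 1 for k in range(n))

# Running the check is the higher bar the text describes:
for n in range(1, 100):
    assert sum_of_first_odds(n) == n ** 2

print("claim verified for n = 1..99")
```

The function and claim here are invented for illustration; the point is only that an executable claim can fail in a way a sentence cannot.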

That, I think, is a very important direction for change: make the computational way of communication something that is expected in these intellectual areas, not something that you just do in the back room and don't use as part of communication.

The Physics Project — that was delivered in a slightly different form. I did produce a book from it, but that was not its primary delivery mechanism. Because, in the modern world, we can run things in the cloud, we can have people be able to run code, we can do live-streams, we have social media. It's a different form of communicating things. And I would say I think that worked pretty nicely. It was a strange thing that it landed right at the beginning of the pandemic, and that was a mixed thing: perhaps people had more time to think about new things, but I think a lot of the channels of communication had closed down.

One thing was interesting about the Physics Project was how much we didn't get coverage in traditional media and how much we couldn't care less. I mean, we literally didn't even bother. We sent a few emails, but we didn't bother. We just didn't care. It wasn't relevant. It's more useful to do live-streams and podcasts and social media stuff than it is to get the article in the newspaper or whatever. Which had changed in 20 years. Because when A New Kind of Science came out, it was useful to have wide coverage in things like newspapers, but that was irrelevant by 20 years later.

[2:31:37] WALKER: I have a specific question about scientific books, and then a general question. So the specific question is, Benoit Mandelbrot wrote a book on fractals in the early 1970s that turned out really to be more impactful than the hundreds of papers he'd written on the subject. What's your explanation for why his book was so successful relative to the things he'd published in journals?

WOLFRAM: Right. Well, partly he was a good example of why I did A New Kind of Science rather than write hundreds of papers.

Benoit and I had a complicated relationship, I would say. I mean, Benoit was fond of telling other people — didn't tell this to me, but I heard this from a whole bunch of people — he said about my stuff, "Eventually that stuff will kill fractals." And I said to him, "You're wrong. Fractals are a thing that are interesting in their own right, and the fact that there's a more general story about computation is also interesting — I like it a lot, spent my life on it — but it's not going to crush the story of fractals."

When Benoit died, I was going to write an obituary, and I picked out all the communications I'd had with him, and I was looking at them, and my staff said, "You cannot write this obituary, because there's too many just horrifying things that happened here." I mean, he was a difficult guy in many ways. But then later on, he wrote an autobiography, so I wrote a review of it.

And I did realise what had happened in that book [Fractals: Form, Chance and Dimension].

What happened is he was a guy who'd worked on power laws. He worked on power laws in language. He worked on power laws in turbulence. He worked on a bunch of power laws. Then he was going to write the book, and the editor of the book said basically, "Well, who cares about power laws? Can't we make some pictures?"

Well, Benoit was at IBM. There was a guy called Dick Voss, who was a younger physicist there, who started making pictures. And the pictures were really cool. It was a very unusual case where it was driven by the communication channel, and the publishing company was a sort of visually-oriented publishing company, and they're like, "We want pictures," and so then started producing these pictures. Then the pictures ended up taking over the story.

The impact was just vastly higher for the presence of the pictures.

I wonder whether I ever asked Benoit this question... I'm not sure how seriously he took the pictures initially. I think before people started giving him feedback about them, I think he may not have thought that they were much of anything.

It's an unusual case, but one that certainly I was very much aware of as the value of the one book versus the hundreds of papers.

Benoit made another interesting tactical mistake, which was that people would apply his stuff in different areas, and Benoit would collaborate with them and add his name to their papers, in whatever area it was — in meteorology or in geology or whatever it was.

That did not work well, because what happened is — and there's this question about the fringe — the people who would first contact him would be, you know, the geologists who are off in a corner not part of the mainstream. And he was like, "That's cool. You're using fractals. Let me help you out. Add my name to your paper," et cetera, et cetera. But then that turned out to be this weird corner of geology. So other people in geology said, "Oh, this fractal stuff, it's part of that weird corner. It's not something that we can mainstream enough."

WALKER: Right. It became tainted.

WOLFRAM: Yeah, right. So that was not a good strategy. It might have seemed like a good strategy, but it wasn't, in fact, a good strategy.

I must say that my own emergent strategy, which I won't claim is great, is that I'm a cheerleader, but I'm not going to be involved in all the things people have done with NKS and so on. Because the dynamics just don't work. It's like: okay, I'm pretty skilled at doing these things; you show me a paper you've written, and I say, "Gosh, I could do that in 15 minutes." That's not useful. Am I going to spend the next however many years telling people about something that was like, well, I could just do that in 15 minutes? And also, it's not a good human situation.

And also, I feel like when I write something or am involved in something, I really have to have my arms around it. I really have to understand it. I don't feel comfortable unless I really know the bedrock that it's based on. And that's something that's just impossible to do. If somebody says, "Can I add your name to my paper?", it's like, "Well, no," because if I were going to do that, I would have to understand every word of what you're doing. And by the way, by the time I've done that, it won't look anything like what you originally had.

[2:37:49] WALKER: It's interesting because there seem to be at least two prejudices against scientific books. One, which I hear increasingly, is that a book is a vanity project — just write a blog post, get it out into the world quickly, you don't need to do the book. The second is that, well, a scientific book just synthesises ideas that have already been published in journals and then popularises them.

But there are exceptions where a scientific book makes a genuinely original contribution. I feel like Richard Dawkins's The Extended Phenotype is a pretty classic example. A New Kind of Science is also a classic example. So when is it appropriate to take the book avenue?

WOLFRAM: When you've got a big set of things to talk about. Because there are things where, if you did a good job, you could compress the whole story into five pages.

But there are things where it's just a big paradigmatic thing to talk about.

If Charles Darwin had written On the Origin of Species as a three-page paper, people wouldn't have understood it, and people would have just ignored it.

But what happens with books is there's this whole industry of trade books: a book that people just buy at the front of the store and read for fun. And there's certainly been a development where scientists are among the people who write those front-of-the-store type books.

Most of those are, at best, deeply secondary — as in, they're just a little spinoff of a spinoff of somebody's research presented in a sometimes good, sometimes not so good, kind of entertainment type form.

When I was starting to write A New Kind of Science, I was working with a publishing company, considering having them publish it, and I said, "Let's go find out who actually reads popular science books. What is the audience for these things?" Because they had no idea. No idea.

One fairly well known editor for these things said probably the most useful thing, which was, "I think it's people who used to buy philosophy books before, but now the philosophy books are all too technical, and they buy science books instead."

WALKER: Like intellectual fodder.

WOLFRAM: Yes. For working scientists, the popular science book is usually a secondary thing. That's something you do as a kind of a hobby rather than something that's part of your mainstream activity.

Now, occasionally, when you have big ideas to communicate, you don't really have a choice but to present them in a form that has enough scaffolding that people have a chance to understand them. If you just say, "Oh, by the way, you can use programs instead of equations to study the natural world," people say, "Okay, whatever." So I think that's the dynamic there.

I think this is also part of the value system of academia. And I'm not sure I've tracked that in the last few decades that well, but I think it's something where people feel like there's the people who are just doing their job, and then there are the showboaters. And that happens; that's a real phenomenon. Although sometimes the people who are explaining things are the people who really like what they're doing — even the people who are not using their explanations to deliver some main message are people who really like what they're doing and think other people should know about it. And it's not really a showboating activity. It's more, "I really like this stuff. This is really cool."

But some of the dynamics and the industrial dynamics of the publishing industry have led to a certain degree of just pump out those kind of science entertainment books, and that doesn't make for the best results.

[2:42:30] WALKER: So you mentioned Charles Darwin. I once heard you say that you learned from his example to never write a second edition.

WOLFRAM: Yes.

WALKER: Can you elaborate on that, and on what it takes to write a timeless book?

WOLFRAM: Yeah. I think on the timelessness question, I'm fairly satisfied that with a lot of things I've written, there was a certain domain and there was fruit to be picked — a certain amount of fairly low-hanging fruit — and I just efficiently, with the best tools, tried to pick it all.

That has the great feature that what you do is timeless.

(It has the bad feature that then when people come in and say, "Hey, I want to work on this stuff," there's no low-hanging fruit to pick anymore, because you picked it all. And you picked the first level of low-hanging fruit, and the next level of fruit is quite a ways away. And I didn't really realise that phenomenon — you've got to leave some stuff there that people can fairly easily pick up.)

I have to say it's always surprising to me that when you're in the middle of a project, you're understanding what's going on, you are so much ahead of anybody else, just because you've wrapped yourself up in the whole thing. It's always surprising how long it takes for people to get to that same place that you were in — and sometimes you're not even in that place anymore.

But I think the thing that happened with Charles Darwin is he wrote On the Origin of Species, he made a bunch of arguments, and then people said, "What about this? What about that? What about the other thing?" And he started adding these patches — "As Professor So and So has asked; this, and this, and this, and this."

You read those later editions now, and you're like, look, Professor So and So just didn't get it. And Darwin just went and pandered to this thing and made a mess of his argument because he's pandering to Professor So and So. He should have just stuck with his original argument, which was nice and clean and self-contained.

But I feel like it's very hard for me to say, "I'm going to do this, and I know I'm going to throw it away." I can't do it well if I know I'm going to throw it away. I have to believe this is going to be the thing.

One of the things about the NKS book, for example, is that, in a sense, once you know the paradigm, much of what it has to say is kind of obvious. And that means that it's very clean. It doesn't have a lot of scaffolding of the time.

I knew when I was writing it, there were things where I was referring to, like, technology of the time, like PDAs, personal digital assistants, which nobody's ever heard of anymore. I thought (I think I have it somewhere in the notes) when I mention these things, I'm like, "Ehh, I don't know." Today, people will understand that; in the future, they'll say, "What the heck is that?" It's like Alan Turing mentions: "You could use a Brunsviga to do that." That was a brand of mechanical calculator which lasted a long time, but I had no idea what that was. So there are things like that.

But I think picking the low-hanging fruit and trying to make the arguments as clean as possible [are the keys to timelessness]...

One of the things that's always striking to me is you see some ancient Egyptian artefact and it's a die, and it's, I don't know, an icosahedral die. And you say, "That looks very modern." Well, it's modern because it's an icosahedron, and icosahedrons haven't changed in the history of the world.

Again, when you're at the foundational level and you can make it clean enough, you have the chance to make something that is timeless. I could run rule 30 in 1982 and I can run rule 30 today, and it's going to be the same bits — and it's going to be the same bits forever. And it's not, "Oh, now it's written in old English," or something. The bits are going to be the same forever. And it's like that icosahedron from ancient Egypt. Whoever made that icosahedron spoke a language we absolutely don't understand today. But the icosahedron is still the same.
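To make the "same bits forever" point concrete, here is a minimal sketch of rule 30 (my own illustration, not code from the book; the update rule `left XOR (center OR right)` is the standard definition of rule 30):

```python
def rule30_step(cells):
    """Apply one step of rule 30, letting the pattern grow by one cell
    on each side (cells beyond the edge are treated as 0)."""
    padded = [0, 0] + cells + [0, 0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30(steps):
    """Evolve rule 30 from a single black cell; return all rows."""
    rows = [[1]]
    for _ in range(steps):
        rows.append(rule30_step(rows[-1]))
    return rows

# Deterministic: the same bits on any machine, in any era.
for row in rule30(4):
    print("".join("#" if c else "." for c in row))
```

Running this in 1982 or today produces the identical triangle of cells — which is the sense in which the result is as timeless as the icosahedron.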

WALKER: In hindsight, would you have left more low-hanging fruit on the tree?

WOLFRAM: I don't know. I don't know. I guess different people have different expertise. And I think this thing about how you develop a community is not so much my expertise. For the book, I wrote a list of unsolved problems related to it. And actually, I had never run into anybody who'd ever commented on anything about those unsolved problems until this chap, Jonathan Gorard, who worked on our Physics Project, said, "Oh, that was my favourite thing that I read when I was 13 or 14 years old," or something. So okay, we got at least one hit from that.

WALKER: Turned out to be a valuable hit.

WOLFRAM: Yeah, right. But I don't know. It's an interesting question. I mean, right now I'm thinking about for this field of ruliology that I launched in the early '80s of studying simple programs and what they do (I didn't have the name ruliology at that time), and then I look today and there are about 500 people who've made interesting contributions to that field that I can tell. And so I was thinking, now many of these people have grown up — they went from being maybe young researchers in 1982 to being esteemed, distinguished, whatevers — and they're embedded in lots of different places and activities around the world, and I'm like trying to think, how do I help this field? And I'm probably going to organise a ruliological society. I'm not quite sure what it's going to do, but it's at least going to be a collective kind of guild branding or something of this group of people who've been interested in this particular area.

I don't know exactly what the best way to stimulate more work there is. I think sometimes it's very mundane. Sometimes it's like, is some university going to start teaching classes about this and giving out credentials? Oh, if that happens, then people will come there because they want to get a credential. It's very prosaic like that, rather than, "Oh, it's a wonderful thing and people find it fascinating, and so that's why they go study it." So I'm not sure.

[2:50:20] WALKER: Something random I noticed when I was reading the book is that you use commas sparingly. Is that a conscious stylistic choice?

WOLFRAM: Oh, boy. It's a funny thing because at my company we have a group called DQA ("Document Quality Assurance"), which in past eras might have been called proofreading or copy editing or something. And it's funny because they have a set of guidelines for things — they have the main guidelines, and then they have the guidelines for me. There are all these different things about commas and starting sentences with conjunctions and smooshing words together. Over time, I've evolved slightly different stylistic conventions.

I'm not sure what my comma usage is... I know my DQA team re-comma-ifies things from time to time. And I have to say there are some things where I get frustrated, because it's like, "Look, the previous people didn't capitalise that word and now you're telling me that word is capitalised." But starting sentences with conjunctions — basically, that is a hack for avoiding Kantian-length sentences. And I think it works okay.

But yes, I definitely have had some stylistic quirks, and they've slightly evolved over time. Like, for example, in the NKS book, I never used "isn't" and things like that, these shortenings. Whereas in the things I write now, I always use that stuff. I don't know why particularly.

I actually have liked the way that the writing I've done more recently has evolved because I feel like one of the questions is, can you say anything in your writing? In other words, if you can only say things in a very formal way, if you've got just a feeling about how something works, can you express that? Or if you're writing something that's very authoritative, you just can't talk about that? And so one of the things that's happened in more recent times is evolving towards a style where I feel like I can talk about anything even if I'm not sure about it.

And sometimes also, like in NKS, there's not a single joke, for example. And in the things I write now, when I see something which has a resonance with something that's funny or culturally resonant, I'll put it in. I know full well that the cultural reference will fall away into incomprehensibility at some point.

Another thing I realised is things I've written today — last however many years — everything was written at a certain moment in time. And like when I make these books, which are collections of posts that I've written, I was at first like, "I can't do this. They've all got to fit together perfectly." It turns out that isn't really true. Each one is at a moment in time, and people don't seem to be confused by or mind the fact that this one was at this moment in time, that one was at that moment in time.

Now, with the NKS book, I set myself a higher bar because I was really trying to define a paradigm in a coherent way. That sets a higher bar for the way that you organise what you're writing.

[2:54:13] WALKER: In the book, you had come up with a candidate for the simplest possible universal Turing machine as a piece of evidence for the principle of computational equivalence, and you put up a prize for somebody to prove or disprove its universality. Alex Smith won the prize, I think, in 2007.

WOLFRAM: Yes.

WALKER: So that was a significant positive technical update to the book. Have there been many other updates, either positive or negative, since it was published?

WOLFRAM: Not that many. Surprisingly few. And, I mean, it's one of these things where people were like, "But is it right?" And it's like: "Every frigging thing in this book has been picked over now!" Because there's an online version of the book, and we've been collecting things that are sort of addenda. I suppose the other really major update is the Physics Project.

WALKER: That's sort of an extension of the book.

WOLFRAM: Yes. And I mean, some things — like I wrote this book about combinators, which is a big extension to the section about combinators.

But in terms of the cliffhangers, the simple Turing machine was probably the most obvious cliffhanger in the book.

Rule 30, for example, and its characteristics — I put up this prize associated with that, and another one associated with combinators. I was really happy that Alex Smith was able to resolve the Turing machine thing quickly because I thought it might be a hundred-year story and I suspect some of these others might end up being 100-year stories.

There's the occasional typo — in the text those are long since gone — but there are some little glitches in pictures which people notice from time to time. I'm always excited when that happens, because they're very small things.

WALKER: Keen readers.

WOLFRAM: Well, usually it's because they're trying to reproduce the thing themselves. That's the most common thing.

WALKER: I see. I just realised I don't know how the Alex Smith story ended. Did you try and hire him or anything like that?

WOLFRAM: Alex Smith is an unusual person, and I think the set of people who can focus in... I would say he's a person who, I suspect, wouldn't describe himself as a socially connected kind of person.

Yes, absolutely we tried to hire him. He went and finished his PhD in theoretical-ish computer science. And I think he's been working on compiler technology, which is kind of like what he did with the Turing machine.

But I think it's one of these things in part, where it was cool that he was there for this project and it was cool that this project was there for him. And it was one of those sort of moments where these things intersect.

Makes me realise I should ping him again. I do every few years. Partly because I was just like, "Thanks a lot for resolving this question." It was one of my better investments of $25,000.

[2:58:17] WALKER: Well, it made me wonder: when is it more effective to try to solve a scientific problem by offering a prize and when is it better to assemble a team to solve it? How do you distinguish?

WOLFRAM: I don't know. I mean, this was the one case in my life where: put up a prize, somebody solved it, everybody's happy type thing. It had a difficulty level that was a lot of very complicated technical work. But I don't think one would say that it was a big paradigmatic kind of thing that had to be figured out.

WALKER: Right. And it wasn't cross-disciplinary either, right?

WOLFRAM: Right. It was pretty specific and technical.

I don't know. There are obviously prizes that people put up. The whole XPrize Foundation has been trying to put up prizes for things with varying degrees of success.

This was a case where it's a very specific technical result. You know the target. Actually, it was kind of funny with that result, because it's like: this is a definite thing, there is no doubt about what happens. And I assembled this team of people — most of the world's experts in these kinds of things were on my little prize committee — and so Alex Smith sends in this thing and I say to the prize committee, "Okay, guys, I didn't know this was going to happen in our lifetimes, but here it is. Somebody's actually got a real thing about this. Let's go check it out."

And eventually a couple of people really worked hard on going through it. But then people were like, "Well, does it really solve the problem? Does it really prove it's universal? What are the footnotes to this, and how complicated is the initial condition?" Et cetera, et cetera. And it's like, if you wanted something well-defined, this is about as well-defined as it comes. Although it is a complicated issue what counts as universal computation.

And it was in a sense funny to me that this thing that I thought was a very clear target — because of the way these difficult things work out, it's never exactly what you think. That is, it's like I say, "Okay, I want to find the fundamental theory of physics. I'm going to find the rule which makes the universe."

And then you realise, well, actually there's this whole ruliad object — and the question that I originally asked isn't quite the right question. It's more like we have this whole thing and we're observers of this, et cetera, et cetera.

So whenever you build one of these tall towers, it turns out that the particular thing you thought was the target probably isn't precisely the right definition.

[3:01:32] WALKER: Right. So what's the most underrated chapter or section of the book today? I feel like you might have said Chapter 10 in the past, but maybe that's now changed with the Physics Project underway.

WOLFRAM: Yes, Chapter 10, which is about perception and analysis — it's about to have its day in the sun. Because I'm working on this thing that I call observer theory, which is an attempt to make a general theory of observers in the same kind of way that Turing machines and so on are a general theory of computation. And that's a Chapter 10 story.

It's funny because every chapter has its own personality. I mean, Chapter 9 is a chapter about fundamental physics, and that's very much had its day in the sun. Actually, it really has two parts: there are the parts about spacetime and quantum mechanics and so on — the fundamental physics at that level.

The earlier part of the chapter is about the second law of thermodynamics. And it turns out — this is the amazing thing we've now realised — that these three big theories of 20th-century physics — thermodynamics and statistical mechanics; general relativity and gravity; and quantum mechanics — are all facets of the same result about how observers interact with the computational irreducibility of the underlying structure of things.

And the thing that's just fascinating to me, philosophically, aesthetically, scientifically, is that people had thought in the 1800s, "Oh, the second law of thermodynamics is derivable," but they never thought that general relativity was derivable. They never thought quantum mechanics was derivable.

It turns out they're all in the same bucket; they're all as derivable as each other, and they're all in some level derivable from the way that we exist as observers. So that's a super exciting thing.

But for me, the thermodynamics story is an interesting personal story because I started being interested in the Second Law of thermodynamics when I was twelve years old, and now, 50 years later, I think I can bring that to some kind of closure.

And that is certainly the longest-running project in my life. The Second Law of thermodynamics has inserted itself into my life many different times. And it's also interesting to me that after I wrote this stuff about the Second Law — and I have a book about the Second Law coming out real soon, actually — an awful lot of people I know wrote to me and said, "Oh, I've been interested in the Second Law for a long time as well. I've never written anything about it" (nor had I really, apart from the stuff in the NKS book), "but it's always been a thing I've been curious about but never managed to make progress on." I didn't know there were as many closet Second Law enthusiasts as turned out to be the case.

But I think somehow the early chapters of the book, which are about ruliology and what's out there in the computational universe — all these different kinds of systems — those I look at all the time; I need them all the time. And Chapter 12, which is about the principle of computational equivalence and covers things like the relationship to the foundations of mathematics — that I have very much picked over in great detail, and that's proved valuable.

Many of the earlier sections — the ones on starting from randomness, systems based on numbers, things like that — have all been of practical use in actual explorations that I've done.

I would say right now, well, Chapter 7, which is about mechanisms and programs in nature, is good for intuition-building; it's been good for paradigm creation. As for its detailed content, it has a bunch of specific things that I pick out from time to time. But it's more of a hodgepodge, I would say, than some of the other chapters.

But after you spend ten years on something like this with a fixed table of contents, yes, every chapter is your personal friend. And I think that the people who've studied the book a bunch, and the real aficionados, can quote page numbers, which I can't do.

[3:06:25] WALKER: That's impressive. What's your mental model of physicists like Freeman Dyson or Steven Weinberg who didn't take your new kind of science seriously?

WOLFRAM: Well, I knew both of those people. Well, they're a little bit different, actually.

Steve Weinberg was... it's kind of funny. I had lunch with him after the book came out, after he wrote things about the book.

Steve Weinberg was a longtime user of Mathematica and a very competent physicist. Murray Gell-Mann always used to say Steve Weinberg is a physicist who can work out anything — he used the viscosity of milk as an example. You just feed him something like that and he'll technically be able to do it. He had his kind of rhythm of doing physics, and he was very good at it.

I think, for him, A New Kind of Science was just alien, just like a message from the aliens type thing. And I remember I had lunch with him a while after it came out, explaining to him simple programs. He said, "I just didn't get it. I just didn't understand that." And it's like, "You wrote a whole review. You read the book, right?" He said, "But I just didn't understand that."

And it was just like it's a different paradigm. It was something that just went straight past.

Another mathematical physicist — a well-known physicist — also ended up writing one... I never read these reviews, so I don't actually know. I've been meaning, one of these years, to crack them open.

WALKER: Crack open a bottle of wine and...

WOLFRAM: And read all these things. It's perhaps a strange psychological quirk that I don't find seeing the feedback about what people say about me... I just do what I do, and it's kind of independent of what people say about me.

But another one was very directly — I remember getting on the phone with this person — and the first thing he says to me with great emotion, he says, "You're destroying the heritage of mathematics that's existed since ancient Greek times."

And it's like, "Okay, that's interesting."

WALKER: Quite the compliment.

WOLFRAM: Yeah, well, right. I was perhaps quick enough to say the next thing I said: "Then it is perhaps the greatest irony that I've made such a good living from purveying the fruits of that particular tradition."

But in that conversation it was very interesting because eventually this person was saying, "I look at the book, all I see is a bunch of pictures and code. I don't understand anything."

And I said, "That's kind of the sound of a new paradigm. It's different."

And I think Steve Weinberg felt that way as well. And then later on, I ran into him and was talking to him about doing the Physics Project. I think the most telling line was, "I hope you don't do that project."

Because he said, "If you do that project and if you are right, it will destroy what we've done for the last 50 years." And I said, "I don't think you're right about that. What you've done is a perfectly solid thing, and it's going to survive forever, and we may be able to do things that are below that or even above that, but that thing will survive."

And I suppose then the next thing Steve Weinberg said to me in that conversation was, "And anyway, you'll never be able to find any young physicists who are prepared to work on this stuff."

And I said, "Well, the one little glitch in that theory is we hire an awful lot of physics PhDs at our company." There's no lack of people in this, sometimes people who don't want to be in academia because they don't like the milieu of academia.

So I would say that in the case of Steve Weinberg, he had a paradigm, he was really good at that paradigm, he really liked that paradigm. For him, NKS was something just completely alien that, as he said, "I hope you don't do that project," thinking about the Physics Project, because he thought it was a risk to what he was doing. Which I don't agree with.

And this is an interesting case, because he was a first-generation person among the people who had done the things he'd done. A few generations later, they wouldn't think it was a risk. They would just think the foundations are solid.

Freeman Dyson is a little bit of a different story. I knew Freeman when I worked at the Institute in Princeton. He was very interested in new ideas and would forage the world for new ideas, and would always want to come up with the most contrarian idea he could.

And the number of times we'd go to lunch at the Institute, and Freeman would say, "I want to talk about some new idea." He'd explain it. And I'd say, "But Freeman, that can't possibly be right." And then he would kind of bristle and eventually go quiet.

And it's like, "No, it wasn't right." It was a contrarian idea, but it was... I remember he was big on the idea that — forget the electronics revolution — everything is going to be biological, all of the machinery we use is going to be grown biologically. And it was like, "Freeman, there are many reasons why that isn't going to work." Maybe he'll have the last laugh and we'll eventually understand how to do molecular computing and it will work very much like living systems. But certainly in the practicality of the early 1980s, it wasn't a thing.

There's an interesting thing I asked Freeman shortly before he died. I ran into him, and there'd been this quote somebody had given me. I don't pay attention to many things people say about me, but sometimes when they quote them to me they're kind of fun. So Freeman had this quote that somebody had asked him about the NKS book. And he'd said, talking about me, "He's very precocious. He does a lot of things young. Some people, when they get very old and decrepit, think that they have a global theory of everything. He's been precocious in that too."

So I said to Freeman, "Did you actually say that?" Because I had no idea. It was quoted to me secondhand by some journalist. And so he had the gumption to say, "Yes, I did say that." Okay, so I give him credit for that.

Then, this was eventually an email exchange, it was like: "And I never believed in any of the work that you did back in the 1980s." And I'm like, "Look, Freeman, we've interacted a bunch of times since then. Why did you never tell me that?" It's like you should have told me that. I wouldn't have agreed with you. But then one can actually have a discussion, rather than tell other people you think it's nonsense. Tell me.

I have to say — I'm sure it will show up in his archive sometime — I sent him a pretty strong letter that basically said, "I think it's irresponsible." Because I was at that time a young guy, and if he had something sensible to say, it might have actually been useful to hear it, rather than hearing it behind one's back. So I would say I was not impressed with his intellectual integrity in that whole thing. So I think it's a little bit of a different situation.

I think Freeman was a person who went through the Cambridge, England, education thing. I remember I first noticed his name because there was some collection of difficult math problems for high school students. There was one problem — very few of them had anybody's name associated with them — which said, "This was solved by Mr. F. J. Dyson." It was about the only one in this book that had a name on it. And it was before the Web, so you couldn't just go look up who this character was. But I have a good memory for names, I suppose.

So I remembered this, and years later, I would meet Freeman. And I realised that his greatest skill was solving math puzzle type things. And he was really quite good at that and quite good at solving mathematical physics kinds of things. But yet the grass was greener for him on the side of "come up with these incredibly creative ideas," even though I don't think that was the thing he was really the best at.

One tries to not make these mistakes oneself of saying, "There's this thing that I'm good at, and, oh, everybody's good at that, so I don't have to make use of that skill. There's this other thing that I'm actually no good at all, but that thing seems like the real thing I should be doing."

And I think that was a little bit of his situation. I'm probably more on the opposite side of that. I wouldn't consider myself technically competent — I wouldn't have been able to solve that math problem. Okay, I've built computational tools that automate things, that let me solve things like that. But me unaided, I wouldn't have been able to do that. He could. But I'm more on the side of "create the ideas".

WALKER: What do you think Dick Feynman would have made of the book? Because he was always quite committed to the tools of calculus, wasn't he?

WOLFRAM: He would have liked the book. I talked to him enough about it.

Look, he liked new things, he liked new ideas. And I think he always just wanted to be intellectually stimulated and solve the next thing. He would fall back on, "These are the tools I know, these are the tools I'm going to use." But he was always excited applying it to new things.

You know, I think one of my favourite, perhaps compliments or something, was something Dick Feynman once said to me. We were both consultants at this company called Thinking Machines Corporation, which was ultimately an unsuccessful parallel computing-meets-AI type company. And I had been generating this giant picture of rule 30. And Dick Feynman was like, "I'm going to figure this out. I'm going to crack this. This is not as complicated as it seems."

And so he tries to do this for a while, and eventually he says, "Okay, okay, you're onto something here." And then he says he wants to walk off from everybody else and ask me some question, and it's like, "I just wanted to ask you, how did you know it was going to work this way?"

And I said, "I didn't. I just ran these programs, and this is what I found." And he said, "Oh, I feel so much better. I thought you had some intuition that was far away from what [I] had." It's like, "No, don't worry. I just did an experiment."

Now, to be fair on all sides there, the thing I've realised in later years is to do an experiment and actually notice the unexpected, it turns out you have to be primed for that. Otherwise, you just whizz right past it.

WALKER: Yeah, theory-induced blindness.

WOLFRAM: Yes. Right.

But in terms of the fundamental physics stuff, I think Dick Feynman would really like that. It's really a shame that... We talked about quantum mechanics a lot, and he always used to say, "I've worked on quantum mechanics all my life, but I can tell you nobody understands quantum mechanics." And I think now we really do.

And I think the understanding that we have is one that he would really resonate with. Actually, he and I worked on quantum computers back in 1984, maybe. And we came to the conclusion that it's not going to work. And even in the last few months, from our Physics Project, I've developed an intuition about why it's not going to work and how to understand that it isn't going to work.

But it seems like other people are coming to the same conclusion. And in fact, the reasons we thought it wasn't going to work are, well, now transmuted into a different way of saying these things, the same reasons as today. I mean, you've got this quantum thing, and it makes all these different threads of history, and in parallel all these threads of history can do all sorts of different computations, but if you want to know as a human observer what actually happened, you've got to knit all those threads of history back together again, and you've got to say this was the answer you got. And that knitting process is one that's not accounted for in the standard formalism of quantum mechanics. And that knitting process turns out to be hard.
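The branching and merging Wolfram describes can be illustrated with a toy multiway system (a hypothetical sketch of mine, not the Physics Project's actual formalism): one string-rewrite rule, applied at every position where it matches, spawns multiple "threads of history", and threads that arrive at the same string merge back together — a miniature of the knitting he mentions.

```python
def successors(s, lhs, rhs):
    """All strings reachable in one step by rewriting a single
    occurrence of lhs to rhs somewhere in s."""
    out = set()
    i = s.find(lhs)
    while i != -1:
        out.add(s[:i] + rhs + s[i + len(lhs):])
        i = s.find(lhs, i + 1)
    return out

def multiway(start, lhs, rhs, steps):
    """Generations of the multiway system. Each generation is the set of
    states reachable in that many steps; branches yielding identical
    strings merge (the 'knitting together' in miniature)."""
    gens = [{start}]
    for _ in range(steps):
        gens.append(set().union(*(successors(s, lhs, rhs)
                                  for s in gens[-1])))
    return gens

# Two rewrite sites in "AA" give two threads of history...
print(sorted(multiway("AA", "A", "AB", 1)[1]))  # ['AAB', 'ABA']
# ...and at the next step four branch paths collapse to three states,
# because distinct threads reconverge on the string "ABAB".
print(sorted(multiway("AA", "A", "AB", 2)[2]))  # ['AABB', 'ABAB', 'ABBA']
```

The point of the toy: enumerating every thread is cheap, but an observer who wants one definite answer has to account for all the reconvergences, and that bookkeeping grows quickly.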

Dick Feynman was, to me, interesting in that he really liked to understand things in a fundamental way. One of his charming features was that he was a very good calculator. And so he would go off and do all these calculations, but he thought that was easy. So he would then get to the end and say, "Now I want to come up with a real intuitive explanation, because that's really hard to do." He would come up with this intuitive explanation, never even tell anybody about all these calculations. And so for years afterwards, people would say, "How did he figure this out? How did he know this was going to work this way?" And it's the same thing as my "I just did the experiment." It's like, well, I just did this whole giant calculation.

The thing that was always remarkable to me was that he could go through this big calculation and get the right answer. Because for me, unless I had a computer doing it or unless I had some intuition about what was going to happen, I just wouldn't have gotten the right answer.

But the precursors of the [NKS] book he did get to see and I got to talk to him about. And I would say that he was quite into them.

The idea that, for example, physics is ultimately computational, I think was an idea that he talked about. He talked to me for ages about why is the e^(-Ht) in statistical mechanics the same as the e^(-iHt) in quantum mechanics?

WALKER: Right. Coincidence or not?

WOLFRAM: Yeah, right. And it's not a coincidence.
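The identity being alluded to can be written out explicitly; this is the standard Wick-rotation relation, stated here as background rather than as something spelled out in the conversation:

```latex
% Quantum time evolution vs. the statistical-mechanics (Boltzmann) weight:
% substituting imaginary time, t -> -i*tau, turns one into the other.
e^{-iHt}\,\Big|_{\,t \to -i\tau} \;=\; e^{-H\tau}
% Identifying tau with inverse temperature (tau = beta = 1/kT) gives
% the Boltzmann factor e^{-\beta H} of statistical mechanics.
```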

One of the things I really miss, actually, about Dick Feynman and Steve Weinberg, for that matter, is one of the things we now have to do in the Physics Project is go from this very foundational level of "these are principles about what's going on" to, "Okay, you're an astrophysicist. You've got a big telescope. Point it in this direction and see if you can see a dimension fluctuation or something. Figure out what you actually look for, what is the physics detail." And it used to be the case, at least in that generation of physicists, that those kinds of people were quite good at figuring out, "Okay, we've got this set of principles underneath. So what is the actual consequence for what happens to an active galaxy or some such other thing?"

And it's frustrating because the younger folk mostly are much more specialised. In fact, at our summer school that's coming up right now, I'm hoping I'm going to get some people who are actually going to go figure some of this stuff out, because otherwise I'm going to be stuck doing it myself (talking about a lack of delegation and the need to dive into the details). I think I know how to do this stuff. I used to be pretty good at it, but I'm rusty at those kinds of things.

Can we actually figure out what happens to a photon when it propagates through a dimension fluctuation in the early universe? And are there these strange fractalised images that the space telescope should see based on that or whatever? Don't know. So those are things that in the Dick Feynman, Steve Weinberg generation of physicists, they were generalist enough that they would have been really good at working those things out.

I might have even persuaded Steve Weinberg to work some of those things out, because that was exactly his kind of thing.

I find it interesting now people have started using our framework for thinking about General Relativity and using it as a computational scheme for studying black hole mergers or whatever else. And I just saw a quote, actually Jonathan sent me, from some person saying, "These methods are really good. It's so strange that they're based on such a crazy set of premises underneath." For them, you're using this method and it's based on this idea that space is discrete in this way, but they don't really care; for them, that discreteness is just what you need to put it on a computer. So if they see some weird numerical glitches in their calculation, they may be unhappy, but we'll be really happy, because that round-off error or whatever is the signature of the discreteness of space.

That's one of my things now: to try and find the analog of Brownian motion for spacetime. Brownian motion, the little microscopic motion of pollen grains and things, was something people finally understood as the pollen grains being kicked by individual molecules. I want to find the analog of that for spacetime, because that's what's going to show us that spacetime is ultimately discrete.

[3:27:33] WALKER: Speaking of Dick Feynman, and given I'm interviewing Richard Rhodes next week, I have to ask: did Dick ever tell you any stories about working at Los Alamos that you can share?

WOLFRAM: Many. I mean, gosh. One that I suppose is perhaps interesting is, "After I saw the first bomb test," he was saying, "I thought the world's going to end." It's like, "Why is anybody bothering to do anything? I can see the end."

I thought it was interesting that he had that reaction to that.

He was in this funny position because he was running this team of human calculators. He was on the younger end of people who were there, so he wasn't part of the "actually design the bomb, figure out how the bomb should work" kind of thing.

But he was, I think, viewed as the super smart guy who was, in that particular case, put to work on doing this human calculation stuff.

But yeah, let me think. I can think of... He was quite an enthusiast of Oppenheimer's.

WALKER: As a lab director.

WOLFRAM: Yeah.

I remember one day there was this strange event that was essentially, I suppose one could say, a Californian cult-like thing, where the guy who was running it had a thing for physicists and put up the money to put on these physics conferences. And so Dick Feynman and I were kind of the people selected by that group to be seated across from this guy at dinner.

WALKER: Was this in San Francisco?

WOLFRAM: Yeah. est was the name of the operation. It was a guy called Werner Erhard.

WALKER: Yeah, I had Leonard Susskind on the podcast once, and he was telling me about how he used to go to these dinner parties as well and had a conversation at the blackboard with 't Hooft over Black Holes one time, and Stephen Hawking was there as well. Anyway, I digress.

WOLFRAM: I might have been there at that same one, I'm not sure.

But anyway, after we had this conversation, Dick Feynman just wanted to talk for hours about what is leadership, what causes people to follow people in sometimes apparently irrational ways. Brigham Young was one of his big examples: how did a bunch of people decide to follow somebody out into the desert? And how do people follow Werner Erhard at est?

He put Oppenheimer in this category of somebody who can be a leader whom people follow just by force of personality or something. Now, everybody always gives advice that's based on their own experiences. And so he would always say, when I was off talking about organising things and companies and all that kind of thing, "Why do you want to do any of that stuff? Just hang out and do science."

He had very bad experiences, I think, in later years with two categories of people: university administrators and publishers. Those markets are not the most efficient; those industries are not the best organised. And so he would imitate for me in a less than flattering way what people in those kinds of industries had said to him about different things (and it's like, "these people are idiots"). But I think he picked particularly bad examples of industries there.

But it's always funny in physics. I was involved in the field at a time when particle physics was still in the "thank you for the Manhattan Project" phase as far as the government was concerned. And a lot of the people who I knew in physics who were the older generation of physicists, many of them were treated with great respect, sometimes for reasons I had no idea about, because they were reasons like "Oh, yes, that person invented the such-and-such thing that was critical to the atomic bomb," but it was secret or semi-secret. It was just a "that is a very esteemed person" type thing. And there was almost a clique of people who'd worked at Los Alamos on the Manhattan Project and who were a sort of brotherhood of physicists. (I'm afraid it was all brothers, pretty much.)

And that left an interesting glow in the world of physics that lasted, well, I would say the end of that was the killing of the Super Collider, which happened in the 1990s. That was, I think, the end of the era of post-Manhattan Project government saying "thanks for helping us win the war" and the people having retired or died who'd been involved in that process.

I think this whole thing about intense projects and people who do intense projects and what's involved in doing intense projects... The Manhattan Project is obviously a bigger story than any projects I've been involved in. But it is always interesting that you see people who are involved in these projects, the project succeeds, there is, I think, a certain glow that persists for probably a decade or something for people when they've been involved in a project. Particularly projects where it goes from nothing, just an idea, to this whole thing in the world. People realise, "Gosh, one can actually do that."

And one of the things I found interesting is that sometimes I think, "Oh my gosh, we've got to do this project. I've got to push so hard with such intensity. These people are just going to quit. It's going to be terrible." It doesn't happen that way. The intensity of projects is actually a very invigorating thing for people. Even though it's like, "Oh my God, I'm working so hard, it's terrible, et cetera, et cetera," it's actually a great experience. It's usually when the project is finished and everybody's like, "Oh, what do I do next?" That's when people are off to do something else.

WALKER: Yeah, it's a great joy to be down in the trenches with your colleagues, so to speak.

WOLFRAM: Yeah, I think it's also when collectively one achieves this big thing, there's kind of this excitement of realising that it's possible to do these things.

WALKER: Yeah, I see what you mean. I've got one final question on the impact of NKS and then the final section is just the content and some of its implications, which I will swiftly cover if it's okay.

WOLFRAM: Yeah, it's okay. I'm having fun. You're asking very interesting questions.

[3:35:40] WALKER: Okay, I really appreciate it. Okay, so the final question on the impact of the book is what would it take to get computational X for all X injected into universities and academia?

WOLFRAM: Oh, that's an interesting question. I've been thinking about that question.

The first step, I think, is even to define what it means for people to do computational thinking. And I think it got a bit easier because LLMs can now get people over some of the first hurdles. They don't write perfect computational language, but they get one roughly in the zone; I don't quite understand the dynamics of how that gets improved once you're in the zone, but it helps in building confidence for people.

So first step is: what does it mean to learn computational thinking? It's going to end up falling on me to try and write some big introduction to computational thinking that is an attempt to explain that. What does it mean? What kinds of things do you need to know? It's not just principles. It's also just facts about the world: images are encoded this way, audio is encoded this way. You've got to have intuition about how things work. And I think that's step one, and that's something broadly accessible to people. And now the tools, particularly thanks to LLMs... The art history majors really are perfectly enabled to get computationally serious.

We've had formalisations of thinking about things from logic to mathematics now to computation. And computation has a great feature that the computers can help you with it.

And so now I think the dynamics of how does that get injected into universities? Fascinating question. I mean, I have a bunch of university presidents who have asked me about this, and it is complicated. Because, for example, does the computer science department eat the university? Everything's computational X, so it's all computer science?

Probably not. Just because fields use mathematics doesn't mean the mathematics department runs those fields. The computer science departments at most universities have swelled greatly through teaching people basically programming language programming. And it's not obvious that's going to be such a thing anymore. I mean, for somebody like me, it's like I've been automating that stuff for 40 years. I've told many people, don't go study rote, low-level computer science. Whatever you learn today is going to be like all the people who said assembly language is the only thing you could use back in the 1980s, and nobody learns assembly language anymore. It's not a great bet. And a lot of universities, even very elite, intellectually-oriented universities, have felt this need to bulk up their offerings in what amounts to trade school computer science. So I don't think that's the place where the computational thinking...

How do you write this specific program? That's one thing. How do you take something in the world and think about it computationally? That's actually a different kind of thing, and it's not what most of computer science at universities has consisted of.

So how do you get people who can do computational X? Do you inject them into departments of X? I think what's going to emerge is there'll be... hopefully, I don't know, maybe even the things I'm writing will end up being a general literacy, computational thinking thing that people learn, that I suspect people will say is one of the more useful things they learned in college or in high school or wherever it ends up being taught. Because it is the paradigm in the 21st century, and it's useful to have some intuition about it and some way of thinking in terms of it.

It's challenging. Actually, the person I was just talking to just before we were chatting here, that person is a philosopher who is now in charge of humanities at a large university, and talking about, okay, they want to hire AI ethics people. Where are they going to get them from? Who does that stuff? What is the track that leads you to that? Is it technical?

And there's a vacuum, I think, in a lot of these areas of what does it mean to not do engineering, computing, but to use computational thinking in attacking things in the world? I've spent my life building the tooling and the notation for doing that, but that hasn't solved the problem of what is the organisational mechanism for making that stuff happen.

Now, one of the more outlandish things that might happen is it just doesn't happen at universities. It gets built elsewhere. You were asking how basic science might support itself. Maybe what happens — and, I mean, after all, universities had to be invented back in the 1200s or whatever — maybe what happens is the computational thinking gets taught in a setting that isn't like a current university. I mean, our summer school, in a sense, is a small example of doing that, but we're educational amateurs. We're not giving out the indulgences, the degrees. We're not part of that ecosystem. We're just teaching certain content.

And I think that's an interesting question... The fact that programming gets taught in fancy colleges, intellectually-oriented colleges, is actually a little weird. And I think it's only happened because a bunch of high-end white collar jobs require programming.

Those places don't teach, for the most part, things like animation or post-production skills, things like that. Those are taught in much more trade-school, vocational kinds of places. And a lot of programming is that same kind of thing. It's not that different from being a CGI artist or something like that. It takes work, it takes human effort, et cetera. But it isn't the same kind of thing as the big intellectual kind of arc of things that you might think of at an elite intellectual university.

So the fact that happened at universities is a quirk of history, I think. And it might not have happened that way. It might have been that the boot camps and the alternative... Well, it's like many of these things work in a funny way. I mean, we were talking about Y Combinator a bit earlier [off mic] and the whole accelerator incubator type world — that's in a sense the parallel world to business school.

Business schools grew up in the '30s, '40s, '50s, they got attached to universities. Y Combinator isn't part of a university, but it's teaching the same kind of a thing as you might learn in business school. It just didn't happen to be attached to a university.

Maybe that's what will happen with computational X. I'm not sure. Maybe what will happen is it will grow that way at first and then those things will become acquisition targets for universities. And universities will absorb these things because universities just have this infrastructure that's been built up. I mean, it's different in different countries, but in the US, for example, there's a lot of this government-intersecting infrastructure about student loans and all this kind of thing, and the whole machinery of credentialing, that's very entwined with what is now, in the US, something like 140-year-old infrastructure. I mean, some universities are older than that, but it really started developing maybe 150 years ago or something.

WALKER: That's fascinating. If you become the head of the kind of computational equivalent to university, I guess you could call yourself the principal of computational equivalents.

WOLFRAM: [Laughs] That would be nice. It would be lovely to. That's a cool idea.

One of the things that's great about computation is that it is in some level accessible to everybody.

WALKER: It's egalitarian.

WOLFRAM: Right. It's not like there happened to be a tantalum deposit here, so we can mine that. It's a global resource. And we were talking a bit earlier about who gets to make use of that resource and how do people get to the point where they can be at the leading edge of these kinds of things. And I think it's a societal challenge more than it is, in this particular case, a technical challenge.

[3:45:45] WALKER: Yeah. Okay, so let me now move to the final part of the conversation, which is the content of NKS and some of its implications for history, technology, and artificial intelligence. So I'd be grateful if you could just briefly explain the principle of computational equivalence and perhaps some of the remarkable discoveries that led to you formulating it. And please assume many listeners probably won't even know what computation is, universality is, or cellular automata are.

WOLFRAM: Right. So, I mean, what is computation? Computation, as I see it, is you define precise rules and then you follow them. It's a way of formalising things that happen in the world as you describe them. It's a way where you can say, let me write down this rule. The rule is going to say, I've got a line of black and white cells, and the rule says if I have a black cell here and a white cell to its right and a black cell to its left, then underneath I'll put down a black cell. You just keep applying that rule over and over again.

Rule 30 after the first 50 steps.

WALKER: That's rule 30?

WOLFRAM: That's a piece of rule 30. It has eight little pieces like that. And the terrible thing is, you ask me from my memory to produce them, and I'll say, "I just need to get out my computer."

Anyway, you've defined these rules. They're really simple rules. You can think of these rules as implementing a computation, but in a sense it's a computation with extremely simple rules.
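The kind of rule being described can be sketched in a few lines. The following is a minimal illustration of a rule-30-style cellular automaton, using the standard update "left XOR (centre OR right)" and, purely as a simplifying assumption, wraparound boundaries; it is illustrative code, not anything from Wolfram Research:

```python
def rule30_step(cells):
    """One update of rule 30: new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single black cell (1) in a row of white cells (0)
# and just keep applying the same simple rule over and over again.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Even at this tiny scale, the printed triangle already shows the irregular, hard-to-predict texture being discussed.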

And you might think, as I did, that when the rules are sufficiently simple, whatever the thing does will be correspondingly simple, and I'll certainly always be able to say what it's going to do, because after all, I know its rules.

Well, the big surprise to me was even though the rules may be very simple, it can still turn out that the behaviour that they have is very complicated, looks very complicated to me.

Rule 30 after 300 steps.

I can try and apply all kinds of mathematics, statistics, cryptography, whatever to it, and it's like, "Can I crack this?" That was what Dick Feynman was trying to do with Rule 30. Can I crack this using some mathematical method?

And the answer is, well, no. It's somehow doing something that is computationally sophisticated enough that you can't just say, "Oh, I know the answer." It's working out the answer for itself by following step by step what it's doing. But you can't just say, "I'm smarter than it is, I'm going to tell you what the answer is."

So I observed this first in probably 1982, but I really didn't recognise it properly until 1984: this phenomenon that very simple rules can produce very complicated behaviour.

And it's then like, how do you understand that phenomenon? What's the bigger picture of what's going on there?

And what I realised is every one of those rules being applied, that's a computation that's happening. And then the question is, well, is that a computation where it's a simple computation? I can just jump ahead and say what the answer is — or not?

And the thing that I realised is, in the end, even though the rules are simple, the computations that get done are just as sophisticated as the computations that can get done by much more complicated rules, including the kinds of rules that operate in our brains and things like this. And so the principle of computational equivalence is this statement that above a very low threshold, basically any set of rules where the behaviour is not obviously simple will turn out to correspond to doing a computation that's as sophisticated as any computation can be.

So that's this idea that you're looking at these rules and they're really simple ones, they just do very simple things, they make periodic patterns, maybe they make fractal patterns (that's kind of the Benoit Mandelbrot point), and then as you go to other rules, suddenly you see all this incredible complexity. And there's this one threshold — once you pass that one threshold, they're all the same.

What does that mean? One of the big consequences is the thing I call computational irreducibility. You say, I've got rule 30, I've got this simple rule, I look at what it does, I run it for a billion steps, I can follow all those billion steps, but can I jump ahead and say what it's going to do after those billion steps by something less computationally expensive than following those billion steps? Computational irreducibility says you can't do that.
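What "following the billion steps" means can be made concrete. In the sketch below (an illustration using the standard rule-30 update "left XOR (centre OR right)", with the row padded wide enough that boundaries never matter), the only method used, and the only general method anyone knows, for getting the centre cell at step n is to compute every step up to n:

```python
def rule30_center_column(steps):
    """Centre-column bits of rule 30, obtained by explicitly running every step."""
    width = 2 * steps + 1          # wide enough that the pattern never hits the edges
    row = [0] * width
    row[steps] = 1                 # single black cell in the middle
    bits = []
    for _ in range(steps):
        bits.append(row[steps])    # record the centre cell at this step
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
    return bits

# The sequence looks random; no known shortcut computes bit n directly.
print(rule30_center_column(16))
```

Computational irreducibility is the claim that for systems like this there is no substantially cheaper route to the answer than the loop above.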

It's a very important idea, because it tells one there's a limitation to science.

What one had come to expect from the mathematical equations approach to science is science can predict stuff. We write down the equation, it just tells us, "Oh, at this value of the time, this is what will have happened." And that's the expectation, that's what people think science is about: predicting things and having a cheap way to predict stuff.

What computational irreducibility implies is you don't get to do that all the time. An awful lot of what's out there in the computational universe is computationally irreducible. And it's saying from within science, you're being told, "No, you can't make these kind of easy predictions." You can't expect what we thought was the mission of science to work out.

So I think it's a rather important thing in terms of our everyday understanding of the world and of what science means. And it's something which people are slowly coming to terms with.

It's like the question: can we force the AI to only do what we want it to do? Well, no, because there will always be unexpected things it does, because of computational irreducibility. Can we open up the code of the AI and say, "Oh, now we can see the code, so we know it's not going to do anything bad"? No, you can't, because of computational irreducibility. It has a lot of these consequences.

And in the end, it's the interplay of computational irreducibility and our finiteness as observers that ends up with the laws of physics that we have, because you might say, "Okay, there's computational irreducibility in the world. How come we can predict anything?" It could be the case that everything that goes on in the world is ultimately unpredictable, that in a sense everything is governed by fate. We just never know what's going to happen; it's always just wait and see what happens.

But one of the consequences of computational irreducibility is this phenomenon that there are always these patches, these pockets of computational reducibility, where you can jump ahead. Those are the things that are the discoveries we make that let us say things in science. And we live in particular pockets of computational reducibility.

And it turns out that for observers like us, we parse this computationally irreducible underlying structure of the world in terms where we aggregate things together, and there are inevitable laws of that aggregation. So, for example, we've got a bunch of molecules of gas bouncing around in this room, and the motion of those molecules is really complicated. The whole Second Law of thermodynamics story is about, oh, it's really complicated, really random down there.

And yet, in terms of what we care about with the gas molecules or whatever, we notice these overall air currents, we notice the gas laws and so on. These are things that we can talk about at our level of observation, and they are pieces of reducibility on top of this computational irreducibility.

Anyway, both the philosophical consequence and the philosophical underpinning of our Physics Project is this interplay between "what are we like as observers?" and "how does computational irreducibility work?"

WALKER: Great. Okay, so at least three profound ideas in there. Let me push you on a couple of things. How many more rules have been shown to be universal since 2002?

WOLFRAM: It's a barren story. We got rule 110, we got the Turing machine. There's some kind of simple extensions of those kinds of things.

I would say that proving universality is really hard. It's a computationally irreducible story. In fact, it's an undecidable story. You never know how far you're going to have to go to prove universality. For me, it's like, at least we've got a few datapoints. At least we've got a few places where we can say, yep, we know it works out this way. I'd love to have more.

Well, there's a couple of points. In the end, it's all about making compilers that compile to a machine code that is unbelievably low-level. As molecular computing becomes more important, that may be something on which there is more emphasis put.

But not a lot has been done. It's terrible, really. Because ultimately it's a really interesting thing to know. I suspect that the S Combinator on its own is universal, and I put up a little prize for that. And so far, no serious takers on that one, except for a bunch of people saying it can't possibly be true. And I point out, "No, the argument you have for why it isn't true isn't right."

What people choose to work on is a funny set of choices, because it's like, okay, we know we've got a few datapoints here; at some point it might become a celebrated problem and then everybody's got to solve it, like the Riemann hypothesis or something. But for whatever reason, it didn't quite get to the celebrated-hypothesis stage. And so it hasn't had this herd trying to populate it.

I actually haven't thought about it so much in recent times, because those things are always so incredibly difficult and technical and detailed. Not my kind of thing at all.

Now the question is, can I automate it? And that's a more feasible thing. And that's an interesting question. I mean, with proof assistants and these kinds of things... Actually, that's a reasonable question. Could one have a proof-assistant system that is a computer-assisted way of doing universality proofs? I don't think anybody's touched that. It's a good thing, actually. It's a good thing. I will have to bear that in mind as I come up with projects in our summer school next week.

[3:57:20] WALKER: Does the fact that not many more rules have been shown to be universal since 2002 cast doubt on the principle of computational equivalence?

WOLFRAM: Not in the slightest bit.

WALKER: Because one of its key implications is that universal systems are ubiquitous.

WOLFRAM: Right, but the problem is it's so hard to climb those mountains that saying, "Oh, there isn't a mountain there..." There's no way you can say there isn't a mountain there. It's not like people said, "This might be a mountain," no, actually it turns out it's flat. It's like, yes, those mountains are still out there.

I would say that the intuition behind the principle of computational equivalence, that in any system you can find complicated behaviour, gets confirmed over and over again. And I watch it, actually; it's interesting at our summer schools and things like that, people say, "I've got this system, and look, it's a really simple system. It can't possibly do anything complicated." I've even said that myself about lots of systems.

When was the last time this happened to me? Within the last three months, I'm sure. I've even had the same mistaken intuition. "This system is so simple, it can't do anything interesting." And then I go, I study it, and, "Oh my gosh," it does something complicated. Who knew? And it's like, well, I've got this whole principle of computational equivalence.

I'm a first generation person in this regard, so it still seems to me very surprising. But to the next layer of people, the Jonathan Gorards of this world, it seems less surprising to them, because it's always been there for them. And by the time we're a few generations further on, it's going to be something people just take for granted as a principle in the same kind of way there are lots of scientific principles that one takes for granted.

The status of the principle of computational equivalence — at some level, it's almost a definition of computation; at some level, it's a provable thing; at some level, it's a fact about nature. It's a complicated meeting point of all those kinds of things.

I would like to think that in the course of time, there will be more datapoints where we can put a flag down and say, "Yep, it said this."

I mean, I think it's pretty cool that it could predict this Turing machine. Alex Smith could have discovered it's not universal. He didn't. (I would have been surprised if he had.) People say, "You've got some scientific theory, does it have predictions?" Well, this one has boatloads of predictions. Now go out and actually do the experiments — it isn't experiments here, it's theoretical work — and go validate these things. Well, that turns out to be really hard. But it's kind of nice that those things are out there to be validated.

[4:00:13] WALKER: Just as a piece of intellectual history. I'm curious, so computational irreducibility follows logically from the principle of computational equivalence, but—

WOLFRAM: That's not the order that I discovered them in.

WALKER: Yeah, right. Okay, so was it when you were looking at rule 30 that you had the intuition about computational irreducibility?

WOLFRAM: More or less, yes.

So that was 1984, 1985. And actually, interestingly, I tracked this history down. The thing that really caused me to condense... It's interesting, actually. This idea of computational irreducibility, I had the general intuition of it, but I was writing an article for Scientific American, and I wanted to explain what was going on more clearly. And that's when I condensed it into this idea of computational irreducibility.

And later on, when I was working on the NKS book, that's when I again wanted to condense a bunch of things that I'd seen, and that's when I came up with the principle of computational equivalence.

So both of these, in a sense, were summaries of things that were expositorily driven.

WALKER: Okay, so let me push you on computational irreducibility. So I guess my claim here will be that it's overstated or not as prevalent as the book makes out. So there must still exist many opportunities to outrun natural systems, because nature, with its tendency to maximise entropy, is less likely to naturally produce the complexity that we might associate with sophisticated computations, and instead we see a lot of randomness.

WOLFRAM: Well, let's see. You've packed a lot of ideas there, which are complicated to unpack. Is computational irreducibility not as much of a thing as I say it's a thing? You should go do some computer experiments, and you will come back saying it's a real thing. Because it's something for which we just don't have intuition.

Even now, even though I've lived with this thing for 40 years now, I still make this intuitional mistake, even though it doesn't last long for me because I know, oh, yeah, I made that same intuitional mistake again.
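The kind of computer experiment Wolfram suggests can be sketched in a few lines. This is an illustrative Python version (not Wolfram Language, and not his code): evolve the rule 30 cellular automaton from a single black cell and read off the centre column, which looks random even though the rule is trivial to state — there's no known way to shortcut the evolution.

```python
# Rule 30: a cell's next state depends on (left neighbour, itself, right
# neighbour); the update has the compact Boolean form  left XOR (self OR right).
def rule30_step(cells):
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

def centre_column(steps, width=None):
    """Evolve from a single 1 cell and record the centre cell at each step."""
    width = width or (2 * steps + 1)   # wide enough that the edges never matter
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

print("".join(map(str, centre_column(32))))
```

The centre column begins 1, 1, 0, 1, 1, 1, 0, 0, ... and passes standard randomness tests; to know bit number n, as far as anyone knows, you simply have to run all n steps.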

But the question about nature... The things that we notice most in nature, and that we use for our technology and do engineering with, are precisely the things that we can predict. We have selected those things to build our world, our built world, out of: things where we can say what's going to happen. We want a car that goes from here to there. We don't want a thing that has this random walk where we don't know where it's going to end up. So we pick these pockets of reducibility to live in, so to speak.

You could live in the hostile environment of computational irreducibility, or you could live in the pleasant Mediterranean climate of computationally reducible things. I think that has a certain selection bias for us.

When it comes to, for example, if you're asking about the AIs or something, right now, the AIs that we've built are trained on human stuff, so they work in a way that's very aligned with the way that we work. But if you say, where could the AIs go? They've got this whole computational universe out there.

They could go off and start just spinning around in the computational universe. Well, then they might find other pockets of reducibility, but they're out there in the computationally irreducible world. This is a feature of: we are selecting things for ourselves that we can successfully navigate with the finite minds that we have.

WALKER: If the computational paradigm ultimately fails scientifically — and I know that you strongly believe it won't, and you've worked very hard to establish it — but assuming for the sake of argument that it does, what do you think the most likely reason for that would be?

WOLFRAM: Well, we're deeply past the point of no return, let's put it that way. If you look at the new models that have been made for things in the last 20 years, it's programs, not equations. If one was wondering, how was the story going to go? We know the answer.

But I think if you ask the question... There's computation. There are things which are not computationally universal but are simpler, where you can always jump ahead. And there are things that are hyper-computational: say you've got a Turing machine, it does its computation, it's computationally irreducible. But you could imagine that you had a machine that could just answer all computationally irreducible questions. Just imagine you have such a machine. Alan Turing had this idea; he called it an oracle.

Imagine you have those things. Okay, we've got this hyper-computational world where it's full of these things which can do beyond what computational irreducibility talks about. It can jump ahead in every computationally irreducible computation. I don't think we live in that world.

I think we have pretty good evidence we don't live in that world. As a theoretical matter, that world is sealed off from the world in which we live in the same way that the innards of a black hole are sealed off by an event horizon — inside a black hole, at least in the simplest case, time stops. So in other words, we get to think that we have an infinite future. If you're living inside a black hole, looked at from our point of view, you don't have an infinite future. Time will stop. To you, you're just doing your thing. And there's a point at which, well, looked at from an outside observer, your thing just stops. But for you yourself, you're just doing your thing.

And similarly, from a hyper-computational observer of our universe, it would be like, well, those guys just stopped, they didn't do anything interesting; it's only hyper-computation that's interesting. But for us, there will be hyper-ruliads — those can, in principle, exist — but they are forever sealed off from us by an event horizon, basically. And so it's not even clear what it means to talk about their existence.
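The objects in this passage can be made concrete. A Turing machine is just a lookup table of rules; an oracle would be a black box answering the questions (like halting) that no finite simulation can settle in general. A minimal simulator in Python, with a toy two-state rule table invented for illustration (this is not Wolfram's 2,3 machine):

```python
def run_turing_machine(rules, tape, state="A", head=0, max_steps=100):
    """Run a Turing machine with rules: (state, symbol) -> (new_state, write, move).
    Returns (final_tape, steps) if it halts within max_steps, else None —
    in general, whether it ever halts is exactly what an oracle would decide."""
    tape = dict(enumerate(tape))          # sparse tape; unseen cells read as 0
    for step in range(max_steps):
        key = (state, tape.get(head, 0))
        if key not in rules:              # no applicable rule: the machine halts
            return tape, step
        state, tape[head], move = rules[key]
        head += move
    return None  # still running; we can only keep simulating, or give up

# Toy machine: write 1s while moving right over blanks, halt after seeing a 1.
rules = {
    ("A", 0): ("A", 1, 1),   # on blank: write 1, move right, stay in state A
    ("A", 1): ("B", 1, -1),  # on 1: switch to state B, move left (then no rule)
}
print(run_turing_machine(rules, [0, 0, 0, 1]))
```

The point of the passage survives the toy example: for machines like this you can know the whole rule table and still, in general, have no way to know the outcome except by running it.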

As a practical matter, you imagine the science fiction universe where AIs have been outlawed, we don't have computers, and it's like, what's the world like? Well, it's a little bit palaeolithic.

I think we're deeply past the point of no return.

It's like asking some question, like, what would happen if the speed of light was infinite? Well, it's just not. And the universe just is not constructible. All these things are interdependent. And the fact is, at this point in our development of our civilisation, I think we're really past the point of no return for computation as a paradigm.

Now, how will more people learn this paradigm? That may be by fits and starts. There was a long period of time when people didn't learn natural science, when it was like, well, it's either an Aristotle or it's in the Bible and there's nothing else to learn. So human affairs can certainly inhibit what happens. But I think there's a certain deep inexorability to the place where we're going to end up.

And you can already see that enough has happened that the end of the story is pretty clear. It's just like if you'd gone back to, oh, I don't know, the 1500s, and you asked people, "How do you think about the world? How do you work out what to do in the world?" Nobody's going to say we use math to do that. That was not really a thing. Math was kind of a toy, and it was used by merchants to do very basic arithmetic, but nothing fancy. It was the thing where people would hold these competitions to solve cubic equations and things like this, but it wasn't a thing where people would say, "Well, everything we do in the world and all our engineering is going to be done with math."

Nobody would have said that. But yet it became quite inexorable at some point.

WALKER: A very quick digression on the graph-based physics. Aren't these theories compatible with nearly any world we could find ourselves in?

WOLFRAM: Well, again, you're packing a bunch of things into that question. "A world we could find ourselves in." So what happens in this idea of the ruliad, this kind of entangled limit of all possible computations, which we are part of and we are sampling it, and given our characteristics as observers, there are certain constraints on what world we can perceive ourselves to be in.

If we were different kinds of observers — if we were observers who are greatly extended in our computational abilities, greatly extended in space, don't believe we're persistent in time, all these kinds of things — we could believe we're in a different world.

Let's see, you say, "are they compatible with any world we could find ourselves in?" I think that if you're asking: could our theories still be right if general relativity was not true in the world that we perceive?

The answer is: if we are the way we are, no. If we are aliens with very different sensory apparatuses, then sure. But I think for us to be the way we are, it is inevitable. It's a matter of formal science that the ruliad plus the way we are implies things like general relativity.

WALKER: I see. Are some historical dynamics computationally irreducible?

WOLFRAM: Yes. I think this question of theory of history, is there a theory of what will happen in the world? No. There's lots of computational irreducibility. There's lots of: "You just have to see what happens."

[4:11:54] WALKER: Oh, I'm sorry, I misspoke. The question was: are some historical dynamics computationally reducible?

WOLFRAM: So can there be theories about history?

WALKER: Yes.

WOLFRAM: The answer is yes, for sure. People at different times, lots of philosophers, have had theories of history. They've often been horribly abused in sociopolitical ends. But, yes, there can be an inexorability to certain aspects of history, for sure.

Everybody has an intuitive sense that history repeats itself. And certainly the lesson of history is that history repeats itself. And that, in a sense, is right there telling you that there are some reducibilities in history. There is some theory of history, at least at that local level.

WALKER: Right. Just through the repetition?

WOLFRAM: Yeah, well, I mean, that just shows you there's a theory. Whether there is a bigger arc to that repeatability, I don't know. But that there is some repeatability suggests that there is a theory.

WALKER: Where would Karl Popper's anti-historicism fit into your framework? Is it like a limiting case of computational irreducibility?

WOLFRAM: I'm not sure I know what it is. You'll have to tell me.

WALKER: Well, just his idea that the course of human history is fundamentally unpredictable, since it largely depends on the growth of knowledge and we can't predict the science and technology of tomorrow, since if we could, we would already have invented it.

WOLFRAM: Yeah, I think that's actually not that far away. One of the things that is... Well, now, let's see. I mean, when you say you can't predict it or otherwise we would have invented it, I'm not sure I would agree with that conclusion, because computational irreducibility is all about the fact that you can know the rules but not know what will happen. So I'm not sure. I mean, that's interesting. I should learn about that. I don't know that piece of intellectual history.

WALKER: I'll send you the reference. Computational irreducibility found a surprising application in proof of work for blockchains. What are the odds that you've met Satoshi Nakamoto at some point over the years?

WOLFRAM: What to say about this? I think the odds that Satoshi read the NKS book are high. You always have to wonder about something like that situation, and you have to wonder what's the human story and what's the right thing to do with whatever one knows or doesn't know about that? And I think it's one of these things where... Let's put it this way: the idea of computational irreducibility in the NKS book and the arrival of proof of work in blockchain were not unrelated.
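The connection being drawn here can be made concrete. Proof of work is computational irreducibility put to economic use: finding a nonce whose hash clears a difficulty target has no known shortcut, so the work must actually be done, while verifying a claimed nonce costs one hash. A toy Python sketch (illustrative parameters, not Bitcoin's actual header format):

```python
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(data + nonce) has at least
    difficulty_bits leading zero bits. No known method beats brute force."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is cheap: recompute a single hash and compare."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = proof_of_work(b"block header", 12)   # roughly 2**12 hashes on average
print(nonce, verify(b"block header", nonce, 12))
```

Each extra difficulty bit doubles the expected search time, which is the asymmetry the blockchain relies on: hard to produce, trivial to check.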

[4:15:35] WALKER: Okay, interesting. So moving finally to AI, many people worry about unaligned artificial general intelligence, and I think it's a risk we should take seriously. But computational irreducibility must imply that a mathematical definition of alignment is impossible, right?

WOLFRAM: Yes. There isn't a mathematical definition of what we want AIs to be like. The minimal thing we might say about AIs, about their alignment, is: let's have them be like people are. And then people immediately say, "No, we don't want them to be like people. People have all kinds of problems. We want them to be like people aspire to be."

And at that point, you've fallen off the cliff. Because, what do people aspire to be? Well, different people aspire to different things, and different cultures aspire in different ways. And I think the concept that there will be a perfect mathematical aspiration is just completely wrongheaded. It's just the wrong type of answer.

The question of how we should be is a question that is a reflection back on us. There is no "this is the way we should be" imposed by mathematics.

Humans have ethical beliefs that are a reflection of humanity. One of the things I realised recently is that part of what's confusing about ethics is this: if you're used to doing science, you say, "Well, I'm going to separate out a piece of the system. I'm going to study this particular subsystem, I'm going to figure out exactly what happens in it, and everything else is irrelevant."

But in ethics, you can never do that. So imagine you're doing one of these trolley problem things. You've got to decide whether you're going to kill the three giraffes or the eighteen llamas. And which one is it going to be?

Well, then you realise that to really answer that question to the best ability of humanity, you're tracing all the tentacles: the religious beliefs of the tribe in Africa that deals with giraffes, the consequences of the llama's wool going into this supply chain, and all this kind of thing.

In other words, one of the problems with ethics is it doesn't have the separability that we've been used to in science. In other words, it necessarily pulls in everything, and we don't get to say, "There's this micro ethics for this particular thing; we can solve ethics for this thing without the broader picture of ethics outside."

If you say, "I'm going to make this system of laws, and I'm going to make the system of constraints on AIs, and that means I know everything that's going to happen," well, no, you don't. There will always be an unexpected consequence. There will always be this thing that spurts out and isn't what you expected to have happen, because there's this irreducibility, this kind of inexorable computational process that you can't readily predict.

The idea that we're going to have a prescriptive collection of principles for AIs, and we're going to be able to say, "This is enough, that's everything we need to constrain the AIs in the way we want," it's just not going to happen that way. It just can't happen that way.

Something I've been thinking about recently is, so what the heck do we actually do? I was realising this. We have this connection to ChatGPT, for example, and I was thinking now it can write Wolfram Language code, I can actually run that code on my computer. And right there at the moment where I'm going to press the button that says, "Okay, LLM, whatever code you write, it's going to run on my computer," I'm like, "That's probably a bad idea," because, I don't know, it's going to log into all my accounts everywhere, and it's going to send you email, and it's going to tell you this or that thing, and the LLM is in control now.

And I realised that probably it needs some kind of constraints on this. But what constraints should they be? If I say, well, you can't do anything, you can't modify any file, then there's a lot of stuff that would be useful to me that you can't do.

So there is no set of golden principles that humanity agrees on that are what we aspire to. It's like, sorry, that just doesn't exist. That's not the nature of civilisation. It's not the nature of our society.

And so then the question is, what do you do when you don't have that? And my best current thought (in fact, I was just chatting about this with the person I was chatting with before you) is developing what might be, let's say, a couple of hundred principles you could pick from.

One principle might be, I don't know: "An AI must always have an owner." "An AI must always do what its owner tells it to do." "An AI must, whatever."

Now you might say, an AI must always have an owner? Is that a principle we want? Is that a principle we don't want? Some people will pick differently.

But can you at least provide scaffolding for what might be the set of principles that you want? And then it's like be careful what you wish for because you make up these 200 principles or something, and then you see a few years later, people with placards saying, "Don't do number 34" or something, and you realise, "Oh, my gosh, what did one set up?"

But I think one needs some kind of framework for thinking about these things, rather than just people saying, "Oh, we want AIs to be virtuous." Well, what the heck does that mean?

Or, "We have this one particular thing: we want AIs to not do this societally terrible thing right here, but we're blind to all this other stuff." None of that is going to work.

You have to have this formalisation of ethics that is such that you can actually pick; you can literally say, I'm going to be running with number 23, number 25, and not number 24, or something. But you've got to make that kind of framework.

WALKER: I have about two more pages of questions, but I think we should leave it there because I've kept you much longer than I intended. But perhaps we can pick up the AI topic another time, because it's important, and your work has really crucial implications for how we should deal with those problems. But Stephen, this has been an absolute honour. I really appreciate it. Thank you so much.

WOLFRAM: Thanks for lots of interesting questions.