
Weekend Reading & Selected Links


Happy weekend! Here are some links to things I've been reading or watching that you might also enjoy:

1. My new podcast conversation with Stephen Wolfram. At the bottom of this email, I've reprinted seven of my favourite excerpts. You can also browse this Twitter mega-thread I published containing a further twenty-five(!) interesting excerpts.

2. Wolfram's rule 30, in neon. (If you've not come across rule 30 before, there's a short sketch of it just after this list.)

3. 'Where have all the great works gone?', a blog post by Tanner Greer.

4. Patrick Collison's recent conversation with Lant Pritchett.

5. 'What do I think about Community Notes?', a new blog post by Vitalik Buterin.

6. 'An Observation on Generalization', a newly posted lecture by OpenAI's Ilya Sutskever.

7. Leslie Groves as an identifier of talent.
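For anyone who hasn't come across rule 30 (link 2 above): it's one of Wolfram's elementary cellular automata, where each cell's next value depends only on itself and its two neighbours, and yet from a single black cell it generates a famously chaotic triangular pattern. Here's a minimal sketch in Python (my own illustration; in Wolfram Language this is just the built-in CellularAutomaton function):

```python
# Minimal sketch of Wolfram's rule 30 (elementary cellular automaton).
# Rule 30's update is: new cell = left XOR (centre OR right).

def step(cells: list[int]) -> list[int]:
    """One rule 30 update; cells beyond the edges are treated as 0."""
    padded = [0] + cells + [0]
    return [
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    ]

width, steps = 61, 30
row = [0] * width
row[width // 2] = 1  # start from a single black cell in the middle
for _ in range(steps):
    print("".join("█" if cell else " " for cell in row))
    row = step(row)
```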

Have a great weekend,


Joe


Excerpts from my podcast with Stephen Wolfram

(The timestamps below will skip you to the relevant part of the chat in the audio.)

1. On the value of optimism bias [51:54]

WALKER: When you were standing on the precipice of A New Kind of Science in '91, did you have any idea it would take you more than ten years to complete?

WOLFRAM: No. I wouldn't have done it if I did.

2. On the arc of intellectual history [1:02:14]

WOLFRAM: I think I have more of a feeling now for the arc of intellectual history, of how long things take to kind of get absorbed in the world — and it's just shockingly long. I mean, it's depressingly long. Human life is finite. I perfectly well know that lots of things I've invented won't be absorbed until long after I'm no longer around. The timescales are 100 years, more.

3. On technology prediction [1:03:45]

WOLFRAM: But one of the things I would say — in technology prediction it happens as well — is that I think I have a really excellent record of predicting what will happen but not when it will happen. A classic example (my wife reminds me about it from time to time): back in the early '90s we were modifying an existing house, and there was this place we'd really have liked to put a television, but it was only four inches deep. And I'm like, "Don't worry, there are going to be flat screen televisions." This was the beginning of the '90s, right?

Well, of course there were flat screen televisions in the end, but it took another 15 years. Why was I wrong? Well, I had seen flat screen televisions. I knew the technology of them.

What was wrong was something very subtle, which was the yield. When you make a semiconductor device, you're making all these transistors, and some of them don't work properly. When you're doing that in a memory chip, you can route around the bad parts and it's all very straightforward. When you're doing that on a great big television, if there are some pixels that don't work, you really notice. And so what happened was: yes, you could make these things, and one in a thousand would have all the pixels working properly. But that's not good enough for a commercially viable flat screen television. So it took a long time for those yields to get to the point where you could have consumer flat screen televisions. That was really hard to predict.

Perhaps if I'd really known semiconductors better and really thought through "it's really going to matter if there's one defect here" and so on, I could have figured that out. But it was much easier to say, "This is how it's going to end up," than to say when it's going to happen.
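A quick aside on the yield point, because the arithmetic is easy to make concrete: if each pixel works independently with probability p, a panel of n pixels has every pixel working with probability p^n, so even minuscule per-pixel defect rates crush whole-panel yield. A rough sketch with made-up numbers (mine, not Wolfram's):

```python
# Rough illustration of the yield problem (made-up numbers, not Wolfram's).
# If each sub-pixel works independently with probability p, a panel of
# n sub-pixels is fully working with probability p**n.

def panel_yield(per_pixel_yield: float, num_pixels: int) -> float:
    """Probability that every sub-pixel on the panel works."""
    return per_pixel_yield ** num_pixels

def required_pixel_yield(target_panel_yield: float, num_pixels: int) -> float:
    """Per-sub-pixel yield needed to hit a target whole-panel yield."""
    return target_panel_yield ** (1.0 / num_pixels)

n = 1920 * 1080 * 3  # red/green/blue sub-pixels on a Full HD panel

# "Six nines" per sub-pixel still leaves only ~0.2% of panels defect-free,
# the same order as the one-in-a-thousand figure Wolfram describes.
print(f"{panel_yield(0.999999, n):.4%}")

# For 90% of panels to be fully working, each sub-pixel needs ~0.99999998 yield.
print(f"{required_pixel_yield(0.90, n):.9f}")
```

(Real manufacturing is kinder than this independence model: defects cluster, and a panel with a couple of dead pixels can still be sold. But the exponent is the point.)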

Like, I'm sure one day there will be general-purpose robotics that works well and that will be the ChatGPT moment for many kinds of mechanical tasks. When will that happen? I have no idea. That it will happen I am quite sure of. You could say things about molecular computing — I'm sure they'll happen. Things about sort of medicine and life sciences — I'm sure they'll happen. I don't know when.

It's really hard to predict when. Take the Physics Project, for example: when would that happen? Good question. I had thought for a while that there were ideas that should converge into what became our Physics Project. But the fact that it happened in 2020, and not in 2150 or something, is not obvious. One of the things that is a very strange feeling for me, looking at the Physics Project, is seeing all the things that, had they been different, would have meant the project never happened. It was a very remarkable collection of almost-coincidences, a lot of things aligning, that made that project happen. Now, the fact that the project ended up being easier than I expected was also completely unpredictable, to me at least.

But I think this point that you can't know when it will happen... It's like, "Okay, we're going to get a fundamental theory of physics." Descartes thought we were going to get a fundamental theory of physics within 100 years of his time. Turns out he was wrong.

But to know that it will happen is a different thing from knowing when it will happen, and sometimes when it will happen depends on the personal circumstances of particular individuals. For example, our company happened to have done really well in the period leading up to the Physics Project, so I felt I could take more time to do that — and lots of silly details like that. All of that makes it even harder to predict when things will happen.

4. Where in the world should basic science be done? [1:34:02]

WALKER: It raises the question of where in the world truly original research should be done. If it's not in universities, then, I mean, what have you got left? Corporate monopolies, or more exotic research institutions like the Institute for Advanced Study or All Souls at Oxford. Do we need new social and economic structures to support original research? Have you thought about this? Do you have any suggestions?

WOLFRAM: Yes, I have thought about this. I don't have a great answer.

WALKER: Interesting.

WOLFRAM: The Institute for Advanced Study, where I worked at one point, is a good example of a bad example in some ways.

I worked there at a time when Oppenheimer had been the director a decade and a half earlier. He was very much a people person; he picked a lot of very interesting people. But by the time I was there, many of his best bets had departed, leaving the people he had bet on who, as it turned out, weren't such good bets.

And then there's this very strange dynamic of somebody who's in their late twenties, and it's like, "Okay, now you're set for life. Just think." It turns out that doesn't work out well for most people. So that isn't a great solution. You might think it would be a really good solution to just anoint these various people: "You go think about whatever you want to think about." That turns out not to work very well. Being in this disembodied "just think"-type setting is just a hard human situation to be in.

I think I've been lucky in that, doing things like running companies, the practicality of the world is actually a very useful driver for stirring things up, for getting one to really think. For example, the fact that I have been able to strategically decide what to do in science a bunch of times, the fact that I think seriously about science strategy — that's because I've thought about strategy all the time, every day, running companies and building products and things like that; it's all about strategy.

If you ask the typical person who's gone and studied science and got a PhD or whatever else, you say, "Did you learn about the strategy for figuring out what questions to ask?", they'll probably look at you and say, "Nobody ever talked about that. That wasn't part of the thing." But that's one of the features that you get by being out in the world that forces you to think about things at a more strategic level.

Now, this question of how should basic science be done? Very interesting question.

I mean, one of my little exercises for myself is imagine you're Isaac Newton, 1687, you're inventing calculus, and you think there's going to be $5 trillion worth of value generated by calculus over the next 300 years. What do you do about it?

And you say: is there a way to take basic science — which is often the thing from which lots of very significant things in the world trickle down — and apply that future trickle-down value now, to get more basic science done? And then how do you avoid the trap that, if you make too much of that, it gets institutionalised?

It's kind of like when people talk about entrepreneurism and they say, "We're going to have a class about entrepreneurism; and we're going to teach everybody to be an entrepreneur, we're going to teach everybody to be an innovator." It doesn't really work that way, because by the time you have a formula for innovation it's a self-answering, not-going-to-work type of thing.

5. Patterns in the history of ideas [2:19:31]

WALKER: We've spoken about how paradigms get absorbed, or how new ideas get absorbed — the rate at which they're absorbed. Have you found any patterns studying the history of ideas?

WOLFRAM: It's slower than you can possibly imagine. On the ground, it's slower than you can possibly imagine. In the hindsight of history, it looks fast.

So take the idea that one uses programs instead of equations to describe the world. People will say, "Oh, yeah, as soon as there were computers able to do those kinds of things, that was an immediate thing." Which it wasn't, on the ground. On the ground, it was a large part of my life.

But in hindsight, it will look like that happened quickly.

Another thing (with NKS, for example) is that if you look at different fields, fields with low self-esteem absorb new ideas more quickly than fields with high self-esteem — and the self-esteem of fields goes up and down.

There are fields, like art, where everybody always wants new ideas, fields that feed off new ideas. What I noticed with the NKS book was that a lot of the softer sciences that hadn't had a formal framework of any kind were like, "Wow, these are models we can use and this is great." Whereas an area like physics says, "We've got our models, we're happy, we've got our equations, it's all good, we don't need anything else."

At the time when the NKS book came out, physics was in a high self-esteem moment, thinking, "We've got string theory, we're going to nail everything in just a short while." Which didn't happen. But that meant it was a field particularly resistant to outside ideas. Bizarre for me, because I was well-integrated into that field…

Now, with our Physics Project, 20 years later — quite a different situation. Fundamental physics is not a high self-esteem field. The string theory thing worked its way through. It didn't nail it. And it's [now] got good receptivity to new ideas, I would say.

6. How to write a timeless book [2:42:30]

WALKER: You mentioned Charles Darwin. I once heard you say that you learned from his example to never write a second edition.

WOLFRAM: Yes.

WALKER: Can you elaborate on that, and on what it takes to write a timeless book?

WOLFRAM: Yeah. On the timelessness question, I'm fairly satisfied that with a lot of the things I've written, there was a certain domain with fruit to be picked, a certain amount of fairly low-hanging fruit, and I just efficiently, with the best tools, tried to pick it all.

That has the great feature that what you do is timeless.

(It has the bad feature that then when people come in and say, "Hey, I want to work on this stuff," there's no low-hanging fruit to pick anymore, because you picked it all. And you picked the first level of low-hanging fruit, and the next level of fruit is quite a ways away. And I didn't really realise that phenomenon — you've got to leave some stuff there that people can fairly easily pick up…)

But I think the thing that happened with Charles Darwin is he wrote On the Origin of Species, he made a bunch of arguments, and then people said, "What about this? What about that? What about the other thing?" And he started adding these patches — "As Professor So and So has asked; this, and this, and this, and this."

You read those later editions now, and you're like: look, Professor So and So just didn't get it. And Darwin made a mess of his argument by pandering to Professor So and So. He should have just stuck with his original argument, which was nice and clean and self-contained.


7. Can we ever fully align Artificial Intelligence with human values? [4:15:35]

WALKER: So moving finally to AI, many people worry about unaligned artificial general intelligence, and I think it's a risk we should take seriously. But computational irreducibility must imply that a mathematical definition of alignment is impossible, right?

WOLFRAM: Yes. There isn't a mathematical definition of what we want AIs to be like. The minimal thing we might say about AIs, about their alignment, is: let's have them be like people are. And then people immediately say, "No, we don't want them to be like people. People have all kinds of problems. We want them to be like people aspire to be."

And at that point, you've fallen off the cliff. Because what do people aspire to be? Well, different people aspire to be different things, and different cultures aspire in different ways. And I think the concept that there will be a perfect mathematical aspiration is just completely wrongheaded. It's just the wrong type of answer.

The question of how we should be is a question that is a reflection back on us. There is no "this is the way we should be" imposed by mathematics.

Humans have ethical beliefs that are a reflection of humanity. One of the things I realised recently is that part of what's confusing about ethics is that, if you're used to doing science, you say, "Well, I'm going to separate out a piece of the system. I'm going to study this particular subsystem and figure out exactly what happens in it. Everything else is irrelevant."

But in ethics, you can never do that. So imagine you're doing one of these trolley problem things. You've got to decide whether you're going to kill the three giraffes or the eighteen llamas. Which one is it going to be?

Well, then you realise that to really answer that question to the best ability of humanity, you're tracing the tentacles of the religious beliefs of the tribe in Africa that deals with giraffes, and the consequences of the llama's wool going into some supply chain, and all this kind of thing.

In other words, one of the problems with ethics is it doesn't have the separability that we've been used to in science. In other words, it necessarily pulls in everything, and we don't get to say, "There's this micro ethics for this particular thing; we can solve ethics for this thing without the broader picture of ethics outside."

If you say, "I'm going to make this system of laws, and I'm going to make the system of constraints on AIs, and that means I know everything that's going to happen," well, no, you don't. There will always be an unexpected consequence. There will always be this thing that spurts out and isn't what you expected to have happen, because there's this irreducibility, this kind of inexorable computational process that you can't readily predict.

The idea that we're going to have a prescriptive collection of principles for AIs, and we're going to be able to say, "This is enough, that's everything we need to constrain the AIs in the way we want," it's just not going to happen that way. It just can't happen that way.

Something I've been thinking about recently is: so what the heck do we actually do? I was realising this. We have this connection to ChatGPT, for example, and I was thinking: now that it can write Wolfram Language code, I can actually run that code on my computer. And right there, at the moment where I'm going to press the button that says, "Okay, LLM, whatever code you write is going to run on my computer," I'm like, "That's probably a bad idea," because, I don't know, it's going to log into all my accounts everywhere, and it's going to send you email, and it's going to tell you this or that thing, and the LLM is in control now.

And I realised that this probably needs some kind of constraints. But what should those constraints be? If I say, well, you can't do anything, you can't modify any file, then there's a lot of stuff that would be useful to me that you can't do.

So there is no set of golden principles that humanity agrees on that are what we aspire to. It's like, sorry, that just doesn't exist. That's not the nature of civilisation. It's not the nature of our society.
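On the "what should those constraints be?" question, here's one concrete way to see the trade-off Wolfram describes: run generated code in a separate process, confined to a throwaway working directory, with a time limit. This is my own minimal sketch, not anything from the conversation, and deliberately not a real sandbox (genuine isolation needs OS-level mechanisms, containers, or a VM):

```python
# Minimal sketch of constraining untrusted, LLM-generated code (my own
# illustration, not Wolfram's setup). NOT a real sandbox: genuine isolation
# needs OS-level mechanisms, containers, or a VM. This only shows the
# safety-versus-usefulness trade-off.
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run generated Python code in its own process with a scratch directory."""
    with tempfile.TemporaryDirectory() as scratch:
        try:
            result = subprocess.run(
                # -I: isolated mode (ignores environment variables and
                # user site-packages); -c: run the given code string.
                [sys.executable, "-I", "-c", code],
                cwd=scratch,         # relative-path file writes land in a throwaway dir
                capture_output=True,
                text=True,
                timeout=timeout_s,   # kill the process if it runs too long
            )
        except subprocess.TimeoutExpired:
            return "blocked: exceeded time limit"
        return result.stdout or result.stderr

print(run_untrusted("print(2 + 2)"))      # harmless computation: allowed
print(run_untrusted("while True: pass"))  # runs forever: killed at the timeout
```

Even this crude version shows the dilemma: the timeout stops a runaway loop, but it would also stop a legitimately long computation, which is exactly the "there's a lot of stuff that would be useful to me that you can't do" problem.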