Weekend Reading & Selected Links

Happy weekend! Here are some links to things I've been reading that you might also enjoy:

  1. My new podcast episode, with David Deutsch and Steven Pinker – their first-ever public dialogue. Four excerpts from the conversation below.
  2. 'Knightian Uncertainty', a new Cass Sunstein article.
  3. 'The employment effects of JobKeeper receipt', a Treasury analysis of the Australian Government's JobKeeper program.
  4. Elite overproduction?
  5. 'Why Antisemitism Sprouted So Quickly on Campus', by Jon Haidt.
  6. 'Is it time for insect researchers to consider their subjects’ welfare?', a recent article by Andrew Crump et al.
  7. Finally, if you'd like to support my podcast in 2024, you can do so here. (Please don't contribute if it'd detract from your charity budget. And, if you've already contributed to the show, thank you very much and feel free to ignore this link!)

Merry Christmas,

Joe


Excerpts from my podcast with Deutsch and Pinker

1. The scientific method as a bottleneck on runaway superintelligence

DEUTSCH: Actually, I think the main thing [AGI] would lack is the thing you didn't mention, namely the knowledge. 

When we say the universal Turing machine can perform any function, we really mean, if you expand that out in full, that it can be programmed to perform any computation that any other computer can; it can be programmed to speak any language, and so on. But it doesn't come with that built in. It couldn't possibly come with anything more than an infinitesimal amount built in, no matter how big it was, no matter how much memory it had and so on. So the real problem, when we have large enough computers, is creating the knowledge to write the program to do the task that we want. 

PINKER: Well, indeed. And the knowledge presumably can't be deduced, like Laplace's demon, from the hypothetical position and velocity of every particle in the universe; it has to be explored empirically, at a rate that will be limited by the world – that is, by how quickly you can conduct the clinical trials, the randomised controlled trials, to see whether a treatment is effective for a disease. It also means that the scenario of runaway artificial intelligence that can do anything and know anything seems rather remote, given that knowledge will be the rate-limiting step, and knowledge can't be acquired instantaneously. 

DEUTSCH: I agree. The runaway part of that is due to people thinking that it's going to be able to improve its own hardware. And improving its own hardware requires science. It's going to need to do experiments, and these experiments can't be done instantaneously, no matter how fast it thinks. So I think the runaway part of the doom scenario is one of the least plausible parts. 

2. Why there may not be physical limits to growth in the universe

DEUTSCH: It is true that if we continue to grow at 2% per year, or whatever it is, then in 10,000 or 100,000 years, or whatever it is, we will no longer be able to grow exponentially, because we will be occupying a sphere which is growing, and if the outside of the sphere is growing at the speed of light, then the volume of the sphere can only be increasing like the cube of the time and not like the exponential of the time. So that's true. But that assumes all sorts of things, all sorts of ridiculous extrapolations to 10,000 years in the future. So, for example, Feynman said, there's plenty of room at the bottom. There's a lot more room. You assume that the number of atoms will be the limiting thing. 

What if we make computers out of quarks? What if we make new quarks to make computers out of? Okay, quarks have a certain size. What about energy? Well, as far as we know now, there's no lower limit to how little energy is needed to perform a given computation. We'll have to refrigerate ourselves to go down to that level, but there's no limit. So we can imagine efficiency of computation increasing without limit. Then when we get past quarks, we'll get to the quantum gravity domain, which is many orders of magnitude smaller than the quark domain. We have no idea how gravitons behave at the quantum gravity level. For all we know, there's an infinite amount of space at the bottom. But we're now talking about a million years in the future, 2 million years in the future? 

Our very theories of cosmology are changing on a timescale of a decade. It's absurd to extrapolate our existing theories of cosmology 10,000 years into the future to obtain a pessimistic conclusion which there's no reason to believe takes into account the science that will exist at that time. 
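[A quick sketch of the bound Deutsch is describing, for readers who want it spelled out – this is my gloss, not anything the speakers wrote down. The resources reachable inside a sphere whose surface expands at the speed of light grow only polynomially, so any fixed exponential growth rate must eventually outrun them, whatever the constants:

$$V(t) = \frac{4}{3}\pi (ct)^3 \propto t^3, \qquad N(t) = N_0 e^{rt},\; r > 0 \;\Longrightarrow\; \frac{N(t)}{V(t)} = \frac{3 N_0}{4\pi c^3}\cdot\frac{e^{rt}}{t^3} \to \infty \;\text{ as } t \to \infty.$$

At roughly 2% per year, the crossover arrives on a timescale of thousands to tens of thousands of years, depending on the constants you pick – hence Deutsch's '10,000 or 100,000 years, or whatever it is'.]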

PINKER: Also, I'll add – and this is a theme that David has explored as well – humans really thrive on information, on knowledge, not just on stuff. So when you talk about growth, it doesn't mean more and more stuff. It could mean better and better information, more entertaining virtual experiences, more remarkable discoveries, or ways of encountering the world that may not actually need more and more energy, but just rearrange pixels and bits in different combinations – and we know the space of possible combinations is unfathomably big. And growth could consist of better cures for disease, based on faster search through the space of possible drugs, and many other advances that don't actually require more joules of energy or more grams of material, but could thrive on information, which is not... 

DEUTSCH: And it might largely require replacing existing information rather than adding to information.

PINKER: Getting rid of all the things that we know are false. 

DEUTSCH: So we may not need exponentially growing amounts of computer memory if we have more and more efficient ways of using computer memory. In the long run, maybe we will. But that long run is so long that our scientific knowledge of today is not going to be relevant to it. 
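[Deutsch's energy claim has a standard reference point he doesn't name here: Landauer's bound, which says that irreversibly erasing one bit of information must dissipate at least

$$E_{\min} = k_B T \ln 2$$

of energy, where $T$ is the temperature of the computer's environment. That floor falls linearly with temperature – which is presumably why he says we'll 'have to refrigerate ourselves' – and reversible computation, which avoids erasing bits, is in principle not subject to it at all.]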

3. The poverty of 'P(doom)'

DEUTSCH: This argument [P(doom)] has all been about nothing, because you're arguing about the content of the other person's brain, which actually has nothing to do with the real probability, which is unknowable, of a physical event that's going to be subject to unimaginably vast numbers of unknown forces in the future. 

So it's much better to talk about a thing like that by talking about substance, like we just have been. We're talking about what will happen if somebody makes a computer that does so-and-so – yes, that's a reasonable thing to talk about. 

Talking about what the probabilities in somebody's mind are is irrelevant. And it's always irrelevant unless you're talking about an actual random physical process, like the process that makes a patient come into this particular doctor's surgery rather than that particular doctor's surgery. Unless that isn't random: if you're a doctor and you live in an area that has a lot of Brazilian immigrants in it, then you might think that one of them having the Zika virus is more likely, and that's a meaningful judgement. 

But when we're talking about things that are facts – it's just that we don't know what they are – then talking about probability doesn't make sense, in my view. 

4. Will artificial general intelligence be sentient?

PINKER: Let me take it in a slightly different direction, though, when you're talking about the slave revolt and the rights that we would grant to an AI system. Does this presuppose that there is a sentience, a subjectivity – that is, something that is actually suffering or flourishing, as opposed to merely carrying out an algorithm – that is therefore worthy of our moral concern, quite apart from the practicality of “should we empower them in order to discover new sources of energy”? But as a moral question, are there really going to be issues that are comparable to arguments over slavery, in the case of artificial intelligence systems? Will we have confidence that they’re sentient?

DEUTSCH: I think it's inevitable that AGIs will be capable of having internal subjectivity and qualia and all that, because that's all included in the letter ‘G’ in the middle of the name of the technology. 

PINKER: Well, not necessarily, because the G could be general computational power, the ability to solve problems, and there could be no one home that’s actually feeling anything.

DEUTSCH: But there ain't nothing here but computation [points to head]. It's not like in Star Trek: Data lacks the emotion chip and it has to be plugged in, and when it's plugged in, he has emotions; when it's taken out again, he doesn't have emotions. But there's nothing possibly in that chip apart from more circuitry like he's already got. 

PINKER: But of course, the episode that you're referring to is one in which the question arose: “Is it moral to reverse-engineer Data by dismantling him, therefore stopping the computation?” Is that disassembling a machine, or is it snuffing out a consciousness? And of course, the dramatic tension in that episode is that viewers aren't sure. I mean, now, of course, our empathy is tugged by the fact that he is played by a real actor who does have facial expressions and tone of voice. But for a system made of silicon, are we so sure that it's really feeling something? Because there is an alternative view that somehow that subjectivity depends also on whatever biochemical substrate our particular computation runs on. And I think there's no way of ever knowing except human intuition. 

Unless the system has been deliberately engineered to tug at our emotions with a humanlike tone of voice and facial expressions and so on, it's not clear that our intuition wouldn't be: “this is just a machine; it has no inner life that deserves our moral concern, as opposed to our practical concern.”

DEUTSCH: I think we can answer that question before we ever do any experiments, even today, because it doesn't make any difference whether a computer runs internally on quantum gates or silicon chips or chemicals. Like you just said, it may be that the whole system in our brain is not just an electronic computer; it's an electronic computer part of which works by having chemical reactions and so on, and by being affected by hormones and other chemicals. But if so, we know for sure that the processing done by those things, and their interface with the rest of the brain and everything, can also be simulated by a computer. Therefore, a universal Turing machine can simulate all those things as well. 

So there's no difference. I mean, it might make it much harder, but there's no difference in principle between a computer that runs partly by electricity and partly by chemicals (as you say we may do), and one that runs entirely on silicon chips, because the latter can simulate the former with arbitrary accuracy. 

PINKER: Well, it can simulate it, but we're not going to solve the problem this afternoon in our conversation. In fact, I think it is not solvable. But the simulation doesn't necessarily mean that it has subjectivity. It could just mean it's a simulation – that is, it's going through all the motions; it might even do it better than we do, but there's no one home. There's no one actually being hurt. 

DEUTSCH: You can be a dualist. You can say that there is mind in addition to all the physical stuff. But if you want to be a physicalist, which I do, then… There's this thought experiment where you remove one neuron at a time and replace it with a silicon chip, and you wouldn't notice. 

PINKER: Well, that's the question. Would you notice? Why are you so positive? 

DEUTSCH: Well, if you would notice, then if you claim… 

PINKER: Sorry, let me just change that: an external observer wouldn't notice. How do we know, from the point of view of the brain whose every neuron is being replaced by a chip, that it isn't like falling asleep – that when it's done, and every last neuron has been replaced by a chip, you're dead subjectively, even though your body is still making noise and doing goal-directed things? 

DEUTSCH: Yes, so that means when your subjectivity is running, there is something happening in addition to the computation, and that's dualism. 

PINKER: Well, again, I don't have an opinion one way or another, which is exactly my point. I don't think it's a decidable problem. But it could be that that extra something is not a ghostly substance, some sort of Cartesian res cogitans separate from the mechanism of the brain. It could be that the stuff the brain is made of is responsible for that extra ingredient of subjective experience, as opposed to intelligent behaviour. At least I suspect people's intuitions would be very… Unless you deliberately program a system to target our emotions, I'm not sure that people would grant subjectivity to an intelligent system...

When I shut down ChatGPT, the version running on my computer, I don't think I've committed murder. And I don't think anyone else would think it. 

DEUTSCH: I don't either, but I don't think it's creative. 

PINKER: It's pretty creative. In fact, I saw on your website that you reproduced a poem on electrons. I thought that was pretty creative. So I certainly grant it creativity. I'm not ready to grant it subjectivity. 

DEUTSCH: Well, this is a matter of how we use words. Even a calculator can produce a number that's never been seen before, because numbers range over an exponentially large range. 

PINKER: I think it's more than words, though. I mean, it actually is much more than words. So, for example, if someone permanently disabled a human – namely, killed them – I would be outraged. I'd want that person punished. If someone were to dismantle a human-like robot, it'd be awful; it might be a waste. But I'm not going to try that person for murder. I'm not going to lose any sleep over it. There is a difference in intuition. 

Maybe I'm mistaken. Maybe I'm as callous as the people who didn't grant personhood to slaves in the 18th and 19th centuries, but I don't think so. And although, again, I think we have no way of knowing, I think we're going to be having the same debate 100 years from now. 

DEUTSCH: Yeah, maybe one of the AGIs will be participating in the debate by then.