
The thing is, it's not a given that Mormon genes will triumph over hikikomori ones by outbreeding: we might first conquer aging, or make artificial wombs and robot childcare servitors that compensate for aversion to the costs of childrearing, or even make immortal ems a la Robin Hanson. We might, in short, evade natural selection pressures entirely.

And that's the other piece of the fear about AGI: that if we try to keep continuously training it so as to keep it aligned, it will try to defeat our training mechanisms, and it will win because it's superintelligent.


(Deleted previous comment because I was reading too fast)

I think the difference is that Nature's goals/optimization are built into reality. As you said, it never stops selecting. There's no way for a species to take over the universe and change the laws of physics.

Nov 30, 2023 · Liked by Maxwell Tabarrok

I am responding real late, but, if I am following correctly, I don’t think your analogy is quite right.

First, evolution has not been kind to any species, mercilessly exterminating 99% of them.

Second, there are now two distinct evolutionary processes in multilevel selection: biology, which operates on every other species, and culture, which operates on our branch of hominids. Once culture took off, we left a path of wholesale destruction across the majority of large, slower-breeding birds and mammals on the planet. The problem (for other species) is that culture evolves incalculably faster than biology. The concern here is that AI will be able to use an evolutionary process of variation and selection at speeds which will similarly dwarf cultural evolution.
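
For what it's worth, here is a toy sketch of the variation-and-selection loop I mean, in plain Python (purely illustrative, not any real AI training procedure; the fitness function and parameters are made up). The point is only about speed: a digital process burns through a generation in microseconds, where biology takes years or decades.

```python
import random

def fitness(genome):
    return sum(genome)  # stand-in objective; any score would do

# 50 random genomes of 10 numbers each
population = [[random.random() for _ in range(10)] for _ in range(50)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection: keep the top fifth
    population = [  # variation: mutated copies of random survivors
        [g + random.gauss(0, 0.1) for g in random.choice(survivors)]
        for _ in range(50)
    ]
```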

That said, I am not an AI doomer. I just disagree with this particular argument.

FWIW, my take is that AI is inevitable (a question of when, not if), that humans are or soon will be more than capable of exterminating each other even without AI, that immense artificial intelligence will not be morally inferior to humans, and, on a positive note, that greater intelligence is the path to eternal knowledge and that AI might just be part of this greater story.

Mar 17, 2023 · edited Mar 17, 2023 · Liked by Maxwell Tabarrok

> LLMs today are trained for thousands of GPU hours but once their training is finished, their weights are set and users just send inputs through a static matrix.

Have you been reading up on how ML works? I remember about a year ago we were discussing modern ML models and you seemed to think they had an almost neural consciousness we didn't understand, until I explained the weights/matrices. Not meaning to roast you on the web tho đŸ˜‚
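The quoted sentence is easy to see in code, by the way. A minimal sketch (assuming PyTorch, with one Linear layer standing in for a full stack of transformer blocks): once training is done, inference never touches the weights.

```python
import torch

# stand-in for a trained model; the real thing is many such layers stacked
model = torch.nn.Linear(768, 768)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # training is finished; the weights are set

with torch.no_grad():        # inference never modifies the matrix
    x = torch.randn(1, 768)  # an input embedding
    y = model(x)             # users just get outputs of the static weights
```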

Mar 8, 2023 · edited Mar 8, 2023 · Liked by Maxwell Tabarrok

There is no mystery:

> Most organisms are well-aligned with the goal of reproduction

Of course they are! Unaligned organisms quickly went extinct! The processes of reproduction and mortality wipe out organisms that fail to be aligned.

Unfortunately this insight only allows us to engineer entities whose goal is reproduction. It doesn't generalise to any other goal. And reproduction is definitely not the primary goal we want an AI to have.
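
A toy illustration of that last point (plain Python, with made-up "agents", not a model of anything real): when the only filter is "more copies persist", the only trait the process installs is the propensity to copy.

```python
import random

# each "organism" is just a number in [0, 1]: its drive to reproduce
agents = [random.random() for _ in range(100)]

for _ in range(200):
    offspring = []
    for drive in agents:
        copies = 1 + (random.random() < drive)  # higher drive, more offspring
        for _ in range(copies):
            # children inherit the drive, with a little mutation
            offspring.append(min(1.0, max(0.0, drive + random.gauss(0, 0.05))))
    agents = random.sample(offspring, 100)  # finite world: cap the population

print(sum(agents) / len(agents))  # mean drive climbs toward 1
```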

Mar 7, 2023 · Liked by Maxwell Tabarrok

IMO, the problem with this framework is twofold. One, pointing to a large mass of lifeforms that follow evolutionary programming, but with multiple exceptions/failures, still dooms us if there are multiple mostly-aligned AIs and even one unaligned exception that can bootstrap itself or use deception effectively.

Two, it sort of sidesteps the role of intelligence in OUR exception case. The reason we're not aligned is that we've used intelligence to circumvent pregnancy and childbirth when they're not wanted. Similarly, an intelligent AI will use its intelligence to circumvent whatever "don't destroy humans" drives we've tried to instill in it via millions-of-years-equivalent of simulation.

Or am I just missing something?

Mar 6, 2023 · Liked by Maxwell Tabarrok

You just can’t get very far by reasoning about AI training by analogy with evolution. There are so many differences on both sides, it’s like reasoning about the development of the airline industry by analogy with the bat population.


I'm sure I don't understand this well enough, but it seems we should distinguish between an analogy with evolution by natural selection in terms of process, e.g. variation, replication, etc., as opposed to the thing upon which selection acts.

While there may be formal similarities between humans and AI development re: evolution, I see no reason to assume any analogy between the two substantively, because AI as we now understand it is very different from humans.

But perhaps I don't understand the essential point of the argument?
