8 Comments

Worth noting that existential risks are not limited to risks that would end humanity. They also include the lock-in of bad trajectories (e.g. authoritarian dictatorships, indifference to animal torture, etc.). So there may still be a case for longtermism if we think humans are more likely than aliens to end up with decent values.

There may also be value in humanity's survival for the sake of diversity in the universe, if only to insure against a disvaluable lock-in later on.


Objections:

1. Do we have strong reasons to think that morally valuable aliens will also be morally upright aliens? Maybe they'll be a moral catastrophe. Maybe they like ritualistic torture, or run factory farming that makes our version look like paradise.

2. You're arbitrarily limiting existential risk to disasters that destroy humans, but human-caused disasters could affect aliens as well. Biotech probably couldn't, but AGI definitely could.

Both of those arguments seem plausible to me and very substantially weaken your argument for higher volatility.

Having a higher prior on alien civilizations existing should perhaps make us somewhat more willing to speed up tech development instead of prioritizing safety, since presumably other alien civilizations may prioritize safety less or have worse values. But I don't think it transforms longtermism nearly as much as you think.
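To make that tradeoff concrete, here is a minimal toy model in Python; every probability, discount, and value in it is an illustrative assumption of mine, not a figure from the post or this thread:

```python
# Toy expected-value model: how a prior on alien civilizations shifts
# the marginal value of safety work. All numbers below are illustrative
# assumptions, not estimates from the post or this comment.

V_FUTURE = 1.0  # value of the long-term future if *someone* realizes it

def expected_value(p_human_survival: float, p_aliens: float,
                   alien_value_discount: float = 0.8) -> float:
    """Humans realize the future if we survive; otherwise aliens might,
    at a discount reflecting possibly worse values or less caution."""
    humans = p_human_survival * V_FUTURE
    aliens = (1 - p_human_survival) * p_aliens * alien_value_discount * V_FUTURE
    return humans + aliens

# Marginal value of raising P(human survival) by one percentage point:
for p_aliens in (0.0, 0.5, 0.9):
    gain = expected_value(0.81, p_aliens) - expected_value(0.80, p_aliens)
    print(f"P(aliens) = {p_aliens}: marginal value of safety = {gain:.4f}")
# Output: 0.0100, 0.0060, 0.0028 -- the higher the prior on aliens,
# the less extra safety buys, as the comment suggests.
```

The effect is real but modest under these assumptions, which is consistent with the claim that a higher alien prior shifts the calculus without transforming it.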

Oct 12, 2022 · Liked by Maxwell Tabarrok

My take on this is: we shouldn't be seriously engaging with these types of alien arguments, given the huge uncertainties, and hence shouldn't engage with arguments about the *very* long term (although most x-risks don't require that we do, so I'm not sure it changes the calculus).

Coincidentally, I wrote up a similar but less rigorous take just a couple of weeks ago here:

https://conormcgurk.substack.com/p/but-what-about-aliens


This is interesting! Though to play devil's advocate, I would add that Robin Hanson also thinks that if aliens do exist, they have rules against expanding to other planets. So perhaps ending intelligent life here means it cannot be replaced easily (or must be replaced through slow Earth-based evolution). Also, how easily could aliens adapt to our planet? Maybe there is value in the millions of years of evolution it took us to get where we are now. (Though arguably we are now changing our environment so far beyond the one we evolved in that this cuts against that line of reasoning.)


1. There is an embedded time constant in human civilization: the length of a single human life. That's going to continue to drive most decisions.

2. If you take log(number of people) as your moral utility index, you get much more tractable decision tradeoffs out of longtermism (a sketch of this follows below).
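A minimal sketch of what the log index does, comparing a linear and a logarithmic utility of population; the population figures are illustrative assumptions, not numbers from the comment:

```python
import math

def linear_utility(population: float) -> float:
    """Total utility proportional to headcount (the standard total view)."""
    return population

def log_utility(population: float) -> float:
    """Total utility proportional to the logarithm of headcount."""
    return math.log(population)

# Illustrative figures only: today's population vs. a vast interstellar future.
present = 8e9
far_future = 8e21

for name, u in (("linear", linear_utility), ("log", log_utility)):
    ratio = u(far_future) / u(present)
    print(f"{name}: far future worth {ratio:,.1f}x the present")
# linear: 1,000,000,000,000.0x -- far-future stakes swamp every other concern
# log: 2.2x -- near-term and long-term stakes remain comparable,
# so tradeoffs against present interests stay tractable
```

Under the log index a trillion-fold larger future is only about twice as valuable as the present, which is why the decision tradeoffs stop being dominated by astronomical far-future terms.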
