
Astral Codex Ten Podcast


Mar 25, 2023

Machine Alignment Monday 3/13/23

https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer

(see also Katja Grace and Will Eden’s related cases)

The average online debate about AI pits someone who thinks the risk is zero against someone who thinks it’s any other number. I agree these are the most important debates to have for now.

But within the community of concerned people, numbers vary all over the place:

  • Scott Aaronson says 2%

  • Will MacAskill says 3%

  • The median machine learning researcher on Katja Grace’s survey says 5-10%

  • Paul Christiano says 10-20%

  • The average person working in AI alignment thinks about 30%

  • Top competitive forecaster Eli Lifland says 35%

  • Holden Karnofsky, on a somewhat related question, gives 50%

  • Eliezer Yudkowsky seems to think >90%

  • As written, this makes it look like everyone except Eliezer is <=50%, which isn’t true; I’m just having trouble thinking of other doomers who are both famous enough that you would have heard of them, and have publicly given a specific number.

I go back and forth more than I can really justify, but if you force me to give an estimate, it’s probably around 33%; I think it’s very plausible that we die, but more likely that we survive (at least for a little while). Here’s my argument, and some reasons other people are more pessimistic.