
Astral Codex Ten Podcast

Jul 30, 2021

On yesterday’s post, some people tried to steelman Acemoglu’s argument into something like this:

There’s a limited amount of public interest in AI. The more gets used up on the long-term risk of superintelligent AI, the less is left for near-term AI risks like unemployment or autonomous weapons. Sure, maybe Acemoglu didn’t explain his dismissal of long-term risks very well. But given that he thinks near-term risks are bigger than long-term ones, it’s fair to argue that we should shift our limited budget of risk awareness more towards the former at the expense of the latter.

I agree this potentially makes sense. But how would you treat each of the following arguments?

(1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people.

(2): Instead of worrying about Republican obstructionism in Congress, we should worry about the potential for novel variants of COVID to wreak devastation in the Third World.

(3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia.