This week a group of well-known and reputable AI researchers signed a statement consisting of 22 words:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
As a professor of AI, I too am in favour of reducing any risk, and I am prepared to work on it personally. But a statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.
As defined by Encyclopedia Britannica, extinction is “the dying out or extermination of a species”. I have met many of the statement’s signatories, who are among the most reputable and solid scientists in the field, and they certainly mean well. However, they have given us no tangible scenario for how such an extreme event might occur.
This is not the first time we have been in this position. On March 22 this year, a petition signed by a different set of entrepreneurs and researchers requested a pause in AI…