I’m two years late to the party, but I believe this is the source of the famous “AI experts believe there’s a 10% chance that it will destroy humanity” statistic that was the subject of a certain amount of understandable-on-the-surface media frenzy a while back (e.g. here or here).
The exact question asked was:
“What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
And the median answer was 10%.
The responses come from some subset of 738 questionnaires returned by a group of researchers who had published at a couple of machine-learning-related conferences, a 17% response rate from the 4271 people solicited. It sounds like this particular question was given to a smaller subset of the 738, because recipients could get different questionnaires depending on whether they’d already completed another assessment in the past.
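Just as a quick sanity check of those headline numbers, here’s a minimal sketch (using only the figures quoted above) confirming the response-rate arithmetic:

```python
# Quick check of the survey arithmetic quoted above.
solicited = 4271   # researchers contacted
returned = 738     # questionnaires returned

response_rate = returned / solicited
print(f"Response rate: {response_rate:.1%}")  # -> 17.3%, consistent with the ~17% figure
```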
Perhaps not unrelatedly, 69% of respondents think that AI safety research should be given a higher priority than it was at the time.
I wonder what goes through the head of someone doing work that they think stands a substantial chance of destroying everyone and everything that they love, particularly if they’re in a group which mostly thinks that insufficient attention is being given to safety. I’m curious to see if there are any clues in the detailed data.
Some ideas that come to mind before actually bothering to take a look include:
- Maybe those with high estimates actually work in AI safety, or in organisations concerned with mitigating the impact of AI. Doing AI research doesn’t mean you’re promoting AI; after all, virology researchers don’t usually want to infect everyone with a virus. But I don’t know if there are enough (prestigious) AI safety jobs to go around for that to be the case every time.
- Perhaps they believe that the upsides of AI are so great that a 10% chance of total destruction is a risk worth taking.
- Perhaps they think that they’re more moral, careful and sensible than everyone else working in the field, and thus, given that someone is going to produce these systems, it’s best that it’s them. After all, we (nearly) all think we’re above-average drivers.
- In a similar vein, maybe it’s more that, yes, some day there’ll be an AI that destroys everything, but it’s not going to be my silly little model. Maybe it’ll be 1000 years in the future and come from an entirely different line of research. We’re not surprised that people who build space rockets do what they do just because nuclear missiles look and work vaguely similarly.
- Or perhaps the respondents who place high probabilities on absolute destruction don’t actually viscerally believe that number and its implications, or at least not all the time. Maybe they never thought about the possibility until they were asked. It is possible to hold two different and incompatible thoughts in one’s head at different times, or even simultaneously, particularly when one concerns your livelihood here and now and the other is a potentially distant, theoretical future risk. We’re not constantly shocked that people at oil companies don’t quit, even though there’s a lot more evidence that their work is directly contributing to the destruction of the planet.
- Let’s never forget the Lizardman’s Constant. A certain proportion of respondents will say they agree with almost anything on any survey, no matter how outlandish, even when they either don’t actually agree or have never really thought about whether they do.
- And then there’s the fact that most humans are not very good forecasters, especially when it comes to rare or unprecedented events. Being good at making AI is a different skillset to being good at making forecasts about it. In particular, we tend to be “disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily”, to quote Lieder et al, and the total destruction of everything would probably count as one of those.
- Because some researchers were given different questionnaires from others, it is also possible that the 69% who feel insufficient attention is being given to AI safety are not representative of the subset who answered the question that produced the 10% chance of total destruction.
- There is, of course, as ever, the possibility that there are other methodological issues with the survey which a detailed reading might reveal.