Future of AI // surveyed 2,778 researchers

sbagency
3 min read · Jan 12, 2024


https://arxiv.org/pdf/2401.02843.pdf

In the largest survey of its kind, we surveyed 2,778 researchers who had published in top-tier artificial intelligence (AI) venues, asking for their predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model.

If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).

Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

Here are the key points from the paper:

- The paper reports on a survey of 2,778 AI researchers who had recently published in top-tier AI venues, making it the largest expert survey on AI timelines and impacts to date.

- Respondents gave a 50% chance of high-level machine intelligence (HLMI) being achieved by 2047 (assuming science continues undisrupted). This is 13 years earlier than the estimate from a similar survey conducted in 2022.

- The aggregate forecast gave a 50% chance of all human occupations becoming fully automatable by 2116, 48 years earlier than the 2164 estimate from the 2022 survey.

- Of the 39 task milestones assessed, all but four were predicted to be achievable within the next 10 years, including coding websites, writing songs, and sample-efficient game playing.

- Respondents exhibited a wide range of views on the social impacts of advanced AI. While most thought good outcomes were more likely than bad, between 37.8% and 51.4% gave at least a 10% chance of outcomes as bad as human extinction.

- Concerns extended well beyond extinction risk: more than half of respondents said “substantial” or “extreme” concern is warranted about six AI-related scenarios, including the spread of misinformation, authoritarian population control, and worsened inequality.

- 70% of respondents thought AI safety research should be prioritized more than it currently is, but there was disagreement over whether faster or slower AI progress would be better for humanity.

In summary, the survey suggests experts anticipate rapid progress in the coming years while remaining uncertain, and in many cases concerned, about long-term impacts. This highlights the complex considerations involved in steering the path of AI advancement.
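The headline timeline numbers above (for example, a 10% chance of HLMI by 2027 and a 50% chance by 2047) are aggregate forecasts combined from thousands of individual probability-versus-year answers. As a rough illustration only, and not the paper's actual aggregation procedure, the sketch below shows one simple way such answers could be combined: linearly interpolate each respondent's cumulative probability curve, then take the median across respondents at each year. All respondent data, function names (`cdf_at`, `aggregate_prob`, `year_for_prob`), and outputs here are hypothetical.

```python
import numpy as np

# Hypothetical data: each respondent gives a few (year, cumulative probability)
# points for "chance HLMI exists by this year". Numbers are invented for illustration.
respondents = [
    [(2030, 0.05), (2050, 0.40), (2100, 0.90)],
    [(2027, 0.10), (2047, 0.50), (2080, 0.85)],
    [(2035, 0.02), (2060, 0.30), (2120, 0.70)],
]

def cdf_at(points, year):
    """Linearly interpolate one respondent's cumulative probability at a given year."""
    years = [y for y, _ in points]
    probs = [p for _, p in points]
    return float(np.interp(year, years, probs, left=0.0, right=probs[-1]))

def aggregate_prob(year):
    """Median across respondents of the interpolated probability at `year`."""
    return float(np.median([cdf_at(r, year) for r in respondents]))

def year_for_prob(target, years=range(2024, 2201)):
    """First year at which the aggregate forecast reaches `target` probability."""
    return next((y for y in years if aggregate_prob(y) >= target), None)

print("Aggregate P(HLMI by 2047):", round(aggregate_prob(2047), 2))
print("First year the aggregate forecast hits 50%:", year_for_prob(0.50))
```

The study's own elicitation and aggregation are more sophisticated than this; the snippet is only meant to make the idea of an "aggregate forecast" concrete.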

