People Worry More About Today’s AI Harms Than Future Catastrophes


Summary: A new study reveals that people are more concerned about the immediate risks of artificial intelligence, such as job loss, bias, and disinformation, than about hypothetical future threats to humanity. The researchers exposed more than 10,000 participants to various AI narratives and found that, although future disasters raise concern, present-day, real-world dangers resonate more strongly.

This challenges the idea that dramatic "doomsday" messaging distracts from urgent problems. The results suggest that the public can hold nuanced views and supports a balanced conversation about both current and long-term AI risks.

Key facts:

  • Present > Future: Respondents prioritized concerns such as bias and disinformation over existential threats.
  • No trade-off: Awareness of future risks did not reduce concern about today’s real-world harms.
  • Public dialogue needed: People want thoughtful discussion of both immediate and long-term AI challenges.

Source: University of Zurich

Most people are generally more concerned about the immediate risks of artificial intelligence than about a theoretical future in which AI threatens humanity.

A new study by the University of Zurich reveals that respondents draw clear distinctions between abstract scenarios and specific, tangible problems, and take the latter very seriously.

There is broad consensus that artificial intelligence is associated with risks, but there are differences in how these risks are understood and prioritized.

One widespread perception emphasizes long-term theoretical risks, such as AI potentially threatening the survival of humanity.

Another common point of view focuses on immediate concerns, such as how AI systems amplify social biases or contribute to disinformation.

Some fear that emphasizing dramatic "existential risks" could distract attention from the more urgent, real problems that AI is already causing today.

Present and future risks

To examine these views, a team of political scientists at the University of Zurich conducted three large-scale online experiments involving more than 10,000 participants in the United States and the United Kingdom.

Some participants were shown a variety of headlines portraying AI as a catastrophic risk.

Others read about present-day threats such as discrimination or disinformation, and still others about the potential benefits of AI.

The objective was to examine whether warnings of a far-off, AI-caused disaster diminish vigilance toward real problems in the present.

More concern about current problems

“Our results show that respondents are much more concerned about the current risks posed by AI than about potential future disasters,” said Professor Fabrizio Gilardi of the Department of Political Science at UZH.

Even when texts about existential threats amplified fears of such scenarios, there was still far more concern about current problems, such as systematic bias in AI decisions and job losses caused by AI.

The study also shows, however, that people are able to distinguish between theoretical dangers and specific, tangible problems, and take both seriously.

Conducting a broad dialogue on AI risks

The study thus fills an important gap in knowledge. In public discussion, fears are often voiced that focusing on sensational future scenarios distracts from pressing current problems.

The study is the first to provide systematic data showing that awareness of real current threats persists even when people are faced with apocalyptic warnings.

“Our study shows that discussion of long-term risks does not automatically come at the expense of vigilance toward present problems,” explains co-author Emma Hoes.

Gilardi adds: “The public discourse should not be ‘either/or.’ A concurrent understanding and appreciation of both immediate and potential future challenges is needed.”

About this AI and psychology research news

Author: Nathalie Huber
Source: University of Zurich
Contact: Nathalie Huber – University of Zurich
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Existential risk narratives about artificial intelligence do not distract from immediate harms” by Fabrizio Gilardi et al. PNAS


Abstract

Existential risk narratives about artificial intelligence do not distract from immediate harms

There is broad consensus that AI poses risks, but considerable disagreement about the nature of those risks.

These differing points of view can be understood as distinct narratives, each offering a specific interpretation of AI’s potential dangers.

One narrative focuses on apocalyptic predictions of AI posing long-term existential risks to humanity. Another prioritizes the immediate concerns AI raises for society today, such as the reproduction of biases embedded in AI systems.

An important point of contention is whether the largely speculative “existential risk” narrative distracts from AI’s less dramatic but real and present dangers.

We address this “distraction hypothesis” by examining whether emphasizing existential threats diverts attention from the immediate risks AI poses today.

In three preregistered online survey experiments (n = 10,800), participants were exposed to news headlines that portrayed AI as a catastrophic risk, highlighted its immediate societal impacts, or emphasized its potential benefits.

The results show that (i) respondents are much more concerned about the immediate, rather than existential, risks of AI, and (ii) existential risk narratives increase concern about catastrophic risks without diminishing the significant concern respondents express about immediate harms.

These findings provide important empirical evidence to inform ongoing scientific and policy debates about the societal implications of AI.
