AI’s Threat to Humanity Rivals Pandemics and Nuclear War, Industry Leaders Warn

White Robot. Photo by Possessed Photography on Unsplash

It is a truism: AI has immense potential but comes with significant risks. What is AI really capable of, and what are governments doing to mitigate the risks?

by Alina Liebholz

June 2, 2023

The Center for AI Safety released the following statement on its webpage: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Among the signatories of the statement are Sam Altman, chief executive of ChatGPT-maker OpenAI; Demis Hassabis, chief executive of Google DeepMind; Dario Amodei of Anthropic; and the so-called godfathers of AI, Dr Geoffrey Hinton and Yoshua Bengio.

According to the Center for AI Safety, some of the most significant risks posed by AI include the weaponisation of AI technology, power-seeking behaviour, human dependence on machines (as depicted in the film WALL-E) and the spread of misinformation.

In a recent blog post, OpenAI proposed that the regulation of superintelligence should be similar to that of nuclear energy. “We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts,” the firm wrote.

In March, an open letter signed by Elon Musk, Apple co-founder Steve Wozniak and a handful of other big names in tech called for a six-month pause on AI development, citing the risks of AI and fears that it could become a threat to humanity.

The letter, which was published by the Future of Life Institute, received over 31,000 signatures, although some of these are said to have been forged.

Furthermore, in a Senate hearing on the oversight of AI in May, OpenAI CEO Sam Altman said: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”

A Distraction From Imminent Risks of AI?

Other AI scientists and experts, however, see these statements as overblown. Some even say they are a distraction from other, more imminent problems AI poses, such as AI bias, the spread of misinformation and invasions of privacy.

In fact, “current AI is nowhere near capable enough for these risks to materialise. As a result, it’s distracted attention away from the near-term harms of AI,” Arvind Narayanan, a computer scientist at Princeton University, told the BBC.

With new AI products constantly being released as the field advances, it is crucial to address both current and potential harms.

“Addressing some of the issues today can be useful for addressing many of the later risks tomorrow,” said Dan Hendrycks, director of the Center for AI Safety.

In April 2021, the European Union (EU) proposed the AI Act, a bill setting out rules for artificial intelligence. The bill, expected to be finalised in June 2023, will introduce new transparency and risk-management rules for AI systems while supporting innovation and protecting citizens.

In a press release regarding the new AI law, the EU stated: “AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).”

“We are on the verge of putting in place landmark legislation that must resist the challenge of time,” said Brando Benifei, a member of the European Parliament, following the vote on the new regulation.


The US, Canada, the UK, South Korea and many other countries have also produced bills and white papers on AI regulation. Furthermore, the G7 have established a working group on the challenges of AI technology, with their first meeting having taken place on May 30.

This article was originally published on IMPAKTER. Read the original article.
