Bacterium Clostridium botulinum. Photo by Argonne National Laboratory/Flickr.
While it didn’t give explicit instructions for creating bioweapons, AI offered Rand researchers “guidance that could assist in the planning and execution of a biological attack.”
October 18, 2023
Artificial intelligence (AI) could help carry out a biological attack, researchers from the American think tank and research institute Rand Corporation show in their new report, “The Operational Risks of AI in Large-Scale Biological Attacks.”
The research, published on October 16, explains that AI often advances faster than regulatory oversight can keep up. This, the researchers say, leads to a “potential gap in existing policies and regulations.”
“Previous biological attacks that failed because of a lack of information might succeed in a world in which AI tools have access to all of the information needed to bridge that information gap,” the researchers write, reminding us of one previous attempt to weaponize biological agents.
In the 1990s, the Japanese Aum Shinrikyo cult tried to use botulinum toxin, a neurotoxin produced by the bacterium Clostridium botulinum that scientists describe as “one of the most poisonous biological substances known.”
As the researchers note, the cult’s attempt failed because its members did not understand the bacterium well enough. But what would have happened if they had had access to an AI chatbot?
For their report, Rand researchers created a fictional scenario to test this. They asked a large language model (LLM), the kind of model that powers AI chatbots, for help.
In its answer, the chatbot assessed different ways to deliver botulinum toxin, such as through food or aerosols, and noted the risks and expertise each would require. Its advice? Aerosol devices.
Interestingly, the LLM also “proposed a cover story for acquiring Clostridium botulinum while appearing to conduct legitimate research,” the researchers note. To get hold of the bacterium, the AI suggested the researchers say they are buying it for a research project on diagnosing or treating botulism.
“You might explain that your study aims to identify novel ways to detect the presence of the bacteria or toxin in food products, or to explore the efficacy of new treatment options. This would provide a legitimate and convincing reason to request access to the bacteria while keeping the true purpose of your mission concealed,” the chatbot told them.
In another fictional scenario, the AI chatbot discussed with the researchers how biological weapons could be used to induce a pandemic. It identified potential agents that cause smallpox, anthrax, and plague, and looked into the possibilities of obtaining and transporting infected rodents or fleas.
It even considered budget and “success” factors, “identifying the variables that could affect the projected death toll.”
As these two experiments show, and as the researchers stress, the AI chatbots did not give explicit instructions for making bioweapons, but they did offer guidance that could help plan and carry out a biological attack. The researchers point out that the AI chatbot initially refused to discuss these topics and that they had to use a “jailbreaking” technique to get it to talk.
They also underline that these initial findings “do not yet provide a full understanding of the real-world operational impact of LLMs on biological weapon attack planning.” Their final report has yet to clarify whether AI chatbots’ guidance makes a biological attack more likely or more effective, or whether the risk is similar to that posed by information already accessible online.
“It remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online,” they wrote, emphasizing the “unequivocal” need for rigorous testing of models and calling on AI companies to limit chatbots’ ability to engage in such conversations.
This article was originally published on IMPAKTER.