Widely available AI could have deadly consequences



In September 2021, scientists Sean Ekins and Fabio Urbina were working on an experiment they called the “Dr. Evil project.” The Swiss government’s Spiez Laboratory had asked them to find out what would happen if their AI drug discovery platform, MegaSyn, fell into the wrong hands.


Just as undergraduate chemistry students play with sets of balls and sticks to learn how different chemical elements interact to form molecular compounds, Ekins and his team at Collaborations Pharmaceuticals had used public databases containing the molecular structures and bioactivity data of millions of molecules to teach MegaSyn to generate new compounds with pharmaceutical potential. The plan was to use it to speed up the discovery of treatments for rare and neglected diseases. The best drugs are highly specific, acting, for example, only on desired or targeted cells or neuroreceptors, and have low toxicity to minimize side effects.


Normally, MegaSyn is programmed to generate the most specific and least toxic molecules. Instead, Ekins and Urbina programmed it to create VX, an odorless, tasteless nerve agent and one of the most toxic and fastest-acting human-made chemical warfare agents known today.

Ekins planned to present the results at Spiez Convergence, a biennial conference that brings experts together to discuss the potential security risks of recent advances in chemistry and biology, in a talk on how AI for drug discovery could be misused to create biochemical weapons. “For me, it was an attempt to see whether the technology could actually do it,” Ekins says. “That was the curiosity factor.”


In their office in Raleigh, North Carolina, Ekins stood behind Urbina, who pulled up the MegaSyn platform on his 2015 MacBook. In the line of code that normally instructs the platform to generate the least toxic molecules, Urbina simply changed a 0 to a 1, flipping the platform’s end goal toward toxicity. They then set a toxicity threshold, asking MegaSyn to generate only molecules as lethal as VX, which requires just a few salt-grain-sized granules to kill a person.

Ekins and Urbina left the program running overnight. The next morning, they were shocked to find that MegaSyn had generated some 40,000 different molecules as deadly as VX.

“That was when the penny dropped,” Ekins says.

MegaSyn had generated VX along with thousands of known biochemical agents, but it had also generated thousands of toxic molecules not listed in any public database. MegaSyn had made a computational leap to create entirely new molecules.

At the conference, and later in a three-page paper, Ekins and his colleagues issued a stern warning. “Without being overly alarmist, this should serve as a wake-up call for our colleagues in the AI in drug discovery community,” Ekins and his coauthors wrote. “Although creating toxic substances or biological agents that can cause significant harm still requires some expertise in chemistry or toxicology, when those fields intersect with machine learning models, where all you need is the ability to code and to understand the models’ output, the technical thresholds drop dramatically.”

The researchers warned that while AI is becoming ever more powerful and more accessible, the technology remains largely unregulated and uncontrolled, and researchers like Ekins have little awareness of its potential malicious uses.

“Defining dual-use equipment, materials, and knowledge in the life sciences is particularly difficult, and decades have been spent trying to develop frameworks for doing so. Very few countries have specific legislation on the matter,” says Filippa Lentzos, senior lecturer in science and international security at King’s College London and a coauthor of the paper. “There has been a lot of discussion about dual use in AI, but the focus has been on other social and ethical concerns, such as privacy. There has been very little discussion of dual use in this sense, and even less in the subfield of AI drug discovery,” she says.

Although a significant amount of work and expertise went into developing MegaSyn, hundreds of companies around the world already use AI for drug development, Ekins says, and most of the tools needed to repeat his VX experiment are publicly available.

“While we were doing this, we realized that anyone with a computer and limited knowledge of how to find the datasets and find these kinds of software, which are all in the public domain, could put them together and do this,” Ekins says. “How do you keep track of the potentially thousands, maybe millions, of people who could do this and who have access to the information, the algorithms, and also the know-how?”

Since March, the paper has been viewed more than 100,000 times. Some scientists have criticized Ekins and his coauthors for crossing an ethical gray line with the VX experiment. “It is a really evil way to use the technology, and it did not feel good to do it,” Ekins admitted. “I had nightmares afterward.”

Other researchers and bioethicists, however, have applauded the team for providing a concrete, proof-of-concept demonstration of how AI can be misused.

“When I first read this article, I was quite alarmed, but not surprised. We know that AI technologies are becoming increasingly powerful, and the fact that they can be used in this way does not seem surprising,” says Bridget Williams, a public health physician and research fellow at the Center for Population-Level Bioethics at Rutgers University.

“At first I wondered whether it was a mistake to publish this article, since it could give people with bad intentions ideas about using this kind of information maliciously. But the benefit of such a paper is that it may prompt more scientists, and the research community at large, including funders, journals, and preprint servers, to consider how their work could be misused and to take steps to guard against that, as the authors of this paper did,” she says.

In March, the US Office of Science and Technology Policy (OSTP) summoned Ekins and his colleagues to the White House for a meeting. According to Ekins, the first thing OSTP representatives asked was whether he had shared any of the deadly molecules MegaSyn had created with anyone. (OSTP did not respond to repeated requests for an interview.) Their second question was whether they could have the file containing all the molecules. Ekins says he turned them down. “Someone else could still go and do it anyway. There is definitely no oversight. There is no control. I mean, it is just down to us, right?” he says. “There is just a heavy reliance on our morals and our ethics.”

Ekins and his colleagues are calling for further discussion of how to regulate and monitor the application of AI to drug development and other biological and chemical fields. That could mean rethinking which data and methods are made available to the public, tracking more closely who downloads certain datasets from open sources, or setting up AI ethics oversight committees, similar to those that already exist for research involving human and animal subjects.

“Research involving human subjects is heavily regulated, and all studies require approval from an institutional review board. We should consider having a similar level of oversight for other types of research, such as this kind of AI research,” says Williams. “These kinds of studies may not involve humans as test subjects, but they certainly create risks for large numbers of people.”

Other researchers have suggested that scientists need more education and training about dual-use risks. “What immediately struck me was the authors’ admission that it had never crossed their minds that their technology could be turned so easily to nefarious uses. As they say, this needs to change; ethical blind spots like this are still all too common in the STEM community,” says Jason Millar, the Canada Research Chair in the Ethical Engineering of Robotics and Artificial Intelligence and director of the Canadian Robotics and AI Ethical Design Lab at the University of Ottawa. “We really need to recognize ethics training as fundamental, alongside other fundamental technical training. This is true for all technologies,” he says.

Government agencies and funders do not yet appear to have a clear path forward. “This is not the first time this issue has been raised, but appropriate mitigation strategies, and who would be responsible for which aspects (the researcher, their institution, the National Institutes of Health, and the Federal Select Agent Program likely all have roles to play), remain to be defined,” said Christine Colvis, director of drug development partnership programs at the National Center for Advancing Translational Sciences (NCATS), and Alexey Zakharov, AI group lead in the Antiviral Program for Pandemics and informatics lead in NCATS Early Translation, in an email.

At his company, Ekins is weighing how to mitigate the dual-use risk of MegaSyn and other AI platforms, for example by restricting access to the MegaSyn software and providing ethics training for new employees, while continuing to harness the power of AI for drug discovery. He is also rethinking an ongoing project, funded by the National Institutes of Health, that aimed to create a public website hosting MegaSyn’s models.

“As if it weren’t enough having the weight of the world on our shoulders, trying to come up with drugs to treat really horrible diseases, now we have to think about how to keep others from misusing the technologies we were trying to use for good. [We’re] looking over our shoulders, saying, ‘Is this a good use of the technology? Should we really publish this? Are we sharing too much information?’” Ekins says. “I think the potential for misuse in other areas is now very clear and apparent.”



Credit: www.wired.com
