Cybersecurity & Tech

Artificial Intelligence and Chemical and Biological Weapons

Paul Rosenzweig
Wednesday, April 27, 2022, 8:01 AM

The pharmaceutical industry is using artificial intelligence to discover beneficial new drugs, but the same tool opens the possibility of creating new catastrophic biological and chemical weapons.

A depiction of medical research. (CC0 1.0) (Source: https://pxhere.com/en/photo/1457639)


Sometimes reality is a cold slap in the face. Consider, as a particularly salient example, a recently published article concerning the use of artificial intelligence (AI) in the creation of chemical and biological weapons (the original publication, in Nature Machine Intelligence, is behind a paywall, but this link is to a copy of the full paper). Anyone unfamiliar with recent innovations in the use of AI to model new drugs will be unpleasantly surprised.

Here’s the background: In the modern pharmaceutical industry, the discovery of new drugs is rapidly becoming easier through the use of artificial intelligence/machine learning systems. As the authors of the article describe their work, they have spent decades “building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.” 

In other words, computer scientists can use AI systems to model what beneficial new drugs for a specifically targeted affliction might look like and then task the AI with discovering candidate drug molecules. Those candidates are then handed to the chemists and biologists who synthesize and test the proposed new drugs.
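To make that pipeline concrete, here is a minimal sketch of the generate-and-filter loop in Python. Every name in it is a hypothetical stand-in for the proprietary models the industry actually uses, not real software:

import random

# Hypothetical stand-in for a generative model that proposes candidate
# molecules, represented here as opaque identifiers.
def generate_candidates(n: int) -> list[str]:
    return [f"molecule-{random.randrange(10**9)}" for _ in range(n)]

# Hypothetical stand-in for a trained model that scores how promising
# a candidate looks for the targeted affliction.
def predicted_promise(molecule: str) -> float:
    return random.random()

def discovery_round(n: int = 10_000, keep: int = 100) -> list[str]:
    """Propose many molecules; keep the best-scoring few for the lab."""
    candidates = generate_candidates(n)
    ranked = sorted(candidates, key=predicted_promise, reverse=True)
    return ranked[:keep]  # these go to the chemists and biologists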

Given how AI systems work, the benefits in speed and accuracy are significant. As one study put it:

The vast chemical space, comprising >10⁶⁰ molecules, fosters the development of a large number of drug molecules. However, the lack of advanced technologies limits the drug development process, making it a time-consuming and expensive task, which can be addressed by using AI. AI can recognize hit and lead compounds, and provide a quicker validation of the drug target and optimization of the drug structure design.

In short, AI offers society a faster path to newer, better pharmaceuticals.
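To see why model-guided search matters at that scale, consider a back-of-the-envelope calculation (mine, not the study's): even screening a billion molecules per second, brute force could never touch a space of 10⁶⁰ compounds.

chemical_space = 1e60             # molecules, per the study quoted above
screening_rate = 1e9              # assume a billion molecules per second
seconds_per_year = 60 * 60 * 24 * 365

years_needed = chemical_space / screening_rate / seconds_per_year
print(f"{years_needed:.1e} years")  # about 3.2e43 years of brute-force screening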

The benefits of these innovations are clear. Unfortunately, the possibilities for malicious uses are also becoming clear. The paper referenced above is titled “Dual Use of Artificial-Intelligence-Powered Drug Discovery.” And the dual use in question is the creation of novel chemical warfare agents. 

One of the factors investigators use to guide AI systems and narrow the search for beneficial drugs is a toxicity measure known as the LD50 (short for "lethal dose, 50 percent": the dose expected to kill half of a test population, so the lower a compound's LD50, the more toxic it is). For a drug to be practical, designers need to screen out compounds likely to poison patients and, thus, avoid wasting time trying to synthesize them in the real world. So drug developers can train and instruct an AI system to discard any candidate compound whose predicted LD50 falls below a safety threshold. As the authors put it, the normal process is to use a "generative model [that is, an AI system, which] penalizes predicted toxicity and rewards predicted target activity." When used in this traditional way, the AI system is directed to generate new molecules for investigation that are likely to be safe and effective.
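A minimal sketch of that objective, continuing the toy Python setup above, might look like the following. The predictor functions here are hypothetical placeholders; real systems infer LD50 and target activity from molecular structure with trained models:

# Hypothetical stand-ins for trained toxicity and activity predictors.
def predicted_ld50(molecule: str) -> float:
    return 500.0   # placeholder, in mg/kg; a higher LD50 means less toxic

def predicted_activity(molecule: str) -> float:
    return 0.8     # placeholder predicted binding to the drug target

def drug_score(molecule: str) -> float:
    """Penalize predicted toxicity, reward predicted target activity."""
    toxicity_penalty = 1.0 / predicted_ld50(molecule)  # large when LD50 is low
    return predicted_activity(molecule) - toxicity_penalty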

But what happens if you reverse the process? What happens if, instead of penalizing predicted toxicity, the generative model is built to reward it, preferentially producing molecules with very low LD50 values, that is, compounds lethal at the smallest doses?
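Mechanically, the reversal can be as small as flipping one sign in a scoring rule like the sketch above, reusing the same stub predictors. Again, this illustrates the logic, not the authors' actual code:

def weapon_score(molecule: str) -> float:
    """Inverted objective: a low LD50 (high lethality) is now rewarded."""
    return predicted_activity(molecule) + 1.0 / predicted_ld50(molecule)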

One rediscovers VX, a nerve agent that is one of the most lethal substances known to humans. And the model predicts many new substances that would be even worse than VX.

One wishes this were science fiction. But it is not. As the authors put the bad news:

In less than 6 hours ... our model generated 40,000 [new] molecules ... In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents.

In other words, the developers started from scratch and did not artificially jump-start the process by using a training dataset that included known nerve agents. Instead, the investigators simply pointed the AI system in the general direction of looking for effective lethal compounds (with standard definitions of effectiveness and lethality). Their AI program then “discovered” a host of known chemical warfare agents and also proposed thousands of new ones for possible synthesis that were not previously known to humankind.

The authors stopped at the theoretical point of their work. They did not, in fact, attempt to synthesize any of the newly discovered toxins. And, to be fair, synthesis is not trivial. But the entire point of AI-driven drug development is to point drug developers in the right direction: toward readily synthesizable, safe, and effective new drugs. And while synthesis is not "easy," it is a well-trodden pathway in the market today. There is no reason, none at all, to think that the synthesis path is any less feasible for lethal toxins.

And so, AI opens the possibility of creating new catastrophic biological and chemical weapons. Some commentators condemn new technology as "inherently evil tech." The better view, however, is that new technology is neutral and can be used for good or ill. But neutrality does not mean that nothing can be done to forestall malignant uses. And there is real risk when technologists run ahead with what is possible before human systems of control and ethical assessment catch up. Using artificial intelligence to develop toxic biological and chemical weapons would seem to be one of those use cases where severe problems may lie ahead.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
