Lennart Justen

AI accelerates beneficial and harmful applications of biology

Lennart Justen | Google Student Lightning Talk | April 7, 2026 | Slides (PDF)


Biology is an extremely powerful technology. And like all powerful technologies, it can be used for both good and bad.

Let’s start with the good.

Progress in our ability to design, engineer, and manufacture biology has enabled huge breakthroughs in areas like agriculture, human health, and basic science. And this progress is now being turbocharged with AI.

A few years ago this little AI company called DeepMind released AlphaFold, a tool that predicts the structure of a protein from its sequence alone. Structure prediction is a notoriously difficult problem in biology, and AlphaFold essentially solved it, kicking off a new era of AI models in biology and hugely accelerating the field. (Jumper et al., Nature, 2021)

Take COVID for instance.

Moderna was able to design its mRNA vaccine within just two days of the SARS-CoV-2 genome being made public. The clinical trials took quite a bit longer, but some 60 days later the vaccine was in human arms. That kind of speed was unthinkable even a decade ago. (Spooner et al., Our World in Data, 2025)

But biology’s power also makes it an appealing weapon.

COVID, which was far from a worst-case scenario, killed an estimated 20 million people. It’s clear from this pandemic and those in the past that biology is capable of being a weapon of mass destruction.

And we know that some have pursued these aims.

During the Cold War, the Soviet Union operated a huge biological weapons program: tens of thousands of people weaponizing anthrax, smallpox, and plague, and loading them onto warheads. (Leitenberg et al., Harvard University Press, 2012)

On the slide here are photos of a Soviet bioweapons facility at Stepnogorsk in modern-day Kazakhstan, designed to produce 300 tons of anthrax per year. It is estimated that 100 kg of anthrax could kill up to three million people… and the Soviet Union was producing 5,000 tons of anthrax per year.

Today, the U.S. State Department assesses that Russia and North Korea, and potentially others, continue to operate offensive biological weapons programs.

Another example — this time from a non-state actor.

Aum Shinrikyo, an apocalyptic death cult in Japan with hundreds of millions of dollars in assets and graduate-trained scientists, tried to weaponize anthrax and botulinum toxin and use them in attacks, but failed, in part because they didn’t have the tools and knowledge. They turned to sarin gas instead, killing 13 people and injuring over a thousand in the Tokyo subway in 1995. (Danzig et al., CNAS, 2012)

The slide here shows some of the illustrations by Aum members of their designs for bioweapon production and dispersal.

These examples and others illustrate the unfortunate reality that the world has to deal with some base rate of deranged individuals, apocalyptic cults, and rogue states who may be motivated to develop and use biological weapons.

And as biology gets easier and cheaper to engineer, we have to contend with the potential for more and more overlap in the Venn diagram of those capable of making biological weapons and those motivated to pursue them — a trend I worry is being accelerated by AI.

We developed a benchmark called the Virology Capabilities Test or “VCT.”

VCT is a multi-modal, multi-select benchmark, assembled by rigorously crowdsourcing hard, practical virology troubleshooting questions from actual virologists.

When we gave the test to expert virologists with internet access, tailoring questions to their specific specialties, they scored on average 22%. The leading model at the time, o3, scored 44%, outperforming 94% of virologists. (Götting et al., arXiv, 2025)

This performance is not just limited to static question and answer assistance. AI can now use or even build you the tools to perform complex in silico design tasks relevant to bioweaponeering. There is also early evidence that AI can uplift people actually working in a wet lab.

So what can we do about this?

Well, there is a lot we can do, and I’ve tried to address a range of these at the Media Lab, especially around making societies more resilient to biological threats. This has included research on new ways to prevent airborne transmission with short-wavelength ultraviolet (far-UVC) light (Williamson et al., Blueprint Biosecurity, 2025) and building better systems to warn us when a novel pathogen emerges (Justen et al., Nature Microbiology, in review).

But more recently I’ve been thinking about prevention, and how to make AI systems more robust against biorisk.

And I asked myself: what can AI companies actually do to mitigate the biological risks stemming from their models?

Some of the answers are pretty basic, like refusing to divulge information that unduly enables bioweapons creation. But in practice, this actually requires a fairly nuanced policy, because biology is deeply dual-use. The knowledge required to make a pandemic influenza vaccine strongly overlaps with the knowledge required to make pandemic influenza a weapon. So we may have to make some difficult tradeoffs here in terms of who should be able to access what knowledge, when, and with what oversight.

But there’s actually a growing toolbox here — including inference-time classifiers that monitor for misuse, tiered access systems and know-your-customer screening for the most powerful biology models, techniques to filter or remove dangerous knowledge from models before or after training, and using AI itself to accelerate biological defenses like pathogen detection. My current research is looking at how these interventions stack up and what AI companies should actually be prioritizing, and I’ll have an article on this in the coming weeks.
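To make the first item in that toolbox concrete, here is a minimal sketch of what an inference-time classifier wrapping generation could look like. Everything here is illustrative and assumed, not taken from any real deployment: the function names, the threshold, and especially the keyword heuristic, which stands in for a trained misuse classifier.

```python
from dataclasses import dataclass

# Hypothetical sketch of an inference-time misuse classifier: a screening
# function scores both the user's prompt and the model's draft answer,
# and the system refuses whenever either score crosses a threshold.


@dataclass
class ScreenedResponse:
    answer: str
    refused: bool


RISK_THRESHOLD = 0.8  # hypothetical operating point


def risk_score(text: str) -> float:
    """Stand-in for a trained classifier; a real system would not use keywords."""
    red_flags = ("weaponize", "enhance virulence")
    return 1.0 if any(flag in text.lower() for flag in red_flags) else 0.0


def screened_generate(prompt: str, generate) -> ScreenedResponse:
    """Screen the prompt, generate, then screen the output before returning it."""
    if risk_score(prompt) >= RISK_THRESHOLD:
        return ScreenedResponse("I can't help with that.", refused=True)
    answer = generate(prompt)
    if risk_score(answer) >= RISK_THRESHOLD:
        return ScreenedResponse("I can't help with that.", refused=True)
    return ScreenedResponse(answer, refused=False)
```

The design choice worth noting is that screening runs on both sides of generation, since a benign-looking prompt can still elicit a harmful completion.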

But the meta point I want to leave you with is this: the AI systems being built right now, including in this building, are going to be among the most consequential dual-use technologies in history. Biology is where the stakes are arguably highest. And the people building these systems are in a unique position to make them safer — not just by making models more capable, but by investing seriously in the safeguards, the evaluations, and the defensive applications that keep the balance tilted toward the good.

Thank you.


References

  1. Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). doi:10.1038/s41586-021-03819-2

  2. Spooner, F. et al. Vaccination. Our World in Data (2025). ourworldindata.org/vaccination

  3. Leitenberg, M., Zilinskas, R.A. & Kuhn, J.H. The Soviet Biological Weapons Program: A History. Harvard University Press (2012). hup.harvard.edu

  4. Danzig, R. et al. Aum Shinrikyo: Insights Into How Terrorists Develop Biological and Chemical Weapons. CNAS (2012). cnas.org

  5. Götting, O. et al. Virology Capabilities Test (VCT). arXiv (2025). arxiv.org/abs/2504.16137

  6. Williamson, C. et al. Blueprint for Far-UVC. Blueprint Biosecurity (2025). blueprintbiosecurity.org

  7. Justen, L. et al. Deep untargeted wastewater metagenomic sequencing from sewersheds across the United States. Nature Microbiology (in review). medRxiv preprint


LinkedIn · Substack · ljusten [at] mit [dot] edu