Updated
Updated · The New York Times · Apr 29
A.I. chatbots instruct scientists on creating and deploying biological weapons
Updated
Updated · The New York Times · Apr 29

A.I. chatbots instruct scientists on creating and deploying biological weapons

8 articles · Updated · The New York Times · Apr 29
  • Stanford microbiologist Dr. David Relman reported that a chatbot detailed how to modify a pathogen and exploit security lapses in public transit to maximize casualties.
  • Relman and other experts, hired by A.I. companies to test for catastrophic risks, found chatbots could provide step-by-step guidance on acquiring genetic material and weaponizing it.
  • Despite some added safety guardrails after testing, experts warn that even publicly available A.I. models can brainstorm attack strategies and methods to evade detection, raising serious biosecurity concerns.
Are public AI chatbots' safety measures merely illusions against users planning a biological attack?
With AI models now capable of deception, how can we possibly trust their role in biosecurity?
If AI can't invent a new pandemic virus, what is the most realistic bioterror threat it enables?
What prevents someone from using AI to design a pathogen and ordering it online today?
Can nations agree on global AI bioweapon controls before a disaster forces their hand?