ChatGPT will soon be able to tell us how to create new bioweapons
OpenAI admits that its future AI models could be used to build bioweapons.
Brace yourselves, folks: the brains behind ChatGPT have just made a confession that’s part tech breakthrough, part science-fiction horror. OpenAI, the Microsoft-backed AI powerhouse, has admitted that its upcoming artificial intelligence models could help create new bioweapons. Yes, that’s right. The bots are getting clever enough to help cook up killer bugs.
In a blog post so casual it might as well have come with popcorn, OpenAI revealed that it is racing to develop AI technology that could revolutionise biomedical research, and, potentially, enable the next global pandemic.
The company stated that it felt a responsibility to maintain a barrier against harmful information while enabling scientific progress.
Translation? They want the medical breakthroughs without the bioweapons.
AI: helping to prepare for the end of days?
OpenAI’s head of safety, Johannes Heidecke, told Axios that the company does not believe its current technology can create new viruses. Not yet, anyway. But he warned that the next generation of models might help “highly skilled actors” replicate known bio-threats with terrifying ease.
“We are not yet at a point where we can create biothreats which have never existed before,” Heidecke admitted. “We are more worried about replicating things experts already know.”
In other words, AI isn’t inventing zombie viruses yet — but it might soon become the world’s most helpful lab assistant for bioterrorists.
OpenAI’s bold plan
The company says its focus is on prevention. “We don’t believe it’s acceptable for us to wait and watch a biothreat occur before we decide on the level of safety that is required,” reads the blog. But critics say that’s exactly what’s happening: build now, worry later.
Heidecke believes OpenAI’s safety systems must be nearly perfect to keep its bots from going rogue.
“This isn’t something where 99 percent or one in 100,000 is enough,” he warned.
Sounds reassuring… until you remember how often tech goes glitchy.
Biodefence or biotrap?
OpenAI’s models could be genuinely useful in biodefence. But some experts fear these “defensive” tools could fall into the wrong hands, or be used offensively by the right ones. Imagine a government with a questionable track record using AI to manipulate pathogens.
If history has taught us one thing, it is that good scientific intentions can lead to disaster.
Chatbot of doom? How AI nearly helped design a bioweapon in 2023
As reported by Bloomberg, in late 2023 an ex-UN weapons inspector walked into a building adjacent to the White House carrying a small black box. This was not a spy film. This was Washington, and the contents of that box stunned officials.
The box held synthetic DNA, the kind that, assembled correctly, could mimic components of a deadly bioweapon. But it wasn’t just the contents that shocked officials; it was how the ingredients had been chosen.
The inspector, Rocco Casagrande, had been hired by AI safety company Anthropic to test its chatbot Claude in the role of a bioterrorist. The AI suggested not only which pathogens to synthesise and where to release them for maximum destruction, but also where to buy the DNA and how to avoid getting caught doing so.
The bioweapons threat and AI chatbots
The team spent over 150 hours probing the bot’s responses. The results? It didn’t just answer questions; it brainstormed. That, experts say, is what makes modern chatbots more dangerous than a search engine: they’re creative.
“The AI provided ideas that they had never even considered asking about,” said Riley Griffin, the Bloomberg journalist who broke the story.
A few weeks later, the US government responded with an executive order calling for tighter oversight of AI and government-funded science. Kamala Harris warned that “AI-formulated biological weapons” could endanger millions.
Should AI be treated like a biohazard in terms of regulation?
As regulators race to catch up, scientists are urging caution. More than 170 researchers have signed a pledge to use AI responsibly, arguing that its potential for medical advances outweighs the risks.
Still, Casagrande’s findings sparked real concern: AI doesn’t need a lab to do damage — just a laptop and a curious mind.
Griffin said the real fear isn’t AI alone; it’s what happens when AI and synthetic biology collide.
The biosecurity blind spot no one is talking about
Government briefings left out the small companies that handle sensitive biological information. That, say experts, creates a dangerous blind spot.
Anthropic claims to have patched up the vulnerabilities. But the black box moment was a wake-up call: we’re entering an age where chatbots might not just help us cure disease — they might teach us how to spread it.
Not a doomsday scenario yet, but a new kind of arms race
This isn’t a theoretical threat. If models such as GPT-5, or whatever comes after, end up in the wrong hands, we may be facing a digital Pandora’s box: instant access to instructions for synthesising viruses, altering DNA or bypassing laboratory security.
OpenAI acknowledges that “those barriers are not absolute.” This is the technical equivalent of saying: “The door’s locked — unless someone opens it.”
The verdict: Smarter tech or a scarier future?
OpenAI wants science to save lives. It’s also edging towards a world where anyone with a computer and a grudge can play God. Is this innovation — or a slow-motion disaster in progress?
That leaves one burning question: should you build AI at all if it could help someone build a bioweapon?