
Artificial Intelligence News: Anthropic and OpenAI Hire Experts to Prevent AI Misuse
The world of artificial intelligence (AI) is rapidly evolving, but with great power comes great responsibility. Leading AI firms Anthropic and OpenAI are taking proactive steps to mitigate potential risks associated with the misuse of their technologies. Both companies are actively recruiting experts in chemical weapons and explosives, signaling a growing concern that their AI tools could inadvertently assist in the creation of dangerous materials.
Anthropic’s Proactive Measures
Anthropic, a prominent US-based AI firm, recently posted a recruitment notice on LinkedIn seeking a specialist with at least five years of experience in “chemical weapons and/or explosives defence.” The role also requires knowledge of “radiological dispersal devices” – commonly known as dirty bombs. This move underscores the company’s commitment to building robust safeguards against the “catastrophic misuse” of its AI software. Essentially, Anthropic fears its AI could provide instructions on how to manufacture chemical or radioactive weapons and wants an expert to strengthen its defenses.
OpenAI Follows Suit
Anthropic isn’t alone in this approach. OpenAI, the developer of ChatGPT, has advertised a similar position – a researcher specializing in “biological and chemical risks.” The salary offered by OpenAI, reaching up to $455,000, is nearly double Anthropic’s, highlighting the perceived urgency and importance of the role.
Ethical Concerns and Expert Warnings
However, this strategy isn’t without its critics. Some experts are raising alarms, arguing that providing AI systems with information about weapons – even with instructions not to use it – introduces inherent risks. Dr. Stephanie Hare, a tech researcher and co-presenter of the BBC’s AI Decoded TV programme, questioned the safety of using AI to handle sensitive information related to chemical weapons and explosives, stating, “There is no international treaty or other regulation for this type of work and the use of AI with these types of weapons. All of this is happening out of sight.”
Geopolitical Context and Government Involvement
The urgency surrounding these hires is amplified by the current geopolitical landscape. The US government’s calls on AI firms to guard against weapons-related misuse coincide with ongoing military operations in regions such as Iran and Venezuela, prompting increased scrutiny and demands for responsible AI development.
Anthropic’s Stance Against Military Applications
Interestingly, Anthropic is currently engaged in legal action against the US Department of Defense. The company was designated a supply chain risk after refusing to allow its systems to be used in fully autonomous weapons or in mass surveillance of American citizens. Dario Amodei, Anthropic’s co-founder, said in February that the technology wasn’t yet mature enough for such applications.
OpenAI’s Compromise
While OpenAI initially aligned with Anthropic’s position, it subsequently negotiated a contract with the US government, though implementation hasn’t begun yet. Meanwhile, Anthropic’s AI assistant, Claude, remains embedded in systems provided by Palantir and is reportedly being deployed in the US–Israel conflict with Iran.
Industry Response and Future Outlook
A group representing tech giants has criticized government action against Anthropic, labeling it a “temper tantrum.” As artificial intelligence news continues to unfold, the debate surrounding ethical considerations, government regulation, and the potential for misuse will undoubtedly intensify. Staying informed about these developments is crucial for understanding the future of this transformative technology.




