Rules of Engagement
Bottom Line Up Front
BLUF #4: When it comes to AI safety, tech companies can’t have their cake and eat it too

AI Battle Royale: ChatGPT vs. Gemini vs. Claude – Who Wins?
Graphic: https://lav1.com/wp-content/uploads/2025/02/ChatGPT_vs_Gemini_vs_Claude.jpg.webp; https://creativecommons.org/licenses/by-nc/4.0/

In September 2025, the AI company Anthropic detected an attempted cyber-espionage campaign by Chinese hackers who used Claude to carry out most of the operation. Anthropic quickly disrupted the majority of the attempts but warned that the cybersecurity risks of agentic AI will only grow. Notably, Anthropic’s report was neither co-authored with nor corroborated by any external threat-intelligence body. AI companies cannot serve as both risk analysts and policy advocates for their own products. The current regulatory vacuum invites private companies to preempt (and further fragment) AI standards and practices with their own prescriptions. Third-party verification mechanisms are irreplaceable in domains where safety can conflict with corporate advocacy. U.S. policymakers should give greater weight to self-regulatory organizations and federally funded research and development centers, and should model AI safety evaluations on those of non-profit safety institutes in other industries.
