Graphic: https://lav1.com/wp-content/uploads/2025/02/ChatGPT_vs_Gemini_vs_Claude.jpg.webp (CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0/)
In September 2025, AI company Anthropic detected an attempted cyber-espionage campaign in which Chinese hackers used Claude to carry out most of their attack operations. Anthropic quickly disrupted the majority of the attempts, but warned that the cybersecurity risks of agentic AI will only grow. Just as notable, Anthropic's report was neither co-authored by nor linked to any external threat-intelligence body. AI companies cannot serve as both risk analysts and policy advocates for their own products. The current regulatory vacuum lets private companies preempt, and further fragment, AI standards and practices with their own prescriptions. Third-party verification mechanisms are irreplaceable in domains where safety can conflict with corporate advocacy. U.S. policymakers should give greater weight to self-regulatory organizations and federally funded research and development centers, and they should model AI safety evaluations on those of non-profit safety institutes in other industries.