UK financial regulators have initiated urgent discussions with the government’s cybersecurity agency and leading banks to evaluate the risks associated with Anthropic’s latest artificial intelligence model. Key officials from the Bank of England, the Financial Conduct Authority, and HM Treasury are engaging with the National Cyber Security Centre to investigate potential weaknesses in critical IT infrastructure highlighted by the new AI system.
In a significant development, representatives from major British banks, insurance firms, and exchanges are scheduled to be briefed on the cybersecurity threats posed by the Claude Mythos Preview model at a meeting with regulators expected within the next two weeks. The move reflects growing concern over the AI’s implications for financial sector security.
Meanwhile, Anthropic has not responded to requests for comment, and the Bank of England declined to comment. The Treasury, the National Cyber Security Centre, and the Financial Conduct Authority did not immediately respond to requests for comment.
The initiative follows a similar meeting convened by U.S. Treasury Secretary Scott Bessent with prominent Wall Street banks to discuss the cyber risks linked to the AI model. Anthropic has said that the Claude Mythos Preview is being deployed under “Project Glasswing,” a controlled program that allows select organizations to use the unreleased AI for defensive cybersecurity applications.
Earlier this month, Anthropic disclosed in a blog post that the model had already detected thousands of significant vulnerabilities across various operating systems, web browsers, and other widely used software platforms, underscoring the AI’s potential impact on cybersecurity defenses.
