Banks around the world are bracing for a new wave of AI-powered cyberattacks, with regulators and policymakers issuing fresh warnings about the technology’s potential to exploit longstanding vulnerabilities in financial systems.
The concern centers on Mythos, the latest and most capable AI model from Anthropic, the company behind the Claude chatbot. The model is not currently available to the public because Anthropic and other AI developers consider it too capable to release widely.
Internal testing of Mythos has uncovered thousands of severe security vulnerabilities across every major operating system and web browser. Some have gone undetected for decades, and many are what the cybersecurity industry calls “zero-day” vulnerabilities — flaws previously unknown to the software’s developers, who have therefore had zero days to prepare a fix.
To counter the emerging threat, Anthropic has given a defensive coalition of partners, including Microsoft, Amazon Web Services, Apple, Cisco and the Linux Foundation, access to Mythos. The company has committed $100 million in usage credits and $4 million in open-source grants toward finding and fixing the vulnerabilities. More than 40 additional organizations have been granted access, including several U.S. banks. As of publication, however, Anthropic has not granted access to any banks in Australia, the United Kingdom or Europe.
Anthropic confirmed Wednesday that it is investigating claims reported by Bloomberg that a small group of unauthorized users had gained access to Mythos. There is no current indication the alleged access was for malicious purposes.
The risk to the banking industry is heightened by its reliance on legacy systems — decades-old technology that may be especially exposed to AI-driven attacks. The issue dominated discussions at the International Monetary Fund Spring Meetings in Washington last week, where warnings about emerging cybersecurity threats to the financial sector shared the agenda with the Iran war.
For consumers in countries with strong banking protections, the immediate risk is limited. In Australia, the first 250,000 Australian dollars of a customer’s deposits are insured through the government-backed Financial Claims Scheme, and the Australian Securities and Investments Commission requires banks to investigate fraudulent transactions and reimburse customers who are not at fault. Experts recommend keeping operating systems and banking apps up to date and staying vigilant against phishing attacks.
The longer-term challenge, according to Toby Walsh, professor of AI at UNSW Sydney, is that defending software is fundamentally harder than attacking it. Software is among the most complex products humans build, making it nearly impossible to guarantee that it is bug-free. The result is an ongoing race between attackers and defenders to identify and patch vulnerabilities before they are exploited.
The European Union recently released its age verification app — intended to underpin emerging laws on social media access, pornography and age-restricted content — but security researchers identified vulnerabilities within hours that underage users could exploit.
In high-stakes settings, organizations are turning to mathematical verification to prove software is free of bugs. The Beneficial AI Foundation has launched a “moonshot” project to prove that the messaging app Signal is bug-free and protects user privacy as claimed. Such efforts remain exceptional, but advances in AI itself may eventually help reverse the imbalance between attackers and defenders.