AI Cyber Risk 2026: RBI, FinMin assess Mythos threat
Why AI-driven cyber risk is now a banking issue
India’s financial system is confronting a cyber risk that is tied not only to criminals and malware, but also to fast-improving artificial intelligence tools. Bankers have flagged fresh concerns linked to Anthropic’s AI model, referred to in reports and transcripts as “Mytho” and “Mythos”. The worry is that such models could be used to identify weaknesses in long-running technology processes and, in the worst case, enable access to internal data.
The context is a digital banking ecosystem where systems are deeply interconnected. Payment rails, cloud infrastructure, vendors, and data pipelines link banks with corporates, merchants, and retail customers. In that environment, a single successful breach can travel quickly across multiple layers.
What sparked the Mythos alert
Central banks have been described as being on high alert after Anthropic’s advanced cybersecurity model Mythos was accessed by unauthorised users. The controversy centres on “Claude Mythos Preview”, which was described as a highly advanced model considered too dangerous for public release because of its cybersecurity capabilities.
In broadcast transcripts, bankers are shown expressing concern that Mytho could find vulnerabilities that have existed in systems and processes for years. The core fear is not just a new type of attack, but a faster way to locate and exploit known and unknown weaknesses across complex bank technology estates.
Finance Ministry and RBI steps cited in reports
Sources cited in the material indicate that the Finance Ministry and the Reserve Bank of India (RBI) are actively assessing risk around Mythos, and that banks have been asked to take preemptive steps. The Finance Ministry also called an urgent meeting with bank CEOs to assess the threat posed by a single AI model, Anthropic AI.
From a supervisory standpoint, the episode is being treated as more than a one-off cyber story. It adds to a broader regulatory theme that technology-led risks can affect stability even when traditional balance-sheet indicators appear strong.
Why banks say the risk could scale faster
The concern around AI-assisted attacks is speed and repeatability. If an attacker can automate discovery of vulnerabilities, generate convincing social engineering content, and rapidly adapt based on responses, the same playbook can be deployed across many institutions.
Bankers, as referenced in the transcript, are “spooked” by the prospect that such tools can identify vulnerabilities that have existed in technology processes for years. That matters because legacy systems, integrations, and patch cycles can create a gap between discovering a weakness and being able to fix it safely.
RBI’s warning: digital banking changes what supervision must measure
RBI Deputy Governor Swaminathan J has warned that traditional metrics such as capital adequacy and liquidity are no longer sufficient to assess banks operating in a technology-driven ecosystem. Speaking at the Third Annual Global Conference of the College of Supervisors in Mumbai, he highlighted that many jurisdictions face similar issues such as rapid digitalisation, platform-based delivery, and fast-changing threat landscapes.
He flagged “shared digital dependencies” as a less visible source of systemic risk. Banks increasingly rely on the same cloud service providers, payment rails, data vendors, and cybersecurity tools. These dependencies may not show up in balance-sheet ratios, but can create common exposure across the financial system.
He also warned that cyber threats are often organised, well-funded, and persistent, and that vulnerabilities at vendors, partners, or shared technology components can undermine even strong internal controls. In this framework, resilience and recovery are not back-office functions but core capabilities.
Evidence that AI assistants can be exploited at scale
Separately, the supplied material includes an adversarial benchmark of 24 AI models from major providers, configured as banking customer-service assistants. The testing claimed that every model proved exploitable, with success rates ranging from 1% to over 64%. The most effective attack categories averaged above 30%, and the techniques were described as automated prompt injection methods that adversaries could replicate.
The stated takeaway from that benchmark was that the results point to an implementation problem rather than isolated model flaws. For banks, this shifts the focus toward governance, access controls, and how AI systems are integrated with data and workflows.
How phishing, deepfakes, and ransomware are evolving
The broader threat environment described in the material points to rising social engineering and ransomware risks in India’s BFSI ecosystem. One cited data point says phishing accounts for nearly 38% of all reported fintech frauds, with BFSI being the most targeted. Another report section notes phishing accounted for 25% of initial infection vectors, highlighting how frequently it is used as a starting point.
The same material states that India accounted for over 50% of global ransomware assaults in 2024, according to multiple threat-intelligence reports, and that supply-chain ransomware attacks now make up close to 90% of incidents affecting financial institutions. It also notes that attackers are increasingly using generative AI for deepfake emails, texts, and voice calls, making impersonation harder to detect.
Key facts and figures mentioned
What “preemptive steps” can mean in practice
The material does not list specific actions taken by individual banks, but it does outline the direction regulators are pushing. RBI-linked guidance and commentary referenced in the content emphasise zero-trust approaches, stronger governance for data and AI models, and accountability that cannot be outsourced even when banks depend on fintechs and technology partners.
The Digital Threat Report section also lists areas for organisations to strengthen across people, process, and technology. Examples mentioned include more frequent security training, accelerating vulnerability assessments, comprehensive incident response playbooks, integrating threat intelligence into monitoring, patching network devices more frequently, and tighter authentication and access control.
Why this matters for financial stability, not just IT security
A key risk described is contagion through connectivity. Banks connect corporates, retail users, and payment systems, and a breach in one layer can trigger a domino effect across the financial network. RBI’s supervisory framing also connects customer harm to confidence and liquidity risk, noting that in a digital environment, mistrust can spread rapidly.
In that backdrop, concerns around a powerful cybersecurity-focused AI model being accessed by unauthorised users, and the possibility of AI-assisted exploitation, fit into a systemic-risk lens rather than a narrow incident-response lens.
Conclusion
The Mythos-linked concerns, the Finance Ministry’s CEO meeting, and RBI’s recent supervisory warnings point to a common theme: digital banking has made risks faster, harder to isolate, and more dependent on shared technology ecosystems. Next steps, as indicated in the material, are continued regulatory assessment of AI-linked cyber risk and banks taking preemptive measures aligned with tighter governance, resilience, and third-party risk controls.
Frequently Asked Questions
Did your stocks survive the war?
See what broke. See what stood.
Live Q4 Earnings Tracker