Event Reports | Published on Jan 16, 2026
Beyond Agentic Threats: Fostering Cyber Resilience

The discussion was held as part of the “AI For People” event organised by the Observer Research Foundation (ORF), Carnegie India, and the Permanent Mission of India to the United Nations (Geneva). The discussions highlighted the dual-use potential of agentic AI, emerging threats, and strategies for cyber resilience.

The inherent autonomy of agentic AI amplifies both defensive and offensive cyber capabilities. In the cyber domain, this shift marks an escalation of the arms race, with attackers and defenders competing through AI-driven acceleration. Speakers highlighted that threat actors have already exploited agentic AI to supercharge malicious operations, while defenders have leveraged it for rapid threat detection. Key risks stem from the diffusion of agentic AI across interconnected systems. One of the speakers noted that as these AIs interact, unintended interactions could cause vulnerabilities to cascade, evolving the “Internet of Forgotten Things” into an “Internet of Forgotten Agentic AI Models.” Forgotten or abandoned models become persistent threats, exploiting legacy flaws in critical infrastructure.

The breach involving Anthropic’s Claude LLM highlights how autonomous agents can orchestrate sophisticated intrusions at speeds beyond human capability. At the same time, access challenges impede response: cybersecurity researchers lack relevant data, hindering threat modelling and resilience building. Participants noted that this gap is particularly pronounced among Global South nations facing heightened development challenges, where uneven access to AI enables targeted campaigns against emerging digital economies.

On the other end of the spectrum, agentic AI offers transformative defence tools that bridge the chronic cybersecurity skills shortage. Highlighting its beneficial uses, speakers noted that agentic AI automates posture improvements, real-time anomaly detection, and response orchestration, addressing human limitations in scale and speed. These boosted cyber defences enable a “digital immune system” analogous to biological resilience, self-healing against evolving threats. Positive digital transformation impacts are already evident: agentic AI enhances monitoring in resource-constrained settings and establishes guardrails to protect critical national infrastructure.

Participants noted that the advent of agentic AI also demands a pivotal shift from static cybersecurity to dynamic cyber resilience, which prioritises adaptation over mere protection. Resilience requires systems to withstand, recover from, and evolve amid AI-augmented threats. This paradigm acknowledges perpetual attacker-defender competition, in which AI levels the playing field but demands proactive hardening.

However, even as these guardrails and regulations are put in place, they must balance innovation with safety – a “security by regulation design” approach that imposes compliance without stifling growth. In addition, data access needs to be democratised, and Big Tech must be held accountable. Here, the Global South’s role is crucial: it must help shape AI safety protocols collaboratively, leveraging forums such as Interpol for cross-border enforcement. India’s story exemplifies this approach: establishing innovation ecosystems, prioritising AI for development, and controlling AI models and data to foster resilience.


This event report has been written by Sameer Patil.

The views expressed above belong to the author(s).