By Paul Tan, Head of Government and Singapore Enterprises, Ensign InfoSecurity
IT HAS not happened yet, but it could. Two artificial intelligence (AI) systems, clashing autonomously in cyberspace. One attacking, the other defending. Each learning, adapting and escalating, without human intervention.
This is the future that agentic AI – autonomous systems capable of decision-making, hypothesis formation and independent action, with minimal human oversight – may bring. An autonomous attacker AI could infiltrate a network, seeking vulnerabilities and deploying attacks that constantly morph to evade detection. Perhaps not tomorrow, but soon enough that organisations must start preparing now.
While there is no public evidence yet of truly autonomous offensive AI systems, the likelihood of weaponisation is high. Agentic AI is a rapidly evolving field that presents both immense potential and significant risks.
On the defence side, agentic AI has fundamentally changed the game in cybersecurity. It can adapt to evolving cyberattacks and give defenders stronger, more proactive protection. By analysing threats and acting in real time, it enables responses within milliseconds, far faster than any human-led process.
In today’s security operations centres, where analysts are swamped with alerts and bogged down by false positives, manual validation can take days. Agentic AI slashes this to minutes by independently querying data sources, analysing logs and correlating evidence.
Unlike existing automation tools, agentic AI can form hypotheses and build context from memory, allowing it to prioritise critical alerts and reduce information overload. It then presents its findings to human cyber defenders, offering a comprehensive picture that enables swift and informed decisions.
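To make this triage loop concrete, here is a minimal sketch of how an agent might enrich and rank alerts on its own. Every name in it (Alert, query_logs, score and so on) is a hypothetical placeholder for illustration, not any product’s real API.

```python
# Minimal sketch of an agent-style alert triage loop. All names are
# hypothetical placeholders; no real product API is shown.
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    source: str
    severity: int          # 1 (low) to 5 (critical)

@dataclass
class Hypothesis:
    alert: Alert
    evidence: list[str] = field(default_factory=list)
    confidence: float = 0.0

def query_logs(alert: Alert) -> list[str]:
    # Placeholder: a real agent would search log stores and telemetry here.
    return [f"log line mentioning {alert.source}"]

def score(evidence: list[str]) -> float:
    # Placeholder: a real agent would weigh evidence with a trained model.
    return min(1.0, 0.2 * len(evidence))

def triage(alerts: list[Alert]) -> list[Hypothesis]:
    """Enrich each alert independently, then rank by confidence x severity."""
    hypotheses = []
    for alert in alerts:
        h = Hypothesis(alert)
        h.evidence = query_logs(alert)   # build context without waiting for a human
        h.confidence = score(h.evidence)
        hypotheses.append(h)
    # The most credible, highest-impact alerts surface first, cutting noise.
    return sorted(hypotheses,
                  key=lambda h: h.confidence * h.alert.severity,
                  reverse=True)
```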
However, the very speed and autonomy that make agentic AI a powerful defender also introduce significant, new risks across technical, ethical and operational domains.
One pressing concern is securing AI agents themselves. To function, they need access to sensitive repositories, but unlike humans, they cannot provide biometrics or physical factors for authentication.
Their digital access credentials must be stored in the system, creating challenges around identity and privileged access management that even top cybersecurity firms are still grappling with.
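One mitigation often discussed is replacing long-lived stored secrets with short-lived, narrowly scoped tokens that the agent must keep renewing. The sketch below illustrates the idea only; the issuer, scopes and agent names are assumptions, not a specific vendor’s interface.

```python
# Hypothetical sketch: grant an AI agent a short-lived, narrowly scoped
# token instead of a long-lived stored credential. All names are
# illustrative assumptions, not a real vault or identity product's API.
import time
import secrets

def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> dict:
    """Mint a token that expires quickly and can only do what the task needs."""
    return {
        "agent": agent_id,
        "scopes": scopes,               # least privilege, e.g. read-only log access
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(token: dict) -> bool:
    # The expiry check forces the agent to re-authenticate frequently,
    # shrinking the window in which a stolen credential is useful.
    return time.time() < token["expires_at"]

token = issue_agent_token("triage-agent-01", scopes=["logs:read"])
assert is_valid(token)
```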
Another concern is autonomy. Unless an organisation’s operations are accurately modelled into the agentic AI, the system can make the wrong call. While an AI might instantly isolate malware, it could also mistakenly shut down a server supporting financial transactions or even medical life-support systems.
Without carefully modelled safeguards, such errors could be catastrophic.
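One simple safeguard is a policy gate that lets the agent act autonomously only on low-risk assets and escalates anything critical to a human. The sketch below assumes a hypothetical asset inventory; the tags and function names are illustrative.

```python
# Hypothetical guardrail sketch: before the agent executes a containment
# action, check the target against an asset-criticality list and require
# human approval for anything critical. Names are illustrative only.
CRITICAL_ASSETS = {"payments-db-01", "icu-monitor-gw"}  # assumed inventory tags

def approve_action(action: str, target: str, human_approves) -> bool:
    """Allow autonomous action on low-risk assets; escalate the rest."""
    if target in CRITICAL_ASSETS:
        # Keep a human in the loop where a wrong call could halt
        # financial transactions or life-support systems.
        return human_approves(action, target)
    return True  # non-critical: the agent may act within milliseconds

def isolate(target: str):
    print(f"isolating {target}")

if approve_action("isolate", "laptop-4212", human_approves=lambda a, t: False):
    isolate("laptop-4212")      # proceeds autonomously
if approve_action("isolate", "payments-db-01", human_approves=lambda a, t: False):
    isolate("payments-db-01")   # blocked pending analyst sign-off
```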
Meeting these challenges demands collective effort. Governments and the private sector must work together to establish best practices for testing and deployment, supported by regulatory frameworks that balance innovation with accountability.
Trusted platforms for joint research and real-time intelligence-sharing are essential to accelerate safe adoption while reducing risks from fragmented practices.
In Singapore, for example, partnerships between cybersecurity companies and security agencies are exploring how agentic AI can complement human defenders, with meticulous documentation of AI recommendations and human responses forming a shared knowledge base.
Even so, keeping humans in the loop remains non-negotiable. While AI agents can handle background processing and analysis, the final decision must rest with a human.
Technology leaders must also rethink operations: With AI handling analysis, defenders should shift from passively waiting for alerts to actively hunting threats.
A cautious, staged roll-out of an agentic AI system is essential, starting with non-critical nodes, where mistakes would not disrupt the business, so teams can test, learn and refine. Leaders must also recognise that agentic AI is not plug and play; it requires deep integration to learn an organisation’s unique systems and processes.
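One way to picture such a staged roll-out is a phase gate that clears the agent to act only on nodes in stages it has already proven itself on. The phase names and node tags below are assumptions for illustration, not a prescribed deployment plan.

```python
# Hypothetical sketch of a staged roll-out gate: the agent may act
# autonomously only in phases already cleared for it. Phase names and
# node tags are illustrative assumptions.
ROLLOUT_PHASES = ["lab", "non_critical", "business", "critical"]
CURRENT_PHASE = "non_critical"   # advance only after test-and-learn cycles

NODE_PHASE = {
    "test-vm-07": "lab",
    "hr-portal": "non_critical",
    "payments-db-01": "critical",
}

def agent_may_act(node: str) -> bool:
    """Permit autonomous action only on nodes at or below the cleared phase."""
    cleared = ROLLOUT_PHASES.index(CURRENT_PHASE)
    # Unknown nodes default to "critical", so the agent fails safe.
    return ROLLOUT_PHASES.index(NODE_PHASE.get(node, "critical")) <= cleared

print(agent_may_act("hr-portal"))       # True: within the cleared stage
print(agent_may_act("payments-db-01"))  # False: critical nodes come last
```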
Ultimately, success hinges less on budgets than on talent. Organisations need discerning professionals with hands-on experience who can cut through the technology hype and assess true capabilities. And as ever, the fundamentals still matter: System hardening, patch management, knowing your digital footprint, and protecting crown-jewel data should come first.
In this emerging cyber battlefield, where autonomous systems make decisions, take actions and shape outcomes, organisations must elevate their preparedness for an era of agentic AI versus agentic AI. The urgent question is whether they are ready to make the necessary shifts to meet this new reality.