
By Kenny Chiu, Head of Solutions Engineering, Ensign InfoSecurity
In the not-so-distant past, the idea of a machine not only recommending but deciding what information reaches your screen was the stuff of speculative fiction. That's changing today. We are now seeing a new class of AI systems, agentic AI, that is rapidly outgrowing the confines of traditional AI. It doesn't just filter information; it acts, reasons, adapts, and reflects. And this changes everything.
Agentic AI systems are structured around autonomy, memory, and goal orientation. They're designed not simply to respond to queries but to pursue outcomes. This has profound implications for the information economy: machines are increasingly capable of steering discourse, curating narratives, and discerning signal from noise. The shift from AI as a passive tool to an active partner is already under way.
These systems operate through modular architectures that mirror human reasoning in fragments: perception modules gather and validate data, planning and deliberation agents set goals and decompose tasks, episodic and vector-based memory stores retain context, and actuators produce content, trigger alerts, or adjust policy decisions. Leading approaches such as ReAct, Reflexion loops, and Tree-of-Thought prompting are now the backbone of deployed systems in digital diplomacy, autonomous cyber defence, and next-generation journalism.
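To make the pattern concrete, here is a minimal sketch in Python of the perceive/plan/act loop described above; the class and method names are illustrative assumptions rather than the interface of any particular framework.

# A minimal sketch of the modular pattern described above; names are
# illustrative, not drawn from any specific agent framework.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    episodes: list = field(default_factory=list)  # episodic store of past steps

    def remember(self, step):
        self.episodes.append(step)

class InformationAgent:
    def __init__(self, memory):
        self.memory = memory

    def perceive(self, raw_input):
        # Perception module: gather and lightly validate incoming data.
        return raw_input.strip()

    def plan(self, observation):
        # Planning module: decompose a goal into sub-tasks (trivially here).
        return [f"assess credibility of: {observation}",
                f"summarise: {observation}"]

    def act(self, task):
        # Actuator: produce content, trigger an alert, or adjust a policy.
        result = f"completed '{task}'"
        self.memory.remember(result)
        return result

    def run(self, raw_input):
        observation = self.perceive(raw_input)
        return [self.act(task) for task in self.plan(observation)]

agent = InformationAgent(AgentMemory())
print(agent.run("  unverified report about a supply-chain breach  "))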
The question that haunts technologists and policymakers alike is not just what these systems can do, but what they are permitted to do. In an information domain increasingly shaped by agentic AI, traditional norms of visibility, authorship, and accountability erode. The very notion of truth becomes layered. Each layer reflects an intentional system decision, made by an agent that might revise its own rationale tomorrow.
This is where the stakes escalate. Elite developers and strategic AI planners are already embedding continuous evaluation loops within their systems: feedback cycles in which both machine and human reviewers test and question the decisions made. These loops are powered by “eval agents” that monitor how well a system aligns with its own principles and with externally imposed norms.
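As an illustration, a continuous evaluation loop can be as simple as a set of machine-checkable principles plus a flag for human review. The principles below are invented for the sketch and stand in for a real governance policy.

# A minimal sketch of an eval agent; the principles and flagging rule are
# illustrative assumptions, not a production policy.
PRINCIPLES = {
    "cite_sources": lambda output: "source:" in output,
    "no_unreviewed_claims": lambda output: "unverified" not in output,
}

def eval_agent(output):
    """Score an agent's output against its declared principles."""
    violations = [name for name, check in PRINCIPLES.items() if not check(output)]
    return {"output": output,
            "violations": violations,
            "needs_human_review": bool(violations)}

for candidate in ["Breach confirmed. source: vendor advisory",
                  "unverified rumour of data leak"]:
    print(eval_agent(candidate))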
Trust, then, is no longer about provenance alone. It's about traceability. Can we understand why a given agent made a decision, what tools it used, and what assumptions it made? Explainability now sits at the core of agentic AI governance. Models increasingly generate their own explanations through interpretable interfaces, using tools or proprietary justification chains that simulate their decision paths.
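One way to ground traceability is to record every decision together with the tools it used, the assumptions it made, and its rationale. The sketch below shows one hypothetical shape such a record could take; the field names are assumptions, not a standard.

# A minimal sketch of a traceable decision record; field names are
# illustrative assumptions about what traceability could capture.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    agent_id: str
    decision: str
    tools_used: list
    assumptions: list
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

trace = DecisionTrace(
    agent_id="curation-agent-07",
    decision="demote article pending verification",
    tools_used=["source_reputation_lookup"],
    assumptions=["reputation score below threshold implies low reliability"],
    rationale="two of three cited sources could not be verified",
)
print(trace)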
Yet even explainability is insufficient without oversight. Modern agentic architectures now support ON/OFF toggles, policy sandboxes, and override APIs—frameworks like CrewAI make it technically feasible to interdict where required. In sensitive information environments, especially where geopolitical or economic narratives are at stake, this capability becomes non-negotiable.
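The interdiction pattern itself is framework-agnostic. The sketch below is a hypothetical override gate, not CrewAI's API: a global toggle plus a list of actions that must be held for human sign-off.

# A framework-agnostic sketch of a human override gate; it does not mirror
# any real library's interface.
class OverrideController:
    def __init__(self):
        self.enabled = True           # global ON/OFF toggle
        self.blocked_actions = set()  # actions requiring human sign-off

    def permit(self, action):
        return self.enabled and action not in self.blocked_actions

def execute(action, controller):
    if not controller.permit(action):
        return f"'{action}' held for human review"
    return f"'{action}' executed autonomously"

controller = OverrideController()
controller.blocked_actions.add("publish_externally")

print(execute("draft_internal_summary", controller))
print(execute("publish_externally", controller))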
Experts also insist on robust memory systems to ensure accountability over time. Episodic memory enables agents to learn from past tasks; vector memory helps agents detect when prior knowledge should influence a present decision. Together, they enable something closer to continuity of intent—an essential function if systems are to maintain consistent narratives over time rather than drift unpredictably.
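A rough illustration of how the two memories work together: an ordered episodic log, plus a similarity lookup that surfaces relevant past episodes when a new task arrives. The word-overlap score below is a stand-in for a real embedding model.

# A toy sketch pairing episodic and vector-style memory; word overlap
# stands in for real vector embeddings.
class Memory:
    def __init__(self):
        self.episodes = []  # ordered record of past tasks and outcomes

    def store(self, text):
        self.episodes.append(text)

    def recall_similar(self, query, threshold=0.15):
        # Vector-memory stand-in: rank past episodes by word overlap.
        q = set(query.lower().split())
        scored = []
        for episode in self.episodes:
            e = set(episode.lower().split())
            score = len(q & e) / max(len(q | e), 1)
            if score >= threshold:
                scored.append((score, episode))
        return [ep for _, ep in sorted(scored, reverse=True)]

memory = Memory()
memory.store("flagged phishing campaign targeting finance staff")
memory.store("summarised quarterly threat landscape report")
print(memory.recall_similar("new phishing emails aimed at finance team"))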
One major evolution from older systems is the rise of multi-agent orchestration. In large-scale environments, no single agent can meaningfully control the flow of high-volume, high-risk data. Instead, agents collaborate: one gathers intelligence, another refines it, another evaluates ethical risk, and another handles external communication. These coalitions of reasoning—often arranged in role-based chains—create emergent behaviour that is more than the sum of its parts.
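In code, such a role-based chain can be as simple as passing each agent's output to the next. The roles below mirror the division of labour described above; the logic is deliberately simplified.

# A minimal sketch of a role-based agent chain; each function plays one role
# and the orchestration loop wires them together.
def gather(topic):
    return f"raw intelligence on {topic}"

def refine(material):
    return f"refined: {material}"

def ethics_review(material):
    return {"content": material, "risk": "low"}

def communicate(reviewed):
    if reviewed["risk"] != "low":
        return "escalated to human editor"
    return f"published briefing -> {reviewed['content']}"

# Orchestration: each agent's output becomes the next agent's input.
pipeline = [gather, refine, ethics_review, communicate]
result = "emerging disinformation narrative"
for agent in pipeline:
    result = agent(result)
print(result)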
Still, all of this hinges on what goes in. The integrity of agentic outputs is only as strong as the inputs. Elite developers increasingly rely on rigorous data provenance strategies to guard against poisoned data. Without such safeguards, even the most transparent agentic process can end up propagating a falsehood with algorithmic conviction.
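A basic provenance check illustrates the idea: hash each payload at ingestion and verify it before an agent acts on it. Real pipelines layer signatures and source attestation on top; this is only a sketch.

# A minimal sketch of a data-provenance check; hashing the payload lets a
# downstream agent detect tampering before acting on it.
import hashlib

def provenance_record(source, payload):
    return {"source": source,
            "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
            "payload": payload}

def verify(record):
    expected = hashlib.sha256(record["payload"].encode("utf-8")).hexdigest()
    return record["sha256"] == expected

record = provenance_record("vetted-feed-01", "advisory: patch released for reported flaw")
record["payload"] = "advisory: no action required"  # simulated poisoning
print(verify(record))  # False: the content no longer matches its provenance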
Governments and tech coalitions are moving fast. Singapore's AI Verify, the UK's AI Safety Institute, and ISO/IEC 42001 are converging on the idea that agentic systems must be evaluated not just at deployment but continuously, in real time and under real-world pressure. This is not about checkbox compliance. It's about establishing whether an agent can be trusted to act in our name.
Ultimately, the arrival of agentic AI in the information space forces a new question: Who gets to decide what’s real, what’s relevant, and what’s right? The algorithm, increasingly, answers with action—not analysis. But that answer must itself be explainable.
The path forward will demand more than regulation or new tooling. It will require a new philosophy of design—where transparency is not bolted on, but baked in; where agency is monitored, not presumed; and where every autonomous decision still echoes a trace of human intention.

Kenny Chiu, Head of Solutions Engineering, Ensign InfoSecurity
Since joining in 2020, he has overseen the transformation of in-house R&D into bespoke cybersecurity solutions, leading multidisciplinary teams spanning Big Data Engineering and Web App Development, and ensuring that technical innovations are translated into practical outcomes.
Prior to joining Ensign, Kenny served as Director of New Products and Solutions at StarHub, where he led data science and engineering teams to develop geomobility and behavioural analytics solutions. He also held roles in Singapore’s Ministry of Defence and began his career as a software engineer at CSIT.
Kenny holds a Master’s degree in Electrical and Computer Engineering from Cornell University and brings deep expertise in telco big data analytics and engineering, and strategic organisational planning.