Seeing Machines: How OT Analytics and Machine Learning Are Rewriting Industrial Defence


By Dr Jonathan Goh, Head of Machine Learning, Ensign InfoSecurity

 

There’s something quietly profound unfolding in the most inconspicuous facilities—water treatment plants, power substations, and industrial floors. They don’t resemble cyber battlegrounds, but in today’s world, they are.

 

Operational Technology (OT) environments are fundamentally different from traditional IT. These systems weren’t designed with cybersecurity in mind—they were built for uptime, safety, and precision control of physical processes like pumping water, generating power, or regulating chemical flows. Many were designed decades ago, isolated from the internet, relying on proprietary protocols and air-gapped designs. But times have changed.

 

Modern connectivity, remote access needs, and the convergence of IT and OT have pried open these once-contained systems. In doing so, they’ve introduced a new and dangerous set of vulnerabilities. What was once protected by physical barriers and isolated architectures is now exposed to sophisticated cyber threats—from ransomware gangs to nation-state adversaries.

 

Watch the Expert Talk:
Dr Jonathan Goh breaks down the core ideas of this article in a short video, explaining how OT analytics and machine learning are transforming the landscape of industrial cybersecurity. Watch the video →

 

OT security today is plagued by a set of interrelated challenges that stem from the very nature of these environments. The first and most persistent issue is the disconnect between OT and IT systems—each operates in isolation, resulting in a fragmented landscape where data doesn’t flow across domains. This lack of visibility means that any attempt at unified threat detection is fundamentally impaired. On top of this, many of the existing OT monitoring tools rely on signature-based or threshold-driven rules. These systems are effective at flagging deviations but lack the intelligence to discern intent or context. As a result, they often produce a flood of alerts that overwhelm operators, many of which are false positives or low-priority issues, leading inevitably to alarm fatigue.

 

More critically, these alerts offer no real narrative. They don’t tell security teams who might be behind the activity, what tactics are being used, or whether the anomaly represents an actual threat. Without that context, teams are forced to treat every alert the same, resulting in wasted time and effort. Even when every alert is examined, there's no reliable way to tell if it stems from a routine system fault or a deliberate cyberattack. Most turn out to be harmless, but the few that matter risk being lost in the noise. And underpinning all of this is the lack of integration with modern threat models like the MITRE ATT&CK framework for ICS. Without a structured way to map observed behaviours to known adversary techniques, teams remain reactive—unable to connect the dots, prioritise threats, or see the bigger picture that would allow them to act decisively.

 

To secure Operational Technology environments in today’s threat landscape, we need a paradigm shift—from isolated alerts to contextualised detection. This requires a multi-layered approach that fuses telemetry, security signals, adversary behaviours, and threat intelligence into a unified detection fabric. Instead of treating each data source—sensor logs, network flows, command histories, and firewall events—as isolated signals, we must synthesise them into a coherent story of system behaviour and attacker intent.

 

This begins at the sensor level, where our detection systems observe the physical behaviours of critical infrastructure. Using a hybrid approach that blends design-based logic and machine learning, we pinpoint anomalies within these environments. Our distributed sensor anomaly detection framework is built from the ground up using data-driven dependency modelling: if a valve is open, the flow must increase; if a pump runs, the pressure should rise. These relationships become auto-generated rules that reflect expected system behaviour.
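
To make the idea concrete, the Python sketch below checks a handful of such dependency rules against a snapshot of plant readings. The tag names, thresholds, and hand-written rules are illustrative assumptions; in the framework described here, equivalent rules are generated automatically from historical plant data.

```python
# A minimal sketch of dependency-rule checking between actuators and sensors.
# The tag names, thresholds, and hand-written rules below are illustrative
# assumptions; in practice such rules would be generated automatically from
# historical plant data.

from dataclasses import dataclass
from typing import Callable, Dict, List

Reading = Dict[str, float]  # one snapshot of plant tag values


@dataclass
class DependencyRule:
    name: str
    condition: Callable[[Reading], bool]    # actuator state that triggers the rule
    expectation: Callable[[Reading], bool]  # physical effect we expect to observe


RULES: List[DependencyRule] = [
    DependencyRule(
        name="open valve implies flow",
        condition=lambda r: r["valve_101"] == 1.0,   # valve reported open
        expectation=lambda r: r["flow_101"] > 0.5,   # flow above a small floor
    ),
    DependencyRule(
        name="running pump implies pressure rise",
        condition=lambda r: r["pump_201"] == 1.0,       # pump reported running
        expectation=lambda r: r["pressure_201"] > 2.0,  # pressure above baseline
    ),
]


def check_rules(reading: Reading) -> List[str]:
    """Return the names of dependency rules violated by this snapshot."""
    return [
        rule.name
        for rule in RULES
        if rule.condition(reading) and not rule.expectation(reading)
    ]


if __name__ == "__main__":
    snapshot = {"valve_101": 1.0, "flow_101": 0.0, "pump_201": 0.0, "pressure_201": 1.2}
    print(check_rules(snapshot))  # -> ['open valve implies flow']
```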

 

Meanwhile, continuously varying signals—like water levels, temperatures, or vibration patterns—are modelled with machine learning against historical baselines, accounting for seasonal or shift-based variations. The result is a domain-aware expert system that understands what “normal” looks like for a specific plant and flags deviations with precision. This alone allows operators to localise incidents, rapidly triage sensor anomalies, and act. But anomalies, in isolation, don’t tell the whole story.
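
As a rough illustration of how a historical baseline can flag deviations in a continuously varying signal, the sketch below profiles one signal by hour of day and flags readings that drift too far from that profile. The hourly grouping, the z-score threshold, and the pandas layout are simplifying assumptions standing in for the richer seasonal and shift-aware models described above.

```python
# A rough sketch of baseline-driven monitoring for one continuously varying
# signal, assuming a pandas Series indexed by timestamp. The hour-of-day
# grouping and z-score threshold are simplifying assumptions standing in for
# richer seasonal and shift-aware models.

import numpy as np
import pandas as pd


def fit_hourly_baseline(history: pd.Series) -> pd.DataFrame:
    """Mean and standard deviation of the signal for each hour of the day."""
    grouped = history.groupby(history.index.hour)
    return pd.DataFrame({"mean": grouped.mean(), "std": grouped.std()})


def flag_anomalies(live: pd.Series, baseline: pd.DataFrame, z: float = 3.0) -> pd.Series:
    """Flag readings more than `z` standard deviations from their hourly baseline."""
    hours = live.index.hour
    mu = baseline["mean"].reindex(hours).to_numpy()
    sigma = np.maximum(baseline["std"].reindex(hours).to_numpy(), 1e-6)
    return pd.Series(np.abs(live.to_numpy() - mu) > z * sigma, index=live.index)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    idx = pd.date_range("2025-01-01", periods=24 * 30, freq="h")
    level = 5.0 + np.sin(idx.hour / 24 * 2 * np.pi) + rng.normal(0, 0.1, len(idx))
    baseline = fit_hourly_baseline(pd.Series(level, index=idx))
    live = pd.Series([5.7, 9.0], index=pd.to_datetime(["2025-02-01 03:00", "2025-02-01 04:00"]))
    print(flag_anomalies(live, baseline))  # only the second reading is flagged
```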

 

Sensor alerts explain what happened—but not who caused it, or why. Was it a valve failure? A misconfigured controller? Or an intentional command injection? Without attribution, the response is a guessing game. That’s where context becomes indispensable. By incorporating telemetry from IT-facing security systems—such as endpoint detection tools, firewalls, and IDS—we begin to add layers of meaning.

 

At the network layer, we analyse command traffic. Machine learning models trained on normal control flows detect when a command’s effect does not match historical patterns. For instance, if a “start pump” command is issued but pressure doesn’t rise—or drops unexpectedly—it could signal command tampering. This allows us to separate benign faults from deliberate manipulation.
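
A simplified version of that command-effect check might look like the sketch below. The command names, sensor tags, response windows, and expected ranges are placeholders chosen for illustration; in practice the expected responses would be learned from historical control traffic rather than hard-coded.

```python
# A simplified sketch of command-effect validation. The command names, sensor
# tags, response windows, and expected ranges are illustrative placeholders;
# real expected responses would be learned from historical command-and-response
# pairs rather than hard-coded.

from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class ExpectedEffect:
    tag: str                          # sensor expected to respond to the command
    window_s: int                     # seconds to wait before judging the effect
    delta_range: Tuple[float, float]  # learned (min, max) change over that window


EFFECTS: Dict[str, ExpectedEffect] = {
    "start_pump_201": ExpectedEffect("pressure_201", window_s=30, delta_range=(0.5, 3.0)),
    "close_valve_101": ExpectedEffect("flow_101", window_s=15, delta_range=(-5.0, -0.2)),
}


def effect_mismatch(command: str, before: float, after: float) -> bool:
    """True when the observed sensor change falls outside the learned response range."""
    low, high = EFFECTS[command].delta_range
    return not (low <= after - before <= high)


if __name__ == "__main__":
    # "start pump" was issued, but pressure barely moved: a candidate for tampering.
    print(effect_mismatch("start_pump_201", before=1.9, after=2.0))  # True
```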

 

If such an anomaly coincides with a blocked C2 domain or a known malicious IP, the evidence mounts. Cross-domain correlation is key: a firewall event followed by command tampering and then a pressure anomaly paints a picture. These aren’t isolated signals—they’re linked steps in an unfolding intrusion. The ability to trace how a compromise moves from network to logic to physical outcome is what transforms basic detection into operational clarity.
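
The correlation step can be sketched as a simple time-window grouping across event sources, as below. The event schema, the asset field, and the ten-minute window are assumptions made for illustration, not a description of any particular pipeline.

```python
# A rough sketch of time-window correlation across domains, assuming events
# have already been normalised to a common schema with a timestamp, source,
# and asset field. The ten-minute window and the schema itself are assumptions
# for illustration only.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Event:
    ts: datetime
    source: str   # "firewall", "control_network", or "sensor"
    asset: str    # the plant asset the event relates to
    detail: str


def correlate(events: List[Event], window: timedelta = timedelta(minutes=10)) -> List[List[Event]]:
    """Group events on the same asset that fall within a rolling time window,
    keeping only chains that span more than one domain."""
    chains: List[List[Event]] = []
    for event in sorted(events, key=lambda e: e.ts):
        for chain in chains:
            if chain[-1].asset == event.asset and event.ts - chain[-1].ts <= window:
                chain.append(event)
                break
        else:
            chains.append([event])
    return [c for c in chains if len({e.source for e in c}) > 1]


if __name__ == "__main__":
    t0 = datetime(2025, 1, 1, 3, 0)
    events = [
        Event(t0, "firewall", "plc_7", "blocked outbound connection to known C2 domain"),
        Event(t0 + timedelta(minutes=4), "control_network", "plc_7", "start_pump command with no operator action"),
        Event(t0 + timedelta(minutes=6), "sensor", "plc_7", "pressure anomaly on pressure_201"),
    ]
    for chain in correlate(events):
        print(" -> ".join(f"[{e.source}] {e.detail}" for e in chain))
```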

 

But to go further, we enrich this chain of events with threat intelligence. Our threat intelligence engine aggregates curated sources—from paid feeds, open-source intelligence, and the MITRE ATT&CK for ICS framework—to build a searchable, unified adversary database. When alerts surface with identifiable TTPs, we map them back to known threat actors, campaigns, or sightings. This elevates detection from technical alerting to strategic attribution. We can now answer: has this pattern been seen before? Is it associated with a known threat actor? Are similar campaigns underway in other regions or sectors?
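
Conceptually, the mapping from observed techniques to known adversary profiles can be sketched as an overlap score, as in the example below. The technique identifiers and profiles shown are placeholders; a real deployment would draw both from the curated threat intelligence sources described above.

```python
# A simplified sketch of mapping observed techniques onto curated adversary
# profiles by overlap. The ATT&CK for ICS technique IDs and the profiles are
# illustrative placeholders; a real deployment would pull both from a curated,
# unified threat-intelligence database.

from typing import Dict, List, Set, Tuple

# Hypothetical adversary profiles: name -> techniques observed in past campaigns.
PROFILES: Dict[str, Set[str]] = {
    "profile_alpha": {"T0886", "T0855", "T0831"},
    "profile_bravo": {"T0846", "T0855"},
}


def rank_profiles(observed: Set[str]) -> List[Tuple[str, float]]:
    """Rank adversary profiles by the fraction of their known techniques observed."""
    scores = [
        (name, round(len(observed & techniques) / len(techniques), 2))
        for name, techniques in PROFILES.items()
        if observed & techniques
    ]
    return sorted(scores, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # Techniques inferred from the correlated alert chain in the previous step.
    print(rank_profiles({"T0855", "T0831"}))  # [('profile_alpha', 0.67), ('profile_bravo', 0.5)]
```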

 


Through this process, alerting becomes insight. Anomalies become incidents. Incidents become narratives. We move from a sea of unranked alerts to threat-informed, prioritised detection that empowers action.

 

This is not AI for its own sake. It’s analytics with purpose—governed, explainable, and accountable. The risk in OT isn’t just false positives. It’s false decisions. That’s why our detection architectures must translate telemetry into comprehension. Dashboards must show not just what was detected, but what it means. Alerts must be tailored to roles. Responses must be mapped to operations. At every level, trust must be earned—with clarity, not complexity.

 

The next frontier isn’t more data. It’s better judgment. And with layered, threat-informed detection architectures, we are teaching our machines to make that judgment—carefully, collaboratively, and in service of a higher goal: trust in the systems that keep our societies running.

 


Dr Jonathan Goh

Head of Machine Learning, Ensign InfoSecurity

Dr Jonathan Goh studied computer science in the United Kingdom, earning both his undergraduate and doctoral degrees from the University of Surrey. He graduated with First Class Honours in Computer Information Technology and completed his PhD in 2011 with a thesis in medical image analysis. He began his career in academia as a Research Fellow at the University of Surrey before moving back to Singapore, where he held research leadership roles at A*STAR’s Institute for Infocomm Research and the Singapore University of Technology and Design, leading work in multimedia forensics and the security of cyber-physical systems.

He later moved into senior leadership positions at ST Engineering and Booz Allen Hamilton, where he drove AI and cybersecurity innovation. Since 2022, he has served as Senior Director of Machine Learning Operations at Ensign InfoSecurity, where he leads the development of secure, end-to-end AI platforms for cybersecurity, with a strong focus on Operational Technology (OT).

His work integrates machine learning with threat intelligence to enable knowledge-driven threat detection and resilient AI deployment within critical infrastructure environments. Bridging deep technical expertise with real-world impact, his contributions span the full lifecycle from research to operational solutions in cyber defence.
