Every major technological breakthrough follows a well-known pattern: the promise is enticing, adoption accelerates, competitive pressure intensifies, and security always comes last.
This was already the case with the public cloud. A broad and loosely defined concept whose meaning varied from one organization to another, cloud adoption created both opportunities and concerns. Established companies were often caught off guard, either outpaced by more agile competitors or surprised by shadow IT initiatives operating outside centralized control. The result was a mix of fear, ambiguity, and an uncertain security posture.
Today, the same pattern is playing out with artificial intelligence. But this time, things are moving even faster, operating at a larger scale, and with much higher stakes. AI is not a single technology. It is evolving in waves, and a poor understanding of these waves is currently one of the greatest risks facing businesses.
The Three Waves of AI: Why They Matter for Security
The first wave of AI focused on predictive analytics: data lakes, large-scale pattern recognition, and machine learning operating mainly in the background. For many organizations, this adoption happened quietly, without real oversight at the board level. From a security perspective, these systems primarily represented a data protection issue: ensuring that sensitive information was neither disclosed nor misused.
The second wave, generative AI, changed everything. When tools capable of producing human-like text, code, and images entered the public domain, AI suddenly became a central topic of discussion. However, this visibility came at a cost. Generative AI was lumped into a single, overly broad concept of "AI," masking critical differences in risk profiles and security controls. Security teams reacted predictably by focusing on what was most visible.
But it is the third wave, agentic AI, that fundamentally changes the threat landscape.
Agentic AI: When Systems Act Rather Than Merely Assist
Agentic AI systems do not just analyze or generate content: they act. They connect directly to business systems, make decisions, and trigger workflows. Increasingly, they do so semi-autonomously, with limited human oversight. This is not a theoretical future.
Predictive AI and generative AI are fundamentally data exchange problems. Agentic AI is a problem of behavioral and system integrity. As soon as AI agents are allowed to interact with ERP platforms, financial systems, logistics workflows, or customer environments, the blast radius of a compromise expands significantly.
The parallels with the early evolutions of the Internet are striking. Static websites gave way to dynamic applications driven by databases. Suddenly, SQL injections became a dominant threat. Automation exposed new attack vectors. Each architectural evolution introduced risks that security teams were not yet equipped to handle. Agentic AI represents a similar inflection point.
The Blind Spot: Internal Control vs. External Reality
The problem is not a lack of investment, but rather misplaced confidence.
In other words, organizations think they are secure because they control what happens within their own infrastructures, while neglecting the expanding ecosystem of partners, platforms, and AI-driven supply chains beyond their borders.
This blind spot becomes particularly dangerous when agentic AI starts operating beyond organizational boundaries. Today's "internal" AI quickly becomes the interconnected automation of tomorrow's supply chains. The retail, logistics, and manufacturing sectors are likely to lead this transformation, as companies pursue sustainability, just-in-time production, and AI-driven operational optimization.
When agentic systems begin transferring work from one organization to another, the attack surface multiplies. Security failures will no longer be isolated incidents. They will cascade.
Defending Against Evolving AI-Driven Threats: A Change in Mindset
Defending against AI-driven threats does not require abandoning existing security principles, but demands their evolution. Many of the safeguards needed to secure agentic AI are derived from effective controls used to manage human users. The main difference lies in the speed, scale, and continuous nature of operations.
In practice, AI agents should be treated like human users from a security perspective, with controls based on the Zero Trust model. This means assigning identities, defining access according to the principle of least privilege, establishing behavioral baselines, and continuously monitoring for anomalies. If an agent suddenly starts interacting with systems outside its defined perimeter, that deviation must be as visible and actionable as suspicious human behavior.
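The controls described above can be illustrated in a few lines of code. This is a minimal, hypothetical sketch, not a reference to any specific product: the names (`AgentIdentity`, `allowed_scopes`, `anomalies`) are invented for illustration, and a real deployment would sit behind an identity provider and a policy engine.

```python
# Illustrative sketch: treating an AI agent as a first-class identity
# under Zero Trust. All names here are hypothetical, not from any
# specific vendor's API.

from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    allowed_scopes: set                  # least privilege: explicit allow-list
    observed_calls: list = field(default_factory=list)

    def authorize(self, system: str, action: str) -> bool:
        """Deny by default; record every request so a baseline can be built."""
        request = f"{system}:{action}"
        self.observed_calls.append(request)
        return request in self.allowed_scopes


def anomalies(agent: AgentIdentity) -> list:
    """Requests outside the agent's defined perimeter: these should be
    as visible and actionable as suspicious human behavior."""
    return [c for c in agent.observed_calls if c not in agent.allowed_scopes]


# Usage: an invoicing agent scoped to the ERP only
agent = AgentIdentity("invoice-bot", {"erp:read_invoice", "erp:post_payment"})
agent.authorize("erp", "read_invoice")     # permitted
agent.authorize("crm", "export_contacts")  # denied and flagged
print(anomalies(agent))                    # ['crm:export_contacts']
```

The design choice mirrors the article's point: access is denied by default, every call feeds the behavioral baseline, and anything outside the declared scope surfaces as an anomaly rather than disappearing into logs.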
Segmentation becomes essential, not as an abstract architectural ideal, but as a concrete way to limit the blast radius of a compromise. Without it, compromised agents can move laterally at machine speed.
And perhaps more importantly, organizations must stop viewing AI security as a mere add-on. If organizations are already struggling to cope with current threats, how could they manage emerging threats such as agentic AI and quantum computing?
From Reactive Cybersecurity to "Resilient by Design" Cybersecurity
The main lesson from both cloud adoption and AI evolution is this: reactive security does not scale.
The pace of innovation now consistently outstrips governance, legislation, and procurement cycles. Waiting for regulatory frameworks to mature or for incidents to force action is no longer viable. Resilience must be designed from the start, not added after the fact once disruption has occurred.
This involves shifting the focus from point solutions to architectural agility. Organizations must build security models capable of adapting as AI capabilities evolve, rather than breaking with each evolution.
AI will not slow down. Agentic systems will only become more capable, more connected, and more autonomous. Organizations that continue to view AI security as a marginal or future problem will repeat the mistakes of the cloud era.
This time, however, the consequences will spread faster and further.
The question is no longer whether AI will reshape the threat landscape. It already has. The real question is whether companies are ready to defend against it before the cascading effects reach them.
*Martyn Ditchburn is CTO in Residence at Zscaler
The post { Expert Opinion } – From the Cloud Era to Agentic AI: Why Cybersecurity Must Catch Up with Innovation appeared first on Silicon.fr.