The accelerated adoption of artificial intelligence is profoundly transforming technological systems. From copilots integrated into business tools to automated workflows driven by autonomous agents, AI is no longer just another application service. It now acts as an operational player in the IT system, capable of executing actions, querying third-party systems, and even handling sensitive data. This is an emerging challenge for Zero Trust architectures.
This evolution gives rise to a new category of digital identities: non-human identities associated with AI agents. API keys, service accounts, OAuth tokens, or machine-to-machine identities become the vectors of interaction between models, applications, and infrastructures. But how can these new actors be integrated into existing trust models?
For several years, the Zero Trust paradigm has established itself as a reference for securing information systems. Its principle is simple: trust no identity by default and systematically verify each access based on context and risk level.
However, Zero Trust was originally designed to manage human identities and user devices. In an IT system enriched by AI, an increasing share of interactions now comes from automated agents. If these technical identities are neither inventoried, governed, nor monitored with the same rigor as human accounts, the Zero Trust architecture loses part of its effectiveness. Securing AI agents thus becomes an essential element of any modern security strategy.
AI Agents, New Identities of the IT System
AI architectures rely on multiple automated access mechanisms. To call a model, query a database, trigger an action in a business tool, or retrieve information via an API, an AI agent must have a technical identity. Identities can take the form of API keys for accessing services or databases, service accounts to execute automated workflows, OAuth tokens authorizing access to SaaS applications, or machine-to-machine identities in cloud-native architectures.
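As a concrete illustration of the machine-to-machine identities mentioned above, such an identity typically obtains access through an OAuth 2.0 client-credentials exchange. The minimal sketch below (the token endpoint, client name, and scope are hypothetical examples, not from the article) assembles such a request using only the Python standard library:

```python
from urllib.parse import urlencode

def build_client_credentials_request(token_url: str, client_id: str,
                                     client_secret: str, scope: str) -> dict:
    """Assemble the form body of an OAuth 2.0 client-credentials request,
    the typical way a machine-to-machine identity obtains an access token."""
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }
    return {"url": token_url, "body": urlencode(body)}

# Hypothetical example: an agent requesting a token scoped to a single API.
req = build_client_credentials_request(
    "https://auth.example.com/oauth2/token",   # hypothetical endpoint
    "ticket-triage-agent", "s3cr3t", "tickets:read",
)
```

Note that the credential itself (`client_secret`) is exactly the kind of technical secret that ends up exposed in code repositories when left ungoverned.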
These mechanisms are essential for the functioning of agents. However, they also introduce a paradigm shift: interactions with the IT system, once primarily triggered by human users, are increasingly orchestrated by autonomous processes. An agent capable of analyzing a support ticket, consulting a documentation base, and triggering an action acts as a system user, often at far greater scale and speed.
The Blind Spot of Zero Trust
The Zero Trust model is based on several pillars: strong authentication, access segmentation, continuous verification, and the principle of least privilege. These mechanisms are well mastered today for human users. Non-human identities, however, often remain less regulated. In many organizations, API keys and service accounts are created to meet a project need and then remain active without real supervision. They sometimes have extended privileges to simplify technical integrations.
This phenomenon is amplified by the rapid experimentation around AI. Data, innovation, or business teams deploy agents to automate tasks, connect models, or enrich applications, sometimes outside traditional governance processes. As a result, part of the IT system operates with automated access that escapes usual controls.
For an attacker, these identities represent an attractive target. Unlike user accounts, they are generally not protected by multi-factor authentication and may carry extensive permissions. An API key exposed in a code repository or a compromised token can thus open access to critical resources. In a Zero Trust architecture, these identities outside the control perimeter create an implicit trust zone, which is exactly what the model seeks to avoid.
Excessive Privileges and "Shadow Agents": A Real Danger
One of the major risks related to AI agents concerns privilege management. To simplify development, it is common to grant an agent excessive rights: full access to a database, or extended permissions across multiple APIs. This runs contrary to the principle of least privilege, under which an agent should have only the rights necessary for its function. In practice, however, this granularity is still rarely applied.
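The gap between an over-broad grant and a least-privilege grant can be sketched as a simple scope check (the scope names below are illustrative assumptions, not from the article):

```python
def is_allowed(granted_scopes: set, required_scope: str) -> bool:
    """Authorize an action only if the agent's identity carries the exact scope."""
    return required_scope in granted_scopes

# An over-privileged grant, common during development:
broad = {"db:read", "db:write", "db:admin", "crm:write"}
# A least-privilege grant for a hypothetical ticket-triage agent:
narrow = {"tickets:read", "kb:read"}

assert is_allowed(broad, "db:admin")        # excessive right is available
assert not is_allowed(narrow, "db:admin")   # blocked under least privilege
assert is_allowed(narrow, "tickets:read")   # only what the function needs
```

The point of the sketch is that least privilege is enforced at grant time: the narrow identity simply cannot request what its function does not require.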
Added to this is the emergence of "Shadow Agents." As with Shadow IT, agents can be created informally by business or technical teams to automate certain tasks, often relying on quickly generated technical identities that are never centrally registered.
Over time, these identities become difficult to trace. Some persist even though the initial project has disappeared, while others retain unjustified high privileges. In an environment where AI agents interact with multiple systems (databases, SaaS, or internal tools), these phantom identities constitute a significant risk vector.
Mapping and Governing AI Agents
To effectively secure an automated IT system, it is essential to make AI agents' identities visible and controllable. This involves inventorying all agents present, whether officially deployed by IT or born of unregulated local initiatives. Each technical identity (API key, service account, OAuth token, or machine-to-machine identity) must be inventoried and assigned to a responsible human owner to ensure traceability and accountability.
Beyond simply locating them, it is crucial to control which resources these agents can connect to. Centralized management of tools, applications, APIs, and databases allows the principle of least privilege to be applied to each interaction, access to be monitored continuously, and any abnormal activity to be detected immediately. Technical identities must be quickly revocable or updatable to limit the risks of abuse or malfunction.
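The inventory and revocation controls described above can be sketched as a small registry. This is a minimal illustration under assumed names (the identity kinds mirror those listed earlier; the owner field flags orphaned "shadow" identities, and a staleness check surfaces identities that outlived their project):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class TechnicalIdentity:
    name: str               # e.g. an API key name or service account
    kind: str               # "api_key", "service_account", "oauth_token", "m2m"
    owner: Optional[str]    # responsible human; None flags an accountability gap
    last_used: datetime

class IdentityRegistry:
    def __init__(self):
        self._identities = {}
        self._revoked = set()

    def register(self, ident: TechnicalIdentity) -> None:
        self._identities[ident.name] = ident

    def revoke(self, name: str) -> None:
        """Immediate revocation limits the blast radius of a leaked credential."""
        self._revoked.add(name)

    def is_active(self, name: str) -> bool:
        return name in self._identities and name not in self._revoked

    def orphans(self) -> list:
        """Identities with no accountable human owner ("shadow" candidates)."""
        return [i.name for i in self._identities.values() if i.owner is None]

    def stale(self, max_idle_days: int = 90) -> list:
        """Identities unused past the idle cutoff, likely leftovers of dead projects."""
        cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
        return [i.name for i in self._identities.values() if i.last_used < cutoff]

# Hypothetical usage: one governed identity, one forgotten orphan.
now = datetime.now(timezone.utc)
reg = IdentityRegistry()
reg.register(TechnicalIdentity("svc-reporting", "service_account",
                               "alice@example.com", now))
reg.register(TechnicalIdentity("key-legacy", "api_key",
                               None, now - timedelta(days=200)))
```

Running the checks would surface `key-legacy` as both orphaned and stale, making it an immediate candidate for revocation.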
Finally, the governance of agents must cover their entire lifecycle: creation, privilege assignment, activity monitoring, secure renewal of identifiers, and deletion when no longer needed. Treating AI agents as full-fledged identities ensures complete visibility, systematic control, and coherent integration into the overall security strategy.
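The lifecycle stages just listed can be expressed as a simple state machine. The states and transitions below are one plausible mapping of the article's stages (creation, privilege assignment, credential renewal, deletion), not a prescribed model:

```python
# Allowed lifecycle transitions for a technical identity.
TRANSITIONS = {
    "created": {"active"},             # privileges assigned, identity enabled
    "active": {"rotating", "retired"},
    "rotating": {"active"},            # credentials renewed, identity re-enabled
    "retired": set(),                  # terminal: deleted when no longer needed
}

def advance(state: str, new_state: str) -> str:
    """Move an identity through its lifecycle, rejecting illegal jumps."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = advance("created", "active")
state = advance(state, "rotating")   # periodic, secure renewal of identifiers
state = advance(state, "active")
state = advance(state, "retired")    # end of life: access removed
```

Encoding the lifecycle this way makes the terminal state explicit: a retired identity cannot quietly become active again, which is precisely how phantom identities persist.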
Towards a Truly Extended Zero Trust
The principle of Zero Trust (Never Trust, Always Verify) must also apply to automated entities. This implies several evolutions in security strategies: systematically authenticating machine-to-machine identities, applying the principle of least privilege to AI agents, monitoring their behavior to detect abnormal use, and integrating these identities into segmentation and access control policies. As AI architectures grow more complex, controlling these identities becomes a structuring element of the security posture.
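Monitoring agent behavior for abnormal use can start with something as simple as comparing an identity's request rate against its own recent baseline. The sketch below is one crude approach (a z-score over a sliding window; all parameters are illustrative assumptions):

```python
from collections import deque
from statistics import mean, stdev

class AgentActivityMonitor:
    """Flag an agent whose per-minute request count deviates sharply
    from its own recent baseline (a simple z-score check)."""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.counts = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, requests_per_minute: int) -> bool:
        """Record one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.counts) >= 5:  # wait for a minimal baseline
            mu, sigma = mean(self.counts), stdev(self.counts)
            if sigma > 0 and (requests_per_minute - mu) / sigma > self.threshold:
                anomalous = True
        self.counts.append(requests_per_minute)
        return anomalous

# Hypothetical traffic: a steady baseline, then a sudden spike
# of the kind a compromised or runaway agent might produce.
mon = AgentActivityMonitor()
flags = [mon.observe(c) for c in [10, 12, 11, 9, 10, 11, 12, 10]]
```

A spike such as `mon.observe(500)` would then be flagged, while the steady baseline is not; production systems would use richer signals (resources touched, time of day, scopes requested), but the principle of continuous verification is the same.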
Artificial intelligence no longer just analyzes or recommends: it acts directly in information systems. In an increasingly automated IT system, trust must no longer be verified only for users but for every agent capable of acting on their behalf.
*Aziz si Mohammed is Senior Manager, Solutions Engineering at [OKTA](https://www.okta.com/fr-fr/).*
The post [{ Expert Opinion } – AI Agents: The Missing Link of Zero Trust?](https://www.silicon.fr/cybersecurite-1371/tribune-expert-agents-ia-le-chainon-manquant-du-zero-trust-226404) appeared first on [Silicon.fr](https://www.silicon.fr).