Organizations should establish identification and lifecycle management for all managed AI agents interacting with their systems. This includes provisioning agents with unique client identifiers and de-provisioning them when no longer needed. Understanding the different classes of AI agents based on their interaction methods (API vs. GUI), autonomy, supervision, ownership, and segment is crucial for tailoring appropriate IAM strategies. Furthermore, assigning sponsors or custodians responsible for reviewing and recertifying agent access helps ensure accountability.
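As a minimal sketch, the registry entry for a managed agent might capture its unique client identifier, interaction mode, sponsor, and recertification deadline. The `AgentIdentity` schema, field names, and 90-day review policy below are illustrative assumptions, not a prescribed data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum
import uuid

class InteractionMode(Enum):
    API = "api"
    GUI = "gui"

@dataclass
class AgentIdentity:
    """Hypothetical registry entry for a managed AI agent."""
    client_id: str                  # unique identifier provisioned for the agent
    interaction_mode: InteractionMode
    sponsor: str                    # human custodian accountable for the agent
    owner_segment: str              # e.g. "workforce", "customer", "partner"
    created_at: datetime
    recertify_by: datetime          # next access-review deadline for the sponsor
    active: bool = True

def provision_agent(mode: InteractionMode, sponsor: str, segment: str) -> AgentIdentity:
    """Provision a new agent with a unique client identifier and a review deadline."""
    now = datetime.now(timezone.utc)
    return AgentIdentity(
        client_id=f"agent-{uuid.uuid4()}",
        interaction_mode=mode,
        sponsor=sponsor,
        owner_segment=segment,
        created_at=now,
        recertify_by=now + timedelta(days=90),  # quarterly recertification, as an example policy
    )

def deprovision_agent(agent: AgentIdentity) -> None:
    """Deactivate an agent that is no longer needed."""
    agent.active = False
```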
IAM systems should be capable of identifying when a session or connection is driven by an AI agent. This is particularly important for GUI-interacting agents, where differentiating them from human users and from malicious bots presents a challenge. Tagging sessions that originate from AI agents enables the enforcement of agent-appropriate IAM journeys and lets downstream services apply suitable controls, such as limiting access or presenting specialized interfaces.
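A session-classification hook might look like the following sketch. The detection signals here (a registered client ID, a self-declaring `X-Agent-Client` header) are hypothetical; a real deployment would combine registration data, device signals, and bot-detection telemetry:

```python
def classify_session(headers: dict, client_id: str | None, registry: dict) -> dict:
    """Tag a session as agent-driven so downstream services can adapt.

    Detection signals are illustrative: a client_id found in the agent
    registry, or a hypothetical self-identifying header an agent might send.
    """
    is_agent = False
    if client_id is not None and client_id in registry:  # known, registered agent
        is_agent = True
    elif headers.get("X-Agent-Client") is not None:      # hypothetical self-declaration header
        is_agent = True
    return {
        "agent_driven": is_agent,
        # downstream services branch on this tag, e.g. to select an
        # agent-specific IAM journey, rate limit, or specialized interface
        "iam_journey": "agent" if is_agent else "human",
    }
```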
When AI agents act on behalf of human users, it is crucial to apply delegation mechanisms rather than allowing the agent to impersonate the user. This aligns with the concept of authenticated delegation, where a user securely grants limited permissions to an agent. User credentials should never be shared directly with the agent; instead, issue delegated tokens with limited scopes. This ensures that the agent acts within defined boundaries and maintains a clear chain of accountability back to the human principal, while also providing visibility into, and monitoring of, the agent's interactions with systems.
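OAuth 2.0 Token Exchange (RFC 8693) is one standards-based way to mint such delegated tokens: the resulting token can carry an `act` (actor) claim identifying the agent while keeping the human user as the subject. The endpoint URL and scope value in this sketch are placeholders:

```python
import requests

def issue_delegated_token(user_token: str, agent_token: str) -> dict:
    """Exchange the user's token for a narrowly scoped delegated token (RFC 8693).

    The issued token identifies the agent as the actor while preserving the
    chain of accountability back to the human principal. The authorization
    server URL and scope below are placeholders.
    """
    response = requests.post(
        "https://auth.example.com/oauth2/token",   # placeholder authorization server
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_token,           # the human user's token
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "actor_token": agent_token,            # the agent's own credential
            "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "scope": "calendar:read",              # narrow, task-specific scope
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # contains the delegated access_token
```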
When AI agents act on behalf of users, the IAM system should prompt the human user for explicit authentication and authorization using out-of-band methods (such as push notifications, QR codes, or other suitable flows). This ensures that human users are never required to hand credentials to the agent and that sensitive operations can receive real-time human oversight and consent.
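OpenID Connect Client-Initiated Backchannel Authentication (CIBA) is one flow that fits this pattern: the agent asks the authorization server to challenge the user on their own device and polls for the outcome, so credentials never pass through the agent. The endpoint paths, scope, and binding message below are placeholder assumptions:

```python
import time
import requests

AS_BASE = "https://auth.example.com"  # placeholder authorization server

def request_user_approval(login_hint: str, client_auth: tuple[str, str]) -> dict:
    """Out-of-band user authentication via OpenID Connect CIBA (a sketch).

    The authorization server pushes a challenge to the user's device; the
    agent only polls for the result and never handles the user's credentials.
    """
    start = requests.post(
        f"{AS_BASE}/bc-authorize",
        data={
            "scope": "openid payments:initiate",
            "login_hint": login_hint,
            "binding_message": "Approve agent payment of $42",  # shown on the user's device
        },
        auth=client_auth,
        timeout=10,
    )
    start.raise_for_status()
    auth_req_id = start.json()["auth_req_id"]

    while True:  # poll until the user approves or denies on their device
        token = requests.post(
            f"{AS_BASE}/token",
            data={
                "grant_type": "urn:openid:params:grant-type:ciba",
                "auth_req_id": auth_req_id,
            },
            auth=client_auth,
            timeout=10,
        )
        if token.status_code == 200:
            return token.json()  # user approved; delegated token issued
        if token.json().get("error") != "authorization_pending":
            raise RuntimeError(f"user did not approve: {token.json()}")
        time.sleep(start.json().get("interval", 5))
```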
AI agents should be granted privileges based on the principle of least privilege, meaning they should only have access to the narrowest set of actions and resources required to perform their delegated tasks. Using short-lived tokens or time-bound scopes further reduces the risk posed by agent-specific vulnerabilities. For high-risk operations, explicit approval from the human principal should also be required, a pattern known as human-in-the-loop.
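At enforcement time, a resource server can check both token freshness and scope before honoring an agent's request. This sketch assumes the token is a JWT whose signature has already been verified, and uses the standard `exp` and `scope` claims:

```python
from datetime import datetime, timezone

def enforce_least_privilege(token_claims: dict, required_scope: str) -> None:
    """Reject a request unless the delegated token is fresh and narrowly scoped.

    `token_claims` is the decoded (and already signature-verified) JWT payload.
    """
    now = datetime.now(timezone.utc).timestamp()
    if token_claims["exp"] < now:
        # short-lived tokens force periodic re-issuance, limiting exposure
        raise PermissionError("token expired: a fresh delegated token must be issued")
    granted = set(token_claims.get("scope", "").split())
    if required_scope not in granted:
        raise PermissionError(f"scope '{required_scope}' was not delegated to this agent")
```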
For sensitive operations attempted by AI agents, organizations should explicitly verify human approval (a.k.a. human-in-the-loop). Depending on the use case and its sensitivity, consider challenges that are harder for AI to mimic (for example, identity proofing with a selfie match is more resistant to AI than an OTP sent over email). This provides a crucial checkpoint, ensuring that critical actions are reviewed and authorized by a human sponsor or end user before execution. Logging these verification checkpoints is essential for audit purposes. The authenticated delegation framework supports this by making the human role in agent workflows explicit, allowing decisions to be verified and errors corrected.
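A simple gate illustrating this checkpoint is sketched below. The `approval` record (its `verified`, `approver`, and `method` fields) is a hypothetical output of whatever step-up challenge the organization chooses, and each decision is logged for audit:

```python
import logging

logger = logging.getLogger("agent.hitl")

def execute_sensitive_operation(operation, approval: dict | None):
    """Run `operation` only after a verified human approval (human-in-the-loop).

    `approval` is a hypothetical record produced by a step-up challenge,
    e.g. a selfie-match identity proofing flow.
    """
    if approval is None or not approval.get("verified"):
        logger.warning("blocked %s: no verified human approval", operation.__name__)
        raise PermissionError("sensitive operation requires human approval")
    # log the verification checkpoint for audit purposes
    logger.info(
        "human-in-the-loop checkpoint: op=%s approver=%s method=%s",
        operation.__name__, approval["approver"], approval["method"],
    )
    return operation()
```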
Organizations should implement robust monitoring and auditing mechanisms to track AI agent activities. This includes logging agent actions, detecting anomalies in their behavior or access patterns, and establishing audit trails to review their interactions and ensure compliance. The ability to revoke an agent’s credentials if it is compromised or exhibits malicious or abnormal behavior is also critical for maintaining security.
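The sketch below ties these pieces together in a deliberately simplified form: every action is written to an audit log, a trivial allow-list stands in for a real anomaly-detection model, and revocation uses OAuth 2.0 Token Revocation (RFC 7009) against a placeholder endpoint:

```python
import logging
import requests

logger = logging.getLogger("agent.audit")

def audit_and_revoke_if_anomalous(agent_id: str, action: str,
                                  baseline_actions: set[str],
                                  token: str, client_auth: tuple[str, str]) -> None:
    """Log an agent action and revoke its credentials on anomalous behavior.

    The allow-list comparison is a stand-in for a behavioral anomaly model;
    the revocation endpoint URL is a placeholder.
    """
    logger.info("agent=%s action=%s", agent_id, action)  # audit trail entry
    if action not in baseline_actions:                   # deviation from expected behavior
        logger.error("anomaly detected for agent=%s: %s", agent_id, action)
        requests.post(
            "https://auth.example.com/oauth2/revoke",    # placeholder revocation endpoint (RFC 7009)
            data={"token": token, "token_type_hint": "access_token"},
            auth=client_auth,
            timeout=10,
        ).raise_for_status()
```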