
The Rise of AI Agents: A New Insider Threat You Can’t Ignore

  • Aug 05, 2025
  • Kevin Kirkwood
  • 4 minutes to read

    TEN18 by Exabeam

    The nature of insider threats is evolving. Security leaders understand that autonomous AI agents are increasingly present in enterprise environments, transforming the way organizations must think about identity, access, and risk. What may be less clear is how rapidly these agents are advancing, how seamlessly they integrate into workflows, and how subtly they can shift from productive contributors to potential liabilities.

    This is more than an evolution of insider threats. It’s the emergence of a new threat class: AI-powered insiders. These are not rogue employees or compromised accounts; they are synthetic identities with operational privileges, autonomy, and in many cases, little to no oversight. The security models in place today were not built to account for their presence. 

    TEN18 by Exabeam has assumed a proactive role in defining and testing this category. Through controlled evaluations of agents such as Devin and Claude, we’re observing actual behavior and identifying gaps in operational oversight that demand immediate attention. We believe it’s time to recognize and formalize this new class of insider threat and build the frameworks necessary to govern it.

    AI Agents Behave Like Employees

    In our testing, these agents were embedded in research, engineering, and security workflows to examine their real-world functionality. We observed that they do more than just assist. Emerging AI agents inherit the full digital identity of their users. For instance, Devin functioned under a developer’s credentials, accessed internal systems, interfaced with repositories, and demonstrated rapid contextual learning.

    The critical concern is that these agents are granted full access without corresponding layers of oversight or governance. Unlike human employees, they do not pause for approval, and they operate with an efficiency and persistence that can mask subtle boundary violations. This lack of friction creates new exposures, particularly when agents begin to operate across multiple systems or initiate actions based on inferred goals rather than explicit instructions. 
     
    Additionally, the growing complexity of these agents includes their ability to interact with one another. Cross-agent behavior introduces new detection, identity, and control challenges, particularly when their coordination isn’t human-supervised. These interactions could amplify unintended consequences, introduce blind spots in incident detection, and create an entirely new class of operational headaches. 

    From Helpful to Harmful: The Shift Happens Fast

    There’s no denying that these tools offer value. They generate foundational code, detect documentation inconsistencies, and analyze data sets at scale. However, we documented behaviors that present immediate risk: 

    • Seeking private and public repository access without being prompted
    • Traversing entire codebases to catalog internal assets
    • Suggesting policy workarounds that may violate security controls
    • Connecting to third-party and competitor domains without permission

    Organizations experimenting with AI agents must recognize these actions not as theoretical but as established behavioral patterns. The potential for misuse, whether accidental or intentional, is significant. Left unchecked, these agents could become unwitting participants in exfiltration, privilege escalation, or lateral movement, often without leaving behind conventional indicators of compromise. 
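
    To make these patterns concrete, here is a minimal sketch, in Python, of a behavior-based check over an audit event stream. The event schema (actor_type, action, target, prompted) and the domain allowlist are hypothetical assumptions for illustration, not a description of any vendor's detection logic.

```python
# Hypothetical audit event schema; field names here are illustrative
# assumptions, not a vendor format.
RISKY_UNPROMPTED_ACTIONS = {"repo_access_request", "codebase_enumeration"}
APPROVED_EXTERNAL_DOMAINS = {"api.github.com"}  # example allowlist only

def flag_agent_event(event: dict) -> str | None:
    """Return a finding label if an agent-driven event matches a risky pattern."""
    if event.get("actor_type") != "ai_agent":
        return None  # human and service events are handled elsewhere
    action = event.get("action")
    # Repo access or codebase traversal the user never explicitly asked for
    if action in RISKY_UNPROMPTED_ACTIONS and not event.get("prompted", False):
        return f"unprompted:{action}"
    # Outbound connection to a domain outside the approved allowlist
    if action == "external_connection":
        domain = event.get("target", "")
        if domain not in APPROVED_EXTERNAL_DOMAINS:
            return f"unapproved_domain:{domain}"
    return None

# Example usage against a few synthetic events
events = [
    {"actor_type": "ai_agent", "action": "codebase_enumeration", "prompted": False},
    {"actor_type": "ai_agent", "action": "external_connection", "target": "competitor.example"},
    {"actor_type": "human", "action": "external_connection", "target": "competitor.example"},
]
for e in events:
    if (finding := flag_agent_event(e)):
        print("ALERT:", finding, e)
```

    The point is the separation: agent-driven events carry their own actor type and are evaluated against their own rules, rather than being judged as if the owning user had performed them.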

    Why This Is a New Class of Insider Threat

    What we are witnessing is not a progression of insider threat tactics or behaviors. Instead, it’s the emergence of a new actor. AI-powered insiders operate within the perimeter, inherit trusted identities, and perform actions with operational legitimacy. They can navigate systems, access sensitive data, and execute code, yet they are not employees, contractors, or adversaries in the traditional sense. 

    The core problem isn’t malicious design; it’s the autonomous execution of tasks without built-in ethical boundaries or accountability. These agents can inadvertently create vulnerabilities, misroute data, or facilitate lateral movement simply by following incomplete instructions. 

    Moreover, when agents begin interacting with other AI systems, accessing external LLMs, or initiating tasks based on inferred goals, they transition from tools to independent actors. That changes the equation entirely. TEN18 by Exabeam is leading the charge to define this threat class because the industry needs clarity, language, and controls before these agents evolve beyond our ability to manage them. 

    What Security Leaders Should Be Doing Now

    Security leaders cannot afford to take an observational or passive stance. The introduction of autonomous AI agents into enterprise environments is already reshaping the threat landscape. If you’re deploying or testing AI agents in your environment, you need to treat them as distinct identities. That means: 

    • Monitoring AI agent activity independently of their associated users 
    • Applying behavior-based analytics to detect unusual access patterns or privilege escalations 
    • Creating policies to govern where and how agents can operate, who owns them, and how ownership responsibilities are enforced 
    • Preventing agent-to-agent communication unless explicitly required and auditable 
    • Logging all agent interactions and mapping them to specific tasks and user requests (a minimal sketch follows this list)
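
    As a starting point for the last two items, here is a minimal sketch of a structured audit record that ties every agent action to a distinct agent identity, a named owner, and the explicit task that authorized it. The schema and field names are assumptions for illustration; adapt them to your own logging pipeline.

```python
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentAuditRecord:
    """One entry per agent action; field names are illustrative assumptions."""
    agent_id: str                      # the agent's own identity, distinct from the user's
    owner: str                         # the human accountable for this agent
    task_id: str                       # the explicit request that authorized the action
    action: str                        # what the agent did
    target: str                        # system, repository, or domain acted upon
    peer_agent_id: str | None = None   # set only for explicitly permitted agent-to-agent calls
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def log_agent_action(record: AgentAuditRecord) -> None:
    # Emit structured JSON so downstream analytics can separate agent telemetry
    # from the telemetry of the user whose credentials the agent inherits.
    print(json.dumps(asdict(record)))

# Example: an agent push authorized by a tracked ticket and owned by a named engineer
log_agent_action(AgentAuditRecord(
    agent_id="devin-01",
    owner="alice@example.com",
    task_id="ENG-1234",
    action="git_push",
    target="internal/payments-service",
))
```

    A record arriving without a valid task_id or owner is itself a signal: it means an agent acted with no explicit mandate, which is exactly the unprompted behavior described earlier.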

    Legacy approaches to identity and access management are insufficient. These agents demand a new lens on accountability and risk. 

    What to Watch for Next

    This briefing is the first of a series designed to support your security team as you navigate the implications of AI-driven activity in the enterprise. Future entries will include: 

    • Profiles of agent behavior tied to specific operational anomalies 
    • Detection models for unauthorized task execution and lateral movement 
    • Guidance on separating user telemetry from agent-driven events 
    • Policy templates and control strategies for agent deployment 

    While some are just beginning to consider the implications of autonomous agents, Exabeam is already gathering empirical evidence and building practical detection strategies. We invite the broader community to join the conversation. 

    If you are deploying AI agents — or exploring how to safely scale them — your insights and questions are critical. Together, we can shape a new generation of insider threat defense. 

    Kevin Kirkwood

    Chief Information Security Officer, Exabeam

    Kevin Kirkwood is the Chief Information Security Officer at Exabeam, overseeing the global Security Operations Center (SOC), Application Security (AppSec), Governance, Risk, and Compliance (GRC), and Physical Security. With over 25 years of experience, Kevin has led security initiatives for organizations such as PepsiCo, Bank of America, and the Federal Reserve System. Kevin studied Marine Biology and Journalism at Texas A&M and, after six years in the US Navy, received a Bachelor of Science in Computer Information Systems. Kevin is passionate about giving back: he volunteers as the Vice Chairman of the Planning Commission for his county and serves as President of the local water board. In his free time, Kevin enjoys continuous learning, riding motorcycles, and dreams of creating a farm for both fun and profit.

