
Claude Mythos, Project Glasswing, and the Machine-Speed Security Race 

  • Apr 16, 2026
  • Exabeam Editor
  • 6 minutes to read

    Anthropic’s latest Claude news shows how AI is compressing the time from vulnerability discovery to credentialed lateral movement, and why security teams need behavior-based detection across humans and AI agents.  

    Anthropic’s Project Glasswing, announced on April 7, 2026, gives selected partners early access to Claude Mythos Preview for defensive cybersecurity work. Anthropic says the model has already identified thousands of zero-day vulnerabilities across critical infrastructure. Much of the discussion in the industry has focused on what these capabilities could mean for attackers, but that reaction is only telling half of the story.  

    The other half is just as important. The same AI that can help attackers discover weaknesses faster can also help defenders validate exposure sooner, investigate suspicious activity with more context, and reduce risk before it spreads. This is not simply a story about offensive acceleration. It’s a story about both sides gaining access to a new pace of cyber operations.  

    That is what makes this so significant. The old attack chain still applies: find a weakness, gain access, abuse trust, and expand through legitimate pathways. What changes now is the pace. AI can compress that sequence from months into days, hours, or even minutes. That means defenders have to do the same on their side. They have to effectively leverage AI to combat AI-powered attacks.  

    Why Claude Mythos Matters for AI Cybersecurity  

    One of the strongest points in Exabeam Chief AI and Product Officer Steve Wilson's recent LinkedIn article is that none of this should feel sudden to people who have been paying attention. AI-enabled attackers are not a new idea. What Mythos changes is the clarity of the signal and the urgency of the response. It makes visible just how quickly offensive capability is advancing and how little room defenders have left for slow, sequential workflows.

    As AI compresses the time between discovery and exploitation, security teams need an operating model that assumes access may already be established and focuses on how to detect, understand, and contain risk as it moves at machine speed through the environment. That is especially important as AI agents, assistants, and autonomous workflows become active participants inside enterprise systems with real permissions, real identities, and real operational impact.  

    Why This Starts to Look Like Insider Risk  

    The most meaningful part of many attacks has never been the initial entry point. It is what happens next.  

    Once inside, attackers do not need to look noisy. They can operate through valid credentials, service identities, approved tools, APIs, and familiar systems. That was already difficult to detect. AI raises the stakes by allowing those same tactics to move, iterate, and scale faster. What once looked like a slow-moving campaign can now become machine-speed trusted expansion.  

    That’s why what’s happening now increasingly resembles an insider-risk problem as much as a classic perimeter-defense problem. The signal is often not the existence of access, but the way that access is being used. The clearest indicators come from changes in behavior: a pattern that shifts, activity that expands, access that appears in a new place, or a sequence that no longer fits what should be normal.

    This becomes even more important as AI agents take on more work inside the enterprise. These systems can authenticate, retrieve data, call APIs, trigger actions, and operate through delegated permissions, service identities, and automated workflows. That makes them productive assets, but also privileged actors whose behavior needs to be understood in the same way security teams already think about users, service accounts, and trusted processes.  

    Behavior is now the most practical lens for defenders. As environments grow more distributed, more automated, and more agentic, security teams need ways to distinguish routine activity from the subtle signs of misuse, compromise, or misalignment, and to do so faster than AI-equipped attackers can move.

    What Security Teams Should Do Now  

    The right response requires a practical shift in emphasis:  

    1. Assume You Have Less Time Between Exposure and Action  

    Prevention still matters, but it cannot carry the full burden in an environment where discovery and exploitation continue to accelerate. In more than 62% of OT environments, security patches take longer than 90 days to apply. Security programs need stronger readiness for the period immediately after initial access, when attackers begin testing privileges, moving between systems, and trying to expand their reach.

    2. Extend Behavior Detection Across Humans and Agents

     A single login, API call, prompt, or file access may appear normal in isolation. What matters is the sequence and the deviation over time: first-time access, unusual tool use, changed request patterns, unexpected outbound movement, privilege shifts, or an agent behaving outside its normal bounds. That is why behavior baselining matters more than ever. It gives defenders a way to separate normal automation from risky activity, even when both use legitimate pathways.  
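    As a minimal sketch of that baselining idea (class names, thresholds, and fields here are illustrative assumptions, not Exabeam's implementation), the snippet below flags first-time resource access and statistically unusual activity volumes for any actor, human or agent:

    ```python
    from collections import defaultdict
    from statistics import mean, pstdev

    class BehaviorBaseline:
        """Toy per-actor baseline: known resources plus hourly event volume."""

        def __init__(self, min_history=5, z_threshold=3.0):
            self.seen = defaultdict(set)      # actor -> resources accessed before
            self.volumes = defaultdict(list)  # actor -> past hourly event counts
            self.min_history = min_history
            self.z_threshold = z_threshold

        def score_hour(self, actor, resources, event_count):
            """Return anomaly flags for one hour of an actor's activity."""
            flags = []
            # Deviation signal 1: access to a never-before-seen resource.
            for r in resources:
                if r not in self.seen[actor]:
                    flags.append(f"first-time access: {r}")
            # Deviation signal 2: event volume far outside the actor's baseline.
            history = self.volumes[actor]
            if len(history) >= self.min_history:
                mu, sigma = mean(history), pstdev(history)
                if sigma > 0 and (event_count - mu) / sigma > self.z_threshold:
                    flags.append(f"volume spike: {event_count} vs baseline ~{mu:.0f}")
            # Update the baseline only after scoring the current hour.
            self.seen[actor].update(resources)
            history.append(event_count)
            return flags
    ```

    A real deployment would baseline far richer features (sequences, peer groups, time of day), but even this toy version shows why the same logic applies unchanged to a service account or an AI agent: both are just actors with a history.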

    3. Treat AI Agents, Assistants, and Autonomous Workflows as Actors Inside the Environment  

    These productivity tools are semi or fully autonomous systems that can authenticate, retrieve data, invoke APIs, trigger actions, and operate with meaningful permissions. That means the same basic questions apply: Who created them? What can they access? What does normal behavior look like? What changed?   
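    Those four questions map naturally onto an inventory record per agent. The sketch below is a hypothetical illustration (all field names and identifiers are invented for the example), showing how "who created it, what can it access, what is normal, what changed" can become concrete, queryable data:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AgentRecord:
        """Minimal inventory entry for a non-human actor (illustrative fields)."""
        agent_id: str
        owner: str                                           # who created / owns it
        permissions: set = field(default_factory=set)        # what it can access
        baseline_actions: set = field(default_factory=set)   # what "normal" looks like

        def drift(self, observed_actions):
            """Return observed actions outside the recorded baseline: what changed."""
            return set(observed_actions) - self.baseline_actions

    # Hypothetical agent: permitted to call the payroll API, but never has before.
    agent = AgentRecord(
        "copilot-hr-01",
        owner="hr-ops",
        permissions={"read:hr-db", "call:payroll-api"},
        baseline_actions={"read:hr-db"},
    )
    ```

    Note that `drift` can fire even for actions the agent is technically permitted to take; that is the point of treating permitted-but-unprecedented behavior as a signal.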

    4. Adopt AI-Powered Investigation That Keeps Pace

    Detection alone is not enough in a machine-speed environment. Analysts need faster movement from anomaly to context to action. AI should be leveraged to quickly correlate signals, assemble the sequence of activity, and clarify what changed, what access is involved, and where risk is spreading.   

    5. Rehearse for Credentialed Lateral Movement, Not Just Initial Compromise

    Many of the highest-impact failures do not happen at the moment of intrusion. They happen later, when an attacker, compromised process, or autonomous agent uses valid credentials, service accounts, tokens, or inherited permissions to expand access across systems. The risk is no longer just lateral movement; it is trusted expansion that unfolds faster and more stealthily than human investigation cycles were designed to contain.

    Why Behavior Intelligence Is Crucial

    Legacy detection approaches were not built for this pace or this level of ambiguity. Static rules and short time-window correlations still have value, but they are often not enough to detect subtle misuse of trusted access, especially when that misuse unfolds across users, systems, applications, and AI-driven workflows.   

    That’s why security operations increasingly need continuous observation, stronger behavioral baselining, and a faster path from suspicious activity to a coherent, context-rich understanding of risk that supports quicker decision making.

    However, the challenge is not only speed. It’s stealth at scale. As AI becomes better at automating nation-state-style attacks, low-and-slow misuse of valid identities, service accounts, and routine workflows becomes increasingly difficult to distinguish from normal operations without continuous behavioral baselining.   

    One of the big advantages of user and entity behavior analytics (UEBA) is that it detects low-and-slow threats lurking in your environment. Now organizations also need Agent Behavior Analytics (ABA), which applies behavioral baselining, anomaly detection, and contextual analysis to digital workers operating inside the enterprise. Just as UEBA transformed how organizations manage human insider risk, ABA provides visibility, governance, and control over the behavior of non-human actors.
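    A toy illustration of why accumulation matters for low-and-slow detection (the half-life, point values, and threshold are invented for the example, not a UEBA product's actual scoring): each small anomaly stays below the alerting threshold on its own, but a decayed running risk score surfaces the repeated pattern over time:

    ```python
    import math

    class RiskAccumulator:
        """Decayed running risk score per actor (illustrative UEBA-style scoring)."""

        def __init__(self, half_life_hours=72.0, alert_threshold=50.0):
            self.decay = math.log(2) / half_life_hours
            self.alert_threshold = alert_threshold
            self.scores = {}  # actor -> (current score, hour of last event)

        def add(self, actor, points, hour):
            """Record an anomaly worth `points`; return True if the actor alerts."""
            score, last = self.scores.get(actor, (0.0, hour))
            score *= math.exp(-self.decay * (hour - last))  # old risk fades
            score += points                                  # new risk accumulates
            self.scores[actor] = (score, hour)
            return score >= self.alert_threshold
    ```

    With these example numbers, a 10-point anomaly every 12 hours never alerts on any single event, yet the decayed sum crosses the 50-point threshold by the seventh occurrence, which is exactly the behavior needed to catch slow misuse of valid identities without drowning analysts in single-event alerts.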

    With the rise of AI like Mythos, tactics that previously required nation-state-level effort can be replicated by average bad actors, and AI will get steadily better at hiding behind existing perimeter controls to inflict maximum damage. That’s why security teams need to watch for immediate abnormalities and look beyond the initial compromise. Behavior intelligence is essential to prepare for hidden persistence, privilege drift, and subtle deviations in trusted activity that would otherwise remain buried without costly manual threat hunting.

    Why This Matters for the Next Phase of Security Operations  

    The strongest security organizations have always adapted as attacker methods evolved.  

    As AI raises the pace of discovery and execution, security operations have a chance to become more continuous, more behavior-aware, and more precise in how they surface risk. That is a healthy evolution. It encourages closer coordination across security, identity, cloud, and response teams, and it sharpens the organization’s ability to see meaningful change early.  

    In that environment, the advantage goes to teams that can interpret activity in context and act with clarity. 

    Learn More About Exabeam

    Learn about the Exabeam platform and expand your knowledge of information security with our collection of white papers, podcasts, webinars, and more.
