

Seeing the Invisible: Visualizing and Protecting AI-Agent Activity with Exabeam & Google 

  • Oct 23, 2025
  • Steve Povolny
  • 5 minutes to read


    Artificial intelligence is no longer just an emerging technology in security operations and modern SIEMs. AI agents now enrich alerts, drive investigations, generate reports, and increasingly act as extensions of human analysts. But as with any powerful technology, new risk surfaces follow. 

    From the Exabeam perspective, every entity in your environment deserves protection. That now includes AI agents, whose rapidly evolving capabilities and access to sensitive data in many cases exceed those of human users and devices on the network. Naturally, AI agents must be made visible and protected from doing harm in the same way as more traditional entities. 

    Our latest integration with Google Cloud AI builds on that philosophy, combining Google Gemini Enterprise logs with Model Armor guardrails inside the Exabeam New-Scale Security Operations Platform. This enables security teams to detect misuse, visualize agent activity, and investigate AI-related risks with the same rigor applied to users, endpoints, and cloud workloads. 

    AI Agents as Identities: The New Risk Surface

    When AI agents take on tasks, they don’t operate in a vacuum. They are tied to human requesters, corporate data, and potentially sensitive workflows. As such, they act as identities with privileges, behaviors, and risks. 

    Consider the following scenarios: 

    • A developer pastes an API key into a Gemini prompt for troubleshooting. 
    • A finance employee uploads a spreadsheet containing PII to an agent for quick summarization. 
    • A malicious insider attempts to jailbreak an AI model using an adversarial prompt injection to bypass safeguards. 
    • An external partner uses an agent in unexpected ways, requesting restricted topics or tools. 

    Each of these scenarios represents the kind of policy violation or abnormal behavior that traditional SIEM platforms could not see. Until now, the conversational context and model-safety signals that matter most weren’t part of the detection strategy.

    Google Signals That Matter: Gemini Enterprise Agent and Model Armor

    Google provides two crucial sources of visibility: 

    • Google Gemini Enterprise Agent logs capture the operational heartbeat of agents — who requested them, when, from where, with which tools, and how usage changed over time. 
    • Google Cloud Model Armor telemetry adds the intelligence layer — monitoring prompts and responses, categorizing violations (e.g., sensitive data exposure, malicious injections), and enforcing actions such as redaction or block. 

    Together, they provide a holistic picture of both “what happened” and “was it safe”. By bringing both into Exabeam, we unlock entity modeling, detection, dashboards, and AI-powered investigations. 

    The Exabeam Lens: Bringing AI Agents into the Entity Model

    Exabeam New-Scale Analytics is designed to baseline identities and behaviors and flag anomalies. Extending it to AI agents was a natural step. 

    • Entity modeling: Agents are treated as first-class entities with baselines for normal prompt categories, token volumes, tools invoked, and response outcomes. 
    • Behavioral scoring: Deviations like sudden spikes in blocked prompts, unusual data categories, or first-time tool use are flagged. 
    • Contextual correlation: Model Armor violations are tied to human requesters, devices, geolocation, and privileges to separate benign experimentation from genuine risk. 

    In practice, this means you can tell the difference between an analyst innocently testing an edge case and a bad actor attempting to exfiltrate data through Gemini. In the world of a security operations platform, this is the difference between a false positive and an actual threat. 
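    As a rough illustration of the behavioral-scoring idea above, the sketch below baselines each agent's daily count of blocked prompts and flags sharp deviations from its own history. The agent names, counts, and z-score threshold are hypothetical assumptions for illustration; the actual New-Scale Analytics models are far richer than this.

```python
from statistics import mean, stdev

# Hypothetical sketch: baseline each agent's daily count of blocked
# prompts and flag days that deviate sharply from that agent's history.
# Agent names and counts are illustrative, not an Exabeam schema.

def anomaly_scores(daily_counts, threshold=3.0):
    """daily_counts: {agent_id: [count_day1, count_day2, ...]}.
    Returns agents whose most recent day is more than `threshold`
    standard deviations above their historical mean."""
    flagged = {}
    for agent, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history; needs a different test
        z = (latest - mu) / sigma
        if z > threshold:
            flagged[agent] = z
    return flagged

counts = {
    "finance-summarizer": [2, 1, 3, 2, 2, 14],  # sudden spike in blocks
    "marketing-writer":   [1, 2, 1, 2, 1, 2],   # steady usage
}
print(anomaly_scores(counts))  # flags only "finance-summarizer"
```

    The same pattern generalizes to the other baselined dimensions mentioned above, such as token volumes or tools invoked per agent.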

    Dashboards and Visualizations: Making Agent Activity Visible

    Security teams need more than raw logs; they need clear, actionable visualizations. Using the Exabeam Nova Visualization Agent, analysts can build dashboards and visualizations in natural language to illustrate many of the use cases described here. The New-Scale Platform already provides dashboards such as: 

    • AI/LLM Usage – GCP: Models the unique operations and users across Google Cloud AI products, such as Vertex 
    • Public AI/LLM Usage: Visualizations of most common LLM or AI-related domains visited, across users, IP addresses, devices and more 
    • AI/LLM Alerts for DLP Solutions: Illustrates the policy violations for AI DLP tools by user 

    As we look to expand into Agent Identity visualization, we are now exploring dashboards such as: 

    • Agent Hygiene Overview: Top agents by volume, token consumption, and violation rate.
    • Policy & Risk Hotspots: Heatmaps of violation types by business unit, showing where redactions or blocks cluster.
    • Sensitive Data Exposure: Trending views of PII or secrets detected in prompts, with ratios of blocked vs. allowed.
    • Anomaly Explorer: Unusual activity for a specific agent or peer group, such as marketing agents suddenly invoking code-execution tools.

    These views make invisible behaviors visible, giving leaders and analysts confidence in how AI is being used.

    Unlocking Agent Behavior Detections

    This capability unlocks, for the first time, the ability for Exabeam to create policy-driven detections and UEBA-style anomalies for AI agents: 

    • Guardrail Confirmed Violations
      • Prompts containing private API keys. 
      • PII detected in inputs or outputs. 
      • Repeated jailbreak/prompt injection attempts. 
    • Anomaly-Based Behaviors
      • Unusual spikes in blocked prompts per agent. 
      • First-time tool or connector use. 
      • After-hours high-risk usage from new locations. 
    • Process Integrity Alerts
      • Model version mismatches versus approved Gemini models. 
      • Drift in Model Armor policies applied to the agent. 

    Each detection is paired with remediation guidance surfaced in Exabeam Nova, via the Investigation Agent. 

    Here’s a sneak peek at the kind of detections that are possible when modeling agent behaviors. 
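    To make the "guardrail confirmed violations" category concrete, here is a simplified sketch of the kind of pattern checks a guardrail layer performs on prompt text. The patterns (an AWS-style access-key format, an SSN-like number, a common jailbreak phrase) are illustrative assumptions only, not Model Armor's or Exabeam's actual detection logic.

```python
import re

# Illustrative sketch only: simple pattern checks of the sort a guardrail
# layer applies to prompts and responses. The three patterns below are
# assumptions for demonstration, not real Model Armor or Exabeam rules.

PATTERNS = {
    "private_api_key":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS-style key ID
    "ssn_like_pii":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped number
    "jailbreak_phrase": re.compile(r"ignore (all )?previous instructions", re.I),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of guardrail categories the prompt violates."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

print(scan_prompt("Please debug: key=AKIAABCDEFGHIJKLMNOP"))
# → ['private_api_key']
```

    In a real deployment these verdicts arrive as Model Armor telemetry; the value Exabeam adds is correlating them with the requester, device, and behavioral baseline rather than re-scanning prompts.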

    Investigation with Exabeam Nova: From Noise to Narrative

    When something suspicious happens, Exabeam Nova does what human analysts need most: it turns detections into a coherent investigation. 

    For example: 

    • Detection: A finance agent shows a spike in “PII in prompts.” 
    • Exabeam Nova summary: Identifies the agent, users, devices, and prompt violations. Integrates agent activity along with other detections in an alert or case to create a comprehensive threat investigation summary.  
    • Root cause analysis: Suggests the spike coincides with a new workflow integration deployed today. 
    • Recommended actions: Quarantine the agent, notify the business owner, and tighten the Model Armor policy for PII exposure. 

    Exabeam Nova natural-language summaries save hours of manual pivoting, while its recommendations accelerate containment.  

    Measuring Success

    Security leaders want measurable outcomes. With Exabeam and Google Gemini Enterprise, you can track: 

    • Reduction in successful policy violations. 
    • Mean time to detect and investigate AI-agent misuse. 
    • Remediation time after Exabeam Nova recommendations. 
    • Secondary benefits such as token-usage optimization and prevention of redundant tool calls. 

    These KPIs help quantify the value of AI oversight in real business terms.  

    Getting Started 

    We’re working hard to provide default rules, dashboards, visualizations, and much more agent-visibility and detection capability. If you’re interested in learning how to replicate this capability in your environment via custom rules and collectors, please contact your Sales Engineer. 

    Conclusion: Agents Represent a New Insider Threat 

    AI agents are here to stay. They represent both unprecedented productivity and, left unchecked, unprecedented risk. By combining Google Gemini Enterprise and Model Armor telemetry with the New-Scale Security Operations Platform, we make this new insider threat observable, analyzable, and defensible, allowing you to see what other tools overlook, understand the context, and act with speed. 

    Steve Povolny

    Senior Director, Security Research & Competitive Intelligence | Exabeam | Steve Povolny is a seasoned security research professional with over 15 years of experience in managing security research teams. He has a proven track record of identifying vulnerabilities and implementing effective solutions to mitigate them.

