Stranger Than Science Fiction: The Future of AI-Augmented Attacks

  • Jan 17, 2024
  • Steve Wilson
  • 3 minutes to read

    Back in the early days of 2020, the FBI came out with a disturbing warning. It cautioned that deepfake technology — the ability to generate highly realistic images, video, or audio — was already able to dupe certain biometric tests.

    Fast-forward to the present, and the ever-expanding capabilities of artificial intelligence (AI) can pose profound challenges to the security operations center (SOC). Threat actors are already developing new tactics for overcoming and evading security measures, and more methods are likely to emerge.

    A new generation of threats

    Beyond the ability to create deepfakes, generative AI can automatically produce text and write code, which can be used to undermine an organization’s defenses — sometimes in surprising ways. Consider these five scenarios.

    1. Phishing on a whole new level

    Cybercriminals have always known that human error is often an organization’s weakest link. Historically, phishing has been the most common type of social engineering — and in the age of generative AI, threat actors can automatically create more authentic-sounding messages that emulate a person’s specific writing style, then deploy them more efficiently and at far greater scale.

    2. Malware with a thousand faces

    Analysts are already familiar with the problem of polymorphic malware — self-propagating, mutating code programmed to alter its shape and signature in an attempt to evade detection. In an age where AI is more responsive, reflexive, and adaptable than ever, it’s feasible — perhaps even inevitable — that these evasion and obfuscation techniques will become far more flexible and advanced.
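
    To see why this worries analysts, consider a minimal sketch (the payload below is a hypothetical, inert stand-in for a malicious binary): a single round of junk-byte padding is enough to give the same program a brand-new hash every time.

```python
import hashlib
import os

# Hypothetical, inert stand-in for a malicious binary's bytes.
payload = b"same-behavior-every-time"

def mutate(blob: bytes) -> bytes:
    """Mimic a polymorphic engine: append random junk bytes that change
    the file's appearance without changing its behavior."""
    return blob + os.urandom(8)

# Each "generation" of the same malware hashes to a new value, so a
# defense matching on known hashes never sees a repeat.
for generation in range(3):
    variant = mutate(payload)
    digest = hashlib.sha256(variant).hexdigest()
    print(f"gen {generation}: sha256 = {digest[:16]}...")
```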

    3. The exploitation of orchestration

    Many organizations have automated and orchestrated critical systems “as code,” meaning behind-the-scenes processes have been configured and coordinated to perform certain functions immediately when they receive the proper prompts, as the sketch below illustrates. As AI develops, criminals will likely invent new ways to infiltrate and corrupt such systems. Of course, the SOC should be aware that generative AI is prone to making mistakes on its own, and as companies adopt new tools, new vulnerabilities may appear without any meddling from threat actors.
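
    As a rough illustration of why such systems make tempting targets, this sketch (with hypothetical trigger names and actions) maps incoming events straight to privileged functions, with nothing verifying where an event came from.

```python
# Minimal sketch of "orchestration as code" (hypothetical triggers/actions):
# events are mapped straight to privileged functions with no human review.

def rotate_credentials(event: dict) -> None:
    print(f"rotating credentials for {event['service']}")

def open_firewall_port(event: dict) -> None:
    print(f"opening port {event['port']}")  # high impact if the event is forged

HANDLERS = {
    "cred-rotation": rotate_credentials,
    "port-request": open_firewall_port,
}

def dispatch(event: dict) -> None:
    # Nothing here authenticates the event's origin; that is exactly the
    # kind of gap an attacker, or a misbehaving AI tool, could exploit.
    HANDLERS[event["trigger"]](event)

dispatch({"trigger": "port-request", "port": 3389})
```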

    4. A multitude of malicious binaries

    If generative AI can automatically write malicious binary code, it’s not a stretch to imagine a criminal using an AI tool to produce multiple pieces of malicious code that all perform the same function. The first one may be caught and recorded in a signature library, but that would do nothing to stop the others. This should concern SOCs that rely on signature-based rules or immature machine learning capabilities.
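
    A minimal sketch of the gap, using file hashes as stand-in signatures and hypothetical, inert byte strings as the “binaries”:

```python
import hashlib

# Three "binaries" that perform the same function but differ in inert
# bytes (hypothetical placeholders, not real malware).
variants = [b"payload-v1\x00", b"payload-v1\x01", b"payload-v1\x02"]

# The SOC's signature library only knows the first sample it analyzed.
signature_library = {hashlib.sha256(variants[0]).hexdigest()}

for i, blob in enumerate(variants):
    digest = hashlib.sha256(blob).hexdigest()
    verdict = "blocked" if digest in signature_library else "missed"
    print(f"variant {i}: {verdict}")

# Only variant 0 is blocked; the functionally identical variants 1 and 2
# sail past a purely signature-based rule.
```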

    5. Constructing a labyrinth of lies

    In the past, criminals have employed social engineering techniques such as “backstopping” to support their most highly targeted attacks, investing time and resources into building an elaborate smoke-and-mirrors network of fake people, products, and organizations that makes their fraudulent façade look more legitimate. Now, the world is entering an era where such deception takes almost no time to fabricate and could become considerably more common.

    The race, and chase, is on

    Security experts are actively researching how AI can be used for harm and devising ways to counter these kinds of attacks. But as any security leader knows, the resourcefulness of highly motivated cybercriminals should never be underestimated. Attackers and defenders will always seek new ways to outmaneuver each other in a contest that has been raging for decades.

    As a pioneer in delivering AI to security operations, Exabeam wants you to understand the monumental shifts in today’s AI-driven landscape. Check out the CISO’s Guide to the AI Opportunity in Security Operations to learn more.

    Read our white paper: CISO’s Guide to the AI Opportunity in Security Operations. This guide is your key to understanding the opportunity AI presents for security operations. In it, we provide:

    • Clear AI definitions: We break down different types of AI technologies currently relevant to security operations.
    • Positive and negative implications: Learn how AI can impact the SOC, including threat detection, investigation, and response (TDIR).
    • Foundational systems and solutions: Gain insights into the technologies laying the groundwork for AI-augmented security operations. 

    Steve Wilson

    Steve Wilson is Chief AI and Product Officer at Exabeam, where he leads product strategy, product management, product marketing, and research. He is a leader and innovator in AI, cybersecurity, and cloud computing, with over 20 years of experience leading high-performance teams to build mission-critical enterprise software and high-leverage platforms. Before joining Exabeam, he served as CPO at Contrast Security, leading all aspects of product development, including strategy, product management, product marketing, product design, and engineering. Wilson has a proven track record of driving product transformation from on-premises legacy software to subscription-based SaaS business models, including at Citrix, accounting for over $1 billion in ARR. He also has experience building software platforms at multi-billion-dollar technology companies, including Oracle and Sun Microsystems.
