The Promises and Perils of AI in Cybersecurity
AI and security operations: how to leverage AI while simultaneously defending and protecting your organization against it.
While artificial intelligence (AI) is certainly not new, it re-burst onto the scene this year thanks to the democratizing effect of generative AI powered by large language models (LLMs). Enterprise innovators like Google have made AI engineering experimentation incredibly easy with Bard and Vertex AI on Google Cloud, and OpenAI introduced the public at large to the wonders of generative AI with ChatGPT.
In our industry, AI is going to keep evolving security operations. With nearly 40 cybersecurity patents, half of them featuring AI, Exabeam has some real insight into how this will continue to play out. What we see right now is excitement about AI's defense- and productivity-related potential bumping up against concerns about privacy, accuracy, and rapid adoption by adversaries.
While admittedly there’s a lot of market hype around AI at the moment, organizations are legitimately asking themselves — and the vendors they work with — how they can leverage it to improve their security posture and streamline their operations. At a macro level, the AI conversations we have with Exabeam customers fall into three categories: how to leverage AI to transform security operations; how to protect the AI they are using; and how to defend against AI.
Leverage AI to transform security operations
The first category is the one you might expect. It asks the question, "How do we leverage AI to transform our security operations?" Security pros want to know how they can use AI, specifically generative AI, for predictive analytics, better detections, investigation techniques, AI copilots, and workflow automation.
Right now, AI is opening a whole new world of possibility when it comes to combating attackers. Think for a moment about the sheer quantity of security-relevant data a company generates in a given day. At Exabeam, we process more than 200 terabytes of data every day — a number that represents two million events per second (EPS) on average (or 173 billion events per day); at peaks, this can go up to 300 terabytes. This is an avalanche of information — information on a scale that is almost incomprehensible to the human imagination.
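The scale quoted above is easy to sanity-check. A minimal arithmetic sketch (the 2 million EPS figure is from the text; everything else is just unit conversion):

```python
# Back-of-the-envelope check of the event volumes described above.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in a day
avg_eps = 2_000_000              # two million events per second, on average

events_per_day = avg_eps * SECONDS_PER_DAY
print(f"{events_per_day:,} events per day")  # 172,800,000,000 — the ~173 billion cited
```
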
Machine learning (ML) has been demonstrating its value in this arena for years now. It is one of the best technologies for effectively identifying patterns in such vast stores of information. It is, after all, mathematically based, and therefore perfect for detecting all kinds of statistical anomalies. For example, it can detect user access to a system that is unexpected because the user in question never typically accesses this system.
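The unexpected-access example above can be illustrated with a deliberately minimal frequency-based sketch. This is a toy baseline model with hypothetical users and systems, not any vendor's actual detection logic:

```python
from collections import Counter

# Hypothetical access log: (user, system) pairs seen during a baseline period.
baseline = [
    ("alice", "hr-portal"), ("alice", "hr-portal"), ("alice", "email"),
    ("bob", "build-server"), ("bob", "build-server"), ("bob", "email"),
]
history = Counter(baseline)

def is_anomalous(user: str, system: str, min_seen: int = 1) -> bool:
    """Flag an access as anomalous if this user rarely or never
    touched this system during the baseline period."""
    return history[(user, system)] < min_seen

print(is_anomalous("alice", "hr-portal"))     # routine access -> False
print(is_anomalous("alice", "build-server"))  # never seen before -> True
```

Production systems replace the raw count threshold with statistical models over many behavioral dimensions, but the core idea is the same: score new events against each entity's own history.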
Exabeam dynamic risk scores are a good example of this kind of technology at work: They employ ML on top of traditional log ingestion to categorize security and IT events, then map those events along user and asset timelines. The risk scores are automatically assigned based on user behavior including web, endpoint and print activities, and VPN login locations, among other user activities. If a user who typically logs in from San Francisco is suddenly logging in from Croatia, for instance, that user’s risk score will go up.
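The risk-scoring idea can be sketched as accumulating weighted contributions along a user's event timeline. The weights and event names below are purely illustrative assumptions, not Exabeam's actual scoring model:

```python
# Hypothetical risk weights per anomalous event type (illustrative values only).
RISK_WEIGHTS = {
    "login_from_new_country": 40,
    "vpn_login_odd_hours": 15,
    "unusual_print_activity": 10,
}

def score_timeline(events: list[str]) -> int:
    """Sum the risk contributions of each anomalous event on a user's timeline;
    events with no weight defined contribute nothing."""
    return sum(RISK_WEIGHTS.get(ev, 0) for ev in events)

# A user who normally logs in from San Francisco suddenly appears in Croatia:
timeline = ["vpn_login_odd_hours", "login_from_new_country"]
print(score_timeline(timeline))  # 55
```
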
Leveraging AI to transform security operations is the fun part of AI innovation, and there is a lot of exciting market development.
Protect the AI you use
This second category is a bit more challenging: how do you protect the AI technology you are using once it is being leveraged anywhere inside your organization?
There are a number of concerns at play here. A major one is hallucinations — that is, generative AI’s unfortunate tendency to confidently spit out information that is patently false. For, say, a high schooler trying to write a paper, this is a problem, but not a major one; they can always cross-reference other sources. For a business dealing with many thousands of complex cybersecurity operations per day, it is a major issue — and it breeds a degree of mistrust that might slow the widespread adoption of generative AI in this context.
Then there’s the problem of model collapse. Model collapse occurs when generative AI models ingest large quantities of AI-generated material and gradually lose information about the less common aspects of the data. It’s a bit like the game of telephone, where one person whispers a message to the next until it reaches the end of the chain. The person at the end usually retains the gist of the original message but has lost the exact details.
Finally, there’s the problem of data leakage. The fact is that these public LLM tools are not yet anywhere near as safe as we’d like them to be, and any proprietary information you input into them is liable to be added to the model. Just like many consumers didn’t realize how much oversharing on social media networks like Facebook could put their own data privacy and security postures at risk, businesses are equally vulnerable with public LLM tools. Organizations need to be cognizant of the kind of data and information their employees are feeding into the models.
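One practical mitigation is to screen text before it leaves the organization for a public LLM. Below is a minimal redaction sketch; the patterns are illustrative assumptions and nowhere near an exhaustive data loss prevention policy:

```python
import re

# Hypothetical pre-submission filter: redact obvious sensitive strings
# before any prompt is sent to a public LLM. Patterns are illustrative.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a bracketed label so the prompt stays useful
    without exposing the underlying identifiers."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the incident: admin@example.com connected from 10.0.0.5"
print(redact(prompt))  # Summarize the incident: [EMAIL] connected from [IPV4]
```

A real deployment would pair filtering like this with policy controls and employee training, since regexes alone cannot catch every form of proprietary data.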
Defend against AI
AI-powered cyberattacks have been plaguing organizations for years now, and the rise of generative AI puts those attacks on steroids. Major breaches — many of them AI-assisted — are making headlines on what feels like a weekly basis. The widespread shift to the cloud, while necessary, inevitable, and overwhelmingly beneficial, has also accelerated these attacks and caused a number of troubling trends to converge: the growth of shadow or unmanaged data, an expanded attack surface, and the increasing incidence of credential theft and ransomware demands. Today’s human and AI attackers alike are disturbingly relentless.
Another alarming trend is the use of AI to generate realistic images and recordings, known as “deepfakes.” These are going to upend phishing and social engineering. They are growing more persuasive and sophisticated by the day, and it is now easier than ever to simulate a person’s voice or writing style. Given that phishing and social engineering have already been used as common methods by attackers for years now, there is no question that these new tools will be put to nefarious uses too.
AI innovation continues
Still, it’s important to note that these are very early days for generative AI. Whether you’re creating, riding, or getting out of the way of the wave, the latest AI hype cycle shows no signs of breaking anytime soon. Some of the possibilities — and fears — touted by cybersecurity experts will inevitably flame out within a few months or a year; others will gain significant traction. The AI-driven Exabeam Security Operations Platform is ultra-responsive and built for scale — it is designed to solve the pain points faced by security operations teams today and into the future. Which is to say that whichever direction things go, we’ll be well equipped to handle it.
Want to learn more about AI for security operations?
Read our white paper, A CISO’s Guide to the AI Opportunity in Security Operations. In this paper, we:
- Examine AI developments through three different lenses
- Explore three impactful use cases of AI in the SOC
- Explain two crucial functions of security information and event management (SIEM) solutions and their role in AI-enabled defense

The paper also includes:
- Clear AI definitions: We break down the different types of AI technologies currently relevant to security operations.
- Positive and negative implications: Learn how AI can impact the SOC, including threat detection, investigation, and response (TDIR).
- Foundational systems and solutions: Gain insights into the technologies laying the groundwork for AI-augmented security operations.