
What You Don’t Know Can Hurt You: How to Use AI Responsibly

Published: January 24, 2024

The mission of the security operations center (SOC) is to protect the organization’s data, as well as the data and privacy of users. To do that, it requires tools, systems, and solutions that analyze and produce reliable information, remain compliant with data protection and privacy regulations, and don’t behave unpredictably or irregularly.

Can the latest artificial intelligence (AI) tools and applications meet these criteria in their current iterations? Not always, which is why SOCs need to be conscientious as their organizations adopt these technologies, and as they’re increasingly deployed in security operations.

In this article:

  • Peeking into the black box
  • How hallucinations happen
  • What is model collapse?
  • Are there transparent models?

Peeking into the black box

Technologies such as generative AI are powered by large language models (LLMs), which are trained on massive datasets typically scraped from across the web. These datasets are so vast that it’s difficult to know exactly what data a model has ingested, and it’s effectively impossible to trace what happens to data once it enters the model.

Therefore, you might compare this type of AI to a black box — or perhaps a black hole, since once data goes into the model, there’s no getting it back out.

Data scientists are already making great progress in delimiting what models can and can’t ingest, and generative AI models purpose-built for specific industries and use cases can help impose guardrails. However, opaque systems and processes still carry security implications. Security teams need to be aware of these when using and monitoring AI, and when vetting the controls of potential solution vendors.

One risk is data leakage, in which sensitive or proprietary information is absorbed into the model; this could carry regulatory ramifications for organizations. Another risk stems from the same opacity: when you don’t know what went into the model, you can’t trust what comes out, hence the well-documented challenges of AI hallucination.
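
To make the guardrail idea concrete, here is a minimal, hypothetical sketch of one common control: scrubbing sensitive-looking values from a prompt before it ever reaches an external model. The patterns and labels below are illustrative assumptions, not a production data loss prevention system.

```python
# Hypothetical pre-prompt guardrail: redact sensitive-looking values so they
# never enter an external LLM. A sketch only; real DLP controls are far more
# sophisticated than three regular expressions.
import re

REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),  # made-up key format
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before sending."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Ask about admin@example.com at 10.0.0.5, key sk-AbC123xYz456QwErTy"))
# -> Ask about [REDACTED email] at [REDACTED ipv4], key [REDACTED api_key]
```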


How hallucinations happen

When you’re sending a text message, you may have noticed that your smartphone constantly suggests the next word you might be looking for. That may be a relatively rudimentary application of machine learning (ML), but it’s not entirely different from what’s happening when you use an application like ChatGPT.

As you watch generative AI automatically produce text, it’s not consulting some vast repository of information to provide you with an objective, verifiable truth; instead, it’s predicting the next most likely word in the context of your query. That’s why it’s liable to go off on strange tangents, confidently and authoritatively yielding nonsense, a failure mode known as hallucination.
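
A toy next-word model makes the mechanism visible. The sketch below (with the obvious caveat that real LLMs are incomparably larger) picks each next word purely by how often it followed the previous word in its training text; plausibility, not truth, is the only criterion.

```python
# Toy bigram "language model": generate text by sampling the next word from
# whatever followed the current word in training data. Real LLMs are vastly
# larger, but the principle -- predict the likely next token -- is the same.
import random
from collections import defaultdict

corpus = ("the attacker moved laterally . the attacker escalated privileges . "
          "the analyst flagged the login . the analyst escalated the alert .").split()

# Record which words follow which in the training text.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Emit a statistically plausible continuation, true or not."""
    words = [seed]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Likelihood, not factual accuracy, is the only criterion -- which is
        # exactly why such systems can "hallucinate" fluent nonsense.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the attacker escalated the alert . the analyst ..."
```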

For laypeople, hallucination may be amusing at best and inconvenient at worst. But security analysts must make quick decisions based on accurate insights, so it’s imperative that any generative AI tools they deploy are protected against this risk, and flag when they have insufficient information rather than making it up.

What is model collapse?

Another cause for concern is model collapse: the tendency for the quality of a generative AI model’s outputs to deteriorate surprisingly quickly as AI-generated content is increasingly added to its training data.

It goes back to the aforementioned tendency of generative AI to deliver outputs based on probabilistic likelihood rather than genuine truth. AI-generated material tends to smooth over and lose the minority characteristics present in the original data, so as that material is fed back into models, their outputs become less and less representative of reality.
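
A crude simulation illustrates the effect. In the sketch below, the “model” is just a normal distribution fitted to data, and each generation trains only on samples drawn from the previous generation’s model; using a Gaussian as a stand-in for a generative model is our assumption, purely for illustration.

```python
# Toy model collapse: fit a "model" (a normal distribution) to data, sample
# new data from it, refit, and repeat. Rare tail values stop being produced,
# so each generation sees less of the original variety. Illustration only.
import random
import statistics

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "real" data

for generation in range(1, 16):
    mu = statistics.fmean(data)      # "train" on the current data
    sigma = statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(50)]  # "generate" the next dataset
    print(f"generation {generation:2d}: fitted std dev = {sigma:.3f}")
# The fitted spread tends to drift downward across generations: minority
# characteristics of the original data are progressively lost.
```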

Are there transparent models?

Traditional ML typically requires data scientists to build and train models for specific use cases, meaning they tend to be accountable for what goes into the model and what comes out the other side. Unsurprisingly, ML and its more advanced offshoot, deep learning, have driven the most reliable AI solutions in the cybersecurity space, and for years they have been used to analyze immense stores of information and identify patterns.

At Exabeam, our AI-driven Security Operations Platform uses ML to power everything from dynamic risk scoring to user and entity behavior analytics (UEBA), which learns an organization’s baseline of normal activity in order to flag deviations effectively. AI capabilities like these remain essential even as generative AI uncovers new ways to augment defenses, and they become especially critical as generative AI is used to augment attacks. Download the CISO’s Guide to the AI Opportunity in Security Operations for a fuller examination of these shifts.
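
As a rough illustration of the baselining idea (not Exabeam’s actual scoring algorithm, and with made-up data), the sketch below learns a user’s typical login hours and scores new logins by how far they deviate from that personal baseline.

```python
# Hypothetical UEBA-style baselining: learn a user's normal login hours,
# then flag logins that deviate sharply. A simple z-score stands in for a
# real behavioral model, purely for illustration.
import statistics

login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]  # historical baseline (assumed data)
mu = statistics.fmean(login_hours)
sigma = statistics.stdev(login_hours)

def risk_score(observed_hour: float) -> float:
    """Distance from this user's own baseline, in standard deviations."""
    return abs(observed_hour - mu) / sigma

for hour in (9, 11, 3):  # a 3 a.m. login should stand out
    score = risk_score(hour)
    verdict = "ANOMALY" if score > 3 else "normal"
    print(f"login at {hour:02d}:00 -> score {score:.1f} ({verdict})")
```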

Want to learn more about AI in the SOC?

Read our white paper: CISO’s Guide to the AI Opportunity in Security Operations. This guide is your key to understanding the opportunity AI presents for security operations. In it, we provide:

  • Clear AI definitions: We break down different types of AI technologies currently relevant to security operations.
  • Positive and negative implications: Learn how AI can impact the SOC, including threat detection, investigation, and response (TDIR).
  • Foundational systems and solutions: Gain insights into the technologies laying the groundwork for AI-augmented security operations. 

Tags: SOC, NLP, LLM, AI
