
Overview of Exabeam’s SIEM & Security Analytics Product Innovations

Webinar Transcript | Air Date October 19, 2022


Jeannie Warner: 

Thank you for joining. We’ll get the webinar started momentarily. Good morning, afternoon, or evening, depending on where and when you’re watching our webcast. We’ll get started in just a moment, but first I wanted to cover some housekeeping information. Today’s webinar will be recorded, and we’ll email you the link to the recording after the live event. Secondly, we will have a Q&A session at the end. Please submit any questions in the webinar’s sidebar; you’re welcome to raise them in chat as we go, but if I miss them, we will definitely review them at the end. Thanks, and we will start momentarily.

Welcome to our webinar, where we introduce you to Exabeam New-Scale SIEM. I’m Jeannie Warner, Director of Product Marketing here at Exabeam. I am joined by the amazing Andy Skrei, Senior Director of Product Management. Next slide. A bit about who we are. I’ve been in security for over 20 years across infrastructure, operations, and security. I started my career in the trenches working Unisys help desks and network operations before running away to a SOC, one of the first ever built, the IBM MSS. Nine years and multiple hats later, I left to wander through other technical product and security program manager positions for small companies like Microsoft, Symantec, Fortinet, WhiteHat, and Synopsys. I was the global SOC manager for Dimension Data, building out their global multi-SOC approach. So my background is all about analysis, forensic investigation, and incident response, with a soupçon of marketing.

I’m joined by Andy Skrei. Andy has over 15 years of experience working in cybersecurity, from practitioner to executive at Exabeam. Prior to Exabeam, Andy worked as a lead security engineer at eBay, developing and deploying technologies for its global SOC. Prior to eBay, he was a manager at KPMG, helping some of the largest organizations in the world increase security maturity and reduce risk. Andy has a unique view on security analytics and a fresh look at the problems the SOC space faces, which really increases teams’ chances of success against advanced threats. So here we go: two SOC analysts telling you about real-world problems and what Exabeam is doing about them. Next slide. Detect the undetectable. Exabeam helps organizations by being purpose-built for security. Let’s talk a little bit about what that means and why it’s important. Next slide. Your cybersecurity reality. This whole topic of how to make it easy for security analysts is near and dear to our hearts.

We have a lot of personal experience. So when we talk to customers, there are four things we consistently hear that they’re struggling with. First, they need to collect not just more data, but the right data. Every new security sensor, detection product, or security tool they bring in drives the collection of more data, generating gigabytes, terabytes, exabytes of logs, which creates two issues. One, it drives up data storage costs, making SIEM super expensive, and two, it becomes harder to get the right data for a holistic picture of what’s actually going on in the environment. We’ve talked about this since the inception of Exabeam. The defender has to know what they’re looking for. They might get a clue, maybe an alert from their EDR product, but then they have to run a series of manual investigations to find scope and keep generating manual reports.

We all remember Snort rules. The Achilles’ heel of correlation rules is the need to create them in advance. And please understand me, building correlation rules is absolutely important, but if you rely only on correlation, zero-day attacks become more dangerous since you don’t yet know how they will operate or which system or piece of your infrastructure they will go against. Not to mention defenders may not get an alert at all because the behavior of an adversary looks completely normal, even if it’s not, because normal is constantly changing. Now the latest is that adversaries are buying credentials and getting inside. How is your security team going to know what the threat actors do once they’re inside and looking like normal people? What is normal versus abnormal? Finally, these threats are buried in a sea of noise. It’s like trying to find a needle in a haystack. Every product generates a ton of alerts, and not all of them need to be actioned on.

How do you know which ones to pay attention to? Which ones are relevant? Which ones are just noise? Let me give you an example of that. There was a breach at Neiman Marcus, and when the forensics company came in and did their final analysis of what happened, they said there were 59,000 security events that the customer missed in their investigation or didn’t pay attention to. Now, before you ask how you miss 59,000 events, when you put it in the context that they were getting over a billion security events, it’s really easy to miss important clues in a massive volume of noise. So when you’re relying on humans to perform manual investigations, everybody’s going to do it in a different way. You can miss big pieces of what’s happening in the environment, and those misses lead to downstream mistakes. There is often no uniform way to go about an investigation, even if I personally trained up the whole SOC at once.

If 10 different analysts run an investigation, we will get 10 similar but still different answers. We believe at Exabeam that behavior analytics can help tell a broader story about what’s happening. A great example of this is a renewable energy customer of ours. They had us come in and replay an incident, from before they knew about Exabeam, to see if we could find in their Splunk logs what had occurred. So we deployed Exabeam on top of their Splunk data in the known time period of the breach. Our results showed a web shell attack that compromised a service account, which accessed 12 systems within their environment. Interestingly, when the CSO came back and compared their own team’s investigation to ours, they thought only four systems had been impacted. So the moral of this story is that when we rely on humans to do investigations, they might miss big chunks because they’re manually querying data, trying to piece together all the parts of the investigation. They don’t see the full picture.

Next slide, please. The breaches going on are rooted in compromised credentials. Pretty much every attack involves misuse of credentials somewhere, no matter if it starts from phishing or even a web vulnerability exploit with no human involvement. This has always been an important observation from our standpoint. Phishing, ransomware, malware, path traversal attacks: these all eventually tie back to obtaining and compromising credentials, whether an adversary has been able to use a valid credential, or it’s a legitimate user, a mistaken user, or a malicious user. The objective is to use those credentials to access systems and data. Everything looks legitimate when using valid credentials. If a hacker obtains the credentials, what systems in your environment tell you the normal behavior of those credentials? And without that, how do you identify abnormal behavior? Customers spend a lot of money trying to prevent credential misuse with things like two-factor authentication, but it’s no longer enough.

Time and again, we see bad actors that can bypass prevention tools. We saw this recently with Lapsus$, where they just went out and bought credentials on the black market or literally spoke to an employee saying, “Hey, here’s 10,000 bucks if you give me your login and password.” For Lapsus$, Exabeam caught them in two different customer environments of ours just because of the data models we’ve been using since the beginning of the company. That’s eight years of being the best UEBA on the market. The TTPs don’t change; individual vulnerabilities are discovered, but from recon through to exfiltration, the framework remains the same. This means that we’ve been catching zero-days for longer than just about anybody else. Because, yeah, the exploits change, but wouldn’t you like the confidence of knowing that you’ll see movement in new activity and anomalies from day one? Next slide. The problem is-

Andy Skrei: Actually, I think we’re a slide ahead, so if we can go back to the-

Jeannie Warner: Are you?

Andy Skrei: Yup.

Jeannie Warner: Yeah. Back up one. There you go.

Andy Skrei: There we go.

Jeannie Warner: 

Thank you. Sorry, I’m a wild clicker here. A lot of the legacy SIEMs weren’t built for today’s environment or designed to identify compromised credentials, creating what we call the SIEM Effectiveness Gap. And that gap is widening as data exposure points, credential-based attacks, alerts, and frankly, the cost of people and storage continue to rise. Step back in time: generation one, in the days of ArcSight or QRadar, was about alerts, logs, and correlation. The problem was that storage was based on relational databases and correlation wasn’t efficient; it was expensive, slow, and required a lot of horsepower. I’ll even tell you a secret: in 2002 we had somebody writing manual rules just so that if we saw an attack and knew from a vulnerability scan that the target was vulnerable, we would escalate it.

Now, this quickly spilled into generation two. Splunk joined the game, and they were disruptive. They proved that relational databases weren’t great for storage. They used flat files and added indexing of alerts, logs, data points, and other information on their platform. This level of drill-down was transformational for the SIEM market and Splunk’s growth. However, Splunk is still a data company, not a security company. If Taco Bell wants to know how many quesadillas they sold on Tuesday in Peoria, Splunk is perfect. Their approach is fast and efficient, but it relies on search and correlation, which means you have to know what you’re looking for. And keeping on top of a flat-file system is still very expensive and resource intensive. If you look at the third generation, it takes alerts, logs, correlation, and indexing and marries them with behavior analytics and automation. This is where UEBA and SOAR products come in.

And this is where Exabeam joined the game, where behavioral analytics came to be recognized. You had all the data in one place, but you needed to make more intelligence out of it. For a lot of vendors, these new capabilities get bolted on top. They buy something, they integrate it, they put it in there, but it’s not perfect. And they present statistics as behavioral analysis. For us at Exabeam, it’s always been foundational to our product. We started off augmenting environments where we didn’t own the data lake, so our analysis engine is open and can run on top of any SIEM. Exabeam was always designed to ingest third-party alerts from different systems, and automation was added organically. So, fourth generation: this is where we think things are going. It all rolls up into a use case called TDIR, which for us means Threat Detection, Investigation, and Response. So when we look at the next generation, it needs to complete the entire use case, from the collection of data through a human being like me accurately and holistically analyzing and responding to that data.

And we do it at cloud-scale. Next slide. We have built our new product suite to be cloud-native in every way so that it scales in a far more agile way as our customers grow. We manage credential-based attacks exceptionally well because we know normal behavior, we get rid of a lot of the alert noise with our behavior analytics and Alert Triage capability, and we speed investigations and response with automation. So the benefit to you, our customers, is very efficient data storage cost with the infrastructure in the cloud. And when we say New-Scale, yes, it is about managing more data sources at higher volume in a cloud-native architecture, but it’s also about scaling your response to focus on risk-based priorities, scaling your investigation with automation, scaling your detection with behavioral analytics intelligence across those billions of access points I mentioned earlier, scaling operations and people to help elevate their talent, and scaling your budget with cloud economics.

Our New-Scale SIEM stands on three key pillars. First, we can ingest, parse, store, and search data at scale regardless of where it’s coming from, on-premises or in the cloud. We are not limited by the volume of data, physical memory and space, the sources customers need to bring in, et cetera. Secondly, we’re not just dealing with data visualization and correlation, but adding behavioral analytics. This is the core of Exabeam’s history in UEBA; we practically invented this space. We map users to IPs and devices and baseline normal behavior across all those users and devices. Seeing normal lets you identify anomalies basically the first time an event happens. Thinking back, I was in a SOC when Code Red, SQL Slammer, and all the worms of the early 2000s came out, and we watched signatures bloom across the planet each time a new timezone came online and turned on their computers. Now we can see anomalies when they first happen and as they proliferate through lateral movement and spread. Your SOC can answer the question of how many machines are affected right now, and what they are, with a single click. And that’s a huge improvement over where we were.

Finally, an automated investigation experience gives analysts the full picture of an incident, helping them respond quickly, thoroughly, and consistently. SOC analysts like me aren’t really just looking at critical web or endpoint alerts. They can see the chain of events as it started, from a phishing attempt, to VPN use, to something new, to the first time any new process fires, and then automate the response for a complete outcome. Organizations are realizing a lot of the beneficial economies of the cloud; legacy on-prem SIEMs are expensive and inefficient, and they have limits of storage, power consumption, and memory. I mean, ask me about the time somebody put a Barracuda Email Archiver in the same rack as my SIEM without doing a power consumption survey, and it crashed for hours, twice in a month, at 2:00 in the morning. So, next slide. Let’s break down what we mean and how we’re putting all of this into each of the new products.

Exabeam Security Log Management: we’re bringing the ability to ingest, parse, store, and search at sustained speeds of over 1 million events per second on every single tenant, on a cloud-native platform that scales to hundreds of petabytes. We can deal with those difficult discussions about collecting more logs and finding the right data because the economics are so much better with cloud than they were on-premises. We’re bringing fast, modernized search and visualization, allowing analysts to see and act faster. We can ingest raw data from 549 on-premises and cloud tools with nearly 8,000 prebuilt parsers that automatically build security events for faster performance in search, correlations, and dashboards. Next slide. We have behavior analytics. This is the core of what we’ve done since our inception. In the early days, we sat on top of other platforms to help analysts ask really difficult questions. Now, we bring all of this advanced analytics, automated detection, and risk-based prioritization to the table either with our SIEM or on top of somebody else’s, so you don’t have to start with our full platform.

We understand that a lot of you have an established SIEM, which may be sticky; everybody may love it. Fine, keep your toys, we can augment it and make it better. The most critical component of behavioral analytics is baselining normal. It’s really difficult for an analyst to determine what normal behavior is for a credential or a machine or a service account. And without understanding normal, everything is chaos. So how do we know what to prioritize? Giving your analysts information on normal behavior paints a very clear picture of what to focus on. So when we say we detect the undetectable, this is what we mean. We help analysts find what they don’t know to look for. We stay ahead of threats, whether they come from external adversaries, malicious insiders, or compromised insiders, because our models look at the tactics and techniques that attackers use.

We map every Exabeam event to the MITRE ATT&CK framework, which everyone knows at this point, right? These pre-mapped rules and models are applied to every user and device, beyond anything that could be done with manual correlation rules. I want to give you an example of the math here. Say you have an organization that does basic logging for 20,000 users and has 50,000 assets. Exabeam multiplies all of that by all of our different rules, behavioral and fact-based correlation. We provide over 50 million unique detection rules and then tell you which to look at. Funny thing: models that were created by Exabeam in 2014 are still catching attackers early in the attack cycle. With those two Lapsus$ customers we mentioned, just knowing the abnormal access to assets highlighted the attacker before any other security product in their environment. They had the latest EDRs, they had the latest detection techniques in play.
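
To put rough numbers on that multiplication: the webinar doesn’t spell out the exact formula, but if you assume, purely for illustration, that each of the roughly 750 prebuilt behavior models Andy cites later in the demo is evaluated per user and per asset, you land in the ballpark of the figure quoted here.

```python
# Back-of-the-envelope sketch of the "over 50 million unique detections" figure.
# Assumption (not stated explicitly in the webinar): each of the ~750 prebuilt
# behavior models is applied per user and per asset.
users = 20_000
assets = 50_000
behavior_models = 750  # "over 750 prebuilt data models" per the later demo

effective_detections = (users + assets) * behavior_models
print(f"{effective_detections:,}")  # 52,500,000 -- on the order of 50 million
```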

But because we had a model that showed these abnormal access patterns, it highlighted the risk immediately upon access. Our automation helps analysts prioritize where they spend their time. Consider a web attack. Do you escalate every time somebody launches a vulnerability scan, or do you want to save those midnight phone calls for something that looks successful or weird? Because really, weird is what’s interesting. We recently found an anomaly with a customer’s DLP product. They had an alert set to fire on potential self-harm by an employee. Typically, this gets buried because DLP products don’t have machine learning to alert not just that the activity happened, but that it is unusual that it happened. Everything looks like noise coming out of a DLP, and I’ve loved DLPs. But with Exabeam, we were able to identify this for the customer, who had never seen this alert fire, not just for this individual but for anybody.

We raised the priority level to our highest level and told the organization they should investigate. When they looked into it, they discovered it was a real issue. The employee was considering self-harm, and they were able to get them help. The great part of all of it is this: you can run our analytics products on top of other data lakes or SIEMs as a starting point to see value. Next slide. Automation. It is a word that is used frequently in security; too often it is isolated to incident response. To close the SIEM effectiveness gap, you want to automate the entire threat detection and investigation process along with response. We bring automation to each step of the TDIR workflow, not just when an alert happens, but before and after. We bring automation and context to the response: who the users are, what departments they’re in, and what peer groups.

We bring in threat intelligence that we pay for, not you, and we use it to enrich data and build events. We see Exabeam as decision support for your entire security infrastructure. We’re fast at answering the questions that used to take days or weeks. Now, we have automatic timelines that reconstruct what happened. It’s accelerated, it’s streamlined, and all of this helps analysts perform their jobs faster and with more precision, helping you scale. Next slide, please. The great part about it is we have some really good distributed options that work in a modular way with your environment. We’re not going to tell anybody you need to rip and replace everything and buy everything from us at once. There are a lot of different ways to get started. So maybe you need behavioral analytics layered on top of your data lake or an existing SIEM. Or maybe you’re just starting to look into a logging infrastructure and you’re dissatisfied with your current solution and the cost of maintaining it.

You can start with security log management, or you can purchase the entire banana with all the capabilities, which we call Exabeam Fusion. We want to be flexible to your needs and infrastructure. We are built on Google Cloud and we are 100% committed to the Google Cloud platform. We selected Google Cloud because of its hyperscale speed and the ability to support ginormous amounts of data. They support one of the largest networks in the world and are really inventive leaders in this space, the perfect partner to grow with. Next slide. We are used by the largest brands. Currently, we have 20% of the Fortune 1000. We were a Visionary our first year out and have been a Leader in the Gartner MQ for SIEM for the past four years, including the one that just got released. We’re a Gartner Peer Insights Customers’ Choice vendor and a leader in the Forrester Wave. Our customers are forward-thinking organizations across all industries. About half of them use our behavior analytics on top of somebody else’s SIEM. Bottom line, we have a very large and healthy growing customer base, so you won’t be alone.

Next one, please. We want to address the SIEM effectiveness gap: purpose-built for security from the ground up, native in the cloud, and built for scale. That’s Exabeam. So with Exabeam, what we’re saying you get is cloud security log management, behavioral analytics that help you understand normal, an automated investigation experience, and more meaningful work from your team, not tedious tasks. We want to make your teams more effective, boosting your morale and frankly reducing your analyst churn. We know that analysts don’t stay in your SOC for five years, so we want to make it easy and quick to bring in somebody new and teach them how to be the great SOC analyst of your dreams. Exabeam was born in security. We didn’t bolt it on. We live and breathe security, and it’s across all of our products and people every day. Now, let me turn you over to the fabulous Andy Skrei, who’s going to show you some of the new features and functions of our new portfolio. Andy, are you ready to take this away with a demo?

Andy Skrei:

I’m ready. Thank you so much. Let me go ahead and share out my window here. All right. So thank you, everyone, for joining this morning. I’m super excited to be able to share Exabeam’s new Security Operations Platform. We’ve built this platform from the ground up with scale in mind. This platform combines SIEM and the core capabilities of log collection, data management, search, reporting, and dashboarding, along with powerful security analytics for automated threat detection, investigation, and response workflows. So today, I’m going to walk you through all of these great new features and functionalities that we just released on Monday and how this is helping organizations reduce that SIEM effectiveness gap. What we’re going to see is that I’m logged into the Exabeam Security Operations Platform and I have access to all the different applications. And as I said, we built this from the ground up with a real core focus on the platform and the plumbing, making sure that it’s easy to get data into Exabeam, to normalize it, enrich it, and then leverage that normalized and enriched log data in various applications related to TDIR.

So we built a brand new collector management application to manage the cloud collectors, the site collectors, and the context collectors. I’m not going to spend a lot of time on that today. We’re going to jump into: if I’ve already collected data and I have my collectors deployed, I now want to make sure that I’m able to see that data, that I’m parsing it, that I’m enriching it, and that it’s going to be useful for my downstream applications. And so we built a brand new application called Log Stream, and as Jeannie mentioned, Log Stream is our application that sits on top of our new unified ingestion pipeline, a single place to ingest all log and context data at scale. We’ve certified this for a sustained million EPS per tenant. And within Log Stream here, what I’m able to see and understand is how my parsers are performing and what data that I’m ingesting is being parsed.

And so right away, I can see how many parsers are actually enabled and the health of those parsers. So I can see that right now I have 173 parsers parsing all of the different data feeds that are coming in, out of the 8,500 default parsers that we have out of the box, and I can see none of them are in an error or idle state. I can then take a look at the parser health over time, where I can see that in some cases parsers fire on data that comes in in realtime. I don’t always have the same logs being generated every single second, and so my parsers may increase and decrease just a little bit over time as I have logs streaming in. But I can clearly see that I’m fairly consistently at 150 or more active parsers over the last 24 hours.

And again, no errors or idle parsers that I need to address right away. And then I see my most active vendors. So again, this is a good visualization for understanding, out of all the data that’s coming into the platform, which feeds are parsing, how many logs are being parsed from each one of these vendors, and whether this is what I’m expecting to see. If I just onboarded my Check Point logs, then great, I can now see that a lot of those logs are being parsed and this is my number one vendor at the moment. If we scroll down, we can start to see all of the parsers that are active in the pipeline. I can see the parser name, the vendor, and the product of these parsers. I can see the events that are built post-normalization as part of Exabeam’s common information model.

I can see the volume of logs matching these parsers, the status, the health, and when they were last updated. I can also reorder the parsers here. We have something called parser precedence, which ensures that the right default parser is matching the right logs and formats that I’m bringing in. I can import parsers. I can also create brand new parsers using our Auto Parser Generator, which has now been built into the Log Stream application. And I can search, so let’s say I want to take a look at some of these Check Point parsers. I can search for them, and I see I have two parsers here. And I can do various things: I can view details, which we’re going to do in just a moment; I could customize this if I wanted to extract more fields or change field extractions; I could duplicate this parser, disable it, or launch Live Tail.

I want to talk about Live Tail in just a second; I have it open in my next tab. But first, let’s view the details of these parsers. So when I open up the detail for any parser, I see the new naming convention that we’ve developed for our parsers. We’ve built a brand new common information model to help us normalize all of the data that’s being ingested into the platform. It’s a common information model that we actually open sourced with our XDR Alliance partners at Black Hat this year. But I can see my naming convention is the vendor, the product, the format of the logs, and the output event type. We’ve also introduced parser versioning here, so I can clearly see, “Am I on the latest version?” or “What changed between versions?” And any changes to these parsers are audited, and I can see these in the Activity Log.

I can see the actual configuration of the parser and the event builder configuration files. And I can even see an Extraction Preview, which is showing me an actual sample of my Check Point logs, the extractions themselves, and the values of those extractions based on that parser configuration. And again, if I wanted to customize this, I could click Customize and this is going to bring me into my Auto Parser Generator workflow. Or I can actually scroll down to the bottom and I can start to look at these tokenized fields and continue to extract more and more fields. So we really want to make sure that we provide the right end-to-end experience and that it’s seamless. When you’re looking at a parser and need to make changes, we can guide you to that next step to extract more fields, validate those fields, and get those new extractions into the pipeline in a matter of minutes.
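
Exabeam’s actual parser configuration format isn’t shown in the transcript, but the underlying idea of tokenization and field extraction, pulling key-value pairs out of a raw log and mapping vendor-specific names onto normalized, common-information-model-style fields, can be sketched roughly as follows. The log line, field names, and mappings here are invented for illustration only.

```python
import re

# Hypothetical raw firewall log line (invented for illustration).
raw_log = "Oct 19 10:15:42 fw01 product=Firewall action=accept src=10.1.2.3 dst=203.0.113.7 dport=443"

# A toy "parser": tokenize key=value pairs, then rename vendor-specific fields
# to normalized (CIM-style) names and map the outcome to a common value set.
FIELD_MAP = {"src": "src_ip", "dst": "dest_ip", "dport": "dest_port", "action": "outcome"}
OUTCOME_MAP = {"accept": "success", "drop": "failure", "deny": "failure"}

def parse(line):
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    event = {FIELD_MAP.get(k, k): v for k, v in fields.items()}
    if "outcome" in event:
        event["outcome"] = OUTCOME_MAP.get(event["outcome"], event["outcome"])
    return event

print(parse(raw_log))
# {'product': 'Firewall', 'outcome': 'success', 'src_ip': '10.1.2.3',
#  'dest_ip': '203.0.113.7', 'dest_port': '443'}
```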

Now, as I mentioned, not only do we have Log Stream here, but we have an application called Live Tail. And Live Tail will actually show you a live look into the pipeline. So if I had just ingested those Check Point alerts, let’s say I configured my firewalls to syslog that data to our on-prem site collector, I can then in realtime see and validate that those Check Point logs are making it into the ingestion pipeline and into the Exabeam platform. I paused this for a moment, but I can replay it. I can look at the parsed logs, I could look for events that are not being parsed where I may need to create a new parser, and I can click on these and, again, see the full sample of the event, how many of the fields are actually being parsed, and how many of our core detection and informational fields defined by the common information model are available in this log.

So we’ve gone to great lengths to try to prescribe and help customers understand what the most relevant fields in a security log are for TDIR workflows. In some cases, you may not have the right levels of auditing turned on, and so we want to help prescribe which fields are super important for detection and are going to drive our prepackaged content. We have over 1,900 rules between our correlation engine and analytics, and we want to make sure that customers have as many of the fields that we leverage as possible being parsed out of their events. So this is Log Stream. This is an application that I’m really excited about because it really helps organizations streamline data onboarding and make sure that data is set up for success and can be leveraged by the downstream applications. Now, once I have data coming in and it’s been normalized, and it’s been enriched as well with our threat intelligence data, I may want to start to search those logs.

So I’d like to introduce you to our new search application. We’ve built this new search application to solve two challenges: both realtime search on hot data as well as long-term historical search needs. Some organizations want to be able to search data from the last 10 years. Previously at Exabeam, we had two products that did this: we had the hot data storage in our data lake and we had our cloud archive. We’ve now merged these experiences into one. So within the new search application, I can search the logs from the last 10 minutes or over the last 10 years. And you’ll notice here I’ll demonstrate how quickly we can get those results back. Now, one of the things that we notice for most organizations when they log into a search experience for the first time is that their first search is a star search, because they’re not quite sure what data exists for them to actually search and explore.

And so, because we do normalization and enrichment on ingest, we were able to build a simple point-and-click search experience. Now, we have a more advanced search editor as well where I can just type in all of the different queries that I want. But you can think of this as almost like shopping on Amazon. We give you various different places to start, and all of the fields available here that I can click on to drive my search are dynamic, based on the data that’s actually being ingested into your environment. So I could search by the subjects, which are high-level objects: I could look at my database logs, or my email activity and logs, or my file activity. I could start by searching on specific vendors and products, or any of the predefined common information model fields. So I’m going to go ahead and just click on network.

 So this is subject network, and I’m going to go ahead and let’s just search this over the last 24 hours. So I’m going to run my search. You’re going to notice here, just in a matter of a couple of seconds, I just got 146 million results returned across all of my network logs. Now, I can see here on the left-hand side, I have a summary, a filter summary here. So I can see and I could break down if I wanted to look at specific dest ports or I can even see the different products here. Again, our common information model has already aligned your Cisco Firepower, your ASA, and your Check Point Firewall into this network subject. I can see all of the relevant fields here up at the top. So I can click on this and I can see the full raw log and all of the parsed fields.

And I can toggle these eye icons, and that’s going to show what the most valuable fields are for this type of event. And these are all dynamic. So when I’m looking at network logs, Exabeam is going to expose things like the dest IP, dest port, and protocols. If I was looking at endpoint logs or process execution, it would show the file path and the information that’s relevant to those types of events in those logs. Now, I mentioned we could search over 24 hours; we could actually go and search a year’s worth of data. And if I run this search again, you’re going to see that the results return very quickly. So I had 146 million events before; in just a few seconds, I’m going to see a lot more data, 459 million network events. As I’m going through my data, I can again look at these fields and I can toggle on and off and include things in my search.

So maybe I want to look at the direction of the traffic, and maybe I want to look at only outbound, so I can say direction is not inbound. And as soon as I do that, my search is updated. I’m just going to go back here to the last 24 hours, and we can continue to run these searches. Now, one of the value-adds for Exabeam is our ability to automatically enrich all logs coming into our platform with our threat intelligence information. And if I want to see any logs that have known IOCs related to them, I can simply come into my field summary: is IOC true? I can add that to my query and, again, hit search. And now, I’ll be able to see all of the events that have been ingested that are actually matching against known threat intelligence. And we’re going to show you not just that it’s a known threat intel hit, whether it’s an IP or a domain; we’ll even give you the type.

So I can see that I have a lot of botnet activity happening right now. The other thing I may want to do is this: I know that I have my firewalls in place, and a lot of this activity is probably being blocked or denied. So I want to focus this search a little bit more, and I’m going to come down to the outcome, because I want to see only where this traffic was successful outbound to a known IOC. And the other great thing here is that outcome is a field defined by our common information model. Once again, I don’t have to know, between my different firewalls and network vendors, whether it’s allowed, denied, success, blocked; I can simply say, “Was it a success or a failure?” So I can see here that in the last 24 hours, I have a lot of traffic going outbound through my network devices to known bad sites.
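
The search being built up here is, conceptually, just a conjunction of normalized fields. Exabeam’s own query syntax isn’t reproduced in the transcript, so the sketch below expresses the same filter over hypothetical, already-normalized event records; the field names and values are assumptions for illustration.

```python
# Hypothetical normalized events, as they might look after parsing and enrichment.
events = [
    {"subject": "network", "direction": "outbound", "is_ioc": True,
     "ioc_type": "botnet", "outcome": "success", "dest_ip": "203.0.113.7"},
    {"subject": "network", "direction": "inbound", "is_ioc": False,
     "outcome": "failure", "dest_ip": "10.0.0.5"},
]

# The filter from the demo: subject = network AND direction != inbound
# AND is_ioc = true AND outcome = success.
hits = [
    e for e in events
    if e["subject"] == "network"
    and e["direction"] != "inbound"
    and e.get("is_ioc")
    and e["outcome"] == "success"
]
print(hits)  # only the first event matches
```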

Now, I could obviously go ahead and save this search, and this could be a search that I run every morning when I come into the office to check on the latest hits to threat intel. But rather than taking that reactive approach, I’d rather be proactive, and I’d like to build a correlation rule. So what I can do is click convert to rule. This is going to open up our brand new correlation rule building experience, and we’ve built this with a close tie-in to that search experience. So from when I’m searching or threat hunting, if I find something that I want detected next time, I can click to build a correlation rule, and it brings me into my building experience. My search is already predefined here because I started from the search interface and drove right into creating a correlation rule. There’s also a test function.

So if I wasn’t sure whether this query was successful or the data matched, I can test it from here as well. My next step now is to add additional conditions. I set up my search to filter on the data that I care about, and then I can add new conditions. So if I wanted to build a port scan rule, I could do counting and aggregation, but in this case, I’m just looking at matches to known IOCs, so I’m going to trigger anytime an event matches my search. The next thing I can do is set the outcome, right? This is one of my favorite things about this new correlation experience: you don’t always want to treat the outcome of a correlation rule the same way. In some cases, you have correlation rules, like if Mimikatz is running on your domain controller, where you may want to send an email, create a case, and wake somebody up in the middle of the night to handle it right away.

For other events like this one, it’s just an IOC hit; I may not want to have somebody respond to all of these all of the time. What I can do is click Generate Alert. What this will do is trigger a new alert that Exabeam will create. It’s going to flow into our behavior-based analytics; I’ll show you where it’s going to show up in user and entity timelines. It’s going to add risk, and it’s also going to flow into our Alert Triage product, which is where analysts can start to triage all of the third-party alerts and Exabeam correlation rules. So I’m just going to call this IOC hit. I’m going to give the rule the same name, categorize it as an external malware threat, give it a low priority, and I can enable this rule and hit save. So now I’ve just built a brand new correlation rule.
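
The saved rule itself isn’t shown on screen in a machine-readable form, so the following is only a hypothetical representation of the pieces configured in the demo: the match condition, the optional counting or aggregation step, the outcome, and the metadata. The structure and field names are invented for illustration.

```python
# Hypothetical correlation-rule definition mirroring the choices made in the demo.
ioc_hit_rule = {
    "name": "IOC hit",
    "query": "subject:network AND direction:!inbound AND is_ioc:true AND outcome:success",
    "aggregation": None,            # no counting (a port-scan rule would count distinct ports)
    "outcome": "generate_alert",    # versus e.g. sending an email or creating a case
    "use_case": "external threat - malware",
    "priority": "low",
    "enabled": True,
}

def evaluate(rule, event_matches_query):
    """Return the rule's outcome when an incoming event matches its saved query."""
    if rule["enabled"] and event_matches_query:
        return rule["outcome"]
    return None

print(evaluate(ioc_hit_rule, event_matches_query=True))  # 'generate_alert'
```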

So whether I’m starting from my search experience or threat hunting, I can quickly pivot into building rules. Here, I can see all of the rules in my correlation experience that I’ve created. I can see whether they’re enabled, the severity set, the use case, when they were last modified and created, and who created them. Now, not only can I go and create hundreds of correlation rules, but Exabeam is now delivering correlation rules prepackaged for our customers. We have over a hundred correlation rules at launch covering all of our Exabeam use cases. So it’s a great place to get started, to learn how to build correlation rules, and also to enable a bunch of these for those standard SIEM use cases. We’re now able to deliver this content directly to our customers via the cloud. This really enhances our ability to deliver new content when things like Log4j or emerging threats are happening; we’re able to get our content to our customers in a matter of minutes.

So I have my data coming in, I’m able to search it, and I’ve built my correlation rules. Now, I may want to visualize my logs as well. So we’ve built a brand new platform-level dashboarding service, and I can build visualizations and dashboards on all of the log data that I’m collecting from all of my telemetry sources, as well as all of the log data that Exabeam is generating, so all of that audit log data. I can see here I have various different dashboards that I can create. Exabeam ships prepackaged dashboards, like a summary of all of our analytics anomalies or a summary of case management, so I can see how my SOC is performing. I have a bunch of different dashboards that I can open up here, like our network IOCs, again using our own threat intelligence. I can see things like IOC trends over time.

We have 12 prebuilt chart types that allow you to visualize all of this log data, either to spot new trends, summarize data, or build reports for management. I can see where the IOCs are coming from via this world map. I can see breakdowns of the top destination IPs, domains, and source IP addresses. We can build out various different charts around my RDP connections; maybe I want to summarize, if I don’t have a lot of RDP traffic, via bar charts. We can look at things like summarizing our top MFA logins. With our newest analytics content that we just shipped on Monday, we introduced a bunch of new content related to TDIR for public cloud infrastructure, so modeling behavior around AWS, GCP, and Azure, as well as brand new content for MFA bombing. And we even have these great Sankey charts that really help summarize and visualize data flow across the environment.

So we have this great new platform-level dashboarding capability where customers have access to our prepackaged content, but they also have the ability to create their own visualizations and reports and share those out with the rest of the organization. Now that we’ve done our onboarding, our collection, our search, our dashboarding, and our correlation rules, it’s time to start talking about those TDIR workflows. So this really covered that first pillar that Jeannie talked about, the cloud-native data lake. Let’s now talk about behavior analytics and automating the entire TDIR workflow. And we’re going to do this by starting in Exabeam’s Alert Triage application. As I mentioned, this is where Exabeam takes an alert-centric approach to security. You can either start in our Alert Triage, or you could start in our behavior analytics looking at the highest-risk users and devices.

But regardless of where you start, we’re going to drive you into a common investigation workflow through our case management and our incident responder products. Not only does Alert Triage summarize and bring all of the third-party alerts and correlation rules into a single pane of glass, but we also have what we call Dynamic Alert Prioritization. This is a machine learning algorithm that we’ve built on our new cloud-native analytics engine to help reprioritize alerts. So we can either increase the priority of alerts or decrease the priority of alerts, and we’re looking at multiple different dimensions every time an alert fires, like how rare it is for this particular signature or threat type from this vendor, and whether it is abnormal for the organization or for a specific user to see these alerts. And we all know that there are alerts in our environment, for specific vendors, that are almost always high or critical, and after we triage and investigate them, they’re false positives.

So in Exabeam’s case, we can start to lower those priorities. We also know there are oftentimes alerts that get triggered at low or medium priority that we just don’t have time to look at, and so Exabeam tries to bring those alerts up by giving them higher priority and attention. So I’m going to start by looking at my EDR alerts. I can see here that I have two EDR alerts that have been prioritized as high by Exabeam’s Dynamic Alert Prioritization, and I’m filtered on my high-priority alerts. I could look at all of the EDR alerts that I have from CrowdStrike and Palo Alto; I can see some of them are being lowered in priority and some have no change in priority or status, but I am going to focus on just my high-priority alerts here. So I can see high priority from Exabeam, and I can see the vendor and the product that generated this alert, where it came from, the severity from that vendor, the threat type, and the threat name.
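
The specifics of Dynamic Alert Prioritization aren’t disclosed beyond the dimensions Andy lists, chiefly how rare a signature is for the vendor, for the organization, and for the user. The toy heuristic below is only meant to illustrate the idea of nudging a vendor-assigned priority up or down based on rarity; the data, thresholds, and adjustments are assumptions.

```python
from collections import Counter

# Toy history of previously seen alerts: (vendor, signature, user) -> count.
history = Counter()
history[("CrowdStrike", "Trojan.Generic", "fweber")] += 0   # never seen before
history[("PaloAlto", "Port.Scan", "any_user")] += 500       # very common, usually benign

def adjust_priority(vendor, signature, user, vendor_priority):
    """Nudge a vendor-assigned priority up or down based on how rare the alert is."""
    seen_for_user = history[(vendor, signature, user)]
    seen_for_org = sum(c for (v, s, _), c in history.items() if (v, s) == (vendor, signature))
    levels = ["low", "medium", "high", "critical"]
    idx = levels.index(vendor_priority)
    if seen_for_user == 0 and seen_for_org == 0:
        idx = min(idx + 1, len(levels) - 1)   # rare everywhere: raise the priority
    elif seen_for_org > 100:
        idx = max(idx - 1, 0)                 # noisy and routinely benign: lower it
    return levels[idx]

print(adjust_priority("CrowdStrike", "Trojan.Generic", "fweber", "medium"))  # 'high'
print(adjust_priority("PaloAlto", "Port.Scan", "alice", "medium"))           # 'low'
```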

And you see, I have two alerts here, both for Fredric Weber and a specific device. Exabeam is automatically going to enrich all of these alerts with the information about the users and the devices, even if the source alert didn’t have a username in it. Exabeam’s analytics already knows what host is assigned to what IP and what user is logged into that host. Therefore, even for network alerts that don’t have a user, we can attribute and understand who the user was on the device when that alert fired. And this context is not static; this is live, dynamic context, because when I hover over this, you’re going to see we’re showing you the risk score of both the user and the device when this alert fired. Now, because I have two alerts for the same user, I’m going to start by looking at my medium alert from CrowdStrike, and I can open up my window here.

And the goal of Alert Triage is to give me the information I need to make a quick decision to either resolve, dismiss, or escalate this alert. And ideally, I want to do this in under a minute. So first I’m going to assign this to myself. If I wanted to, I can see the full raw log from that alert from CrowdStrike. I can see that it looks like it ran out of maybe a temporary directory, and barbarian.jar was the file name. I have a link to the CrowdStrike console if I wanted to pivot to investigate there. I have the SHA hash value, and then I get this context about who I’m investigating from an entity perspective. I have a user, Fredric Weber, who’s a web developer, and he’s got 399 points of risk. Exabeam’s default threshold to alert on risky behavior is at 90 points.

So he is quite a bit higher than even the baseline for alerting. And we can see his device has got 80 points of risk, so it’s almost at that 90-point threshold. And we’re going to tell you whether this device was the source or the destination of this particular piece of malware or this detection. Now, the first thing I often do when I’m triaging alerts is ask, “Well, this CrowdStrike alert fired, what else happened around it? Why did this alert fire? Where did it come from? What happened after?” And so Exabeam is already going to show me the nearby anomalies. It’s going to tell me that it added 40 points of risk in the analytics because the user was on a VPN and this is the first time this user has had a Trojan generic detection.

Same with the first time we’ve ever seen this signature across the organization. It was run out of a temp directory, and it was the first time this user ran this process. So already, I have enough information right now that I can say, “You know what? I should probably escalate this. I want to spend a little bit more time investigating.” So I’m going to go ahead and escalate this into a high case. Now, Exabeam is automatically going to create this case for me in the case manager and give me a link that I can open up. But before I pivot into the case, I want to take a look at Fredric’s timeline. I want to better understand whether this is a true positive and where this CrowdStrike alert came from. So we’ve now pivoted from Alert Triage into Exabeam’s Advanced Analytics. This is our UEBA solution, and I get brought right into Exabeam Smart Timelines.

These timelines are a summary of all behavior, both normal and abnormal, for Fredric on this given day. And we build these timelines for every single user every single day, regardless of whether they’ve done anything abnormal. It’s like a DVR of everything any user and device has done in your environment. And by having these pre-built timelines, this is really where we start to talk about automating the investigation. We’re putting all of the puzzle pieces together to give that holistic view of what happened. So I can see this user accessed a web domain, this zoomer.cn domain. This is the first time the user’s ever accessed an IP address in China. It’s a good anomaly. But as an analyst, if I see this anomaly, the first question is going to be, “Well, this is the first time. Where does he normally browse the web? What geolocations?” If I click on China, Exabeam will show me the data model, the histograms that we’ve built that show me what normal behavior looks like.

Thirty percent of the time he’s accessing IPs in internal locations, then Indonesia, Georgia, Saudi Arabia, but never China. Exabeam has over 750 prebuilt data models that we use to build behavior baselines for every user and every device. You can almost think about this as Exabeam building essentially millions of unique correlation rules for every user and every device across all of the different behavioral elements. So the user accesses this odd domain. We then see this process barbarian.jar run. It’s running for the first time for the user, his peer group, Salesforce, and the organization as a whole. And again, if I want to know what processes have been run across my entire org, I can click on this data model and see all of the processes we’ve seen run in the past.
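
The data models themselves are proprietary, but the histogram idea described here, learning a per-user distribution for each behavioral dimension and then flagging first-seen values with some risk points, can be sketched roughly as below. The point values and the 90-point threshold mirror numbers mentioned in the demo; everything else is an assumption for illustration.

```python
from collections import defaultdict, Counter

# Per-user categorical histograms, one per behavioral dimension (e.g. country accessed).
baselines = defaultdict(Counter)   # key: (user, dimension) -> Counter of observed values

def observe(user, dimension, value):
    baselines[(user, dimension)][value] += 1

def score_event(user, dimension, value, points_if_first_seen=20):
    """Return risk points for this observation, then fold it into the baseline."""
    hist = baselines[(user, dimension)]
    points = points_if_first_seen if hist[value] == 0 else 0
    observe(user, dimension, value)
    return points

# Build up "normal" for the user: mostly internal ranges plus a few countries.
for country in ["internal"] * 30 + ["Indonesia"] * 5 + ["Georgia"] * 3 + ["Saudi Arabia"] * 2:
    observe("fweber", "web_country", country)

session_risk = 0
session_risk += score_event("fweber", "web_country", "China")            # first time: +20
session_risk += score_event("fweber", "process_name", "barbarian.jar")   # first time: +20
print(session_risk, session_risk >= 90)   # 40 False -- not yet over the 90-point alert threshold
```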

Now, luckily, when barbarian.jar ran, CrowdStrike kicked in and generated this security alert where we started our investigation with Alert Triage, but it looks like maybe it didn’t catch everything. Because then I see vssadmin and bcdedit run: vssadmin is deleting shadow copies and bcdedit is disabling my recovery mode. I continue to scroll down, and then we see what looks like a classic case of lateral movement. This is exactly what we saw with one of our customers where we were able to uncover and catch Lapsus$ for them. Their employee had unfortunately sold his credentials to Lapsus$. They had logged in and started to move laterally against the Citrix environment, and we saw hundreds of first accesses and abnormal accesses to devices for that user. So we see lateral movement happen, and then at the very end here, we see that Palo Alto alert again; that was the second alert in Alert Triage.

And we see a few more normal behaviors, and the user logs out of the VPN, right? So this has automated a lot of the questions that I would typically have by starting from that CrowdStrike alert. Where did it come from? Likely that Chinese domain. What did it do? Well, I have lateral movement and a lot of other hosts that I likely want to investigate because of that, and hopefully this Palo Alto alert fired and cleaned up that malware. Now, I have a lot more information with which I can pivot over into my case manager to finish up the remediation. This is one of the important things, I think, for a lot of SOCs: a lot of SOCs want to stop playing whack-a-mole with alerts. It’s not just about getting a malware alert and re-imaging that machine. We want to make sure that we’re solving the real problem, understanding where it’s coming from, and being able to stop and prevent users from continuing to get infected.

And that’s what bringing automation into the full TDIR lifecycle allows. So we’re in the case, and it’s automatically been tagged as malware. I have a bunch of fields prepopulated, both about the case as a whole as well as the malware that was found by CrowdStrike. I have all of the entities that are related to this case, like the file involved, the devices that were accessed, and the users involved. And all of this has been dynamically added in here based on our playbooks. Our turnkey malware playbook automatically ran when the malware case was created, and that’s going to detonate these malware files in a sandbox and provide a lot more intelligence around that malware. Auto case enrichment has happened, as well as incident classification, classifying this as malware. Now, I may be a seasoned security analyst, so I know what I should be doing as next steps, but oftentimes I have junior folks on my team.

And so Exabeam provides a prescriptive task list of what we think an analyst should do for detection and analysis, containment, eradication, recovery, and that post-incident activity. So I can expand this and see some of those tasks that Exabeam believes an analyst should perform and what questions they should get answers for: determine the malware details, or review evidence for suspicious outbound network traffic. I can assign these to analysts on my team, I can set due dates, or I can add my own tasks. Now, one of the great things about this case is it’s also dynamic. If you remember, we saw lateral movement in that timeline, so I can now add lateral movement as another incident type. And when I do that, it’ll automatically add a new section into my case where I may want to fill out more details, and it’ll also add new tasks to these task lists so my analysts can make sure that they, again, are responding to the entire threat.

Now, as I mentioned, some of our prebuilt playbooks already ran, and I can see the output of these in our workbench. I can see each individual action that ran in those playbooks. And every time an action runs and returns results, it returns a card on screen with those results. So here, this is where it added all of those entities to my case automatically for me. I can scroll down and I can see information about the user and the context about who this user is that I’m investigating. I can see things like all of the MITRE TTPs that were found in that timeline and in that session. And so I have now enough information where if I wanted to, I could run more playbooks, maybe my threat intelligence playbook, or I could create a playbook to remediate these types of threats. So I could run another playbook or I could simply run an action by quarantining a host or blocking a URL.

And this allows me to now remediate this threat. So when you think about this, this is all about using analytics and automation to go from an alert in Alert Triage all the way to remediation in a matter of five or 10 minutes, and to fully remediate the entire threat, not just a piece of it. This is really where we’re putting machines to work on the analytics and the automation side. Now, one last application I want to show you that’s brand new: once you have all of this data coming into the Exabeam platform and you’re able to do effective TDIR, how do you know how effective your coverage is and how much value you’re getting out of the Exabeam platform? Well, we’ve built a product called Outcomes Navigator to help answer just those questions. And you’re going to see in Outcomes Navigator that we’ve aligned these outcomes to our three categories of use cases: compromised insider, malicious insider, and external threats.

And this is going to show organizations how much coverage they have against any one of these use cases based on the data that’s being ingested. We’ve taken the approach that we don’t score coverage from one to a hundred; you’re never done in security, you’re never 100% covered. We’re saying you’re at good, better, or best. And so I can hover over any of these, like lateral movement, and I can see good coverage. I can click to view details. It’s going to tell me what lateral movement is. It’s going to show me how the scores are calculated. But essentially, I have 15 or 16 different product categories feeding the Exabeam platform, out of 56 total, that are driving this lateral movement use case. I can scroll down and I can see each one of those different product categories. So I can see I have access management logs from Auth0 and Okta.

I have a bunch of endpoint auditing logs that are coming in, and Exabeam is going to show me the parser calibration tier: how well is this data being normalized against our common information model? The better the data is aligned and normalized (tier one is the best), the more outcomes and value I can get. So we’re not about telling customers that in order to get better coverage, you just need to constantly bring more data in. As Jeannie mentioned at the very beginning, it’s about collecting the right data. And so we want customers to collect the right data and make sure they’re getting the most value out of the data that’s being collected. I can also see below here the amount of coverage and the outcomes that I’m getting from an analytics rules perspective. So for lateral movement, I have 242 available anomaly detections, and I have one dashboard related to lateral movement that I can use based on the data that’s coming in.
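
How Outcomes Navigator actually computes good, better, or best isn’t documented in the webinar beyond the two signals mentioned: how many relevant product categories are feeding a use case and how well their parsers are calibrated. The sketch below is only a rough illustration of how a rating could be derived from those two inputs; the tiers, categories, and cutoffs are assumptions.

```python
# Hypothetical coverage inputs for a use case like lateral movement.
feeding_products = {            # product category -> parser calibration tier (1 is best)
    "access management": 1,
    "endpoint auditing": 2,
    "network firewall": 1,
    "vpn": 3,
}
relevant_categories_total = 56  # total product categories that could drive this use case

def coverage_rating(products, total):
    """Rate coverage from breadth of feeds and how well-normalized they are."""
    breadth = len(products) / total                                               # share of relevant feeds present
    quality = sum(1 for tier in products.values() if tier <= 2) / len(products)   # well-calibrated share
    score = breadth * quality
    if score > 0.5:
        return "best"
    if score > 0.2:
        return "better"
    return "good"

print(coverage_rating(feeding_products, relevant_categories_total))  # 'good'
```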

Now, if I wanted to get more coverage, I can pivot into the recommendations tab, which, again, we’re going to show customers. We want to first tell you that if you have logs in here that are not at calibration tier one or two, you want to look at those logs through Log Stream again to make sure that they are parsed and normalized correctly. We’re also going to tell you, if you did want to increase the parser calibration tier for, let’s say, Auth0, all of the other use cases that would impact. So again, we want to help customers understand that in some cases, I can bring in one log source that’s going to drive outcomes across the platform, versus sticking with one log that’s only going to drive one use case, so I can get more value out of that data. We’ll also prescribe additional products that you could onboard into Exabeam that are missing today and that, again, would enhance that use case coverage.

So this is a great new application. This is something that CISOs and SOC managers can use on a quarterly basis by exporting these and sharing them with management and saying, “You know what? Next quarter we’re going to focus on our data leakage use case.” And then three months from now, they can actually see, based on making sure that the data’s coming in correctly, or adding new log sources, how they’ve continued to expand the coverage and capabilities across these use cases. So I know that’s a lot that we just covered. We’ve released a ton of new applications on this new cloud-native platform that we’re super excited about, and I hope you were able to see here how Exabeam’s new Security Operations Platform really solves those three core components of what organizations need to fight these credential-based attacks and the emerging threats that continue to happen. So with that, I’m going to stop sharing my screen. I think we’re going to open it up for Q&A.

Jeannie Warner: 

I’ve only got one question that I can see right now, which says, “I’m a current user, does Log Stream replace APG?” That’s our old Auto Parser Generator. So, yes, the answer is once you upgrade to the new platform on GCP, you’ll have full access to the Log Stream that Andy was demonstrating. We really think you’re going to love the ease and improvement. It also has the ability to monitor all of your log sources and the health of their connections more closely. So Log Stream is so much better than APG; you’re going to like it. Somebody had a question about dashboards: “Do you need professional services to build them?” I don’t think you will. I think you’ll like the step-by-step instructions. The guys and gals have been building classes on how to make your own dashboards.

So between the videos and written instructions, which are going to be up on the community very soon, I think it’s cool. I think anyone with a basic idea of what they look at every day will have a chance, and it’ll walk you through how to build your own dashboards. So, any other questions out there that Andy or I can answer for you? I think that’s it, then. Thank you so much for joining us today. We look forward to having you on a future webinar. And again, this will be sent as a recording to all of you who attended. So thank you very much for joining us, and have a great day.
