The Hidden Threat: Understanding Insider Risks
Webinar Transcript | Air Date April 11, 2023
Cynthia Gonzalez (00:04):
Thank you for joining. We'll get the webinar started momentarily. Good morning, afternoon, or evening, depending on where and when you're watching our webcast. We'll get started in just a minute, but first I want to cover some housekeeping information. Today's webinar will be recorded, and we'll email you the link to the recording after the live event. Secondly, we'll have a Q&A session at the end of the webinar, so please submit any questions in the webinar sidebar. You're welcome to ask them as we go in the chat, but in case we miss them, we'll review at the end as well. Thanks. We'll start momentarily. Welcome to our webinar entitled The Hidden Threat. I'm Cynthia Gonzalez, and today I will be joined by Jordan Forbes, technical account manager. Jordan, would you like to share a little bit about yourself?
Jordan Forbes (01:06):
Yeah, of course, Cynthia. Nice to meet everyone. Thank you very much for joining this webinar. As Cynthia mentioned, my name's Jordan Forbes. I'm a technical account manager here at Exabeam, and I've been at Exabeam just over a year. I've held various cybersecurity positions over the last 10-plus years. I was previously at an Exabeam customer, a Fortune 500 company that first bought the product back in 2018, so I worked hand in hand with the product as a customer and have now finally made the transition over to Exabeam itself. Thank you very much for joining, and I look forward to demoing for you later in this webinar.
Cynthia Gonzalez (01:37):
Now let's get started. So just a quick agenda overview of what we'll cover today: what is an insider threat, the problem and the dangers of insider threats, then a deeper dive into user and entity behavior analytics. We'll give an example scenario, and then Jordan will walk us through the demo as he just mentioned. So let's kick it off. Gartner defines an insider threat as a malicious, careless, or negligent threat to an organization that comes from people within the organization. This can be employees, former employees, contractors, or business associates: anyone who has information concerning the organization's security practices, data, and computer systems. The threat could involve fraud, theft of confidential or commercially valuable information, or sabotage of computer systems. There are really three types of insider threats. The first is compromised credentials. A common example is an employee whose computer has been infected with malware, which typically happens via phishing scams or by clicking on links that trigger malware downloads.
Compromised insider machines can then be used as a home base for cybercriminals, from which they can scan file shares, escalate privileges, infect other systems, and more. This is the case of the Twitter breach, where attackers used a phone spear phishing attack to gain access to employee credentials and their internal network. The attackers managed to gain information about Twitter's processes and target employees with access to account support tools to hack high-profile accounts and spread a cryptocurrency scam that earned $120,000. Another type of insider threat is the malicious user: an employee or contractor who knowingly looks to steal information or disrupt operations. This could be an opportunist looking for ways to steal information that they can sell or which can help them in their career, or a disgruntled employee looking for ways to hurt, punish, or embarrass their employer. Examples of malicious users are the various Apple engineers who were charged with data theft for stealing driverless car secrets for a China-based company.
And the last type of insider threat is the careless user: an employee who does not follow proper IT procedures. An example is someone who leaves their computer without logging out, or an administrator who did not change a default password or failed to apply a security patch. One example of a careless user is a data analyst who, without authorization, took home a hard drive with personal data from 26.5 million US military veterans. The hard drive was later stolen from their home during a burglary. Layoffs, mergers, and moves are some of the reasons insider threats are real for you; there are plenty of opportunities for insider threats to arise. Layoffs, especially those at the end of the year, can create many unfortunate bad behavior patterns. Employees who just want to keep some of their good designs tend to email themselves successful documents, pitches, and notes, or in the case of surprise layoffs, they might want to send themselves customer lists, financial information, or more. New hires: when a new hire comes on board, it's a dangerous time for the whole network. They're being bombarded with "install this" and "update that" messages from IT and security. Malicious actors keep an eye on changes on LinkedIn and other job sites and time their crafted phishing emails accordingly. How is a new employee to know whether your team uses Box, which is legitimate, or Adobe, which may not be legitimate, and what links to click?
Mergers and acquisitions: there's no such thing as a homogenous network, and this is complicated by adding new offices, branches, and organizations with a main line directly into your network, especially when a larger, mature organization with a security department acquires a much smaller group who may or may not have any security hardening or processes in place. Bribery or disgruntlement: LAPSUS$ taught the world that everyone has a price, and the price for legitimate credentials on the market is fairly affordable, saving time and effort on the part of malicious actors. Likewise, disgruntled employees may attempt anything from sabotage to IP theft, funneling out important information. So what are some financial and reputational consequences of insider threats? Trusted insiders have authorized access to sensitive information and can cause significant harm to your organization. Whether they mean to or not, the impact can be severe. The 2022 Ponemon Cost of Insider Threats Global Report reports that insider threat incidents cost an organization $13.8 million. However, insider threats are incredibly hard to detect due to the complexities of pinpointing abnormal behavior. That's really where user and entity behavior analytics comes in.
Oftentimes we don't have the visibility or setup for some of these insider threats. One of the key things about understanding what type of insider threat we might have on our hands is to understand whether it is compromised, malicious, or careless. So how do we go about understanding intent? Really, what we need to be able to do is see the full story with the right context. It's not just taking a single indicator, alert, or event and being able to say, "I know exactly what this is." There are a lot of questions that we need to go through and answer before we can correctly classify which kind of threat type this is. Let's say I have two users, and they both generate some kind of alert or event, and we want to figure out: is it a threat, and what type of threat is it?
In Scenario one, a user uploads two gigs of data to Dropbox and in scenario two I have an admin that sets up a scheduled task on a server. This is the only information you have. What’s your thought process here? Which one of these do you think is a threat or are they both a threat? Initially you may try to decide which to respond to first and it’s probably going to be scenario one, right? User uploads two gigs to Dropbox. That is potentially data going out of your company that needs to be monitored. Scenario two is a little bit less concerning just because generally an admin has access to the server.
I'd probably put that one on the back burner and really focus on the data exfiltration. But if you have the full story with the right context, your mindset may change. For scenario one, the user's role is data analyst, and looking at his past behavior over the last 30 days, every Friday around the same time he's doing bulk uploads to Dropbox, and other users in his department with really similar titles are doing the same exact thing. Let's uncover a little bit more about scenario two. This user has never accessed a server before. He scheduled a task, he performed this activity on a Sunday morning, and he has never logged in on a Sunday morning before. He recently had a poor performance review, and he is also starting to search for jobs using sites like monster.com while he's working. With the right context, the first scenario is probably still something that I would respond to, but I wouldn't prioritize scenario one in this instance, because the context makes me believe that this is some sort of data archiving, that there's a process. I'm more concerned about scenario two because of the level of access. And if this admin is on a poor performance review or a performance improvement plan and looking at jobs while at work, that's going to raise my suspicion that maybe the scheduled task isn't something that's part of his day-to-day responsibilities.
Monitoring behavior helps provide the context. Detecting insider threats effectively requires a deep understanding of the normal access patterns for each user within an organization. This knowledge is essential for identifying abnormalities and potential breaches. A behavior-based approach is key to detecting and thwarting insider threats. The key to effective detection is being able to differentiate between ordinary and unusual behavior. To achieve this, it is necessary to understand normal access patterns. User and entity behavior analytics establishes a normal behavioral baseline for each user. Once you understand normal behavior, you can detect deviations from the pattern, also known as abnormal behavior. In this example, Barbara connects to VPN from China for the first time. From the data model, we can see that while connecting to VPN from the US, Canada, Germany, and Ukraine is common, this is the first time she has connected to VPN from China, so it might be worth investigating.
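The first-time-value check in Barbara's example can be sketched conceptually in a few lines of Python. The baseline data and function names below are illustrative assumptions, not Exabeam's actual implementation:

```python
from collections import Counter

# Illustrative per-user baseline: how often each VPN source country has
# been seen for this user during a training window (assumed data).
vpn_baseline = {
    "barbara": Counter({"US": 41, "Canada": 12, "Germany": 7, "Ukraine": 5}),
}

def first_time_country(user, country):
    """Flag a VPN connection from a country never before seen for this user."""
    history = vpn_baseline.get(user, Counter())
    return history[country] == 0

print(first_time_country("barbara", "Germany"))  # False: a common location
print(first_time_country("barbara", "China"))    # True: first time, worth investigating
```

A real system would also age out stale history and score the anomaly rather than return a plain yes/no, but the core idea is the same: compare each new observation against what the baseline has already seen.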
So how do you handle insider threats with Exabeam? Detection with UEBA (user and entity behavior analytics) works by identifying high-risk and anomalous user and entity activity, like we saw with the example for Barbara. This happens by using our machine learning to baseline normal activity for all users and entities in an environment. Once the baseline's available, the system automatically detects deviations compared to that baseline, the baseline of a peer group, and that of the organization as a whole, and assigns that activity a risk score. To prioritize by risk, Exabeam aggregates security alerts and events together into a user or entity's timeline. Risk scores are assigned to each anomalous event or alert and then aggregated within the timeline, escalating the highest-risk users and assets to the top for analyst review. From the dashboard, an analyst can easily identify notable users or assets. An analyst could also create watchlists to track high-risk users at a glance, then investigate with Smart Timelines. Exabeam Smart Timelines offer a comprehensive view of events, enabling security teams to answer crucial questions such as: how did the attacker gain entry, which locations and assets were accessed during the incident, and how many assets were accessed? By visually representing the sequence of user and device activity, Smart Timelines drastically reduce the time and effort required for manual correlation.
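That aggregation step can be illustrated with a toy sketch. The event names, scores, and the 90 threshold below mirror numbers mentioned in this webinar but are otherwise invented; this is not a real rule set:

```python
# Hypothetical anomalous events, each already assigned a risk score by a rule.
events = [
    {"user": "b.wells", "rule": "risk transfer from past session", "score": 12},
    {"user": "b.wells", "rule": "email to public domain over 25 MB", "score": 20},
    {"user": "b.wells", "rule": "abnormal email size for user", "score": 25},
    {"user": "b.wells", "rule": "abnormal file access", "score": 40},
    {"user": "a.smith", "rule": "job search domain accessed", "score": 5},
]

NOTABLE_THRESHOLD = 90  # configurable, like every risk point discussed in the demo

def rank_users(events):
    """Aggregate per-user risk and sort the highest-risk users to the top."""
    totals = {}
    for e in events:
        totals[e["user"]] = totals.get(e["user"], 0) + e["score"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_users(events)
notable = [user for user, score in ranked if score >= NOTABLE_THRESHOLD]
print(ranked)   # [('b.wells', 97), ('a.smith', 5)]
print(notable)  # ['b.wells']: crossed the notable threshold
```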
So let's walk through an example before Jordan does the demo. This is an example of what the timeline looks like. For all anomalies detected, the timeline stitches together both normal and abnormal behavior for users and machines. These timelines include all the information an analyst needs to perform a rapid investigation, including normal and abnormal behavior as well as the surrounding context: what happened before and after an alert, and whether the alert maps to a MITRE ATT&CK tactic, technique, or procedure. In this example you can see a timeline of the suspicious activity of cloning GitHub, and if we scroll through, we will see that Gary has put in his notice, giving us the context we need to understand his activity a little bit better. This is probably not something he should be doing, and probably something we'll need to investigate further. So now we'll move over to Jordan, who will walk you through a demo of how to detect, investigate, and respond to insider threats within Exabeam.
Jordan Forbes (13:06):
Perfect. Cynthia, thank you very much. I'm going to go ahead and share my screen. Thank you very much, Cynthia, for the breakdown there and the great slides, and thank you everyone for joining here today. So as you can see, this is the Exabeam security analytics platform. This provides key visibility into the threats across your entire environment, as well as the ability to triage and respond to those threats. This demo today is going to be really focused on malicious insiders. What are the key challenges around malicious insiders? Well, if we think about malicious insiders in general, they potentially already have access to the environment. They already have access to key parts of your environment and sensitive information. So it's really important to understand what their behavior is, and when they start deviating from the norm, that's really going to give you the capability to detect when somebody who's trusted within your organization starts to act differently and maybe starts to exfiltrate data or pull data from sensitive repositories.
Now, straight away we can see as we look along the top here, we've got our incidents in the incident queue. Because Exabeam Security Analytics is so closely integrated with our Case Manager and Incident Responder, we can see the incidents that are assigned to me that I need to work on, and also those for the team, my tier-one analyst group, on the right-hand side. Now, specifically what we're going to look at here is our overall dashboard that has different watchlists. We're going to get into watchlists in a little bit more detail as we go through this demo today. The two we're going to specifically focus on are notable users and notable assets, which you can see on the left-hand side of your screen. The notable users and notable assets are the highest-risk individuals for this particular day or particular week across your entire environment.
Why? Because they've reached a threshold of 90. Exabeam Security Analytics is a risk-based platform: it assigns risk when users or entities start to act abnormally or start to deviate from their normal behavior. Now, this threshold of 90 is completely configurable, as is every risk point across the entire system. We have 1,700 rules out of the box that are all configurable to meet the needs of your organization. Keep in mind as we go through this today that all the capabilities you see with our notable user here in our exfiltration demo, we have the same capabilities for assets. So if an asset starts to act abnormally, maybe it's connecting out to a command and control, maybe it's sending data to a brand new geolocation that we've never seen that asset connect to before.
These are all abnormal data points that are going to add up to a risk score and potentially get that asset up to that 90 threshold. So what we're going to do is drill into a particular individual who's become notable and start to look at what that means from a detection perspective. We're going to look at Billie Wells, right across the top. Immediately we can see an information card within Billie's profile. This gives us all the information I need as an analyst to understand Billie's role within the organization. I can see Billie's a civil engineer based in Los Angeles. I know his department: he's part of the engineering department. I also know his contact information, and I can see when Billie was first seen and last seen. Now, this is really important, because if this is somebody who's been in our organization for an extended period of time, we're expecting that user's behavior to have a baseline.
If this is a brand new user to the environment who only arrived a few days ago, then maybe their behavior's going to be slightly different; we're still going to be baselining. Also within this information card, something that's really important and what really separates Exabeam from its competitors is this top peer group. Exabeam does what we call dynamic peer grouping. We use different AD attributes to see what groups this person is a part of. For this individual, Billie, that we're looking at, he's part of the Design peer group, he's part of AutoCAD, and we're also doing peer grouping based on his manager. Now, why is that so important? Well, when we think about the malicious insider use case, not only are we baselining this individual's behavior against itself, we're also looking to see how that compares to the peers within the same group.
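Dynamic peer grouping can be sketched as follows. This is a toy illustration only: the directory attributes and job-search counts are invented, and it assumes peers are simply users sharing an AD group, which is one of several attributes (manager, department) the talk mentions:

```python
# Invented directory attributes, standing in for AD data.
directory = {
    "b.wells":    {"manager": "j.doe", "groups": {"AutoCAD", "Design"}},
    "c.chan":     {"manager": "j.doe", "groups": {"AutoCAD"}},
    "s.peterson": {"manager": "m.lee", "groups": {"HR", "Recruiting"}},
}

def peers_of(user):
    """Users who share at least one AD group with `user` (a dynamic peer group)."""
    mine = directory[user]["groups"]
    return {u for u, attrs in directory.items() if u != user and attrs["groups"] & mine}

# Invented baseline: job-search events per user over the training window.
job_search_counts = {"s.peterson": 57}

def abnormal_for_user_and_peers(user, counts):
    """True if the activity is unseen for both the user and every one of their peers."""
    group = peers_of(user) | {user}
    return all(counts.get(member, 0) == 0 for member in group)

print(abnormal_for_user_and_peers("b.wells", job_search_counts))    # True: engineers don't job-search
print(abnormal_for_user_and_peers("s.peterson", job_search_counts)) # False: normal for HR
```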
That gives you a multi-dimensional view, because as an analyst, if I'm deviating from my own behavior and I'm also doing things that are very different from everyone else in my group, that's a significant data point. That's something I need to know, and Exabeam has a whole suite of content that's going to give you that visibility. So again, another key point as we start to build this picture of who this individual is and the detections we have. Now that we've looked at the information card, let's scroll down and take a look at the user's risk trend. The user's risk trend is really powerful for an analyst, because this is when we start to understand when this user deviated from the norm. It shows us that in previous weeks this user had very normal behavior: he hasn't deviated from the norm, and there are no risk scores tied to any of this behavior.
Now all of a sudden there's been a spike. On March 3rd we can see this user starts to deviate from the norm. Back at the start of the demo, as I mentioned, the challenge a lot of organizations have with detecting malicious insiders is that they already have access to those locations, potentially access to databases, et cetera. How do we then know that our user is starting to act abnormally? But we can see here that this user's behavior was very standard, and now all of a sudden he's starting to deviate from the norm. So what's caused that? Let's have a look. Within our risk reasons, this tells me as an analyst every single abnormal trigger that's happened within this session. So immediately I can come down and start to build a picture of what's happened. If I just look at this, the different colors indicate rule triggers from different data sources.
For example, we have a rule here: user has accessed a job search domain. This has happened three times. We can see this user is accessing glassdoor.com, Monster, and Indeed, and it's assigning a risk score at every single point, a score of five each. So again, think about our malicious insider: this user is potentially starting to look for different jobs. Let's have a look at the event and see if we've got a little bit more information. We can see here this user is potentially looking for engineering jobs on glassdoor.com. This is abnormal for the individual. Let's continue to scroll down and see what else we've got. Well, as we can see here, not only is it abnormal activity for Billie himself, it's also abnormal for the peer group. Billie's part of the AutoCAD peer group, as we saw.
Now we're actually seeing that it's abnormal for anyone in the AutoCAD peer group to do job search activity. That gives me an additional lens to see that this is really starting to look a little bit abnormal to me. Maybe I need to escalate this a little bit and take a closer look at what's going on. Now, a key thing about what we do here at Exabeam is all the blue hyperlinks that take us into an additional view, an additional bit of information. This particular link here takes us into the data model. This data model shows us job search activities of users in the peer group. We've got different groups here: we've got Peterson, who's an HR manager, and we've got a human resource coordinator. So again, we expect people in human resources, maybe recruitment, to do the job search function.
That's normal behavior. However, somebody in the AutoCAD group we don't expect. Now, we're going to get into a little bit more detail on data models coming up and see the power of that. Exabeam has about 700 data models that are being built in real time in the backend, day in, day out. So now that we've seen some suspicious job search activity, this is really piquing our interest. Looking back at our user risk trend, this user on the 4th of March is also exhibiting even further abnormal activity. Let's see what it is. We can see here that on the 4th of March this user is at a risk score of 412, so it's actually escalated even further; the abnormal triggers are even more numerous than they were yesterday. We've seen the power of the risk reasons; let's pivot into the Smart Timeline and see what that's going to give us.
As we can see, these are Exabeam Smart Timelines. The Smart Timelines are so powerful. When I was an Exabeam customer, this really changed the game for us. Whether you're a SOC analyst or an insider threat analyst, when you're investigating an incident, what's the first thing you do? You build a timeline. You have to build a timeline of events to understand what is happening and put that in order. Well, our Smart Timelines already do that for you. First and foremost, we can see on Saturday, March 4th, we have the risk transfer from the past session. This is another key component within the product: if somebody's been acting abnormally the day before, that's going to add on 12 points of risk. So immediately we know Billie had some suspicious activity the day before; he's going to start his session with a risk score of 12.
So let's see what else has happened in this session. Again, we're not having to keep pivoting and querying; the timeline is going to tell us the story. Let's scroll down and have a look. As I'm looking down, I can see an email sent to Billie Wells at [email protected]. This one event is actually triggering all of these different rules. I can see that the mail was sent, I can see the subject, I can see the size of the email, I can see the attachment. Everything's there for me. What's that telling me as an analyst? Well, I have a fact-based rule telling me that this user has sent an email with 26 megs of data to a public email domain; that's assigning a risk score of 20 straight away. I also have a model-based rule.
So again, thinking back to our models, we have "this user has sent an unusually large amount of data in a single email." We expect Billie to send emails around the nine KB mark; this one's actually 26.2 megs. And if I click into my data insights, that's going to show me my model, which immediately tells me how far outside the norm that is. This is what we normally see, what's normal for Billie: about that nine KB. This is hugely outside the norm for this individual. This is a key point when we think about malicious insiders, because they have access to this data, they have access to email, et cetera. So him sending an email out of the organization maybe isn't abnormal; the size of the email is. We know the size of the emails across an entire session and across individual emails, so we know what to expect. Going even further, we can say this is the first time Billie has sent an email to icloud.com. That's an additional data point I need as an analyst. Keep in mind as well, we've got all this information here on the event, and if I need more as an analyst, I can click on View Logs and see all the information in the database that really tells me everything I need to know about this particular event.
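That model-based size check can be approximated with a simple statistical rule. This is a hedged sketch: the historical sizes are invented around the ~9 KB norm from the demo, and a standard-deviation cutoff stands in for whatever model Exabeam actually uses:

```python
import statistics

# Invented history of Billie's outbound email sizes, clustered near 9 KB.
history_bytes = [8_100, 9_300, 8_800, 9_900, 9_200, 8_600]

def unusually_large(size_bytes, history, k=6.0):
    """Flag an email more than k standard deviations above the historical mean."""
    mean = statistics.mean(history)
    spread = statistics.stdev(history)
    return size_bytes > mean + k * spread

print(unusually_large(9_500, history_bytes))       # False: in line with the norm
print(unusually_large(26_200_000, history_bytes))  # True: the 26.2 MB outlier
```

Email sizes are heavy-tailed in practice, so a production model would likely work on a log scale or use quantiles rather than a plain z-score, but the principle (compare the observation to a per-user distribution) is the same.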
Now, depending on the package that you have with us, you'll obviously be able to see the full raw logs and pivot into our search product as well, so you can do part of your investigation in there too. However, as we continue to come down through the timeline, the timeline starts to answer questions that an analyst may not have known to ask. Because if you think about that exfiltration via email, you'd then have to pivot and query and ask what else happened in that time within that investigation. Well, this is going to tell us. So now as we scroll down, we see Billie exhibiting abnormal file activity: he's accessing the patents documents. Again, that comes back to our abnormal behavior. Billie doesn't normally access files from this particular asset.
This could be an abnormal amount of data being read or downloaded, and that's going to give us an additional data point as well. Let's continue to scroll down. As we come down, we can see more email exfiltration. Again, Billie's sending over 20 megs, and over 20 megs again. So I'm getting the full picture as an analyst to say this is really something I'm going to need to investigate, and I'm going to have to escalate this and take response actions to really clean this up. So let's finish off this timeline. We can see Billie is continuing to access additional files within the organization. And then to finish this off, we've got the session end. This gives us a summation of all of the bytes across the emails that Billie sent to the personal account: across the entire session we've got 191 megs.
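That session-end summation amounts to totaling the outbound bytes and comparing against the user's typical session total. A minimal sketch, with per-email sizes invented so they add up to the 191 MB from the demo and compared against the ~115 KB norm cited later:

```python
# Invented per-email byte counts for the session (sums to 191 MB).
session_email_bytes = [26_200_000, 21_500_000, 24_800_000, 118_500_000]

TYPICAL_SESSION_BYTES = 115_000  # the roughly 115 KB per-session norm from the demo

def session_total_and_ratio(email_bytes, typical=TYPICAL_SESSION_BYTES):
    """Total bytes emailed this session, and how far above the user's norm that is."""
    total = sum(email_bytes)
    return total, total / typical

total, ratio = session_total_and_ratio(session_email_bytes)
print(f"{total / 1_000_000:.0f} MB sent this session, about {ratio:.0f}x the norm")
```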
Normally we expect about 115 KB. This is a huge deviation from the norm, and as an analyst, I know this is something I'm going to have to look at, maybe even escalate, and start to maybe remove that access. So what we've seen here are multiple different opportunities to detect something that may be more difficult in a traditional SIEM, because we've got an additional lens to see what we expect: this is the baseline for the individual, this is the baseline for the peer group, and this user is significantly deviating from the norm. This is adding risk points all the time and bubbling up those high-fidelity alerts, so the analyst can action them as they need. Just to finish this off with this particular data model that we can see, we want to think about more holistic and different opportunities within our data models.
Every single user you see, every single asset, that's what we call our data insights. That's another key thing about Exabeam: we're going to show you everything that's normal. It's not a black box, it's not hidden behind the scenes. You're going to see everything that's normal, and we're going to bubble up what's abnormal. Within a couple of clicks here we can pivot into Billie's data insights. Now, data insights are all of the models that are trained for this user. Again, you don't have to do anything; just get the data into Exabeam. We do all the modeling for you in the backend and then trigger those rules. So let's look at a couple of different models. Putting our malicious insider use case hat on, maybe I want to see database activity. I'm working with multiple customers now who are looking at their database logs to see whether somebody is accessing a different database table than they normally do.
Is somebody conducting a different database operation than what's normal for them, or abnormal for their peer group? On top of that, there are the response sizes: if somebody's doing a database query and pulling back data from that database, we can see firsthand the size of the responses and what's abnormal for that individual. So again, it's a key point: we know what's normal, and we're going to show you exactly what's abnormal. Continuing down through our models, we've seen email; we saw that firsthand in the timeline, and we've got our email models behind the scenes. Again, attachments per email: if somebody adds multiple attachments when they don't normally do that, that's an additional data point, and we're going to give you that visibility. Let's finish this particular part off. When we think about exfiltration via different methods, it doesn't matter whether malicious insiders try to exfiltrate via email, web, printers, or USB; we have the same models, the same detections. They have no route out of the network, because we have all those data points covered, and we're going to bubble it up when somebody starts to act abnormally. As we can see here in web activity, if this user starts to access a file share website that they haven't used before, then we're going to detect that and bubble that up.
So, concluding this investigation of Billie Wells: now that the timeline's already painted the picture, we know we're going to take some remediation and response. Well, for every notable user there's an incident. For b.wells, we've created an incident that the analyst can use, with a full set of prescriptive tasks to really help their investigation. If we scroll down to the bottom here, we can see all of these tasks that are available. The analyst can go through and make sure they've completed the necessary tasks for this particular investigation. And say we want to automate some of this: we can move up to our workbench, and within our workbench we have a whole plethora of options and playbooks that can be run. So let's see. If I'm thinking of an insider threat case, and I know this individual is exfiltrating data, which is clear, maybe I want to reset their password; I can reset the password for the individual.
Maybe I want to add the user to a watchlist. Very simply, I could add the user to a watchlist, or the asset if it was an asset investigation. We can say add user b.wells to a watchlist, Suspected Leavers. Now all of a sudden we're automating that response, and we even have a playbook for any individual exhibiting this behavior that has a series of tasks. This really cleans up and automates those tasks for the analyst. Very, very powerful. So again, we run this and go back to our Exabeam homepage, and we can see b.wells is now part of this Suspected Leavers watchlist. Now that we've concluded that investigation, the next thing we need to be thinking about is how we become proactive. If you think about our watchlists, the watchlists allow us to detect whether it be a suspected leaver, like it is in this scenario.
It could be a privileged user, a critical server, or a critical system in the environment. We don't want to wait until that user or asset reaches that notable status. If Billie hadn't reached that notable threshold and had just started to deviate from the norm with a lower score that day, it's still going to pop up on our radar; the analyst is going to be able to see it. This gives us a single pane of glass, an additional lens to see when somebody's starting to deviate from the norm. That's the power of the watchlist. Now, to become even more proactive, we can leverage our threat hunter. This is where we start to change the mindset. We have these notables in place, we have our watchlists, and now we're going to add proactive checks on top of this that any analyst can do. Within a few clicks, if we scroll up to the top, we've got our threat hunter. There's a huge range of options within the threat hunter; we're not going to get into them all on today's webinar. However, if we look at our saved searches, I populated one here earlier around users who send emails to personal email accounts, similar to Billie. Let's see if anyone else is exhibiting the same behavior as Billie.
No. Okay, so just Billie for this particular search. However, this is maybe a search I want my analysts to do daily, weekly, just on a cadence that meets the needs of the SOC or the insider threat program, whatever makes most sense. Now, what's so key here is you don't need to understand complex query languages or build multiple queries and string them all together. Analysts of any level can do this. It's four clicks away, it's repeatable, and it levels up your entire analyst workforce, because now I've got tier one analysts being able to threat hunt like a tier three. They can escalate, they can find these particular examples. This is really, really powerful, and it was a complete game changer for me when I was an Exabeam customer, because we had a small team and we needed all of our analysts to be leveled up. That's a common challenge a lot of organizations face. We were able to run these checks, so analyst one or analyst two could run the same checks as a more senior analyst and really come to the same conclusions, because Exabeam is painting the picture; it's telling us the full story of what we need.
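To make the idea of a repeatable saved search concrete, here is a minimal conceptual sketch in Python. This is not Exabeam's query engine or API; the event fields, the `SavedSearch` class, and the list of personal-mail domains are all invented for illustration. The point is that the search is a named, pre-built predicate any analyst can re-run, rather than a query they have to write:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical free-mail domains; a real list would be maintained centrally.
PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

@dataclass
class SavedSearch:
    """A named, repeatable query: the analyst clicks it, they don't write it."""
    name: str
    predicate: Callable[[dict], bool]

    def run(self, events: list[dict]) -> list[dict]:
        return [e for e in events if self.predicate(e)]

emails_to_personal = SavedSearch(
    name="users who send emails to personal email accounts",
    predicate=lambda e: e["type"] == "email-send"
                        and e["recipient_domain"] in PERSONAL_DOMAINS,
)

events = [
    {"user": "b.wells", "type": "email-send", "recipient_domain": "gmail.com"},
    {"user": "j.doe", "type": "email-send", "recipient_domain": "corp.example.com"},
]
hits = emails_to_personal.run(events)
# Only b.wells matches, mirroring the result in the demo.
```

Because the predicate is fixed and stored, a tier one analyst running it on Monday gets exactly the same results a tier three analyst would, which is the leveling-up effect described above.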
Adding on top of this, going back to the user labels we spoke about as well, we can go a step beyond that. We can add in a suspected leaver label. So now I'm actually building a suspected leavers user label, which we can see here for Billie Wells, and I can threat hunt for anyone who has that particular label. Think about how complex that is in traditional SIEMs; you'd have to build a query for everyone who's part of that group. Here, one click brings me to the same list. Now again, it's only Billie that has this particular label, but it just shows you the power of what we can do around our threat hunting. Now, just to finish off here, finishing off again with our suspected leavers. This was something I used when I was a customer as well. We had an internal process: somebody hands in their notice to the organization, they get added to an AD group. That AD group syncs with Exabeam every single evening and updates this particular watch list. My analysts can then threat hunt on top of that. So a lot of it's automated, and it really provides a powerful lens to detect these types of malicious insiders.
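The nightly AD-group sync described above can be pictured with a short Python sketch. To be clear, this is a hypothetical illustration of the logic, not Exabeam's implementation; the function and group names are assumptions made for the example:

```python
def sync_leavers_watchlist(ad_group_members: set[str],
                           watchlist: set[str]) -> tuple[set[str], set[str], set[str]]:
    """Make the 'suspected leavers' watch list mirror the AD group that HR
    updates when somebody hands in their notice. Runs on a nightly schedule."""
    added = ad_group_members - watchlist      # new leavers to start watching
    removed = watchlist - ad_group_members    # people cleared or fully departed
    return set(ad_group_members), added, removed

# HR added b.wells to the leavers AD group today.
current_watchlist = {"j.doe"}
leavers_group = {"j.doe", "b.wells"}
new_watchlist, added, removed = sync_leavers_watchlist(leavers_group, current_watchlist)
# Next morning, analysts can threat hunt across the updated watch list.
```

The design point is that HR's normal offboarding process drives the security control automatically, so no analyst has to remember to add a departing employee by hand.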
So in conclusion for the demo, we've seen the full end-to-end workflow. We've seen the notable investigation and the ability to triage it using Smart Timelines. Now, if we think about the Smart Timeline we saw, rebuilding that manually is going to take hundreds of queries. Why? Because you'd have to rebuild it not only for that individual but for that entire peer group, because that's what we're doing. We're building those Smart Timelines for every user, for every asset, every single day, comparing that behavior to the peer group and to the organization. Again, multi-dimensional detection is what you're going to get here. So in conclusion, as you've seen, we provide the entire picture. The analyst doesn't have to piece things together or spend hours trying to recreate the scene of the crime. The timeline answers questions the analyst may not have known to ask. Adding on top of those high-fidelity detections, we have proactive checks, simple threat hunts that any analyst can do, and that's very difficult to recreate in other platforms. Cynthia, back over to you. Thank you.
Cynthia Gonzalez (37:22):
Thank you Jordan, and thank you for that very thorough demo. Now we have a couple of questions that came into the chat and I’ll start with the first one. So does user entity behavior analytics only support insider threat use cases?
Jordan Forbes (37:39):
That’s an excellent question. No. So we have an entire use case package. We have multiple packages, compromised insider, malicious insider, and external threats, and within those packages we have additional use cases. So Exabeam is very use case driven. If you think about a use case like data leak or data exfiltration, our UEBA has the capability to detect compromised insiders, those compromised credentials, that lateral movement, all the way through to ransomware, to data leak, to privilege abuse. So basically your entire security requirements <laugh> can be covered using the out-of-the-box content within the UEBA platform.
Cynthia Gonzalez (38:26):
And then the next question: you mentioned Exabeam could also detect abnormal database activity. Can you give or show an example?
Jordan Forbes (38:37):
Yeah, this is another great question. So let me actually bring this screen back up, because we can probably just give one quick example here specifically around databases. Give me one second, let me go ahead and share my screen. So, excellent question. As we can see here, let me bring the demo environment back up. When we think about a malicious insider, potentially somebody abnormally accessing a database, very simply we can see who's got database events and maybe is exhibiting a higher risk score, and let's actually drill into that and see what is causing it, so we can see these different assets. So again, thinking about our use cases and our capabilities, we have the same detections for assets as we do for users. So looking here, let's pivot and look at Barbara Salazar.
So using my timeline, I'm actually going to be able to see, okay, is Barbara exhibiting any abnormal database activity? Again, we didn't get into all of the nuances around the timeline because it's so powerful and we've only got so much time here today. But thinking about these database alerts, what we can see here is an abnormal login to the payroll database; we don't normally see this user access the payroll database from that asset. On top of that, we can also see it's an abnormal zone. So again, tying back to the concept of network zones: if I always access the database from the same location and now all of a sudden I'm accessing it from a different location, that could be a malicious insider, it could be a compromised insider. Now, just to finish off this particular point, scrolling further down, we've got an abnormal response size. So again, we know exactly what's normal from a response size for this particular user.
However, in this event we can see this user basically downloading the entire contents of the payroll database. It's a significantly large download; we normally expect around about, you know, 10,000 bytes, so this is far outside the norm. So again, as a whole, when we think about the malicious insider use case specifically, we need to understand what this user is doing day in, day out, and we need to be able to very quickly detect when that user starts to deviate from the norm. What we're seeing right here, we've baselined it: this user connects to this database from this asset and does this particular behavior. As soon as that starts to deviate, that's going to attract attention from Exabeam. You're going to have a risk score assigned to it, and then that starts to bubble up, and there are many ways that's going to bubble up toward 90, again with all of our default content. So a huge amount of capability. Now, expanding beyond this, like I mentioned in the demo, I have a large number of customers pulling in those database logs and really gaining additional insight into what individuals are doing across their entire database estate. That could cover everything from PCI requirements to banking, you name it; customers right across a plethora of industries can really benefit from this particular out-of-the-box content. Thank you, Cynthia.
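The baselining idea behind that abnormal-response-size detection can be sketched in a few lines of Python. This is illustrative only: Exabeam's actual behavioral models are far richer than a simple z-score, and the byte counts below are invented to match the "around 10,000 bytes" figure from the demo:

```python
import statistics

def is_abnormal_response(baseline_bytes: list[int],
                         observed_bytes: int,
                         z_cutoff: float = 3.0) -> bool:
    """Flag a database response far outside this user's learned baseline,
    using a simple z-score as a stand-in for a real behavioral model."""
    mean = statistics.mean(baseline_bytes)
    stdev = statistics.stdev(baseline_bytes)
    if stdev == 0:
        return observed_bytes != mean
    return abs(observed_bytes - mean) / stdev > z_cutoff

# This user normally pulls around 10,000 bytes per query.
baseline = [9_800, 10_100, 10_050, 9_900, 10_150]
is_abnormal_response(baseline, 10_050)        # a normal-sized response
is_abnormal_response(baseline, 250_000_000)   # a full payroll-table dump
```

The key property, as in the demo, is that "abnormal" is defined per user: a 250 MB pull is only alarming because this user's own history says it should be about 10 KB.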
Cynthia Gonzalez (41:57):
Okay, thank you, and thank you all so much for joining us today. We look forward to having you on a future webinar. And if you're interested in a bit more information about insider threats, make sure to check out our Insider Threat Central; the QR code can be seen here on the screen.