How to Build an Insider Threat Program with Exabeam

Webinar Transcript | Air Date December 13, 2022

Jeannie:

Thank you for joining. We will get started with the webinar momentarily.

Good morning, afternoon or evening, depending on where you are when you’re watching our webcast. We’ll get started in just a minute, but first I wanted to cover some housekeeping information. Today’s webinar will be recorded and we will email you the link to the recording after the live event. Secondly, we will have a Q and A session at the end of the webinar, so please feel free to submit any question you have in the sidebar. You’re welcome to ask them as we go in chat, but in case we miss them, we’ll review at the end. Thanks, and we’ll start momentarily.

Welcome to our webinar, Building an Insider Threat Program Using Your Tools from Security Operations. I’m Jeannie Warner, director of Product Marketing here at Exabeam, and I am joined by the amazing Andy Skrei, senior director of product management. A little bit about who we are. I’ve been in security over 20 years across infrastructure, operations, and security. I started my career in the trenches of one of the first SOCs ever built. After that I left to wander through technical program management at Microsoft and other programs through Symantec, Fortinet, WhiteHat, and Synopsys. I’ve been a global SOC manager for Dimension Data, building out their global SOC approach. So really my background is all about security analysis, forensic investigation, and incident response, with a side of marketing.

I’m joined today by Andy Skrei. Andy has over 15 years of experience in cybersecurity, from practitioner to executive at Exabeam. Before he was here, he worked as a lead security engineer at eBay, developing and deploying technologies for its global SOC. Before eBay, he was a manager at KPMG, helping some of the largest organizations in the world increase their security maturity and reduce their risk. We’re both here to talk about how to build an insider threat team, from a mission statement to brass tacks like budget, processes, and tools. And then we’ll get into a demo of how to use Exabeam analytics to power up your insider threat team fast. So for an agenda, what we hope you’ll get out of it is: what is the problem, especially right now with the season’s layoffs, new hires, mergers, et cetera? Why have an insider threat team and a SOC? Why do you need both? Some attributes we think will help you be successful, and where we think you might want to begin. We’ll throw a couple of example scenarios at you, and then Andy’s going to do a demo.

So my favorite: layoffs and mergers and moves. Oh my! It is really a good time to start talking about this. A lot of insider threats come out of just basic, common occurrences. The big first one is layoffs. Layoffs, especially at the end of the year, can create many unfortunate bad behavior patterns. Employees who just want to keep some of their good designs tend to want to keep their emails. They want to email themselves their successful docs, their pitches, their notes. Or in case of a surprise layoff, they may want to send out customer lists, financial info, and more. Andy, what do you think about the new hire problem?

Andy:

So new hires are interesting. Oftentimes organizations are not thinking about brand-new employees as being potential risks. But when a new hire comes on board, it does introduce new risks to the business. They start, and their inbox is filled with emails about installing new applications and updating things, and the employee doesn’t really have an understanding of what is normal for the organization. What tools should they be using? And so it’s a great opportunity for attackers to try to phish these employees and start to get access to those credentials. Attackers are smart; they see those employees posting on LinkedIn that they just started a new job. And the other challenge here is from a detection perspective: even when you’re using analytics, new employees often don’t have a baseline of normal behavior to compare against. And so by leveraging things like Exabeam’s analytics, which includes dynamic peer grouping, we’re still able to provide visibility into those new hires’ credentials when they start to deviate from what their peers do, even if they’re new employees.
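
To make the peer-grouping idea concrete, here is a minimal sketch in Python of how a peer group’s typical behavior can stand in for a missing personal baseline. The data, risk points, and logic are hypothetical illustrations, not Exabeam’s actual algorithm.

```python
# Illustrative sketch only: a toy version of peer-group anomaly scoring.
# All names, events, and point values are hypothetical.
from collections import defaultdict

# Events: (user, peer_group, application) tuples observed in logs.
events = [
    ("alice", "engineering", "git"),
    ("alice", "engineering", "jira"),
    ("bob", "engineering", "git"),
    ("carol", "engineering", "jira"),
    ("dave_new_hire", "engineering", "payroll-admin"),  # no personal history yet
]

# Build the set of applications each peer group normally uses.
group_profile = defaultdict(set)
for user, group, app in events[:-1]:  # pretend the last event is the "new" one
    group_profile[group].add(app)

def score_event(user, group, app):
    """Flag activity that is unusual for the user's peer group.

    A new hire has no personal baseline, so we fall back to the peers:
    if nobody in the group uses this application, raise an anomaly.
    """
    if app not in group_profile[group]:
        return 25  # hypothetical risk points for a first-in-peer-group app
    return 0

user, group, app = events[-1]
print(user, app, "risk:", score_event(user, group, app))  # risk: 25
```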

Jeannie:

Deadly true. I got some of those phishing emails when I first started at Exabeam. I like to talk about mergers and acquisitions. And let us all repeat the mantra: there is no such thing as a homogeneous network. Once you start adding additions and buying other companies... I visited a company out in Denmark that had bought a lot of local dairies, a lot of local farms, a lot of local businesses, and was trying to plug 52 of those tiny businesses straight into their mainline computer systems. This created security nightmares in all directions, of course. We always remember that larger, mature organizations with a security department acquire smaller groups who may or may not have any security hardening in place. So we all remember when a large retail chain got hacked through their air conditioning vendor, or another organization through a fish tank, or the time through the espresso machine. Anytime there’s a merger, anytime you’re adding something in so that somebody’s authenticating into your network, that’s an opportunity for badness.

Andy:

Yeah. And then there’s bribery and the disgruntled employee. Recent attacks really taught the world that everyone has a price, and oftentimes, buying credentials is fairly affordable. I think everyone originally thought they were such a sophisticated hacking group, being able to get in and infiltrate all these really large organizations with sophisticated cybersecurity programs. But that really didn’t turn out to be true. They were simply willing to fork over a couple thousand dollars, and employees were willing to hand them their credentials. And this is really where we start to see the blur between malicious insiders and compromised insiders: employees simply being willing to sell those credentials. And likewise, you have disgruntled employees as well that may attempt to perform some sort of sabotage, deleting data, bringing down data centers, or simply trying to leave the organization with IP and funnel that information either to other organizations or take it with them to their next role.

Jeannie:

I always say that anybody who claims their credentials are not for sale has simply never had their price met. Is it $10,000, like we saw recently with a couple of our customers, or is it 5 million? For me, it depends, right? But why? A lot of people have come and said, “We’ve already got a SOC,” but they’re also building an insider threat team. Why do you think somebody would have both or need both? I mean, I’ve worked in security operations, and we’ve chatted at a couple of events with people who have come by saying they’re building an insider threat team. So Andy, tell us what an insider threat team needs that’s different from a SOC, or even what they have in common.

Andy:

Yeah. So for both, it really is about collecting the right data. Oftentimes, organizations think they just need to collect more and more data, but they really need to collect the right data. And when you think about the right data, it’s a balance between the data needed to detect the threats that I care about and the data I need to support those investigations, to really understand what’s happening. And the insider threat teams must know what they’re looking for. I think this is sometimes a little bit easier on the insider threat side, where it really is about protecting that critical data, in most cases. Your IP, your crown jewels, if you will. But like any type of security detection, your analysts are oftentimes buried in a sea of noise. And this is really true with things like DLP. DLP is a hard product to make actionable. There’s the classification of the data, there’s the detection of the movement of the data. And so oftentimes these analysts are buried in a sea of noise. And then-

Jeannie:

Hold on a second, I want to plus one that one. I have worked in three SOCs where they didn’t actually ever send us their DLP data because it was just too noisy. So sometimes the security operation team isn’t even looking at those events.

Andy:

Exactly. And DLP typically had been an endpoint or email product, but now with the rise of SaaS applications, you almost need DLP all over the place, and every application has its own little flavor of DLP. So trying to normalize and bring all of those things together can be a challenge. And then lastly, it’s not just about the detections; it’s about the investigation as well. And manual investigations often lead to incomplete or inconsistent outcomes across the team.

Jeannie:

Absolutely true. And I wanted to start by saying, I went through the SANS GCIA training, and this was how they defined security operations; it’s straightforward. People, process, and technology, which we absolutely all agree on, but a lot of the goal for a SOC is protecting the holistic information system through design, configuration, and monitoring. So we would say things like, “Hey, the day after Patch Tuesday, when everybody’s done their updates, we always saw a few more events firing.” When I worked on the MAPP program, we’d send advisories out to all of our different antivirus partners, and they’d push their new signatures; new things are being seen. So it does create noise just on the basis of what’s vulnerable and what’s not, which is a very specific view and very important in an operation, but not always exactly what you’re looking for with insider threats.

Andy:

Yeah. So as an insider threat mission statement: the insider threat team is a combination of people, processes, automation, and technology looking for rogue users, compromised credentials, and evidence of entity misuse in the organization. And this is done through ongoing monitoring of normal system and user state, the research and implementation of adversary-aligned defensive capabilities, and automating response wherever possible to minimize damage. And really, when we look at insider threat from the Exabeam side, we see this as two sides of a coin. You have the compromised insider, the external attacker gaining credentials, and then you have your malicious insider, the credentialed employee that you’re trusting within the environment.

And one of the challenges, oftentimes, is trying to understand, when you see an alert, which bucket that alert fits into. There are some really blurred lines between the compromised insider and the malicious insider. And really, this usually comes down to needing the right context and doing some investigation and triage. I always say you really need to understand the intent behind the behaviors and the alerts themselves to understand what type of threat this really is. And so this is really where the automation of the investigation plays a huge role.

Jeannie:

It really does. Let’s dig into the four. We broke these down, the two of us, when we were talking over how to approach this: four main attributes of a successful insider threat program, or frankly, of the team members that make it up. So the first one, this was absolutely Andy’s.

Andy:

Yeah. So first one: prevention alone is not enough. We throw a lot of our eggs in the prevention basket, and it’s great to have the prevention technologies there, but it’s not enough just to have the technology in place. You need to ask, “Is it configured and deployed properly? Do I have full coverage across the organization?” And we see over and over, prevention is not enough. It will fail; attackers will get in. Or if it is that malicious insider, they already know the prevention tools, sometimes they know how to get around them, and they may be the administrators. And so you do need that fallback of real-time threat detection and response capabilities.

Capabilities that tie back to insider threat are critical, like credentials: almost all insider threat detections are going to be focused around some sort of credential or use of credentials. We need to take that offensive posture and focus on more than just the defensive, and we should look for opportunities to support zero trust initiatives and validation as well.
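
As a toy illustration of a credential-centric detection, here is a sketch that flags the first time a credential is used to log on to a host it has never touched, one common lateral movement signal. The data and logic are hypothetical, not any vendor’s implementation.

```python
# Minimal sketch of a "first logon to a new host" detection.
# Hypothetical data; real products model far more than user/host pairs.
seen_pairs = set()  # (user, host) pairs observed during a learning period

history = [("jdoe", "WS-1042"), ("jdoe", "MAIL-01"), ("svc_backup", "FILE-07")]
for pair in history:
    seen_pairs.add(pair)

def check_logon(user, host):
    """Return a finding if this user has never logged on to this host."""
    if (user, host) not in seen_pairs:
        seen_pairs.add((user, host))
        return f"ANOMALY: first logon by {user} to {host}"
    return None

print(check_logon("jdoe", "WS-1042"))      # None: normal behavior
print(check_logon("svc_backup", "DC-01"))  # anomaly: possible lateral movement
```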

Jeannie:

This is a big one. The next one is understanding what normal looks like. I recall once, back when it was all just IDSs and firewalls, we were monitoring a bank and everything was fine, and then suddenly, out of the blue, on a Thursday at two o’clock in the morning, there was this enormous back-and-forth traffic with Russia, which scared the crap out of me. I had to get on and call them in the middle of the night and say, “Oh my gosh, something terrible is going on.” And the person I had to call, who was on their escalation chain, woke everybody else up, and then they came back at me and said, “Oh, Jeannie, we do have a branch bank in Russia, and this is where they do their switch.” We had no idea what normal was, so we didn’t know what abnormal was.

So this is why it’s so important to know what the right tools are that can detect these attacks. They say three quarters of attacks, frankly I think it’s over 90%, eventually used compromised credentials as part of the attack chain. So you need something that sees compromised credentials, lateral movement, privilege escalation. And there can be a lot of different tools you already have in your network for this, but are they being used by your SOC? Maybe they’re not. Again, this is one of those things that a SOC never saw. In earlier years, and again this is dating myself, from 2001 to 2015 we never saw Active Directory within the security operations center. So we were really missing the whole insider threat challenge. All I could do was say, “I think something large went out through the firewall, because that was a pretty big SNMP bundle that went through.” So literally it was guessing, checking, double-checking, and it took a long time. How can we make it shorter, Andy?

Andy:

We need to embrace automation, and this is not just automation in terms of capabilities and automating response, but it’s automation across the full lifecycle, right? We need to let the CPUs put in the effort, not your analysts. Let’s leverage the analysts for decision support to make those tough decisions. But those repeatable, consistent tasks that a lot of analysts are doing today, those are ripe for automation. We want to take the bandwidth and focus it on the investigations as well. We need to put more time and thought into, how do we investigate the threats to understand and uncover the full threat, not just portions of it?

We know that the triage and investigation process consumes 74% of an analyst’s time, so we should focus on how to bring automation to that part of their role. And automation can create repeatable consistency, which makes sure that when an alert fires, you have confidence that regardless of which analyst picks it up, you’re going to end up with the same result or a very similar result. You don’t have to worry between shifts and analysts, “Is my response good enough?” And we know creating timelines of events can often take hundreds of queries. So that’s a great place to focus automation as well: being able to automatically provide your analysts timelines to kickstart their investigations.
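
As a rough sketch of what that timeline automation replaces, here is a hypothetical example that merges events from several log sources into one time-ordered view for a user, the kind of stitching that would otherwise take many manual queries. Field names, sources, and events are invented for illustration.

```python
# Sketch: merge events from several log sources into one per-user timeline.
# Hypothetical data and schema; a real pipeline parses raw vendor logs.
from datetime import datetime

vpn_logs = [{"ts": "2022-07-02T08:01:00", "user": "bwells", "event": "VPN login from New York"}]
email_logs = [{"ts": "2022-07-02T09:15:00", "user": "bwells", "event": "Sent 26.2 MB to external domain"}]
file_logs = [{"ts": "2022-07-02T09:02:00", "user": "bwells", "event": "Read file 'Patents'"}]

def build_timeline(user, *sources):
    """Filter each source to one user and sort everything chronologically."""
    merged = [e for src in sources for e in src if e["user"] == user]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["ts"]))

for e in build_timeline("bwells", vpn_logs, email_logs, file_logs):
    print(e["ts"], "-", e["event"])
```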

Jeannie:

It is. And one of the big things that I think is important is to say, “I not only need to communicate this out to, hey, the IT guy who does this, or hey, the next team that does this; I need to actually think like a bad guy and ask, what could I do? What can I get at? Where can I go? What does the system this credential is acting on do? How do I know what’s normal in the network? How do I know what that asset is?” And here’s where I stop people who say things like, “Oh, that’s just our website.” Do you know you can have a thousand applications running on a single IBM xSeries? That’s again dating myself. Many different applications can be loaded, so what are they doing? What in particular? What’s on that server? What can they get to? What is that server connected to? And we need to be able to respond to all of these the way that a good business responds to market conditions. Do I need to take something offline? Do I need to move in a different direction?

And here’s an interesting one: I want to talk about cybersecurity’s partnership with HR and legal. I want to have a relationship with my HR team so that they trust our confidentiality, and they can say, “Right, there are going to be layoffs at the beginning of December, because that’s just what we do, so we need you to be on a heightened state of alert.” Or maybe they can help us create watch lists, or help us build allow or block lists that will make us more effective, so we’re not waking them up at two in the morning for known good or bad behavior.

I also need to constantly educate my team and ask, what do they need to know? I can grab people from anywhere. Do I need them to understand the basics of security, plus? Do I need to send them to a SANS course? Do I need them to do other things? And then, how do I report back in a language that executives understand? If my CISO says, “Tell me how your team is doing,” can I show them, “Hey, this is my framework mapping of what we’re looking at and what we can see. I can see these classes of events, therefore I think we are covered and going to reduce your risk”? Because that, in the long run, is what everybody cares about, right? Healthy paranoia, the ability to see what’s normal and abnormal, embracing automation, and thinking like a hacker so you can communicate risk upward.

So here’s the fun part, I think: budget and tools. We’re going to talk a little bit about where you can find budget in your organization, how you define stakeholders, what log sources you need, and then how you communicate the value of what your team is finding and doing back upward, so that if anybody, somebody like the board, comes to your CISO and says, “So this insider threat team, is it doing its job? How are you safer than you were yesterday?” they’re going to have solid answers. Budgets. Well, there’s the top one. You can go with your hat out to multiple different organizations. If your IT department has a business transformation initiative, if your security department is talking about zero trust architecture mandates, maybe they’re trying to get FedRAMP, maybe they’re doing subcontracting to a government agency. All of these things. Maybe it’s just talking about fraud, privacy, and data security. Each of those can be a different place to look for budget.

Andy:

And we know there’s always pain around the cost of preparing for a data breach and trying to prevent one. A big one is the storage and collection of logs. It can be very expensive. And so again, it is about figuring out: what am I trying to protect, and what is the right level of visibility? What logs do I need? How long do I need to store them? A 30-day rolling window is a good place to get started to kickstart the program. And then figure out as you go along: where do I still have visibility gaps while I’m trying to investigate, or do I need data for longer? Those are good ways to continue to mature the program, but they don’t prohibit you from actually getting started. And the costs of breaches are well documented. So you have this trade-off: here is what a breach is going to cost, in terms of bringing in a third party, reputation damage to the business, and other things, or even IP theft or data leakage. Oftentimes, the pain is there and the ROI is easy to communicate to the business.
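
For a rough sense of what a 30-day rolling window implies, here is a back-of-the-envelope sizing sketch; the ingest volume and compression ratio are assumptions you would replace with your own figures.

```python
# Back-of-the-envelope sizing for a 30-day rolling log window.
# All inputs are hypothetical; substitute your own ingest figures.
daily_ingest_gb = 50          # assumed raw log volume per day
compression_ratio = 0.3       # assumed on-disk compression
window_days = 30

storage_gb = daily_ingest_gb * compression_ratio * window_days
print(f"~{storage_gb:.0f} GB needed for a {window_days}-day window")  # ~450 GB
```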

Jeannie:

Absolutely, and here’s where I start when I’m putting together a team. I’ve got my mission statement together. I get my executive stakeholders. Who’s got a stake? And often it’s, “Who did you get money from?” They’re the ones that might have a stake in it. Think about what your executives need for success. Think about where HR needs to be embedded and when you need to get legal involved.

Throwing an example out there: I worked at a company once where somebody sent me an email in the security operations center and said, “We have this email where an IBM employee has talked about bombs they’ve put on airplanes.” This was over 20 years ago; it’s not a secret anymore. I then had to come up with a plan, saying, “This reads like somebody is schizophrenic. Maybe we should talk to their manager and find out. But I need to let Sam Palmisano know,” because it was also sent outside the organization. And if a reporter wakes him up in the morning and sticks a microphone in his face, my CEO needs to know these things. So figuring out all of that is where I figure out my escalation, my questions. What other risks are going to be a question I need to resolve? If I have a machine floor, my biggest risk is personnel safety. So if there’s sabotage and a machine could hurt a human, that’s going to be my priority.

Andy:

So we look at essential security log sources, and again, this is very much tailored to the specific use case or outcome that you’re trying to achieve with the insider threat program. But in terms of the high-level categories to get started, we see authentication and authorization as being absolutely critical: that view on the credentials. Whether that’s SSO, privileged activity monitoring solutions, Active Directory, VPN, and other remote connection solutions. Once you have authentication covered, it’s: what’s happening on the endpoint? Whether that’s EDR, where you could take just the alerts coming off your EDR or detailed process execution, again depending on what level of visibility you really need. Host intrusion prevention. WAF may or may not be valuable here. And syslog streams off of Unix systems and other endpoint devices.

Jeannie:

I’m going to argue for WAF here for just a moment. And background: Andy and I did have a little fight about this. A WAF, it is true, is not on an endpoint. It can sit behind a proxy, it can be virtual, et cetera, but it’s looking at what a human being is doing in an application. So if I’m logged onto our website and I’m starting to try a SQL injection attack, appending a 1=1, am I trying to do a path traversal attack? Am I trying to back up directories? That behavior is affecting the application on the endpoint, so it could really go either way. It could be considered endpoint or data and access management, much like a CASB.
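
As a toy illustration of the request patterns Jeannie mentions, the sketch below flags a couple of classic web-attack strings in a request URI. Real WAFs use far more robust parsing, normalization, and scoring than these naive regexes.

```python
import re

# Toy signatures for two classic web-attack patterns. Illustration only;
# production WAFs decode, normalize, and score rather than grep.
SIGNATURES = {
    "sqli": re.compile(r"('|%27)\s*or\s*1\s*=\s*1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE),
}

def inspect(request_uri):
    """Return the names of any signatures the request matches."""
    return [name for name, rx in SIGNATURES.items() if rx.search(request_uri)]

print(inspect("/login?user=admin' OR 1=1--"))      # ['sqli']
print(inspect("/download?file=../../etc/passwd"))  # ['path_traversal']
print(inspect("/products?id=42"))                  # []
```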

Andy:

Yeah, absolutely. And again, if you have internet-facing applications where critical data is stored in a backend database, then WAF is probably going to be higher up on the list in terms of important logs to collect. And then there’s data and access management. So this is CASB data and DLP, whether that’s endpoint, email, or cloud, online document management, Google Drive, the SaaS document repositories, and also your endpoint Windows logs. And then there’s everything in the cloud, both local and SaaS: the cloud infrastructure, Google, Microsoft, AWS, the buckets and the storage. You could have other applications like SAP, CRM, and SaaS applications themselves. So there’s a wide array of different log sources. And again, the use cases really tailor what the best starting point is, based on what you’re trying to protect first and foremost.

Jeannie:

I learned something from one of our customers: they actually have robotics as a service now. So when I talk about OT and ICS, this is going to be huge. But you don’t need every single one of these for every single case. This is where you need to figure out which of the tools are protecting your business and your particular needs. If you are only a website, then you need all the things that protect a website.

Andy:

And I think the other thing there, Jeannie, is that there’s such a diverse set of logs that this is again where we need to bring automation into play, to normalize those events and learn the behavior of the users in each of them. It’s going to be very difficult to craft complex correlation rules and detections across such a wide array of different types of logs.
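
A minimal sketch of that normalization step: mapping two hypothetical vendor log formats onto one common event schema so downstream analytics can treat them uniformly. The parsers and field names are invented; real pipelines handle hundreds of formats and far messier data.

```python
# Sketch of normalizing heterogeneous logs into one common event schema.
# Both "vendor" formats here are hypothetical simplifications.
def parse_sso(raw):
    return {"source": "sso", "user": raw["actor"], "action": "login", "time": raw["published"]}

def parse_dlp(raw):
    return {"source": "dlp", "user": raw["employee"], "action": raw["policy_violation"], "time": raw["timestamp"]}

PARSERS = {"sso": parse_sso, "dlp": parse_dlp}

def normalize(vendor, raw_event):
    """Dispatch a raw vendor event to its parser, yielding a common schema."""
    return PARSERS[vendor](raw_event)

print(normalize("sso", {"actor": "bwells", "published": "2022-07-02T08:01Z"}))
print(normalize("dlp", {"employee": "bwells", "policy_violation": "usb_copy", "timestamp": "2022-07-02T09:00Z"}))
```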

Jeannie:

So how do they augment what they’ve got? I’m softballing you into this beautiful picture here.

Andy:

Yeah. So really, when we look at augmentation today, how do you bring automation into an insider threat program? It really should be focused holistically across the entire TDIR lifecycle. Preparation and collection, right? Understanding, again, what is the right data source to collect, and how do you get it in effectively and at scale. Then you move to the detection side. You want a solution that provides some of that pre-built detection capability: rules, data models, being able to dynamically build a watch list based on users that have given notice. And I’ll show this in the demo itself. So it’s all about trying to get the most value out of the data that’s coming in from a detection perspective. And then once those alerts fire and they’re escalated, how do we automatically enhance those alerts through contextual enrichment? Whether that’s information about the users and the assets that you have within your organization through context, or third-party information like threat intelligence. Building those timelines of related events and the connections and relationships across the organization, so that an analyst doesn’t have to do this manually through constant queries and pivots.
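
As an illustration of contextual enrichment, here is a hypothetical sketch that attaches user context and a threat-intel verdict to an alert before an analyst sees it. The in-memory lookup tables stand in for real HR systems and intelligence feeds.

```python
# Sketch of automatic alert enrichment. The lookup tables are hypothetical
# stand-ins for HR context sources and third-party threat intelligence.
USER_CONTEXT = {"bwells": {"title": "Civil Engineer", "location": "LA", "gave_notice": True}}
THREAT_INTEL = {"x9f3k2.example": "suspected-DGA"}

def enrich(alert):
    """Attach user context and a TI verdict so triage needs no manual pivots."""
    alert["user_context"] = USER_CONTEXT.get(alert["user"], {})
    alert["ti_verdict"] = THREAT_INTEL.get(alert.get("domain"), "unknown")
    return alert

alert = {"user": "bwells", "rule": "large_external_email", "domain": "x9f3k2.example"}
print(enrich(alert))
```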

On the initial response, once you do find something that is important, you might assign a case, and you need to do that initial response. And this is really where third-party integrations and playbooks come into play, bringing in more information that doesn’t exist in the logs or the context data you have, which again can provide that decision support and help uncover the intent of the behaviors themselves. And then we go into the full investigation: this idea of bringing in risk scoring, understanding the individual behaviors, not just seeing alerts that are low, medium, high, critical, but understanding how risky this is amongst everything else that’s happening across the business. We provide proactive checklists that help an analyst understand, what are my next steps based on the type of incident or case? Is this DLP? Is this the initial access of a particular threat actor? And making sure that an analyst has a clear understanding of what to do next. Also, basic event and team escalation and tracking, that collaborative nature of an insider threat program. And this really helps answer the who, what, where, when, and how.

And then finally it comes to that response and closure. And again, having done all of this enrichment and investigation helps to ensure that the remediation we perform is holistic, that we’re really getting to the root of what led to this security incident and eradicating all aspects of it, not just a portion of it. That’s oftentimes where you see attackers living in environments for long periods of time: usually what we see is that an alert was handled or a portion of the attack was remediated, but not the full thing. This again comes back to those integrations and playbooks, being able to quickly and efficiently take action, whether that’s against a firewall or endpoints, to remediate. And then it’s about updating the playbooks themselves, and dashboards and auditing. How do I make this process cyclical? You’ll see at the top, the arrow goes back to that continuous improvement. It’s not just about playing whack-a-mole in closing cases, but how do I understand what happened, make sure that it doesn’t happen again, and overall reduce the risk to the organization?

Jeannie:

And on this point, I always say: don’t hide your light under a bushel. Don’t hide the value your team is delivering from anybody else. You have to be able to show people. So how do they show technical value?

Andy:

So there’s a few things here. One is mean time to resolution. And I’m making sure to call out here, it’s with a full investigation. The metric should not be, “Let’s close things really quickly.” We want to close things quickly, but we want to do that full investigation. We want to close them correctly. And again, making sure that there is that thought of, “How do I reduce the risk?” So that’s a big one that can show technical value. Being able to quickly identify intent. What is this? Is it the malicious insider or the compromised insider? Is this really a problem, or is it a user that simply accidentally added risk to the business? They accidentally shared a document publicly when they shouldn’t have? Those are really important things, to be able to quickly understand that intent.
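
A small sketch of the metric Andy describes: computing mean time to resolution from case open and close timestamps. The case records are made up for illustration.

```python
# Sketch: compute mean time to resolution (MTTR) from case timestamps.
# Hypothetical case records.
from datetime import datetime

cases = [
    {"opened": "2022-12-01T09:00", "closed": "2022-12-01T15:30"},
    {"opened": "2022-12-02T10:00", "closed": "2022-12-03T10:00"},
]

def mttr_hours(cases):
    """Average (closed - opened) across cases, in hours."""
    total = sum(
        (datetime.fromisoformat(c["closed"]) - datetime.fromisoformat(c["opened"])).total_seconds()
        for c in cases
    )
    return total / len(cases) / 3600

print(f"MTTR: {mttr_hours(cases):.1f} hours")  # 15.2 hours
```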

We want to have measurable time to detect, time to create an incident with the full scope. That’s an important aspect. “How quickly can I identify these threats?” And then ultimately, gaining more insights with the data you already have. Oftentimes, for insider threat teams, this data’s already collected somewhere else in the organization. How can you show your management that you’re getting more insights and more value out of the data that’s already being collected?

Jeannie:

And I want to throw up, for business value, how much faster you can show somebody things. Hey, when your auditor comes around, they’re asking a lot of questions, which you usually answer yes or no. But here’s a case where you can say, “Right, I’ve got all the documentation. Here’s where I can see every authentication that goes in. Here’s how we monitor it, here’s the past reports, here’s everything to show that we have done it.” And I’d like to talk about dwell time a little bit, only because I read a lot of the Verizon data breach investigations reports, the IBM ones. Dwell time is how long a malicious actor sits in your organization. And sometimes, we’ve learned, it was months and months and months where somebody sat in your environment and nobody noticed, before suddenly they said, “Hey, there’s a vulnerability. Hey, it’s obvious.” But we can see that dwell time now and we can reduce it. And that’s money, straight up.

Finally, if you do it with the right tools, you make it simple. So maybe I don’t need an expert in Active Directory who can tell the difference between NTLM and LDAP. Maybe I can actually train somebody that’s just a college student, an intern, and get the right people with the right tools in an affordable way to show all of this. And money is what it’s all about. So we’re going to do a couple of quick example scenarios. Andy, we’re running low on time, so I’m going to breeze through these fast so I can get to your demo, all right?

Andy:

All right.

Jeannie:

Unauthorized access. There’s a lot of different places where you’ll see it. All of these things can be data exfiltration: that’s your DLP system or your CASB. Maybe I’m looking at your PAM, maybe I’m seeing, “How many times did Jeannie fat-finger her login?” Hint: it’s a lot. A service account should behave in a certain way. If I see a service account trying to change a directory or do something else or access a new machine, all of these are straight-up unauthorized access things that I’ll see. So as we were talking about tools, these are all of the different ways that we might see it manifesting.

That service account: the first time a service account starts to access a new file system, that’s a little bit suss. If there’s a remote login from a new place from a service account, totally suss. And then I start saying, “Why is it looking at a new area? Why is my service account acting like a user?” These are all things that I shouldn’t see and that should automatically raise a flag. And I pulled our timelines for these two examples so you get a picture of this. Next I have Sherri. “Hey, Sherri’s received something interesting from a competitor. Is she looking to change jobs here?” Then I have, “Why is Sherri doing something on the Tor network? Is it ransomware? Is she leaving? Is she hiding her tracks?” This is something to investigate. But all of these show, across different tools, how you want to be able to look at them together to say, “What is an insider doing? Is she malicious, or not, or just changing?” We don’t know, but we should investigate. So Andy, I’m going to stop sharing here. If you can take us deeper into one of these kinds of stories and show us what we’re going to look at, you take it from here.

Andy:

Absolutely. All right, let me go ahead. We’re going to do a demo here, and I’m going to focus on Exabeam’s advanced analytics. This is our UEBA capability, and it’s oftentimes the first tool set that insider threat teams will adopt from Exabeam, given that it’s focused on compromised credentials and understanding normal behavior. And it provides those pre-built incident timelines to really speed up and automate investigations. So what Exabeam’s advanced analytics does is take in all of those different logs. We support hundreds of different products and vendors today, including all of those that were listed in the top log sources for insider threat teams. We bring in all of that log data, we normalize it, and then we start to learn what normal behavior is for every single user and every single device within an organization. And once we’ve baselined and understood normal, we have over 1,600 different behavior-based anomaly detections to identify when users deviate from their baseline.

Every time we see a deviation, we trigger a rule and we give it a risk score. And we aggregate those risk scores over 24 hours to determine and prioritize the riskiest users and devices in the environment. And that’s what you’re seeing with notable users and notable assets. Now, for an insider threat team specifically, I could start to pick off and look at my highest-risk users and devices. Or, one of the other key elements here is our watch lists, where I can pre-populate a set of users or devices based on specific conditions. Like users that are maybe locking themselves out in my environment. Keeping a close eye on service accounts that might start to be used for nefarious purposes or logging in interactively. Two of the most common insider threat watch lists would be suspected leavers and departed employees. These departed employees are users that have already given notice to HR that they’re leaving the organization.
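
To illustrate the mechanics Andy just described, here is a hypothetical sketch that sums rule risk scores per user over a day and tags users who appear on a departed-employee watch list. Rule names and point values are invented, not Exabeam’s actual scoring.

```python
# Sketch of per-user risk aggregation plus a watch list tag.
# All rules, scores, and users are hypothetical.
from collections import defaultdict

triggered_rules = [
    ("gwright", "abnormal_logon_time", 10),
    ("gwright", "first_access_to_host", 15),
    ("bwells", "large_external_email", 25),
    ("bwells", "dga_domain_access", 20),
]

departed_employees = {"gwright"}  # users who have given notice to HR

# Aggregate each rule's points into a daily score per user.
daily_score = defaultdict(int)
for user, rule, points in triggered_rules:
    daily_score[user] += points

# Rank users by risk, surfacing watch-listed users explicitly.
for user, score in sorted(daily_score.items(), key=lambda kv: -kv[1]):
    tag = " [departed-employee watch list]" if user in departed_employees else ""
    print(f"{user}: {score}{tag}")
```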

And so I can put them on a list and quickly see any changes in risk. This user obviously has a much higher risk here and would’ve been notable, but I can clearly see Gary is leaving my organization and he’s got a high risk score. We also have suspected leavers. This is a watch list that my own analysts, as they’re investigating users, can populate if they think a user might actually be leaving the org. And so we’re going to drill into Billy Wells here. I’m going to click on his risk score, and this is going to bring me into his pre-built smart timeline. So Exabeam builds these smart timelines for every user and every device, every single day, regardless of whether they’ve done anything anomalous. And this really kick-starts my ability to investigate this particular user or this particular threat.

So this activity actually occurred on Saturday, July 2nd. And I can quickly see here that Billy Wells is a civil engineer based out of LA. I can see that the first thing that happened on Saturday is he logs into an asset that looks like it’s his, based on the naming convention, but he’s logging in from a new network zone, New York. And I also see he has some risk transferred from the previous session. So in a little bit I’ll scroll up, and you’ll see that the timeline from the day before will tell me exactly what happened and why we transferred some risk. But if we start to scroll down here, we see some remote access, where we see various anomalies related to first access to a particular system and the network subnet he is accessing it from. But where we start to see some really interesting details, as I’m scrolling through, is that he sent an email to Bill.Wells@iCloud, and I can see that there’s a job application subject line. And you can see all of the different anomalies that Exabeam is able to identify here.

So we’re saying this is the first time an email is being sent to this particular geolocation, both for the organization and for the user as a whole. And any of these links that are blue are clickable, and they will expose the data model, so we can see behind the scenes why this is abnormal for this particular user. We can see that he sent over five meg of email, actually 26.2 meg, to a public email domain. Not only is this a static alert for anything above five megabytes, but we also model this and again show the normal behavior. He typically sends 8.7 KB in this range, and now he’s sending much larger emails over here. We can also see it’s the first time that anyone from the organization has sent to iCloud. It’s the first time for Billy and also for his peer group, AutoCAD.
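
A toy version of that layered email-size check, combining a static threshold with a per-user learned baseline, using the figures from the demo as hypothetical inputs. The 100x deviation test is an arbitrary stand-in for a real statistical model.

```python
# Sketch: static threshold plus per-user behavioral baseline for
# outbound email size. Baseline figures are hypothetical.
STATIC_LIMIT_MB = 5.0
baseline_kb = {"bwells": 8.7}  # learned "normal" outbound email size per user

def email_findings(user, size_mb, to_public_domain):
    findings = []
    # Static rule: large email to a public domain.
    if to_public_domain and size_mb > STATIC_LIMIT_MB:
        findings.append(f"static: >{STATIC_LIMIT_MB} MB to public domain")
    # Behavioral rule: crude 100x deviation from the user's baseline.
    typical = baseline_kb.get(user)
    if typical is not None and size_mb * 1024 > typical * 100:
        findings.append(f"behavioral: {size_mb} MB vs typical {typical} KB")
    return findings

print(email_findings("bwells", 26.2, to_public_domain=True))  # both findings fire
```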

We can see more emails going out to iCloud. And not only can we see the emails outbound, but we can start to see things like file activity. Now we can actually see he’s reading a file called Patents, and this is the first access to this document from this device and from this particular location, because he’s logging in from the New York office. So again, oftentimes when you start to see data going outbound, the question is, well, what’s in that email? Especially if it’s been zipped up or encrypted. And so giving visibility into both normal and abnormal behavior starts to give the full story of what happened. We also see he accessed a weird domain that was identified as DGA, a domain generation algorithm domain. And again, this is one of those types of alerts where, if I was only investigating it in a silo, I may have thought that Billy had malware or some sort of command and control, or that maybe an external threat was part of this attack. But this could just be a website that he’s actually trying to exfiltrate data to.
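
For intuition on that DGA flag, here is a toy heuristic: algorithmically generated domain labels tend to have higher character entropy than human-chosen names. Real DGA classifiers use n-gram statistics, lexical features, and trained models; the domains and threshold here are arbitrary assumptions.

```python
import math
from collections import Counter

# Toy DGA heuristic: score a domain label by Shannon entropy of its
# characters. Illustration only; real detection is far more nuanced.
def entropy(label):
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

for domain in ["exabeam", "x9f3kq2vw81z"]:
    flag = "suspect-DGA" if entropy(domain) > 3.2 else "likely benign"
    print(domain, round(entropy(domain), 2), flag)
```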

And again, I’m getting more information with all of the surrounding events in this timeline that points more toward Billy being that malicious insider trying to take data out of my organization. And throughout the day, I can continue to see him sending more and more emails around the 20 megabyte size. A lot of organizations have a limit of 30 meg, or emails over a certain size are automatically blocked. And a lot of insiders know those limits, because they’ve tried to send large emails out and been blocked. So Billy may know this, and he’s just keeping below that 30 meg threshold to continue to send emails, because he doesn’t think this is going to be detected. So we can continue to see all of this abnormal email behavior. And at the very end, Exabeam has actually summarized the total: he sent 191 meg to a personal email account, and we expected 115 KB.

And you’ll notice here this particular alert is actually using our machine learning algorithm, which has determined that Billy.Wells@iCloud is actually Billy Wells’s personal email account. We can obviously take the domain itself and mark it as a personal email domain, but we have a machine learning algorithm that will tie the user themselves to personal email accounts. Now again, if I want to really understand: is this Billy? He’s about to leave my organization, he’s trying to steal IP. What I can do is search and scroll to the day before and see what those things were that we transferred risk for. On the day before, he had a risk score of 146 points, and we can start to see he was actually looking for jobs. So this is where we see the first job search activity for Billy in the organization, where he is accessing Glassdoor, Monster, and Indeed.
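
As a simple stand-in for the machine learning Andy mentions, this sketch compares an external address’s local part to the employee’s name with basic string similarity. The threshold is arbitrary, and this is only the intuition, not the actual algorithm.

```python
import difflib

# Toy heuristic: does an external address look like an employee's
# personal account? A real system would use a trained model.
def looks_personal(employee_name, address):
    local = address.split("@")[0].lower().replace(".", " ")
    score = difflib.SequenceMatcher(None, employee_name.lower(), local).ratio()
    return score > 0.8, round(score, 2)  # arbitrary similarity threshold

print(looks_personal("Billy Wells", "Billy.Wells@icloud.com"))  # (True, 1.0)
print(looks_personal("Billy Wells", "newsletter@icloud.com"))   # (False, low score)
```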

So again, these timelines really help to uncover that intent and tell the full story of what’s happening. And now as an analyst, I’m much more equipped to understand the full extent of this particular malicious insider to make the right recommendations in terms of the remediation, understanding and seeing, what are all of the emails that were sent? Understanding what data is in those emails and being able to claw that data back and shut Billy down before he continues to exfiltrate more data.

So this is really how Exabeam and the analytics is able to empower these insider threat teams, both through automating the detection, through understanding and learning that behavior, and then stitching all of these behavior based anomalies into a timeline based on risk to prioritize the highest fidelity threats and provide the analyst enough information to make those right remediation steps.

Jeannie:

This might also be one of those discussions where you talk about, if I see this, do I call HR? Who do I call to say there may actually be legal action that my company wants to take to say, “Right, is Billy sending out our intellectual property? Is he sending out our customer lists? What is he sending out?” So this is why I say those stakeholders and those relationships and those escalation paths can be really, really big for this kind of thing.

Andy:

Absolutely. And depending on the size of the organization, we’ve had customers where HR actually has access to this tool. So they don’t have to go to the insider threat team or the SOC and say, “Hey, can you tell me about user X, who may have just given notice?” We make it very easy, in the way we represent the data and tell it in a timeline, for really anyone to look at a user and their risk score and understand clearly what happened and what the possible impact is.

Jeannie:

I like it. Oh, I have a question that’s come up. Their SOC has their SIEM, but they don’t want to give the team logins. Can they use their data? Yes, you can.

Andy:

Yeah, absolutely. So Exabeam’s analytics solution can augment any log repository, data store, and SIEM. So oftentimes a great starting point, especially for a dedicated insider threat team, is to pull the logs that are already being collected in a SIEM and provide this great visibility and automation on top of that data.

Jeannie:

The marketing part of me wants to say the names: Exabeam Security Analytics is your basic threat detection and gets you all of this stuff that Andy’s been showing you. Exabeam Security Investigation adds on incident responder capabilities so that you can automate a little bit more of the activity that happens at the other end. So maybe you have Splunk, maybe you have ArcSight, maybe you have QRadar, any of the big players; absolutely, we can take feeds from there. Plus then you go talk to the IT department and say, “Hey, I have this instance, and if you give me this kind of information off of Andy’s log list and interesting sources, we can pull that together in a new way.” It doesn’t have to live forever in a SIEM. You can just plug it straight into Exabeam, where we look at it on our rolling 30 days. All right, here’s another question. Thanks, this is fun. Their SOC is currently only looking at Fortinet, their firewalls, a handful of IDS, they’ve got some endpoint, and they have Imperva. Is that enough to go on?

Andy:

Yeah, it is. Again, for some organizations it could be a single log source, like monitoring a specific application such as Office 365. I have a great example in here, Howard Osborne, where the primary data is Salesforce. So it really depends on what the primary use case is. But we can take all of those logs and provide an immense amount of visibility and enhancement over the basic detection capabilities that you would get in other solutions.

Jeannie:

All right. Thanks so much for joining the Jeannie and Andy show today. We look forward to having you on a future webinar, and again, you will get a copy of this after the webinar. Thanks very much for joining us today. Thanks, Andy. You always do great demos.

Andy:

Thanks everyone.
