During a typical day, your team might have to review dozens or hundreds of security alerts, hopefully only a fraction of which will turn out to be real incidents. As you begin your response to these alerts, rather than simply trusting the alert as 100% accurate and remediating, or pulling a full disk image from each potentially infected endpoint, you can do something in between: a triage collection.
Exabeam Threat Researcher Ryan Benson expanded on this topic in a recent webinar.
What is a Triage Collection?
A triage collection is when you grab a targeted subset of files that is likely to help you answer your initial question, which is typically something like “is this machine infected with malware”, or “has this user been doing something that they shouldn’t.” If the answer to this question is yes (or even maybe) after your initial investigation, you can go back and do a more comprehensive collection to support a more thorough examination. But in the meantime, by just starting out with a smaller, targeted collection, you can complete that initial investigation quickly.
Four Triage Data Categories
We can break up the data from a triage collection into four main categories:
1. Volatile Data
Volatile data is easily lost. It is temporary in nature and often disappears quickly. Much of what we refer to as volatile data is in a system’s memory. Some of the volatile information can be pulled out of memory using system commands, like:
- What processes are running, from what file path.
- Open network connections.
- Network settings.
Other artifacts can only be examined by looking at a raw memory image. A more thorough look through a memory image can reveal things like rootkits, network packets, and even memory-only malware.
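The system-command portion of volatile collection is easy to script. Below is a minimal sketch in Python: you supply the command map appropriate for your platform (the `tasklist`/`netstat` names in the docstring are illustrative, not a prescribed list), and each command's output is saved to its own file.

```python
import subprocess
from pathlib import Path

def collect_volatile(commands, outdir):
    """Run each triage command and save its output to a text file.

    `commands` maps an artifact name to an argument list, e.g. on Windows
    something like {"processes": ["tasklist", "/v"],
                    "connections": ["netstat", "-ano"]}.
    These names and commands are examples only -- build the list that
    answers your own triage questions.
    """
    outdir = Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    for name, argv in commands.items():
        result = subprocess.run(argv, capture_output=True, text=True)
        (outdir / f"{name}.txt").write_text(result.stdout)
    # Return the artifact files written, so callers can log what was captured
    return sorted(p.name for p in outdir.iterdir())
```

Because each command's output lands in a separately named file, the results are easy to diff against a baseline or feed into later analysis.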
2. Windows and File System Related Artifacts
The second category covers core Windows and file system artifacts, including:
- The Windows Registry is a database of configuration information. You can either grab the whole Registry or run targeted queries to collect specific keys and values.
- Prefetch files are files that Windows uses to help frequently-used programs load faster, but are valuable in investigations because they show when different programs were run.
- Event logs are rich data sources that can be very valuable; collecting all of the logs on the system is encouraged.
- NTFS has core metadata files that contain incredibly useful information. The MFT, or Master File Table, contains file paths, multiple timestamps per file, and even the full content of small files. The USN Journal and the LogFile record modifications to files, with timestamps.
- Suspicious file locations include the Temp directory, SystemRoot, or a user’s AppData directory. You can consider collecting the hashes of all files in these locations, doing digital signature checks, or just collecting all the files for analysis.
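The hash-collection step above can be scripted with nothing but the standard library. A minimal sketch: you point it at a suspicious directory (Temp, AppData, etc.), and it returns a SHA-256 hash for every file underneath, ready to compare against threat-intelligence feeds or a known-good baseline.

```python
import hashlib
from pathlib import Path

def hash_directory(root):
    """Return {relative_path: sha256_hexdigest} for every file under root.

    Intended for sweeping suspicious locations like a user's Temp or
    AppData directory during triage.
    """
    hashes = {}
    root = Path(root)
    for path in root.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            hashes[str(path.relative_to(root))] = digest
    return hashes
```

For very large files you would want to hash in chunks rather than with a single `read_bytes()` call, but for a quick sweep of a Temp directory this form keeps the sketch short.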
3. Persistence Mechanisms
The third category is persistence mechanisms, also called auto-runs or “Autostart Extensibility Points”. These are ways of making programs run automatically without being intentionally started by a user. This is normal for many legitimate applications. However, malicious applications often set up persistence so they can survive reboots.
Autoruns by SysInternals is a comprehensive tool that lists applications that are set to auto-start. Examples include:
- Scheduled tasks (or AT jobs) – Useful both for starting a program on a recurring schedule and for escalating privileges.
- Services – Often used by system processes or other lower-level programs, but also a great spot for hiding malware.
- The Registry – Contains many, many (often undocumented) ways to automatically start a program. The most basic and obvious of these locations is called the “Run key”.
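If you collect the Run key with Windows' built-in `reg query` command, the text output can be parsed into structured entries for review. A minimal sketch follows; the expected line layout (value name, a `REG_*` type, then the data, separated by runs of whitespace) is an assumption based on typical `reg query` output and may vary by Windows version, and the `Updater` entry in the example is hypothetical.

```python
import re

def parse_run_key(reg_output):
    """Parse `reg query ...\\Run` text output into (name, type, data) tuples.

    Assumes each value line is indented and uses runs of whitespace to
    separate the value name, REG_* type, and data -- treat this as a
    starting point, not a parser spec.
    """
    entries = []
    for line in reg_output.splitlines():
        m = re.match(r"\s+(.+?)\s{2,}(REG_\w+)\s{2,}(.*)", line)
        if m:
            entries.append(m.groups())
    return entries

# Hypothetical sample output for illustration:
sample = (
    "HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\n"
    "    Updater    REG_SZ    C:\\Temp\\updater.exe\n"
)
```

A value whose data points into Temp or AppData, like the hypothetical entry above, is exactly the kind of auto-start Autoruns would flag for a closer look.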
4. Application-specific Information
The fourth and final category to examine is application-specific information:
- Installed applications – Check to see if a user has something non-standard installed, something that is against the company’s acceptable use policy, or an old version of a legitimate application that has known vulnerabilities.
- Browser history – Review search engine queries and pages visited, as well as webmail (both work and personal), which can be especially important in phishing cases.
- Cloud syncing apps – Google Drive and Dropbox have local databases that can give insight into what files were moved around.
- Application configuration information – Applications track all kinds of interesting things, including the last few files opened, user accounts or computers accessed, or how the program is configured to run. This type of info is commonly stored in the Windows Registry, but it also can be in separate files.
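Browser history is often the quickest of these to pull. Chromium-based browsers, for example, keep history in a SQLite database (the `History` file in the profile directory), which Python can read directly. A minimal sketch, assuming the `urls` table and `last_visit_time` column present in current Chrome schemas:

```python
import sqlite3

def recent_urls(history_db, limit=20):
    """Return the most recently visited (url, title, visit_count) rows
    from a Chromium-style History database.

    The `urls` table and `last_visit_time` column are assumptions based on
    the current Chrome schema. Work on a copy of the file -- the browser
    keeps the live database locked while it is running.
    """
    con = sqlite3.connect(history_db)
    try:
        rows = con.execute(
            "SELECT url, title, visit_count FROM urls "
            "ORDER BY last_visit_time DESC LIMIT ?", (limit,)
        ).fetchall()
    finally:
        con.close()
    return rows
```

The same pattern applies to the local databases kept by cloud-syncing apps: copy the file off the endpoint, then query it offline.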
Performing a Triage Collection
Whatever approach you choose for performing a collection, it’s important to build in as much automation as possible. This standardizes the collections and makes them faster. Especially with volatile data in a malware case, the faster that you do a triage collection after the initial alert, the more likely you will capture information that is relevant to your case.
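One simple way to build in that automation is to wrap each collection step in a runner that records what was collected and when, since timing matters for volatile data. A minimal sketch, where the collector names and callables are whatever your own triage script provides:

```python
import json
import time
from pathlib import Path

def record_manifest(outdir, collectors):
    """Run each collector callable and write a manifest.json recording
    what was collected, when, and how long it took.

    `collectors` maps an artifact name to a zero-argument callable
    (e.g. a function that dumps process listings or copies event logs);
    the names are up to your own triage script.
    """
    outdir = Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for name, collect in collectors.items():
        started = time.time()
        collect()
        manifest.append({"artifact": name,
                         "collected_at": started,
                         "duration_s": round(time.time() - started, 3)})
    (outdir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest
```

The manifest gives you a consistent record across endpoints, which helps both with comparing collections and with documenting exactly when each artifact was captured relative to the initial alert.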
Click here to listen to the full webinar.