Log Aggregation: Making the Most of Your Data

Even if you aren’t aware of it, most of the devices your organization uses are producing, or are capable of producing, some type of log containing valuable information about their activity and your systems’ health and functionality. Log aggregation helps you get the most out of these logs and minimizes the time and headaches involved in manually sifting through them. To achieve this, you will want a log management solution that includes log aggregation capabilities.

What Are Logs?

Logs are continuous streams of time-stamped event records generated by systems and applications. They record event types, times, and origins, along with whatever finer-grained detail has been specified. They are used to debug software, identify security breaches, and provide insight into system operations. The file format of logs and the structure of the data vary by developer, application, and system.
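
To make that concrete, here is a minimal Python sketch of the kind of time-stamped, structured entry an application might emit; the logger name, format, and message are illustrative only.

    # A minimal sketch of a time-stamped log entry recording event type, time,
    # and origin. The logger name, format, and message are illustrative only.
    import logging

    logging.basicConfig(
        format="%(asctime)s %(levelname)s %(name)s %(message)s",  # time, event type, origin, detail
        level=logging.INFO,
    )
    log = logging.getLogger("billing-service")

    log.info("payment accepted order_id=1234 amount=19.99")
    # Output resembles:
    # 2024-01-01 12:00:00,000 INFO billing-service payment accepted order_id=1234 amount=19.99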

Logs are key to understanding the health of applications and network infrastructure and to spotting security issues. When used correctly, they help IT teams identify and address issues more quickly and ensure that faulty systems are not impeding worker productivity or customer experience. Log data is particularly important when using third-party applications or infrastructure, such as cloud services, where the combination of added layers of complexity and the inability to modify functionality can lead to IT issues.

Even if you don’t want to make use of logs, most organizations are required to retain them to maintain compliance with regulatory bodies, and as proof of operations in the case of financial or forensic audits.

Log Aggregation

As organizations expand and adopt a wider variety of applications, services, and infrastructure, logs become dispersed across locations and their usefulness drops sharply due to inaccessibility and differences in data format. Log aggregation solves this problem by centralizing log data, making it easier to search and analyze.

When logs are aggregated, the amount of time you need to spend tracking down files, deciphering data formats, and searching for specific errors within logs, let alone connecting information between logs, drops dramatically. Aggregated logs are easier to analyze and provide a more robust view of your operations than can be achieved by examining each log individually.

Log Aggregation Methods

There are several methods you can choose to aggregate your logs depending on your technical abilities and needs. These include:

  • Syslog—collects log data through a standard logging protocol; requires a syslog daemon on a central server and a client daemon on each source whose logs are forwarded (see the sketch after this list)
  • Event streaming—collects log data from network devices through streaming protocols such as SNMP, NetFlow, or IPFIX
  • Log collectors—collect logs from sources in real time through an agent, typically a third-party option
  • Direct access—collects log data directly from network devices or systems through API or network protocol integration
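
As a concrete illustration of the syslog method, the sketch below forwards a Python application's logs to a central syslog daemon over UDP using only the standard library; the collector hostname and port 514 are assumptions for a typical rsyslog-style setup.

    # Sketch: forwarding application logs to a central syslog daemon.
    # "syslog.example.internal" and port 514 are assumed values for your collector.
    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address=("syslog.example.internal", 514))  # UDP by default
    handler.setFormatter(logging.Formatter("app-name: %(levelname)s %(message)s"))

    log = logging.getLogger("app")
    log.setLevel(logging.INFO)
    log.addHandler(handler)

    log.warning("disk usage above 90% on /var")  # arrives at the central daemon for aggregation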

Considerations for Choosing a Log Management Solution

Log management solutions must be powerful enough to ingest massive amounts of data from a wide variety of sources regardless of log format and be scalable to accommodate spikes in log volume. Provided these conditions are met, you should consider the following before selecting your solution.

Log Collection

The solution you choose should give you control over how and when logs are collected and should centralize collected log data outside of live applications. You need to be able to automate log collection in a way that reduces the impact on system resources, ensures that all errors are collected, and protects against data loss caused by server failures.
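
The following is a minimal sketch of that kind of collection behavior, not any vendor's implementation: it tails a log file, batches lines to limit resource impact, and persists its read offset so a restart does not drop data. The forward_batch() sender, file paths, and poll interval are hypothetical.

    # Sketch of an automated collection agent: tail a log file, batch lines to
    # limit resource impact, and remember the read offset so a crash or restart
    # does not lose data. forward_batch() is a hypothetical sender, not a real API.
    import time

    LOG_PATH = "/var/log/app.log"          # assumed source file
    OFFSET_PATH = "/var/lib/agent.offset"  # assumed state file

    def forward_batch(lines):
        ...  # hypothetical: ship lines to the central log store

    def load_offset():
        try:
            return int(open(OFFSET_PATH).read())
        except (OSError, ValueError):
            return 0

    def run():
        offset = load_offset()
        while True:
            with open(LOG_PATH) as f:
                f.seek(offset)
                batch = f.readlines()
                offset = f.tell()
            if batch:
                forward_batch(batch)                       # send before committing the offset
                open(OFFSET_PATH, "w").write(str(offset))  # so nothing is lost on failure
            time.sleep(5)                                  # polling interval keeps CPU impact low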

Log Ingestion

Solutions must be able to collect, format and import data from external sources, including applications, servers and platforms. Collected logs need to include all necessary data, be efficiently stored and indexed, and be easily accessible to teams for analysis and monitoring.
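
As a rough sketch of the normalization work ingestion performs, the snippet below parses a raw text line into a structured, indexable record; the input format and field names are assumptions, and real solutions handle many formats automatically.

    # Sketch: normalizing a raw text log line into a structured record that can
    # be stored, indexed, and queried. The input format here is an assumption.
    import re
    from datetime import datetime, timezone

    LINE = "2024-01-01T12:00:00Z web-01 ERROR request failed path=/checkout status=500"
    PATTERN = re.compile(r"(?P<ts>\S+) (?P<host>\S+) (?P<level>\S+) (?P<message>.*)")

    def ingest(line):
        record = PATTERN.match(line).groupdict()
        # Normalize the timestamp to UTC ISO 8601 so records from different
        # sources can be indexed and compared consistently.
        ts = datetime.strptime(record["ts"], "%Y-%m-%dT%H:%M:%S%z")
        record["ts"] = ts.astimezone(timezone.utc).isoformat()
        return record

    print(ingest(LINE))
    # {'ts': '2024-01-01T12:00:00+00:00', 'host': 'web-01', 'level': 'ERROR',
    #  'message': 'request failed path=/checkout status=500'}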

Log Analysis and Search

A good solution enables users to search using natural language structures and returns results quickly. Solutions should present log activity in as close to real time as possible and allow point-in-time searches.
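
A minimal sketch of the kind of point-in-time search you should be able to run, written here as a plain-Python filter over already-ingested records; real solutions expose this through a search bar or query language, and the record shape below is an assumption.

    # Sketch: a point-in-time search over aggregated, structured records,
    # filtering by time window and a free-text term. The record shape matches
    # the ingestion sketch above and is an assumption, not a product's API.
    from datetime import datetime, timezone

    def search(records, term, start, end):
        return [
            r for r in records
            if start <= datetime.fromisoformat(r["ts"]) <= end and term in r["message"]
        ]

    records = [  # stand-in for your aggregated data
        {"ts": "2024-01-01T12:00:00+00:00", "host": "web-01", "level": "ERROR",
         "message": "request failed path=/checkout status=500"},
    ]
    hits = search(
        records, "checkout",
        start=datetime(2024, 1, 1, 11, tzinfo=timezone.utc),
        end=datetime(2024, 1, 1, 13, tzinfo=timezone.utc),
    )
    print(hits)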

Log Monitoring and Alerts

Solutions need to let you set up customized alerts, including rules for when alerts are created and to whom they are sent. You should be able to trigger alerts from a wide variety of events, using criteria such as the number of errors per minute, and have them sent to a variety of destinations, from Slack channels to personal email addresses.
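
The sketch below shows one way such a rule could work, assuming a Slack incoming webhook; the webhook URL, the 50-errors-per-minute threshold, and the count_errors_last_minute() query are all placeholders, not any product's built-in alerting.

    # Sketch: a custom alert rule that fires when the error rate exceeds a
    # threshold and posts to a Slack incoming webhook. The webhook URL and
    # threshold are assumptions; count_errors_last_minute() is a placeholder.
    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL
    THRESHOLD = 50  # errors per minute

    def count_errors_last_minute():
        return 0  # hypothetical: replace with a query against your aggregated logs

    def check_and_alert():
        errors = count_errors_last_minute()
        if errors > THRESHOLD:
            payload = {"text": f"Alert: {errors} errors in the last minute (threshold {THRESHOLD})"}
            req = urllib.request.Request(
                WEBHOOK_URL,
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

    # Run check_and_alert() on a schedule, e.g. once per minute.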

Visualization and Reporting

Effective solutions provide visualizations and reports on system state and log volume, both at a point in time and over user-defined periods. The shift to DevSecOps teams calls for tools that simplify reporting and make reports easy to share and view, including graphs for visualizing data.
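
As a small illustration, and assuming records shaped like those in the earlier ingestion sketch, the snippet below buckets log volume per hour and renders a shareable bar chart with matplotlib; it is not tied to any particular solution's reporting feature.

    # Sketch: a simple log-volume report that buckets record timestamps per hour
    # and renders a bar chart with matplotlib. The record shape is an assumption.
    from collections import Counter
    from datetime import datetime
    import matplotlib.pyplot as plt

    def volume_report(records, out_path="log_volume.png"):
        buckets = Counter(
            datetime.fromisoformat(r["ts"]).strftime("%Y-%m-%d %H:00") for r in records
        )
        hours = sorted(buckets)
        plt.bar(hours, [buckets[h] for h in hours])
        plt.xticks(rotation=45, ha="right")
        plt.ylabel("log events")
        plt.tight_layout()
        plt.savefig(out_path)  # shareable image for the report

    volume_report([
        {"ts": "2024-01-01T12:00:00+00:00"},
        {"ts": "2024-01-01T12:30:00+00:00"},
        {"ts": "2024-01-01T13:05:00+00:00"},
    ])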

Cost-Effectiveness

Efficient solutions should be able to meet your data requirements, in terms of volume and length of retention, at a reasonable cost. Solutions that offer flexibility and scalability, and those that offer granular pricing, are the best options.

Log Aggregation Tools

There are a number of third-party tools built for log aggregation, and the ones you choose will depend on your specific needs. If you’re looking for solutions you can fully customize, the following open-source tools might be for you. Keep in mind that although the tools themselves are free, they require you to manage and maintain your own system, which carries a cost in operational complexity.

Elastic (Formerly ELK)

A popular approach is to build a log management service with the Elastic Stack, also known as ELK after the tools that make it up:

  • Elasticsearch—a near real-time RESTful search and analytics engine that indexes data for fast retrieval and can integrate with Hadoop
  • Logstash—a log ingestor and processing pipeline system that transforms data and loads it into Elasticsearch for analysis
  • Kibana—a data visualization tool that includes machine learning functionality

Beats, which can also be included, is a set of lightweight agents that collect data and send it, along with metadata for context, to Elasticsearch either directly or through Logstash.
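
To give a feel for working with the stack, here is a small sketch that indexes a log record and searches it back using the official elasticsearch Python client (8.x-style calls); the cluster URL, index name, and document fields are assumptions for a local test setup.

    # Sketch: indexing a log record into Elasticsearch and searching it back
    # with the official Python client (8.x-style calls). The URL, index name,
    # and document fields are assumptions for a local test cluster.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.index(index="app-logs", document={
        "ts": "2024-01-01T12:00:00Z",
        "host": "web-01",
        "level": "ERROR",
        "message": "request failed path=/checkout status=500",
    })

    results = es.search(index="app-logs", query={"match": {"message": "checkout"}})
    for hit in results["hits"]["hits"]:
        print(hit["_source"]["ts"], hit["_source"]["message"])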

Elastic is highly flexible and customizable and can even provide some of the capabilities of a Security Information and Event Management (SIEM) system, but it lacks the ability to generate alerts without a paid add-on. Elastic can be hosted on premises or in the cloud, and its popularity means it is well supported, including by third-party services that will operate the system on your behalf for a fee.

Fluentd

Recommended by AWS and Google Cloud, Fluentd is a local aggregator that is often used as a replacement for Logstash in an Elastic stack. It uses a plugin system to create a Unified Logging Layer that integrates a variety of data sources from which it collects logs and sends them to a central storage system.

Fluentd currently has around 500 plugins available, and its open-source nature lets you create new ones as needed. Part of its popularity is due to its low resource requirements: it runs on just 30–40 MB of memory and can process 13,000 events per second per core. It can also be paired with an even lighter-weight data forwarder called Fluent Bit.
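
As a small example of feeding Fluentd from application code, the sketch below emits a structured event with the fluent-logger Python package; the tag prefix, host, and port (Fluentd's default forward port, 24224) are assumptions for a typical setup.

    # Sketch: sending structured events to a Fluentd agent from an application
    # using the fluent-logger package. The tag prefix, host, and port (Fluentd's
    # default forward port, 24224) are assumptions for a typical setup.
    from fluent import sender

    logger = sender.FluentSender("app", host="localhost", port=24224)

    logger.emit("checkout", {
        "level": "error",
        "message": "request failed",
        "status": 500,
    })
    logger.close()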

Graylog

Although not as widely used as Elastic, Graylog is growing in popularity. It operates on a combination of Elasticsearch, MongoDB and the Graylog Server and can make use of Beats functionality. Unlike Elastic, it comes with built-in alerting as well as streaming, message rewriting, and geolocation.

Graylog allows you to separate log data into different streams based on criteria such as error type, which reduces response latency. By aggregating specific error types from multiple devices into a single stream, it lets users more easily view exactly the data they want to see.

The message rewriting feature allows messages to be ignored, fields to be added or removed, and contents to be modified according to user-defined rules. Like Elastic, Graylog can be used as a SIEM system. It must be hosted either through Graylog or on your own private setup.
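
For a feel of how applications get logs into Graylog, the sketch below sends a message in Graylog's GELF format to a GELF UDP input using only the Python standard library; the hostname and port (12201 is the default GELF UDP port) are assumptions for your deployment.

    # Sketch: sending a log message to a Graylog GELF UDP input using only the
    # standard library. The hostname and port (12201 is the default GELF UDP
    # port) are assumptions for your Graylog deployment.
    import json
    import socket
    import time

    gelf_message = {
        "version": "1.1",
        "host": "web-01",
        "short_message": "request failed path=/checkout status=500",
        "timestamp": time.time(),
        "level": 3,  # syslog severity: error
    }

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(gelf_message).encode(), ("graylog.example.internal", 12201))
    sock.close()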

Conclusion

Log aggregation can mean the difference between being able to identify an application issue within an hour and having that application out of commission for a week while you struggle to connect the dots between an uncountable number of logs. A good log management system can simplify your search for errors and even alert you to possible issues before they impact productivity, allowing you to make the most of your time and energy.