Nowadays, most organizations keep their application logs. These logs contain crucial data about the users, traffic, configuration, performance and security of a distributed IT ecosystem. Granular monitoring and effective log analysis give organizations opportunities for continuous improvement and optimization. Moreover, with the modern log analysis and management tools currently available on the market, it is possible to predict and prevent errors in the environment through proper and, above all, rigorously detailed analysis of the data.
Log management applications can be divided into three main categories: log analysis tools, log monitoring tools and log management tools.
In this post, we’ll cover the pros and cons of the most popular tools used around the world.
SolarWinds Log Analyzer
SolarWinds Log Analyzer collects log data to provide insights about the integrity and performance of IT environments. It is fully integrated with the native Orion platform, offering extra features and tools. With a single console, you can get a unified view of network performance, systems and associated log data.
Pros:
- Easy to install and configure
- Has a variety of system connectors
- Simple to monitor log size and system resources
- Allows generation of alerts
- Integrates natively with the Orion platform
- Provides repository storage for log files, so they don’t just exist on workstations
- Automated threat detection
- Log collection with custom rules
- Visualization: the UI is elegant and easy to follow
- Powerful log refinement, with real-time (live) filtering
- Gathers security events from several system sources
- Pricing is based on monitored nodes, not on events
Cons:
- It is a robust, proprietary product, so what the software can do for the user is not entirely clear “out of the box”
- Agent installations sometimes require manual removal
- If you are running an older version of SEM, migrating clients to a new installation is not automatic
- The available online documentation must be read carefully to get the configuration right
- The GUI is not very intuitive and involves a learning curve – training is provided to customers, but a friendlier interface would be welcome
- Alerts can be confusing to set up
- Initial setup can take a long time
- It may not scale to millions of users
Datadog
Datadog is software for monitoring servers, databases, tools and services through a SaaS-based data analytics platform, aimed at applications that need to scale in the cloud. With the increase in cloud adoption, the company grew rapidly and expanded its product offering to cover service providers including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, Red Hat OpenShift, VMware and OpenStack.
Pros:
- Due to the tool’s versatility and the huge ecosystem that surrounds it, you can use it to track virtually “anything”
- Alert and warning configuration lets you dramatically reduce false positives
- Runbooks give DevOps team members guidance on how to act on alerts
- Powerful data analysis features: you can slice and dice data much as you would in a behavioral analytics tool, so users can efficiently refine their metrics as their company starts to scale
- Simple REST API that allows integration with basically any application, enabling the creation of a centralized data source
- Good API documentation and customer service
- Good pricing model for microservices, pulling data from several sources
Cons:
- Steep learning curve for new users
- Even users familiar with the tool may have difficulty finding a specific metric or dashboard
- Counter-intuitive navigation in some areas
- Favors customization over simplicity, which can seem confusing to beginners
- Limits on reports and analysis: very advanced mathematical or graphical operations may require another BI tool
- Laborious to install and configure the entire software stack
- Security policies need improvement
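The REST API mentioned above is the usual way to push custom data into Datadog. Below is a minimal sketch of submitting a custom metric through the documented v1 `series` endpoint; the metric name, tags and `DD_API_KEY` environment variable are illustrative assumptions, not part of any real account:

```python
import os
import time

# Sketch: submit a custom metric to Datadog's v1 "series" endpoint.
# The metric name and tags below are hypothetical examples.

def build_series_payload(metric, value, tags=None, metric_type="gauge"):
    """Build the JSON body expected by POST /api/v1/series."""
    return {
        "series": [{
            "metric": metric,
            "points": [[int(time.time()), value]],  # [timestamp, value] pairs
            "type": metric_type,
            "tags": tags or [],
        }]
    }

def submit(payload):
    # Requires the `requests` package and a real API key; kept as a thin
    # wrapper so the payload construction above stays testable offline.
    import requests
    return requests.post(
        "https://api.datadoghq.com/api/v1/series",
        params={"api_key": os.environ["DD_API_KEY"]},
        json=payload,
        timeout=10,
    )

payload = build_series_payload("app.checkout.latency", 0.42, tags=["env:staging"])
```

Separating payload construction from the HTTP call keeps the interesting part easy to unit-test and reuse from any service that needs to report metrics.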
Splunk
Splunk facilitates access to data across an organization by capturing, indexing and correlating it in real time in a searchable repository, from which the user can generate graphs, reports, alerts, dashboards and views. The software can identify data patterns, provide metrics, diagnose problems and supply intelligence for business operations. Splunk’s main function is to give meaning to data, in many cases without the user having to do anything. Problems that seem complex become simple, since all the data can be visualized in one place instead of having to be hunted down and then grouped by hand.
Pros:
- Quick log queries across different types of infrastructure
- Adaptable dashboards that handle large amounts of continuous data
- Easy access and information sharing through URL links
- Makes the most of endpoint logs
- Can find and store logs from all types of assets
- Dashboard customization and data visualization
- Building applications based on your needs
- Alarm feature alerts the relevant people in the organization
- Search queries can be saved for future access or even converted into applications
- Solid community of experts and many training materials
- Analyzes data from multiple sources, with a large number of partner integrations
- Stack Overflow has a large Splunk community, which makes learning more convenient
Cons:
- Queries in Splunk can be tricky without prior knowledge of the software and the applications involved
- Duplicating a dashboard for different areas can be complex
- Capturing all the necessary data from cloud platforms is not always simple
- Complex overall architecture with a long implementation time
- Slow interface
- Requires continuous team dedication to work effectively
- Steep learning curve: not intuitive for new users
- High cost
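Searches don’t have to go through the Splunk UI: the product also exposes a REST API for creating search jobs. A minimal sketch follows, assuming the documented `/services/search/jobs` endpoint; the host, credentials and the query itself are placeholders:

```python
# Sketch: create a search job via Splunk's REST API.
# Host, credentials and the example query are illustrative.

def build_search_request(query, earliest="-15m", output_mode="json"):
    """Return (path, form_data) for creating a search job."""
    # Splunk search strings must start with a command; prepend the
    # implicit `search` command if the caller omitted it.
    if not query.strip().startswith(("search ", "|")):
        query = "search " + query.strip()
    return "/services/search/jobs", {
        "search": query,
        "earliest_time": earliest,
        "output_mode": output_mode,
    }

def run(base_url, auth, query):
    # Requires the `requests` package and a reachable Splunk instance.
    import requests
    path, data = build_search_request(query)
    return requests.post(base_url + path, data=data, auth=auth, timeout=30)

path, data = build_search_request("index=main error | stats count by host")
```

Saved this way, the same query-building logic can back scripts, cron jobs or the “converted into applications” workflow mentioned in the pros above.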
LogDNA
LogDNA is a SaaS-based log management service that works with any infrastructure and architecture, supporting multiple platforms and ingestion methods, including syslog and code libraries, and centralizing logs for all applications, servers, platforms and systems. It is developed and optimized for log management in Kubernetes: users can get started with just two kubectl commands and collect logs from all their containers. The offerings available on IBM Cloud help users configure cluster-level logging in the IBM Cloud Kubernetes Service.
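For reference, the two-command Kubernetes setup looked roughly like the sketch below at the time of writing; the secret name, placeholder ingestion key and manifest URL follow LogDNA’s published docs and may have changed since:

```shell
# Store the LogDNA ingestion key as a Kubernetes secret
# (<YOUR-INGESTION-KEY> is a placeholder for your account's key).
kubectl create secret generic logdna-agent-key \
  --from-literal=logdna-agent-key=<YOUR-INGESTION-KEY>

# Deploy the agent DaemonSet, which then tails logs from every container
# on every node in the cluster.
kubectl apply -f https://assets.logdna.com/clients/logdna-agent-ds.yaml
```

Because the agent runs as a DaemonSet, new nodes and containers are picked up automatically with no per-application configuration.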
Pros:
- Ease of configuration, integration and use
- Solid user interface, with practical, agile real-time log tracking
- Structured logs and good search features
- Very affordable price compared to the segment’s alternatives
- Simple interface with many integrations supporting multiple operating systems
Cons:
- Themes have low contrast and poor color combinations
- There can be delays in log processing, and even missing logs
- Log search can be complicated, since queries are not simple Booleans: complex queries can be a little challenging
- Log-based dashboards and metrics are very simple, with few features for dashboards and graphs
Graylog
Graylog is a powerful platform with a three-layer architecture and scalable storage based on Elasticsearch and MongoDB, used to capture, store and enable real-time search and log analysis across terabytes of machine data from any component in the IT infrastructure and applications. It consists of a main server, which receives data from clients installed on multiple servers, and a web interface, which visualizes the data and allows working with the logs aggregated by the main server.
Pros:
- Quick to group, retain and search different types of logs
- Flexible configuration
- Intuitive web interface and relatively quick installation
- Allows you to customize each message to obtain specific information
- Can process large amounts of data quickly
- Elasticsearch serves as the storage backend, while MongoDB stores the configuration
- Abstracts away much of Elasticsearch index management (sharding, creation, deletion, rotation, etc.)
Cons:
- Archiving is not a standard feature in the Community edition
- Tied exclusively to MongoDB as its database
- Some aspects of Graylog are less intuitive, which can make device configuration more complicated
- Updates to Graylog may require manual updates to Elasticsearch and MongoDB
- Lacks instructions for adding a geolocation database by city and country
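Applications usually ship logs to Graylog in GELF, Graylog’s extended log format. The sketch below builds a GELF message and sends it over a UDP input; the server address, port and extra fields are illustrative assumptions:

```python
import json
import socket

# Sketch: emit a log message to a Graylog GELF UDP input.
# Server/port and the extra fields below are hypothetical.

def build_gelf(host, short_message, level=6, **extra):
    """Build a GELF 1.1 message; additional fields get a leading underscore."""
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "level": level,  # syslog severity: 6 = info, 4 = warning, 3 = error
    }
    for key, value in extra.items():
        msg["_" + key] = value  # e.g. _service, _request_id
    return msg

def send_udp(msg, server="graylog.example.com", port=12201):
    payload = json.dumps(msg).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (server, port))
    sock.close()

msg = build_gelf("web-01", "login failed", level=4, service="auth")
```

The underscore-prefixed extra fields are what make the “customize each message to obtain specific information” point above practical: they become searchable fields in Graylog.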
Netwrix Auditor
Netwrix Auditor is a SIEM (Security Information and Event Management) solution designed to help organizations track and assess security-related events across the IT environment. The tool mitigates the risk of system breaches, increasing the efficiency of DevOps teams’ operations and reducing costs accordingly. Formerly known as Change Reporter Suite, Netwrix Auditor uses user behavior analysis to control changes, settings and access in hybrid IT environments, protecting data regardless of location. In August 2019, the company of the same name announced Netwrix Data Classification, which uses Concept Searching technology to identify confidential information and reduce its exposure by automatically tagging it with metadata.
Pros:
- Provides accurate information about security threats
- Easy to generate reports
- Tracks and enforces compliance on detections
- Allows you to create personalized notifications and reports using the “Search” function
- Users can choose the “What”, “Where” and “When” of the events they want to monitor, and configure email reports for each one
- Warns users when their passwords are about to expire
- Provides reports on passwords and expired accounts
- Reports on file and folder access let you monitor when users try to access items they are not authorized to see
Cons:
- The filter options are difficult to learn when using the tool for the first time
- Could provide more in-depth training, with additional videos and guides
- Complex database structure with sparse documentation
- The interface, while intuitive, still needs improvement
ELK
ELK stands for three open source projects: Elasticsearch, Logstash and Kibana.
The combination is often referred to as the “Elastic Stack” and makes up the log management tool known simply as ELK. Elasticsearch is a data search and analysis engine developed in Java and based on the Lucene library. It offers a distributed search engine for all types of data, including textual, numeric, geospatial, structured and unstructured.
Logstash, in turn, is a lightweight server-side data processing pipeline that collects data from a variety of sources, transforms it “on the fly” and sends it to the desired destination. Kibana is an open source data visualization dashboard for Elasticsearch. It provides visualization features on top of the content indexed in an Elasticsearch cluster, and users can create different types of graphs and maps over large volumes of data. Kibana also provides a presentation tool, known as Canvas, which allows users to create slide presentations with live data pulled directly from Elasticsearch.
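The collect–transform–send pipeline that Logstash implements is expressed in a single configuration file with `input`, `filter` and `output` sections. A minimal sketch, where the port, log format and index name are illustrative assumptions:

```conf
input {
  # Listen for syslog messages from the infrastructure
  syslog { port => 5514 }
}

filter {
  # Parse web-server lines into structured fields (hypothetical example)
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Use the timestamp from the log line itself, not the ingestion time
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  # Ship the enriched events to Elasticsearch, one index per day
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "weblogs-%{+YYYY.MM.dd}"
  }
}
```

The filter plugins in the middle section are what the pros list below means by “extracting and enriching input data”.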
Pros:
- Log storage efficiency
- Speed: the user can search an indexed database of 200 million events and find an answer in seconds
- Open source, with good performance and easy configuration
- Search queries based on Java class member names
- Very detailed queries through the standard library
- Really stable: it is very difficult to bring down a cluster
- Simple configuration, with plain-text format files
- Built with open source tools
- Powerful filter plugins for extracting and enriching input data
- Plugin ecosystem allows modular extensions
- Dashboards
- Log analysis
- Log search
Cons:
- Price: the free tier is excellent, but there is a significant jump to the tier where machine learning modules, endpoint security and more become available
- Complex query mechanism
- Complex architecture to configure and optimize
- Tuning input data performance can be tricky
- Document merging can become a bottleneck
- Poor documentation
- Not so easy to use on first contact
- Problems can be difficult to debug
- Sizing/scaling challenges with large data sets (usually in the petabyte range)
- Logstash works on the command line and can be complex for beginners
- Documentation could be better
- Community support: as a relatively new tool, finding answers can often be a challenge
- As a Java product, JVM tuning is needed to handle high loads
- Some performance issues make it slow with large data sets
- Linking to dashboards creates extremely long URLs
- Lacks reporting
Other tools for log management that are worth mentioning are:
Fluentd collects events from several data sources and writes them to destinations such as files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and more. The app helps to unify an organization’s log infrastructure. It is an open source tool with 9.9 thousand stars and 1.1 thousand forks on GitHub.
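Fluentd pipelines are declared in a `fluent.conf` file of `<source>` and `<match>` blocks. A minimal sketch, where the file paths and tag are illustrative assumptions:

```conf
# fluent.conf — tail an application log and route it to a file store.
<source>
  @type tail
  path /var/log/app/app.log
  pos_file /var/log/fluentd/app.log.pos   # remembers the read position
  tag app.access
  <parse>
    @type json                            # assumes the app logs JSON lines
  </parse>
</source>

# Anything tagged app.* gets buffered and written out as files;
# swapping this block is how you'd redirect to Elasticsearch, S3, etc.
<match app.**>
  @type file
  path /var/log/fluentd/app
</match>
```

The tag-based routing between `<source>` and `<match>` blocks is what lets Fluentd unify many inputs and outputs in one place.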
Grafana is a dashboard and graph composer developed to provide rich ways to visualize time-series metrics, mainly through graphs. It supports other ways of viewing data through a pluggable panel architecture, and end users can create complex monitoring dashboards using interactive query builders.
As a visualization tool, Grafana is a popular component in monitoring stacks, often used in combination with time series databases, such as InfluxDB, Prometheus and Graphite; monitoring platforms such as Sensu, Icinga, Checkmk, Zabbix, Netdata and PRTG; SIEMs, such as Elasticsearch and Splunk; and other data sources.
Octopussy, also known as 8Pussy, is free and open source software that monitors systems, analyzes the syslog data they generate and transmits it to a central Octopussy server (often serving as a SIEM solution).
The software can monitor any device that supports the syslog protocol, such as servers, routers, switches, firewalls, load balancers and their applications, in addition to other important services. The main purpose of Octopussy is to alert its administrators and users to various types of events, such as failures, system attacks or application errors.
The program is compatible with many Linux system distributions, such as Debian, Ubuntu, OpenSUSE, CentOS, RHEL, and even meta-distributions like Gentoo or Arch Linux.
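The syslog protocol that all of these devices speak packs each message’s facility and severity into a single “PRI” value (`facility * 8 + severity`, per RFC 3164/5424), which is what tools like Octopussy decode on arrival. A small sketch of that encoding:

```python
# Sketch of the syslog PRI calculation defined in RFC 3164/5424:
# PRI = facility * 8 + severity, written as "<PRI>" at the start
# of every syslog message.

def encode_pri(facility, severity):
    """Combine facility (0-23) and severity (0-7) into a PRI value."""
    if not (0 <= facility <= 23 and 0 <= severity <= 7):
        raise ValueError("facility must be 0-23, severity 0-7")
    return facility * 8 + severity

def decode_pri(pri):
    """Split a PRI value back into (facility, severity)."""
    return divmod(pri, 8)

# facility 3 (daemon) + severity 3 (error) -> a message starting with <27>
pri = encode_pri(3, 3)
```

Decoding PRI is how a central server can, for example, alert only on severities 0–3 (emergency through error) regardless of which device sent the message.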
Checkmk is software developed in Python and C++ for monitoring servers, applications, networks, cloud infrastructure (public, private, hybrid), containers, storage, databases and environmental sensors.
Checkmk is available in three editions: an open source edition (“Checkmk Raw Edition – CRE”), a commercial enterprise edition (“Checkmk Enterprise Edition – CEE”) and a commercial edition for managed service providers (“Checkmk Managed Services Edition – CME”). These editions are available for a variety of platforms, in particular many versions of Debian, Ubuntu, SLES and RedHat/CentOS, and also as a Docker image.
Loggly is a cloud-based log management service provider. It does not require the use of proprietary software agents to collect log data. The service uses open source technologies, including Elasticsearch, Apache Lucene 4 and Apache Kafka.