Splunk vs. Logstash vs. Sumo Logic

What are the main trade-offs between the leading log management tools and how to choose the one that’s right for you?

I don’t know about you, but to me, log files can feel like the rabbits of production environments. You look away for a second, and all of a sudden they’ve bred and multiplied. Writing to log files can generate gigabytes of data per day, all of it unstructured and coming from potentially several machines and sources. This is where log management tools come in.

Of course, deciding to use a log management tool is just the first step. Next you actually have to decide which one you want to use. This can be a rather large decision. Depending on the tool you choose, you may have to weave it throughout your code or set up and install the whole thing yourself. Either way, the cost of changing these tools can be significant.

With that being said, let’s take a look and compare a few of the tools in this space. Today, I’ll be comparing Logstash, Splunk, and Sumo Logic. There are more tools out there worth considering, like Loggly for example, but these three give a good representation of the different types available. You can read more about log management tools in the new ebook we’ve just released: The Definitive Guide for Production Tools.

Getting Started

First things first. What are these tools all about? The three tools we’re looking at today cover the on-premises, SaaS, and open source models. All three of these tools are designed to help you manage and analyze your log files. They all work with the majority of operating systems and can handle a wide range of log file formats. As always, it’s worth double checking to make sure the tool you choose will work with the particular flavor of your data without too much difficulty before you deploy.

Splunk: Splunk is the big player in the log management tool space. It’s the most enterprise-focused tool and works on an on-premises model. With Splunk, you get the most features and the most integrations, but that comes at the highest price. To keep up with the movement of the log management environment, they also offer a SaaS version and a cheaper light version for SMBs. In this post, however, I’ll be focusing on their main offering.

Splunk - Getting Started

Splunk’s search dashboard

Sumo Logic: Sumo Logic started out attempting to be a SaaS version of Splunk. They’ve gone their own way as they’ve matured, but as a result of their beginnings, they are one of the most feature-rich and enterprise-focused SaaS log management tools.

Logstash: Logstash is an open source log management tool that is most often used as part of the ELK stack along with Elasticsearch and Kibana. In the ELK stack, Logstash plays the role of the log workhorse, creating a centralized pipeline for storing, searching, and analyzing log files. It uses built-in inputs, filters, and outputs, along with a range of plugins, to deliver strong functionality to your logs.
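To make that pipeline concrete, here’s a minimal sketch of a Logstash configuration; the file path and grok pattern are hypothetical, meant only to illustrate the input/filter/output shape:

```conf
# Minimal Logstash pipeline: tail a log file, parse each line, print the result.
input {
  file {
    path => "/var/log/myapp/app.log"   # hypothetical application log
    start_position => "beginning"
  }
}

filter {
  grok {
    # Parse lines like "2016-03-01 12:00:00 ERROR Something broke"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  stdout { codec => rubydebug }  # swap in an elasticsearch output for a full ELK setup
}
```

Run it with `bin/logstash -f pipeline.conf` and Logstash emits one structured event per log line, ready for a real backend.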

Bottom line: On-premises, SaaS, and open source are all represented in this post. These tools are designed for the same purpose – log management – but they accomplish this task in different ways.




Installation and Setup

The installation and setup of these tools are very different. On-premises, SaaS, and open source – all are covered here.

Splunk: Splunk Enterprise is an on-premises model, which means you’ll be setting it up locally. Installing Splunk will put a process on your host that acts as a distributed server for accessing, processing, and indexing streaming data. Depending on your OS, there may be a few more processes installed for controlling, monitoring, and configuring certain elements of Splunk. Since Splunk is on-premises, you’ll need to plan for the hardware and storage capacity it requires.

Sumo Logic: Sumo Logic is a SaaS model, which means you’ll be setting up communication out to the Sumo Logic cloud. Sumo Logic has two options for their collectors: hosted and installed. Hosted collectors require no local install and are hosted by Sumo Logic directly; you add an Amazon S3 source or an HTTPS endpoint that any machine can post data to, so your data doesn’t have to live in AWS. For an installed collector, you install it locally on a machine within your environment and then configure the sources that will gather and send the logs to Sumo Logic. Installed collectors can work with sources like SSH, Syslog, and scripts. Sumo Logic uses a multi-tenant design, which helps prevent capacity and scaling limitations, since multiple log sources can feed a single collector.
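As a rough sketch of the installed-collector route, sources are commonly defined in a local JSON file that the collector reads at startup; the source names and paths below are placeholders, so check the current Sumo Logic docs for the exact schema:

```json
{
  "api.version": "v1",
  "sources": [
    {
      "sourceType": "LocalFile",
      "name": "app-logs",
      "pathExpression": "/var/log/myapp/*.log"
    },
    {
      "sourceType": "Syslog",
      "name": "syslog-udp",
      "protocol": "UDP",
      "port": 514
    }
  ]
}
```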

Logstash: Logstash is open source, which means that you’ll be setting it all up on your own system. One thing to note is that Logstash requires a Java runtime to install. Installing Logstash means downloading the binary and unpacking it on your local system. Generally, it’s recommended to also include Elasticsearch as your backend for storing your logs. Once this is set up, you’ll need to configure the inputs, filters, and outputs that you want. Open source means that you’ll need to provision the hardware and space you’re going to need, but as far as open source tools go, Logstash is fairly simple to install.

Bottom line: There are large differences in installation and setup for these tools, as you would expect when one is on-premises, one SaaS, and one open source. Which one you prefer comes down to the type of tool you tend to like. Some factors to consider are whether or not you want to keep things within your environment, and whether or not you want to manage capacity and hardware yourself.


Features

There are plenty of similarities between the tools, but some of the differences may be important to you. Once you decide what you want your log management tool to accomplish, you can choose the right one much faster.

Splunk: Splunk is likely the most feature-rich log management tool available. Its search and charting tools are extensive to the point that there’s probably no set of data you can’t get to through its UI or APIs, and it has strong high-availability and scaling features, as befits an enterprise tool. Security is extensive as well, and it can work with a huge range of machine data.

Sumo Logic: As far as SaaS log management tools go, Sumo Logic is one of the most feature-rich. Since it started out as a SaaS version of Splunk, Sumo Logic has a good chunk of similar features. It is chock-full of features to reduce, search, and chart mass amounts of data. One of Sumo Logic’s main points of attraction is the ability to establish baselines and to actively notify you when key metrics change after an event, such as a new version rollout or a breach attempt.

Sumo Logic Features

The Sumo Logic dashboard

Logstash: Logstash is great for centralizing and unifying your data. It can parse different formats of data and converge them into one common format for your analytics tools. Due to its open source nature, it’s simple to extend it to custom log formats or add plugins for custom data sources. On its own, it doesn’t provide much in the way of a front end or back end, which is why it’s usually deployed with Elasticsearch and Kibana as part of the ELK stack.

Bottom line: As far as features go, Splunk comes out on top. Outside of large enterprises, you may not need quite the extensive amount of features that Splunk offers, in which case Sumo Logic will likely have you covered. The open source nature of Logstash gives you the most control over what you can do with the tool between your own development and the community.

Dashboard and Usage

Another consideration when choosing a tool is ease of use and, frankly, whether or not it looks good. This section looks at the dashboards and usage for the three tools.

Splunk: Splunk gives you the ability to create and manage your own dashboards. It offers a range of data visualization options, such as tables, charts, and event listings. By default, dashboards are created using Simple XML, but there are extensions available if you want to get fancier. Within Splunk, you can look at specific pages or build out a dashboard with several panels of different measurements, working either directly in the XML or through the Splunk dashboard editor.
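For a feel of what the Simple XML side looks like, here’s a sketch of a one-panel dashboard; the index name and search query are hypothetical:

```xml
<dashboard>
  <label>Errors Over Time</label>
  <row>
    <panel>
      <chart>
        <search>
          <!-- Chart the hourly error count from a hypothetical index -->
          <query>index=myapp_logs level=ERROR | timechart span=1h count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
```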

Sumo Logic: Sumo Logic uses a panel-based dashboard system as well. They offer real-time data, but there are certain limits to the types of queries that can be seen in a dashboard. Creating panels is largely straightforward and simple. Most of the information is presented in a chart-based manner.

The Sumo Logic Log Analysis Quickstart dashboard

Logstash: On its own, Logstash doesn’t give you dashboards at all. It does, however, give you the flexibility to determine where and how you output your data. As part of the ELK stack, Kibana is often used as the frontend reporting and visualization tool, but there are existing output options for a wide range of other visualization and metrics tools as well, such as Graphite, Librato, and Datadog.
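As a sketch, fanning the same events out to more than one destination is just a matter of listing multiple outputs in the pipeline configuration; the hosts and metric name here are placeholders:

```conf
output {
  # Primary destination: index into Elasticsearch for Kibana to query
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myapp-logs-%{+YYYY.MM.dd}"
  }

  # Also ship a simple counter to Graphite for dashboards already living there
  graphite {
    host    => "graphite.example.com"
    metrics => { "myapp.log.events" => "1" }
  }
}
```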

Logstash on Kibana

Bottom line: If you’re not already using a separate visualization & metrics tool, then choosing Logstash means that you’ll shortly be adding one of those as well. Splunk gives you a flexible dashboard and visualization platform in either XML or drag-and-drop. Sumo Logic also gives you a panel-based dashboard, with a focus on real-time data.

Integrations and Plugins

An important consideration when choosing a tool is how well it integrates with your existing workflow and tool ecosystem. For log management tools, you want to make sure the tool can handle the format of your log data and can sync up with your environment.

Splunk: Splunk is all about their plugins, with over 600 available. Many of them focus on IT operations, security and compliance, and utilities for Splunk. With this many plugins, you’re very likely to be able to use Splunk to make sense of any format of log data.

Splunk Integrations

Sumo Logic: Sumo Logic has applications targeted for specific large tools, including development automation tools, cloud platforms, OS platforms, and compliance and security tools. Sumo Logic covers the main tools, but if you’re using something smaller or more obscure, they won’t have an application designed for it.

Logstash: As an open source tool, Logstash has a continuously growing plugin ecosystem. As of today, there are over 160 plugins available, many of them from the community. Thanks to the tool’s open source nature, the documentation and design of these plugins are very clear, and you have the power to change them to your heart’s content if you’re willing to put in the work.

Bottom line: If your environment is made up of the larger tools and platforms available, all three will have plugins and integrations available, but as you start to get smaller and more obscure, Splunk is going to shine.


Pricing

Can’t forget pricing, of course. With these three tools, there are real differences that can have a large impact on the business side of your decision. It’s here that the differences between models come through.

Splunk: $1,800 to $60,000 per year, depending on the daily GB volume needed. Splunk offers volume discounts, but is generally the most expensive tool here.

Sumo Logic: Free lite version, and a $90 per month version for each GB/day needed, up to 20 GB. With Sumo Logic, high scale can get quite expensive, but the entry level pricing is friendlier.

Logstash: Free, although there are paid subscriptions available for professional support and monitoring through elastic.co. The costs to you here are the equipment and bandwidth that you’ll need to run it.

Bottom line: There are large differences between the tools here. Splunk is very expensive, with real-world deployments running into tens of thousands of dollars, although at high scale its volume discounts bring it more in line with the other tools’ pricing. Sumo Logic works on the SaaS model, but with its linear pricing, it can approach Splunk’s costs at high scale. Logstash is open source, which of course means the tool itself is free, although the equipment you’ll need to install and run it is not.
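To make the comparison concrete, here’s a quick back-of-the-envelope sketch based on the list prices above; it assumes the linear per-GB pricing holds and ignores volume discounts and negotiated rates:

```python
# Rough annual cost model for Sumo Logic's listed pricing:
# $90/month for each GB/day of ingest, up to the 20 GB/day tier.
SUMO_MONTHLY_PER_GB_DAY = 90

def sumo_annual_cost(gb_per_day):
    """Estimated yearly Sumo Logic cost at a given daily ingest volume."""
    return SUMO_MONTHLY_PER_GB_DAY * gb_per_day * 12

print(sumo_annual_cost(1))   # 1 GB/day entry level: $1,080/year
print(sumo_annual_cost(20))  # 20 GB/day tier cap: $21,600/year
```

At the top of the tier, the SaaS bill lands well inside Splunk’s quoted $1,800 to $60,000 range, which is exactly where the "can approach Splunk’s costs at high scale" caveat kicks in.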

Documentation and Community

Perhaps you’re someone who is able to be fully self-sufficient when using a tool, in which case this section won’t matter much for you. For the rest of us, good documentation and an active community can be a great boon when we run into a wall or need some help. This section looks at how the three tools are doing on that front.

Splunk: Splunk has excellent documentation. The information is presented clearly and organized well, and they even have a dropdown menu that lets you look at the documentation for any version, so you can be sure that what you’re reading is written for the version of Splunk you’re using. On the community front, they have a forum-based community section for questions and answers that seems to be quite active.

Splunk Docs

Splunk documentation

Sumo Logic: The Sumo Logic support area is pretty poor. It’s difficult to navigate, not well organized, and often seems outdated. They do have a community section where you can ask questions, but that’s confusing as well. There is a separate service section that provides much more of a traditional documentation feel, but it’s difficult to find (it isn’t linked from the website) and is more bare-bones than the other tools’ docs. They definitely come in third place in this area.

Logstash: The documentation for Logstash is centralized and fairly extensive. Much of the getting-started and plugin documentation is quite good, but some of it is inconsistent from page to page, as can happen with open source tools. If you want to dive in, you’ll find a huge range of info to guide you through whatever it is you want to do. On the community front, there is a mailing list and IRC channel where you can ask questions, report bugs, or request features.

Bottom line: There are some real differences on this front between the tools. Splunk does documentation and community very well, Logstash provides a solid foundation, and Sumo Logic… hasn’t focused on this area much.

Going beyond your log management tool

No matter which tool you use, sifting through masses of log files can be a chore and can get expensive. On top of that, writing all that data to log files can drag down your app’s performance and add overhead. So another approach is to reduce the amount you write to log files, and with it your reliance on them for troubleshooting. One way to do this without sacrificing your troubleshooting abilities is OverOps.

OverOps’s production error analysis tool

OverOps works at the JVM level to capture complete code and variable data for your errors, without relying on log files. Cut down on writing to and sifting through log files and gain actionable information to fix your errors at the same time.

Check it out.


Final Thoughts

There are of course more log management tool options out there than what I’ve covered here, but these three give you a good example of the main types of tools available in this space. Choosing between them comes down to a few factors. One of the primary factors is going to be the deployment model you’re comfortable with.

On-premises, SaaS, and open source all have different pros and cons that require a careful examination of your needs and environment. Depending on how much control you want and effort you’re willing to put in, you’ll lean more towards one type or another. Other factors include the cost, extensibility, and extra features of the different tools. It’s not exactly a winner takes all scenario.

A “two men enter, one man leaves” Thunderdome-style throwdown isn’t in the cards here, but depending on the particulars of your environment, one of these tools (or other log management options) may be the best fit for you. If only there were a Thunderdome for developer tools… Which tool do you use? Let us know in the comments section below.


Josh does product marketing for OverOps. He's a big baseball fan and a small beer nerd.
  • http://outcoldman.com/ outcoldman

    Josh, thank you for the good review of these three really good tools!

    Splunk also has a free version, up to 500 MB/day; this page has more information: https://www.splunk.com/en_us/products/splunk-enterprise/free-vs-enterprise.html
    Just download Splunk Enterprise and select the Free license when you are prompted for a license on first login.

    P.S. I’m working @ Splunk

    • Jim

      The free Splunk Enterprise license is good for only 60 days, then you get the Ford Pinto version of Splunk, with many of the robust features turned off. We ended up building a pooled license for our developers; it worked, but it’s not FREE as stated on the Splunk website – they should add the disclaimer *Free for 60 days

  • Jim Sherman

    Great review Josh, thanks.
    You should add to the comparison also Stackify (http://stackify.com). We actually went through evaluation of the products you’ve covered here and ended up selecting Stackify as it added several features and capabilities that none supported, including monitoring (server, app but also specific log statements or parameters in logs).
    Oh, and I forgot to mention that their price is much better than those other ones

  • Shalom Carmel

    You missed the real Sumologic documentation, which is https://service.sumologic.com/help/

    • Josh Dreyfuss

      Thanks for the link. What I was saying in the post was that that section isn’t linked from their webpage or support area, and isn’t as fleshed out as the other tools’ documentation

  • Andrés

    Hi Josh!! Thank you for this review, it’s very useful! I recommend Logtrust (https://www.logtrust.com/index.html), another log management tool to watch: very powerful, easy to use, cloud based, and works in real time. Like the others, they have a free plan too.

  • colin corstorphine

    Josh, Thanks for the post. Sumo Logic employee here…

    A small clarification… for our hosted collector you don’t have to have the data in AWS. You can post to the https endpoint from any source if for some reason you prefer not to use an installed collector on the server.

  • zman58

    Josh, Thanks for the good writeup.
    Adding a few points from my perspectives…

    I have used both Splunk and ELK (elasticsearch, logstash, kibana) and like them both. Splunk sets up more quickly as compared to ELK stack, while both take some time to learn and apply. Both provide a very expressive GUI model for data visualization.

    I like the filtering capabilities of logstash–very flexible. You can automate log variable extractions easily from custom or standard log format plugins, including the ability to embed ruby code directly into logstash filter.

    You mentioned for logstash “The costs to you here are the equipment and bandwidth that you’ll need to run it” but that is the case for Splunk as well. Bottom line for ELK is that there are no required license costs at any indexing scale. Local Splunk server instances require local equipment and bandwidth costs—and license costs if you are indexing more than 500 MB per day (the no cost version limit). Not sure if there are other limitations with the no-cost license version, or if that will change in the future.

    ELK and Splunk services both can run locally on minimal Linux installs very efficiently–another bonus for both. It is difficult to beat the flexibility of ELK because you have full access to the code base and can leverage it as you wish (Apache 2 license). ELK presents a competitive support model, including self or third party, whereas Splunk is single-vendor supported.

    • Marco Scala

      Splunk has a wide range of partners that can support customers in implementations and projects based on Splunk. Moreover, the training and certification program guarantees certified Splunk specialists at your service.
      Far better than an effort-based open source model, IMHO, from an enterprise-level perspective.

  • http://www.titleoftheblog.com Eric M

    Splunk Admin looking at ELK here.

    One big differentiator between the two I’m seeing is the parsing model. ELK does event parsing when data is ingested, while Splunk does parsing when searches are executed. This results in a drastically different workflow:

    For Splunk, when onboarding new data there are only a couple of things you really need to define: where events begin and end, and timestamp parsing. (And Splunk can generally figure these out automatically.) That is enough to get data into the tool successfully. Extracting fields from the event text (say, response codes from Apache logs) happens when searches are executed. This means you can ingest data now and do extraction later, cutting down the time and effort required to onboard significantly. I can, as a Splunk admin, quickly prepare my environment for data ingestion, and then allow the SMEs for that data to do the deeper parsing configuration within the tool. Parsing is also non-destructive, so there is the flexibility to change how data is parsed on the fly.

    ELK appears to need to parse all metadata on ingestion. So as an admin, I need to define all the fields I want to identify from the event data before bringing it into the system, and if I want to change how those fields are identified, I can’t apply those changes to already ingested data. This is less of a problem for relatively defined data formats (again, like apache logs) but if you have custom application logs which can vary wildly in content, and are somewhat volatile due to active development, it will require much more effort to keep the parsing rules current.

    So you save on the licensing by going with ELK over Splunk, but you pay for it in the actual work needed to implement and maintain. Whether this is a fair trade very much depends on the kind of data you’re ingesting and its volatility.

    • Francois Boulanger

      Hi Eric, I currently have the same understanding as your post mentions – ELK appears to require a proper parsing/mapping of the data before onboarding it (“schema-on-write”) while Splunk lets you do (or modify) field extraction afterwards (“schema-on-read”).

      Seems to me like this is a huge disadvantage for ELK in regards to agility and flexibility – you need to do the work up front and “get it right” the first time and can’t easily change fields if you missed something.

      Did you get a chance to get more experience with ELK? Has your understanding changed since writing this? I’m quite surprised that this aspect doesn’t pop up more in discussions and I’m wondering if we’re missing something here. Thanks!

  • AlbertM

    Very good review. There is also jKool: https://www.jkoolcloud.com. The main difference is that jKool deals with automated transaction tracing and discovery in addition to log analytics and performance metrics. Shops that want unified application analytics will find log analytics alone is somewhat limited for accurate root-cause analysis. Logs alone simply lack the required information, so you need automated tooling to get the metrics out of applications without dependency on logs.

  • https://www.itcentralstation.com/ Danielle Felder

    Great review, Josh! Love the variety of log management tools that you feature in this post. Your readers may also benefit from reading real user reviews of these tools, as well as all the other major log management tools at IT Central Station: https://www.itcentralstation.com/categories/log-management.

    Users interested in tools of the Splunk variety also read reviews for LogRhythm. This user wrote, “We also evaluated Splunk, and we chose LogRhythm as the correlation rules performed it handled clients on DHCP better.” You can read the rest of his review, as well as learn what others have to say about LogRhythm, here: https://goo.gl/T7iCRg