Duke logging

Benchmark tests to help you discover how Logback performs under pressure

Logging is essential for server-side applications, but it comes at a cost. Still, it’s surprising how much impact small changes and configuration tweaks can have on an application’s logging throughput. In this post we’ll benchmark Logback’s performance in terms of log entries per minute, find out which appenders perform best, what prudent mode is, and explore some of the awesome side effects of async methods, sifting and console logging. Let’s get to it.

The groundwork for the benchmark

At its core, Logback was conceived as a successor to Log4j, with tweaks and improvements under Ceki Gülcü’s vision. Or, as they say, a better Log4j. It features a native SLF4J API, a faster implementation, XML configuration, prudent mode, and a set of useful appenders which I’ll elaborate on shortly.

Having said that, there are quite a few ways to log with the different sets of appenders, patterns and modes available in Logback. We took a set of commonly used combinations and put them to the test on 10 concurrent threads to find out which runs fastest. The more log entries written per minute, the more efficient the method is, and the more resources are free to serve users. It’s not exact science, but to be more precise, we ran each test 5 times, removed the top and bottom outliers, and took the average of the results. To keep things fair, all log lines written had an equal length of 200 characters.

** All code is available on GitHub right here. The test was run on a Debian Linux machine running on Intel i7-860 (4 core @ 2.80 GHz) with 8GB of RAM.

First Benchmark: What’s the cost of synchronous log files?

First we took a look at the difference between synchronous and asynchronous logging. Both write to a single log file: the FileAppender writes entries directly to the file, while the AsyncAppender feeds them to a queue which is then flushed to the file. The default queue size is 256, and when the queue is 80% full it stops letting in new entries of lower levels (except WARN and ERROR).

Sync Vs. Async

The table compares the FileAppender with different queue sizes for the AsyncAppender. Async came out on top with the 500 queue size.

  • Tweak #1: AsyncAppender can be 3.7x faster than the synchronous FileAppender. Actually, it’s the fastest way to log across all appenders.

It performed far better than the default configuration, which even trails behind the sync FileAppender that was supposed to finish last. So what might have happened?

Since we’re writing INFO messages from 10 concurrent threads, the default queue size might have been too small, and messages could have been lost to the default discarding threshold. Looking at the results of the 500 and 1,000,000 queue sizes, you’ll notice that their throughput was similar, so queue size and threshold weren’t an issue for them.

  • Tweak #2: The default AsyncAppender can cause a 5x performance cut and even lose messages. Make sure to customize the queueSize and discardingThreshold according to your needs.

<appender name="ASYNC500" class="ch.qos.logback.classic.AsyncAppender">
  <queueSize>500</queueSize>
  <discardingThreshold>0</discardingThreshold>
  <appender-ref ref="FILE" />
</appender>

** Setting an AsyncAppender’s queueSize and discardingThreshold

Second Benchmark: Do message patterns really make a difference?

Now we want to see the effect of log entry patterns on writing speed. To make this fair, we kept the log line’s length equal (200 characters) even when using different patterns. The default Logback entry includes the date, thread, level, logger name and message; by playing with it, we tried to see what the effects on performance might be.
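To make the comparison concrete, here’s roughly what the two ends of the spectrum look like in logback.xml. These pattern strings are illustrative sketches, not the exact ones from the benchmark:

```xml
<!-- Default-style pattern: date, thread, level, logger name and message -->
<encoder>
  <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>

<!-- Minimal pattern: level and message only -->
<encoder>
  <pattern>%-5level %msg%n</pattern>
</encoder>
```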

Logging patterns

This benchmark demonstrates up close the benefit of logger naming conventions. Just remember to change the logger’s name according to the class you use it in.

  • Tweak #3: Naming the logger by class name provides a 3x performance boost.

Taking the logger’s or the thread’s name off added some 40k-50k entries per minute. No need to write information you’re not going to use. Going minimal also proved to be a bit more effective.

  • Tweak #4: Compared to the default pattern, using only the Level and Message fields provided 127k more entries per minute.
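The difference behind Tweak #3 comes down to how the class name reaches the log line. A sketch of the two approaches (patterns are illustrative): %class forces Logback to walk the call stack on every entry, while %logger just prints the name the logger was created with, which costs nothing at log time:

```xml
<!-- Expensive: %class computes the caller's class at log time -->
<pattern>%-5level %class - %msg%n</pattern>

<!-- Cheap: %logger prints the logger's name, fixed once at creation,
     e.g. LoggerFactory.getLogger(MyService.class) -->
<pattern>%-5level %logger - %msg%n</pattern>
```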

Third Benchmark: Dear prudence, won’t you come out to play?

In prudent mode a single log file can be accessed from multiple JVMs. This of course takes a hit on performance because of the need to handle another lock. We tested prudent mode on 2 JVMs writing to a single file using the same benchmark we ran earlier.


Prudent mode takes a hit as expected, although my first guess was that the impact would be stronger.

  • Tweak #5: Use prudent mode only when you absolutely need it to avoid a throughput decrease.

<appender name="FILE_PRUDENT" class="ch.qos.logback.core.FileAppender">
  <file>prudent.log</file>
  <prudent>true</prudent>
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
  </encoder>
</appender>

** Configuring Prudent mode on a FileAppender


Fourth Benchmark: How to speed up synchronous logging?

Let’s see how synchronous appenders other than the FileAppender perform. The ConsoleAppender writes to System.out or System.err (defaulting to System.out) and of course can also be piped to a file; that’s how we were able to count the results. The SocketAppender writes to a specified network resource over a TCP socket. If the target is offline, the message is dropped; otherwise, it’s received as if it was generated locally. For the benchmark, the socket was sending data to the same machine, so we avoided network issues and concerns.
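For reference, a minimal ConsoleAppender setup looks something like this (the appender name and pattern are illustrative):

```xml
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
  </encoder>
</appender>
```

The piping itself happens at the OS level when launching the app, e.g. java -jar app.jar > app.log.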

Synchronous Logging

To our surprise, explicit file access through the FileAppender is more expensive than writing to the console and piping it to a file. The same result, a different approach, and some 200k more log entries per minute. The SocketAppender performed similarly to the FileAppender in spite of adding serialization in between; a real network resource, had one existed, would have borne most of the overhead.

  • Tweak #6: Piping ConsoleAppender to a file provided 13% higher throughput than using FileAppender.

Fifth Benchmark: Now can we kick it up a notch?

Another useful method we have in our toolbelt is the SiftingAppender. Sifting allows breaking the log into multiple files. Our logic here was to create 4 separate logs, each holding the output of 2 or 3 of the 10 threads we ran in the test. This is done by specifying a discriminator, in our case logid, which determines the file name of the logs:

<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>logid</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <appender name="FILE-${logid}" class="ch.qos.logback.core.FileAppender">
      <file>log-${logid}.log</file>
      <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
      </encoder>
    </appender>
  </sift>
</appender>

** Configuring a SiftingAppender


Once again our FileAppender takes a beating. The more output targets, the less stress on the locks and the less context switching. The main bottleneck in logging, just as in the async example, proves to be synchronizing access to a single file.

  • Tweak #7: Using a SiftingAppender can allow a 3.1x improvement in throughput.


We found that the way to achieve the highest throughput is by using a customized AsyncAppender. If you must use synchronous logging, it’s better to sift the results and write to multiple files by some logic. I hope you’ve found the insights from this Logback benchmark useful, and I look forward to hearing your thoughts in the comments below.



Some kind of monster @ Takipi, GDG Haifa lead.
  • Jim Smith

    the fastest logging is to switch to log4j2 async logging.

  • Pablo Fernandez

    Would love to see this appender included in the tests (it uses jdk7 AsynchronousFileChannel):


    • http://www.takipi.com/ Alex Zhitnitsky

      Hi @disqus_E3fvmSPPMF:disqus, thanks for the reference! The benchmark is available on GitHub if you’d like to try: https://github.com/takipi/the-logback-olympics

      • Vadim Arefiev

        What would you recommend if I need to get new log file for each day, but performance is the key.

  • ericacm

    Could you go into more detail about Tweak #3: Naming the logger by class name provides 3x performance boost? Thanks,

    • http://www.takipi.com/ Alex Zhitnitsky

      Sure, let’s say you want the corresponding class name in your logs. Instead of adding another field to the log pattern (%class), you can follow the practice of naming the logger by the class name. The benchmark showed that extracting and passing the class name via %class is an expensive operation that significantly slowed us down; using the logger’s name provided 3x better results.

      • ericacm

        Thanks – The downside then would be the need to declare a logger for each class, rather than one that covers a set of classes?

        • http://www.takipi.com/ Alex Zhitnitsky

          Right, of course it can also make sense to cover a set of classes when you don’t need the exact class name.

          • ericacm

            Thanks, this post was really useful.

  • Remko Popma

    Am I correct in assuming that all these tests were measuring single-threaded performance? That is unfortunate because most real-life applications are now multi-threaded and this has radically different performance characteristics.

    The AsyncAppender provided by Logback and Log4j-1.2, although better than synchronous logging, will suffer from thread contention when used in a multi-threaded application. A much better alternative (at least from a performance perspective) are Log4j2’s Asynchronous Loggers (http://logging.apache.org/log4j/2.x/manual/async.html#Performance ).

    Log4j2’s Async Loggers can log 18x more messages per second than Logback. But what is even more interesting is latency (perhaps a better term is “response time”): Log4j2’s Async Loggers have low and _predictable_ latency, even in multi-threaded apps. On the contrary, the latency of Logback’s AsyncAppender is an order of magnitude larger _on average_. If your business cares not just about average response time, but also about the worst case, then the gap widens: Logback has outliers that are four orders of magnitude worse than Log4j2’s Async Logger’s outliers. For example, in our tests with 64 threads, Log4j2 would have peaks of 16 microseconds, where Logback has peaks of 200 milliseconds; a 10,000x difference.

    • http://www.takipi.com/ Alex Zhitnitsky

      Actually that’s not correct, the tests were multi-threaded (it’s written in the introduction). We used 10 threads, you can check it out here: https://github.com/takipi/the-logback-olympics
      Thanks for the Log4j2 insights and for sharing your results. It does provide an impressive performance benefit with throughput and latency. Congrats on the stable release! :)

      • Remko Popma

        Ah, looks like I missed that, thanks for correcting me. Interesting that AsyncAppender with a small queue is faster than with a large queue. I wonder why that is. Anyway, interesting write up. Let me know if you ever want to blog about Log4j2 and you’d like me to help out.

        • http://www.takipi.com/ Alex Zhitnitsky

          Thanks Remko, I’ll keep that in mind towards our next posts!

  • Valentin Brigite Cocaud

    I’m not sure I’ve understood everything, so my question may be stupid: why not try using the SiftingAppender with an AsyncAppender rather than a FileAppender?