Blocking vs non-blocking HTTP server performance

Przemysław Sobierajski

In this article, I’d like to take a look at the performance and scalability of both blocking and non-blocking HTTP servers. I’ll compare the average response time of multiple REST requests sent to simple endpoints built with Spring Boot and Ratpack.

By default, Spring Boot runs on Tomcat, which is a Java Servlet container. Tomcat uses a thread pool to process requests concurrently. The pool has a fixed maximum size, so when all threads are busy, further requests wait in a queue until a thread becomes available.
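If you want to see or change that limit, recent Spring Boot versions (2.3 and later) expose the embedded Tomcat pool as configuration properties. A minimal sketch is shown below; the values are just the defaults, and older Spring Boot versions use server.tomcat.max-threads instead.

# src/main/resources/application.properties
# Maximum number of Tomcat worker threads (default 200)
server.tomcat.threads.max=200
# How many connections may queue up when every worker thread is busy (default 100)
server.tomcat.accept-count=100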

Ratpack is a set of Java libraries for building scalable HTTP applications. It is built on Netty, a highly performant, event-driven networking engine. A Netty event loop keeps looking for new events, such as incoming HTTP requests, and when an event occurs it is passed on to the appropriate handler. The event loop does not wait for request processing to finish, which is what makes it a non-blocking HTTP server.
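To make that idea concrete, here is a minimal, simplified sketch of the event-loop pattern (an illustration only, not Netty’s actual implementation): the loop thread only dispatches events, and anything slow is handed off to a worker pool so the loop itself never blocks.

import java.util.concurrent.*;

public class EventLoopSketch {

    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();
    private final ExecutorService blockingPool = Executors.newCachedThreadPool();

    // The single event-loop thread: it only dispatches events,
    // it never waits for a handler to finish its work.
    public void run() throws InterruptedException {
        while (true) {
            Runnable event = events.take();  // e.g. "request arrived", "response ready"
            event.run();                     // handlers must be quick and non-blocking
        }
    }

    // Slow work (I/O, sleeps) is offloaded so the event loop stays free;
    // when it is done, a follow-up event is put back on the queue.
    public void offload(Runnable slowTask, Runnable onDone) {
        blockingPool.submit(() -> {
            slowTask.run();
            events.add(onDone);
        });
    }
}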

Spring Boot application

I created a simple REST controller that accepts a request and waits for a given time before responding. The delay simulates an I/O operation.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class BootApp {

    private final BlockingOperationSimulator simulator
            = new BlockingOperationSimulator();

    public static void main(String[] args) {
        SpringApplication.run(BootApp.class, args);
    }

    @GetMapping("blockFor/{milliseconds}")
    public int blockFor(@PathVariable int milliseconds)
            throws InterruptedException {
        return simulator.blockFor(milliseconds);
    }
}

The BlockingOperationSimulator causes the currently executing thread to sleep for the specified number of milliseconds.

public class BlockingOperationSimulator {

    public int blockFor(int milliseconds) throws InterruptedException {
        Thread.sleep(milliseconds);
        return milliseconds;
    }
}

Ratpack application

The Ratpack application does exactly the same thing.

import ratpack.exec.Blocking;
import ratpack.handling.Context;
import ratpack.server.RatpackServer;

public class RatpackApp {

    private final BlockingOperationSimulator simulator
            = new BlockingOperationSimulator();

    public static void main(String[] args) throws Exception {
        new RatpackApp().startServer();
    }

    private void startServer() throws Exception {
        RatpackServer.start(s -> s
                .handlers(h ->
                        h.get("blockFor/:milliseconds", this::block)
                )
        );
    }

    private void block(Context ctx) {
        int milliseconds = ctx.getPathTokens().asInt("milliseconds");
        // Blocking.get() runs the sleeping call on Ratpack's separate blocking
        // thread pool, so the Netty event loop thread is never tied up by it.
        Blocking.get(() -> simulator.blockFor(milliseconds))
                .map(String::valueOf)
                .then(ctx::render);
    }
}
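The handler hands the sleep off to Ratpack’s Blocking API, which executes it on a separate blocking thread pool and delivers the result back to the event loop. For contrast, a hypothetical handler like the one below (not part of the project, shown only to illustrate the point) would run the sleep directly on a Netty event-loop thread and stall every other request waiting on that loop:

// Hypothetical anti-pattern, for contrast only: this blocks the event loop itself
private void blockEventLoop(Context ctx) throws Exception {
    int milliseconds = ctx.getPathTokens().asInt("milliseconds");
    ctx.render(String.valueOf(simulator.blockFor(milliseconds)));  // Thread.sleep runs on the event loop
}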

You can download the complete code from my GitHub.

Comparison technique

I’m using ApacheBench (ab) to measure the response time of the requests.

ab -n 1000 -c 200 localhost:8080/blockFor/100

The above command sends 1000 requests in total, with up to 200 of them in flight at the same time.

Results

I made 16 measurements: the Cartesian product of 10, 100, 1000, and 10000 concurrent requests with blocking operations lasting 50, 200, 500, and 2000 ms.
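To reproduce the whole matrix, one way is a small shell loop over both parameter sets. This is only a sketch: it assumes the server under test is already running on localhost:8080, and the total request count is an arbitrary choice as long as it is not smaller than the concurrency level.

TOTAL=10000
for c in 10 100 1000 10000; do
  for ms in 50 200 500 2000; do
    ab -n $TOTAL -c $c localhost:8080/blockFor/$ms
  done
done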

Summary

It seems that Netty can be much more scalable than Tomcat. The response time for REST requests with Ratpack is almost always lower than with Spring Boot. The difference is huge when there are many concurrent requests and each request spends most of its time waiting on I/O, leaving threads idle. With Tomcat’s default pool of 200 worker threads and a 2000 ms blocking call, for example, the server can complete at most roughly 100 requests per second, so most of the 10000 concurrent requests simply sit in the queue.

This may lead to the conclusion that non-blocking HTTP servers are the better choice if you care about the performance and scalability of a high-traffic web application. Ratpack is just one example of a very simple non-blocking web server. You could also use e.g. Spring Boot 2 (with WebFlux), Vert.x, Akka… It depends on your project’s needs.
