Micronaut 2: Don't Let Event Loops Own You

By James Kleeh, OCI Micronaut Development Lead

June 2020

Introduction

Micronaut has been designed from its inception to work with different server implementations. The default implementation is based on Netty.

Netty simplifies network programming and is incredibly flexible. Micronaut's Netty implementation is based around the concept of a multi-threaded event loop. Event loops have been documented extensively, and if you want a deeper understanding there are plenty of resources online and in print; the book Netty in Action is a good place to start.

In this article we'll explore how Micronaut makes the event loop accessible to you and what options are available to control the threading model for your application. If you aren't familiar with event loops, the primary takeaway is to never, ever, ever block the event loop.

The first rule of event loops is don't block the event loop.

The second rule of event loops is don't block the event loop!

What Does Blocking Mean?

Now that you know to never block the event loop, you may be wondering, "What does it mean to 'block'?"

Blocking, in general, means executing an action that forces the current thread to wait for an external resource. This applies to anything that involves a network call: database queries, HTTP requests, publishing messages to a broker, and many other operations. It also includes some things that are local to the server, such as accessing the disk. Unless you use non-blocking IO, reading or writing to disk is a blocking operation.
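
To make this concrete, here is a minimal sketch of a blocking call in plain Java (the file path is made up for illustration): the calling thread can do nothing else until the read completes.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class BlockingExample {

    List<String> readReport() throws IOException {
        // Blocking IO: the current thread waits here until the entire file
        // has been read from disk.
        return Files.readAllLines(Paths.get("/tmp/report.csv"));
    }
}

The same applies to a JDBC query or a synchronous HTTP call: the thread is parked until the external resource responds.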

And What If I Block?

If blocking operations are executed on the event loop, the amount of concurrency your application can handle will diminish to a small fraction of what it could otherwise process. The event loop usually has a relatively small number of threads; in Micronaut, that number defaults to twice the number of CPU cores. If you execute blocking operations on the event loop, those threads quickly end up stuck in a waiting state and no more requests can be accepted.

Micronaut 1

From the beginning, Micronaut has attempted to make the management of the event loop and blocking operations transparent to the user. For a large percentage of Java backend developers, an event loop is a foreign concept. Many developers are comfortable with servlet-based servers where executing blocking operations at the application level is normal. We wanted to ease the transition into this model for those developers.

In Micronaut 1.x, we conceived of a scheme where Micronaut would automatically decide for you where to execute your application code. Because Micronaut does compile-time analysis of your application, the framework has all of the information about your classes and methods. If a method was determined to be blocking, it would be offloaded to a separate thread pool. Within Micronaut, we call that thread pool the IO thread pool.

Reactive and Blocking

In Micronaut 1.x, a method's return type determines where Micronaut will execute it.

  • For methods that return a reactive type, the method is executed on the event loop.
  • For methods that return "blocking" types, the method is executed on the IO thread pool.

A reactive type includes anything that implements org.reactivestreams.Publisher, as well as the specific types from RxJava and Project Reactor and Java futures such as CompletableFuture. All other types are considered blocking types.
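
As a hedged sketch of this Micronaut 1.x behavior (LibraryController, Book, and BookRepository are hypothetical types invented for illustration, not part of the framework), the two methods below would be scheduled differently based solely on their return types:

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.reactivex.Single;

@Controller("/library")
public class LibraryController {

    private final BookRepository repository; // hypothetical data-access bean

    LibraryController(BookRepository repository) {
        this.repository = repository;
    }

    @Get("/reactive/{id}")
    Single<Book> reactive(Long id) {
        // Reactive return type: under AUTO selection this runs on the event loop
        return repository.findReactive(id);
    }

    @Get("/blocking/{id}")
    Book blocking(Long id) {
        // Non-reactive return type: under AUTO selection this is offloaded to the IO thread pool
        return repository.findBlocking(id);
    }
}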

Even with this automatic determination, it is still possible for the developer to decide where the method should be executed. Two annotations have been created for this purpose (a usage sketch follows the list):

  1. @Blocking
    If the @Blocking annotation is used on the method, that method will be executed on the IO thread pool, regardless of the return type.
  2. @NonBlocking
    If the @NonBlocking annotation is used, the method will be executed on the event loop, regardless of the return type.
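
A minimal sketch of how these overrides might look under Micronaut 1.x (the controller and its data are made up for illustration; the annotations live in io.micronaut.core.annotation):

import io.micronaut.core.annotation.Blocking;
import io.micronaut.core.annotation.NonBlocking;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.reactivex.Single;

import java.util.Collections;
import java.util.List;

@Controller("/reports")
public class ReportController {

    @Get("/{id}")
    @Blocking // reactive return type, but forced onto the IO thread pool
    Single<String> report(Long id) {
        // Safe to block here (e.g. a synchronous file read or JDBC query, elided)
        // before wrapping the result in a reactive type.
        return Single.just("report-" + id);
    }

    @Get
    @NonBlocking // non-reactive return type, but kept on the event loop
    List<String> names() {
        // Must not block: only return data that is already in memory.
        return Collections.singletonList("quarterly");
    }
}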

Through feedback and help from the community, we have come to the conclusion that this model (as implemented in Micronaut 1.x) is not the best path forward as we evolve the framework. It conflates the concepts of reactive programming and non-blocking operations, which are in fact not related at all. While it is common for asynchronous database drivers and asynchronous HTTP clients to use reactive types (ours does), a method returning a reactive type is not inherently non-blocking. In fact, the reactive streams of the most common libraries run on the current thread (synchronously) unless the stream is explicitly configured to use other threads.

Improving The Model

In keeping with the conventions of semantic versioning, Micronaut 1.x cannot ship any breaking changes, so a change to this behavior had to wait until version 2.

Starting in Micronaut 1.3, a configuration option was added to allow developers to control how their methods would be executed. The configuration option micronaut.server.thread-selection can be set to one of three thread selection options (a configuration sketch follows the list):

MANUAL
The controller methods will always be executed on the event loop. If some part of the operation should be offloaded to prevent blocking, that logic is up to the user.
IO
The controller methods will always be executed on the IO thread pool.
AUTO (DEFAULT)
The default behavior of automatically selecting where to execute the method based on the return type.
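
For example, a minimal application.yml sketch (the key is the one named above; only the YAML layout is assumed) that keeps every controller method on the event loop:

micronaut:
  server:
    thread-selection: MANUAL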

This configuration option is a big improvement for those who need finer control over their method executions. However, for developers who want to configure their own thread pools and use them instead of the existing options, more changes to the framework were required.

Micronaut 2

With Micronaut 2 we have improved the model even further and remedied the shortcomings of Micronaut 1. Micronaut 2 still contains the thread selection configuration; however, the default is now MANUAL. That means that, without any additional configuration or annotations, all controller methods will be executed on the event loop.

This is the most impactful breaking change in Micronaut 2, and the consequences of not understanding it could have a significant negative effect on application performance.

For backwards compatibility, developers can configure the thread selection to AUTO, and the previous behavior will be used. That is a great option for those who want a quick interim solution, allowing time to review their methods before deciding which thread pool they should be executed on.

Additional Options

Execute On

A new annotation has been introduced in Micronaut 2 to allow users to have finer control over what thread pool gets used. The @ExecuteOn annotation can be applied to a controller class or any of its methods, and tells Micronaut what thread pool to use. For example:

@ExecuteOn(TaskExecutors.IO)
@Controller
public class BookController {

    @Inject BookRepository bookRepository; // hypothetical blocking repository bean

    @Get("/{id}")
    Book get(Long id) {
        // This will run on a thread from the IO thread pool, so it is OK to block,
        // for example with a synchronous database lookup.
        return bookRepository.findById(id);
    }
}

Micronaut expects a thread pool with the name provided in the annotation to already exist. Any custom thread pools must be specified in the application configuration in order to use them with this annotation.
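
As a hedged sketch (the pool name "database" is invented, and the property names follow the micronaut.executors configuration described in the Micronaut documentation), a custom fixed thread pool could be declared like this:

micronaut:
  executors:
    database:
      type: fixed
      nThreads: 20

The name "database" could then be passed to the annotation as @ExecuteOn("database").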

The @ExecuteOn annotation takes precedence over any of the thread selection configuration options. That means users who keep the old behavior by setting thread selection to AUTO can still use the annotation, allowing for a gradual transition before finally moving to MANUAL mode entirely.

If you would like to take this approach, set micronaut.server.thread-selection: AUTO in your configuration, then apply the @ExecuteOn annotation to any controller classes or methods that should be offloaded to another executor service.

Event Loop Group Configuration

Micronaut 2 also introduces the ability to configure Netty event loops beyond the one used for the server and client. This configuration is separate from thread pool configuration (which goes through micronaut.executors); however, the name of an event loop can also be used in the @ExecuteOn annotation.
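
As a hedged sketch (the group name "other" is invented, and the property names are assumptions based on the event loop configuration described in the documentation linked below), an additional event loop group might be configured and then referenced by name:

micronaut:
  netty:
    event-loops:
      other:
        num-threads: 10

A controller or method annotated with @ExecuteOn("other") would then run on that event loop group rather than on the IO thread pool or the default event loop.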

Whether you want to use a standard thread pool, or an event loop, depends on your use case. It would be up to each team to research and understand the differences and then make a decision based on the intended usage. Remember the first rule of event loops though!

See https://docs.micronaut.io/snapshot/guide/index.html#threadPools for more information on configuring thread pools and Netty event loops in Micronaut 2.

Advanced Usage

For those who are comfortable with reactive libraries, thread control can be done entirely with those libraries. RxJava 2 allows you to control which thread will be used for each part of your reactive stream. For example:

    @Inject EventLoopGroup eventLoopGroup;

    @Inject @Named(TaskExecutors.IO) ExecutorService ioExecutor;

    @Inject BookRepository bookRepository; // hypothetical blocking repository bean

    @Get("/{id}")
    Single<Map<String, Object>> book(Long id) {
        return Single.fromCallable(() -> {
                    // This will be executed on the IO thread pool:
                    // retrieve the book from the database based on the ID
                    return bookRepository.findById(id);
                })
                .observeOn(Schedulers.from(eventLoopGroup))
                .map(book -> {
                    // This will be executed on the Netty event loop
                    Map<String, Object> data = new LinkedHashMap<>(2);
                    data.put("id", book.getId());
                    data.put("title", book.getTitle());
                    return data;
                })
                .subscribeOn(Schedulers.from(ioExecutor));
    }

The code above offloads the database call to the IO thread pool, then continues on the event loop to create the response body.

Conclusion

We believe the new thread selection model in Micronaut 2 puts more power in the hands of developers and provides more flexibility in the process. Micronaut 2 has many more exciting features and changes, so be sure to read the documentation for full details. Remember, if you have any questions about this topic or Micronaut in general, please join our Gitter channel.

Software Engineering Tech Trends (SETT) is a regular publication featuring emerging trends in software engineering.

