The reactor software design pattern is an event handling strategy that can respond to many potential service requests concurrently. The pattern's key component is an event loop, running in a single thread or process, which demultiplexes incoming requests and dispatches them to the correct request handler.[1]
By relying on event-based mechanisms rather than blocking I/O or multi-threading, a reactor can handle many concurrent I/O-bound requests with minimal delay.[2] A reactor also allows for easily modifying or expanding specific request handler routines, though the pattern does have some drawbacks and limitations.[1]
With its balance of simplicity and scalability, the reactor has become a central architectural element in several server applications and software frameworks for networking. Derivations such as the multireactor and proactor also exist for special cases where even greater throughput, performance, or request complexity is necessary.[1][2][3][4]
Practical considerations for the client–server model in large networks, such as the C10k problem for web servers, were the original motivation for the reactor pattern.[5]
A naive approach to handle service requests from many potential endpoints, such as network sockets or file descriptors, is to listen for new requests from within an event loop, then immediately read the earliest request. Once the entire request has been read, it can be processed and forwarded on by directly calling the appropriate handler. An entirely "iterative" server like this, which handles one request from start to finish per iteration of the event loop, is logically valid. However, it will fall behind once it receives multiple requests in quick succession. The iterative approach cannot scale because reading the request blocks the server's only thread until the full request is received, and I/O operations are typically much slower than other computations.[2]
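As an illustration, a purely iterative server might be sketched in Python as follows. The names and the echo handler are hypothetical, and the single `recv()` stands in for reading the whole request; the point is that both `accept()` and `recv()` block the server's only thread.

```python
# Sketch of a purely iterative server: one request is accepted, read, and
# handled per loop iteration. The blocking recv() stalls the whole server
# while a slow client is still sending.
import socket

def handle_request(data: bytes) -> bytes:
    # Placeholder application logic (assumption: a simple echo service).
    return b"echo: " + data

def serve_iteratively(listener: socket.socket, max_requests: int) -> None:
    for _ in range(max_requests):
        conn, _addr = listener.accept()   # blocks until a client connects
        with conn:
            data = conn.recv(4096)        # blocks until the request arrives
            conn.sendall(handle_request(data))
```

While one client trickles its request in, every other client queues behind it; nothing else in the process can run.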
One strategy to overcome this limitation is multi-threading: by immediately splitting off each new request into its own worker thread, the first request will no longer block the event loop, which can immediately iterate and handle another request. This "thread per connection" design scales better than a purely iterative one, but it still contains multiple inefficiencies and will struggle past a point. From a standpoint of underlying system resources, each new thread or process imposes overhead costs in memory and processing time (due to context switching). The fundamental inefficiency of each thread waiting for I/O to finish isn't resolved either.[1][2]
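The "thread per connection" variant might be sketched like this (again with a hypothetical echo handler); only the worker blocks on a slow client, but every connection still costs a thread:

```python
# "Thread per connection" sketch: each accepted connection is handed to a
# fresh worker thread so the accept loop never blocks on a slow client.
# Thread-creation and context-switching overhead grows with the number of
# concurrent connections, and each worker still sits blocked in recv().
import socket
import threading

def handle_request(data: bytes) -> bytes:
    return b"echo: " + data

def worker(conn: socket.socket) -> None:
    with conn:
        data = conn.recv(4096)            # only this worker blocks here
        conn.sendall(handle_request(data))

def serve_thread_per_connection(listener: socket.socket, max_requests: int) -> None:
    threads = []
    for _ in range(max_requests):
        conn, _addr = listener.accept()
        t = threading.Thread(target=worker, args=(conn,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()                          # wait for in-flight requests
```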
From a design standpoint, both approaches also tightly couple the general demultiplexer with specific request handlers, making the server code brittle and tedious to modify. These considerations suggest a few major design decisions: keep the event loop single-threaded, use non-blocking I/O to demultiplex events so that no single request stalls the others, and decouple the loop from the request handlers behind a uniform dispatch interface.
Combining these insights leads to the reactor pattern, which balances the advantages of single-threading with high throughput and scalability.[1][2]
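A minimal reactor can be sketched in Python using the standard-library selectors module as the synchronous event demultiplexer. The loop blocks only in select(); each ready handle is dispatched to the callback registered for it. Names such as `Reactor` and `on_readable` are illustrative, not a fixed API, and a production reactor would also handle writability and partial sends.

```python
# Minimal single-threaded reactor sketch. The event loop demultiplexes
# ready handles and dispatches each to its registered handler; no single
# slow connection blocks the others.
import selectors
import socket

class Reactor:
    def __init__(self):
        self.selector = selectors.DefaultSelector()
        self.running = False

    def register(self, sock, callback):
        # Associate a handle with its event handler.
        sock.setblocking(False)
        self.selector.register(sock, selectors.EVENT_READ, callback)

    def unregister(self, sock):
        self.selector.unregister(sock)

    def run(self):
        self.running = True
        while self.running:
            for key, _events in self.selector.select(timeout=1.0):
                key.data(key.fileobj)     # dispatch to the handler

def echo_server(reactor, listener):
    # Two concrete handlers for a hypothetical echo service.
    def on_accept(sock):
        conn, _addr = sock.accept()
        reactor.register(conn, on_readable)
    def on_readable(conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(b"echo: " + data)  # small replies; no write queue
        else:                               # peer closed the connection
            reactor.unregister(conn)
            conn.close()
    reactor.register(listener, on_accept)
```

Note how the handlers are ordinary callbacks registered with the dispatcher: swapping in a different protocol only means registering different handlers, without touching the event loop.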
The reactor pattern can be a good starting point for any concurrent, event-handling problem. The pattern is not restricted to network sockets either; hardware I/O, file system or database access, inter-process communication, and even abstract message passing systems are all possible use cases.[citation needed]
However, the reactor pattern does have limitations, a major one being the use of callbacks, which make program analysis and debugging more difficult, a problem common to designs with inverted control.[1] The simpler thread-per-connection and fully iterative approaches avoid this and can be valid solutions if scalability or high throughput is not required.[a][citation needed]
Single-threading can also become a drawback in use-cases that require maximum throughput, or when requests involve significant processing. Different multi-threaded designs can overcome these limitations, and in fact, some still use the reactor pattern as a sub-component for handling events and I/O.[1]
The reactor pattern (or a variant of it) has found a place in many web servers, application servers, and networking frameworks.
A reactive application consists of several moving parts and will rely on some support mechanisms:[1] handles that identify each event source, such as sockets or file descriptors; a synchronous event demultiplexer, such as select() or poll(), that blocks until one or more handles are ready; a dispatcher (the reactor proper) that runs the event loop and maintains the registry of handlers; and concrete event handlers, implementing a common interface, which the dispatcher invokes when their handle signals an event.
The standard reactor pattern is sufficient for many applications, but for particularly demanding ones, tweaks can provide even more power at the price of extra complexity.
One basic modification is to invoke event handlers in their own threads for more concurrency. Running the handlers in a thread pool, rather than spinning up new threads as needed, will further simplify the multi-threading and minimize overhead. This makes the thread pool a natural complement to the reactor pattern in many use cases.[2]
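A sketch of this combination, using the standard-library concurrent.futures thread pool (the `process_request` handler is hypothetical): the event loop calls `dispatch()` and returns immediately, while the processing runs on a pool worker whose fixed count bounds the threading overhead.

```python
# Offloading handler work from the event loop to a fixed-size thread pool.
from concurrent.futures import ThreadPoolExecutor

def process_request(data: bytes) -> bytes:
    # Placeholder for significant per-request processing.
    return data.upper()

pool = ThreadPoolExecutor(max_workers=4)

def dispatch(data: bytes):
    # Called from the event loop: returns a Future immediately;
    # the actual work runs on one of the pool's worker threads.
    return pool.submit(process_request, data)
```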
Another way to maximize throughput is to partly reintroduce the approach of the "thread per connection" server, with replicated dispatchers / event loops running concurrently. However, rather than the number of connections, one configures the dispatcher count to match the available CPU cores of the underlying hardware.
Known as a multireactor, this variant ensures a dedicated server is fully using the hardware's processing power. Because the distinct threads are long-running event loops, the overhead of creating and destroying threads is limited to server startup and shutdown. With requests distributed across independent dispatchers, a multireactor also provides better availability and robustness; should an error occur and a single dispatcher fail, it will only interrupt requests allocated to that event loop.[3][4]
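A multireactor might be sketched as follows, with all class and method names illustrative. An acceptor distributes new connections round-robin across several independent event loops, each a long-lived thread with its own selector; a production design would typically hand sockets to a loop through a queue rather than registering across threads as this simplified sketch does.

```python
# Multireactor sketch: several independent event loops (ideally one per CPU
# core), with new connections distributed round-robin by an acceptor.
import itertools
import selectors
import socket
import threading

class EventLoop:
    def __init__(self):
        self.selector = selectors.DefaultSelector()
        self.running = False

    def register(self, conn):
        conn.setblocking(False)
        self.selector.register(conn, selectors.EVENT_READ)

    def run(self):
        self.running = True
        while self.running:
            for key, _events in self.selector.select(timeout=0.5):
                conn = key.fileobj
                data = conn.recv(4096)
                if data:
                    conn.sendall(b"echo: " + data)  # hypothetical handler
                else:                               # peer closed
                    self.selector.unregister(conn)
                    conn.close()

def run_multireactor(listener, n_loops, max_conns):
    loops = [EventLoop() for _ in range(n_loops)]
    for loop in loops:
        threading.Thread(target=loop.run, daemon=True).start()
    # Acceptor: assign each new connection to a loop, round-robin.
    for loop in itertools.islice(itertools.cycle(loops), max_conns):
        conn, _addr = listener.accept()
        loop.register(conn)
    return loops
```

A failure inside one EventLoop only interrupts the connections registered with that loop; the other loops keep serving theirs.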
For particularly complex services, where synchronous and asynchronous demands must be combined, one other alternative is the proactor pattern. This pattern is more intricate than a reactor, with its own engineering details, but it still makes use of a reactor subcomponent to solve the problem of blocking I/O.[3]