Decouple when a message or event is sent from when it is processed.
Unless you live under one of the few rocks that still lack Internet access, you’ve probably already heard of an “event queue”. If not, maybe “message queue”, or “event loop”, or “message pump” rings a bell. To refresh your memory, let’s walk through a couple of common manifestations of the pattern.
For most of the chapter, I use “event” and “message” interchangeably. Where the distinction matters, I’ll make it obvious.
If you’ve ever done any user interface programming, then you’re well acquainted with events. Every time the user interacts with your program — clicks a button, pulls down a menu, or presses a key — the operating system generates an event. It throws this object at your app, and your job is to grab it and hook it up to some interesting behavior.
This application style is so common, it’s considered a paradigm: event-driven programming.
In order to receive these missives, somewhere deep in the bowels of your code is an event loop. It looks roughly like this:
```cpp
while (running)
{
  Event event = getNextEvent();
  // Handle event...
}
```
The call to getNextEvent() pulls a bit of unprocessed user input into your app. You route it to an event handler and, like magic, your application comes to life. The interesting part is that the application pulls in the event when it wants it. The operating system doesn’t just immediately jump to some code in your app the moment the user pokes a peripheral.
In contrast, interrupts from the operating system do work like that. When an interrupt happens, the OS stops whatever your app was doing and forces it to jump to an interrupt handler. This abruptness is why interrupts are so hard to work with.
That means when user input comes in, it needs to go somewhere so that the operating system doesn’t lose it between when the device driver reported the input and when your app gets around to calling getNextEvent(). That “somewhere” is a queue.
When user input comes in, the OS adds it to a queue of unprocessed events. When you call getNextEvent(), it pulls the oldest event off the queue and hands it to your application.
Most games aren’t event-driven like this, but it is common for a game to have its own event queue as the backbone of its nervous system. You’ll often hear “central”, “global”, or “main” used to describe it. It’s used for high-level communication between game systems that want to stay decoupled.
If you want to know why they aren’t event-driven, crack open the Game Loop chapter.
Say your game has a tutorial system to display help boxes after specific in-game events. For example, the first time the player vanquishes a foul beastie, you want to show a little balloon that says, “Press X to grab the loot!”
Tutorial systems are a pain to implement gracefully, and most players will spend only a fraction of their time using in-game help, so it feels like they aren’t worth the effort. But that fraction where they are using the tutorial can be invaluable for easing the player into your game.
Your gameplay and combat code are likely complex enough as it is. The last thing you want to do is stuff a bunch of checks for triggering tutorials in there. Instead, you could have a central event queue. Any game system can send to it, so the combat code can add an “enemy died” event every time you slay a foe.
Likewise, any game system can receive events from the queue. The tutorial engine registers itself with the queue and indicates it wants to receive “enemy died” events. This way, knowledge of an enemy dying makes its way from the combat system over to the tutorial engine without the two being directly aware of each other.
This model where you have a shared space that entities can post information to and get notified by is similar to blackboard systems in the AI field.
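To make the tutorial example concrete, here is a minimal sketch of what a central queue like this might look like. The EventQueue class, its method names, and the event type are all hypothetical illustrations, not code from any real engine:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <queue>
#include <vector>

// Hypothetical event type for the example.
enum EventType
{
  EVENT_ENEMY_DIED
};

struct Event
{
  EventType type;
};

class EventQueue
{
public:
  // The tutorial engine registers interest in "enemy died" events.
  void addListener(EventType type, std::function<void(const Event&)> listener)
  {
    listeners_[type].push_back(listener);
  }

  // The combat code calls this; it returns immediately.
  void send(const Event& event)
  {
    pending_.push(event);
  }

  // Called once per frame; dispatches queued events to listeners.
  void update()
  {
    while (!pending_.empty())
    {
      Event event = pending_.front();
      pending_.pop();
      for (auto& listener : listeners_[event.type]) listener(event);
    }
  }

private:
  std::map<EventType, std::vector<std::function<void(const Event&)>>> listeners_;
  std::queue<Event> pending_;
};
```

The combat system only knows how to send(); the tutorial engine only knows how to addListener(). Neither needs to know the other exists.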
I thought about using this as the example for the rest of the chapter, but I’mnot generally a fan of big global systems. Event queues don’t have to be forcommunicating across the entire game engine. They can be just as useful within a single class or domain.
So, instead, let’s add sound to our game. Humans are mainly visual animals, buthearing is deeply connected to our emotions and our sense of physical space. Theright simulated echo can make a black screen feel like an enormous cavern, and awell-timed violin adagio can make your heartstrings hum in sympatheticresonance.
To get our game wound for sound, we’ll start with the simplest possible approach and see how it goes. We’ll add a little “audio engine” that has an API for playing a sound given an identifier and a volume:
While I almost always shy away from the Singleton pattern, this is one of the places where it may fit since the machine likely only has one set of speakers. I’m taking a simpler approach and just making the method static.
```cpp
class Audio
{
public:
  static void playSound(SoundId id, int volume);
};
```
It’s responsible for loading the appropriate sound resource, finding an available channel to play it on, and starting it up. This chapter isn’t about some platform’s real audio API, so I’ll conjure one up that we can presume is implemented elsewhere. Using it, we write our method like so:
```cpp
void Audio::playSound(SoundId id, int volume)
{
  ResourceId resource = loadSound(id);
  int channel = findOpenChannel();
  if (channel == -1) return;
  startSound(resource, channel, volume);
}
```
We check that in, create a few sound files, and start sprinkling playSound() calls through our codebase like some magical audio fairy. For example, in our UI code, we play a little bloop when the selected menu item changes:
```cpp
class Menu
{
public:
  void onSelect(int index)
  {
    Audio::playSound(SOUND_BLOOP, VOL_MAX);
    // Other stuff...
  }
};
```
After doing this, we notice that sometimes when you switch menu items, the wholescreen freezes for a few frames. We’ve hit our first issue:
Our playSound() method is synchronous — it doesn’t return back to the caller until bloops are coming out of the speakers. If a sound file has to be loaded from disc first, that may take a while. In the meantime, the rest of the game is frozen.
Ignoring that for now, we move on. In the AI code, we add a call to let out awail of anguish when an enemy takes damage from the player. Nothing warms agamer’s heart like inflicting simulated pain on a virtual living being.
It works, but sometimes when the hero does a mighty attack, it hits two enemies in the exact same frame. That causes the game to play the wail sound twice simultaneously. If you know anything about audio, you know mixing multiple sounds together sums their waveforms. When those are the same waveform, it’s the same as one sound played twice as loud. It’s jarringly loud.
I ran into this exact issue working on Henry Hatsworth in the Puzzling Adventure. My solution there is similar to what we’ll cover here.
We have a related problem in boss fights when piles of minions are runningaround causing mayhem. The hardware can only play so many sounds at one time.When we go over that limit, sounds get ignored or cut off.
To handle these issues, we need to look at the entire set of sound calls to aggregate and prioritize them. Unfortunately, our audio API handles each playSound() call independently. It sees requests through a pinhole, one at a time.
These problems seem like mere annoyances compared to the next issue that falls in our lap. By now, we’ve strewn playSound() calls throughout the codebase in lots of different game systems. But our game engine is running on modern multi-core hardware. To take advantage of those cores, we distribute those systems on different threads — rendering on one, AI on another, etc.
Since our API is synchronous, it runs on the caller’s thread. When we call it from different game systems, we’re hitting our API concurrently from multiple threads. Look at that sample code. See any thread synchronization? Me neither.
This is particularly egregious because we intended to have a separate thread for audio. It’s just sitting there totally idle while these other threads are busy stepping all over each other and breaking things.
The common theme to these problems is that the audio engine interprets a call to playSound() to mean, “Drop everything and play the sound right now!” Immediacy is the problem. Other game systems call playSound() at their convenience, but not necessarily when it’s convenient for the audio engine to handle that request. To fix that, we’ll decouple receiving a request from processing it.
A queue stores a series of notifications or requests in first-in, first-out order. Sending a notification enqueues the request and returns. The request processor then processes items from the queue at a later time. Requests can be handled directly or routed to interested parties. This decouples the sender from the receiver both statically and in time.
If you only want to decouple who receives a message from its sender, patterns like Observer and Command will take care of this with less complexity. You only need a queue when you want to decouple something in time.
I mention this in nearly every chapter, but it’s worth emphasizing. Complexityslows you down, so treat simplicity as a precious resource.
I think of it in terms of pushing and pulling. You have some code A that wants another chunk B to do some work. The natural way for A to initiate that is by pushing the request to B.
Meanwhile, the natural way for B to process that request is by pulling it in at a convenient time in its run cycle. When you have a push model on one end and a pull model on the other, you need a buffer between them. That’s what a queue provides that simpler decoupling patterns don’t.
A queue gives control to the code that pulls from it — the receiver can delay processing, aggregate requests, or discard them entirely. But the queue does this by taking control away from the sender. All the sender can do is throw a request on the queue and hope for the best. This makes queues a poor fit when the sender needs a response.
Unlike some more modest patterns in this book, event queues are complex and tendto have a wide-reaching effect on the architecture of our games. That meansyou’ll want to think hard about how — or if — you use one.
One common use of this pattern is for a sort of Grand Central Station that all parts of the game can route messages through. It’s a powerful piece of infrastructure, but powerful doesn’t always mean good.
It took a while, but most of us learned the hard way that global variables arebad. When you have a piece of state that any part of the program can poke at,all sorts of subtle interdependencies creep in. This pattern wraps that state ina nice little protocol, but it’s still a global, with all of the danger thatentails.
Say some AI code posts an “entity died” event to a queue when a virtual minion shuffles off its mortal coil. That event hangs out in the queue for who knows how many frames until it eventually works its way to the front and gets processed.
Meanwhile, the experience system wants to track the heroine’s body count andreward her for her grisly efficiency. It receives each “entity died” eventand determines the kind of entity slain and the difficulty of the kill so itcan dish out an appropriate reward.
That requires various pieces of state in the world. We need the entity that died so we can see how tough it was. We may want to inspect its surroundings to see what other obstacles or minions were nearby. But if the event isn’t received until later, that stuff may be gone. The entity may have been deallocated, and other nearby foes may have wandered off.
When you receive an event, you have to be careful not to assume the current state of the world reflects how the world was when the event was raised. This means queued events tend to be more data-heavy than events in synchronous systems. With the latter, the notification can say “something happened” and the receiver can look around for the details. With a queue, those ephemeral details must be captured when the event is sent so they can be used later.
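For example, a queued “entity died” event might capture those details up front. The exact fields here are hypothetical; the point is that the event carries copies of the ephemeral data rather than pointers into a live world that may have changed by the time it is processed:

```cpp
#include <cassert>

// Hypothetical data-heavy event. It copies the details at send time,
// since the entity itself may be deallocated before this is handled.
struct EntityDiedEvent
{
  int entityType;   // what kind of entity was slain
  int difficulty;   // how tough the kill was, measured at death
  float x, y;       // where it died, for inspecting the surroundings
};
```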
All event and message systems have to worry about cycles: A sends an event, B receives it and responds by sending an event of its own, that event turns out to be one A cares about, so A sends another, and so on.
When your messaging system is synchronous, you find cycles quickly — they overflow the stack and crash your game. With a queue, the asynchrony unwinds the stack, so the game may keep running even though spurious events are still sloshing back and forth inside it.
A little debug logging in your event system is probably a good idea too.
We’ve already seen some code. It’s not perfect, but it has the right basicfunctionality — the public API we want and the right low-level audio calls. Allthat’s left for us to do now is fix its problems.
The first is that our API blocks. When a piece of code plays a sound, it can’t do anything else until playSound() finishes loading the resource and actually starts making the speaker wiggle.
We want to defer that work until later so that playSound() can return quickly. To do that, we need to reify the request to play a sound. We need a little structure that stores the details of a pending request so we can keep it around until later:
```cpp
struct PlayMessage
{
  SoundId id;
  int volume;
};
```
Next, we need to give Audio some storage space to keep track of these pending play messages. Now, your algorithms professor might tell you to use some exciting data structure here like a Fibonacci heap or a skip list, or, hell, at least a linked list. But in practice, the best way to store a bunch of homogenous things is almost always a plain old array:
Algorithm researchers get paid to publish analyses of novel data structures.They aren’t exactly incentivized to stick to the basics.
No dynamic allocation.
No memory overhead for bookkeeping information or pointers.
Cache-friendly contiguous memory usage.
For lots more on what being “cache friendly” means, see the chapter on Data Locality.
So let’s do that:
```cpp
class Audio
{
public:
  static void init()
  {
    numPending_ = 0;
  }

  // Other stuff...

private:
  static const int MAX_PENDING = 16;

  static PlayMessage pending_[MAX_PENDING];
  static int numPending_;
};
```
We can tune the array size to cover our worst case. To play a sound, we simplyslot a new message in there at the end:
```cpp
void Audio::playSound(SoundId id, int volume)
{
  assert(numPending_ < MAX_PENDING);

  pending_[numPending_].id = id;
  pending_[numPending_].volume = volume;
  numPending_++;
}
```
This lets playSound() return almost instantly, but we do still have to play the sound, of course. That code needs to go somewhere, and that somewhere is an update() method:
```cpp
class Audio
{
public:
  static void update()
  {
    for (int i = 0; i < numPending_; i++)
    {
      ResourceId resource = loadSound(pending_[i].id);
      int channel = findOpenChannel();
      if (channel == -1) return;
      startSound(resource, channel, pending_[i].volume);
    }

    numPending_ = 0;
  }

  // Other stuff...
};
```
As the name implies, this is the Update Method pattern.
Now, we need to call that from somewhere convenient. What “convenient” means depends on your game. It may mean calling it from the main game loop or from a dedicated audio thread.
This works fine, but it does presume we can process every sound request in a single call to update(). If you’re doing something like processing a request asynchronously after its sound resource is loaded, that won’t work. For update() to work on one request at a time, it needs to be able to pull requests out of the buffer while leaving the rest. In other words, we need an actual queue.
There are a bunch of ways to implement queues, but my favorite is called a ring buffer. It preserves everything that’s great about arrays while letting us incrementally remove items from the front of the queue.
Now, I know what you’re thinking. If we remove items from the beginning of thearray, don’t we have to shift all of the remaining items over? Isn’t that slow?
This is why they made us learn linked lists — you can remove nodes from them without having to shift things around. Well, it turns out you can implement a queue without any shifting in an array too. I’ll walk you through it, but first let’s get precise on some terms:
The head of the queue is where requests are read from. The head is the oldest pending request.
The tail is the other end. It’s the slot in the array where the next enqueued request will be written. Note that it’s just past the end of the queue. You can think of it as a half-open range, if that helps.
Since playSound() appends new requests at the end of the array, the head starts at element zero and the tail grows to the right.
Let’s code that up. First, we’ll tweak our fields a bit to make these two markers explicit in the class:
```cpp
class Audio
{
public:
  static void init()
  {
    head_ = 0;
    tail_ = 0;
  }

  // Methods...

private:
  static int head_;
  static int tail_;

  // Array...
};
```
In the implementation of playSound(), numPending_ has been replaced with tail_, but otherwise it’s the same:
```cpp
void Audio::playSound(SoundId id, int volume)
{
  assert(tail_ < MAX_PENDING);

  // Add to the end of the list.
  pending_[tail_].id = id;
  pending_[tail_].volume = volume;
  tail_++;
}
```
The more interesting change is in update():
```cpp
void Audio::update()
{
  // If there are no pending requests, do nothing.
  if (head_ == tail_) return;

  ResourceId resource = loadSound(pending_[head_].id);
  int channel = findOpenChannel();
  if (channel == -1) return;
  startSound(resource, channel, pending_[head_].volume);

  head_++;
}
```
We process the request at the head and then discard it by advancing the head pointer to the right. We detect an empty queue by seeing if there’s any distance between the head and tail.
This is why we made the tail one past the last item. It means that the queue will be empty if the head and tail are the same index.
Now we’ve got a queue — we can add to the end and remove from the front. There’s an obvious problem, though. As we run requests through the queue, the head and tail keep crawling to the right. Eventually, tail_ hits the end of the array, and party time is over. This is where it gets clever.
Do you want party time to be over? No. You do not.
Notice that while the tail is creeping forward, the head is too. That means we’ve got array elements at the beginning of the array that aren’t being used anymore. So what we do is wrap the tail back around to the beginning of the array when it runs off the end. That’s why it’s called a ring buffer — it acts like a circular array of cells.
Implementing that is remarkably easy. When we enqueue an item, we just need to make sure the tail wraps around to the beginning of the array when it reaches the end:
```cpp
void Audio::playSound(SoundId id, int volume)
{
  assert((tail_ + 1) % MAX_PENDING != head_);

  // Add to the end of the list.
  pending_[tail_].id = id;
  pending_[tail_].volume = volume;
  tail_ = (tail_ + 1) % MAX_PENDING;
}
```
Replacing tail_++ with an increment modulo the array size wraps the tail back around. The other change is the assertion. We need to ensure the queue doesn’t overflow. As long as there are fewer than MAX_PENDING requests in the queue, there will be a little gap of unused cells between the head and the tail. If the queue fills up, those will be gone and, like some weird backwards Ouroboros, the tail will collide with the head and start overwriting it. The assertion ensures that this doesn’t happen.
In update(), we wrap the head around too:
```cpp
void Audio::update()
{
  // If there are no pending requests, do nothing.
  if (head_ == tail_) return;

  ResourceId resource = loadSound(pending_[head_].id);
  int channel = findOpenChannel();
  if (channel == -1) return;
  startSound(resource, channel, pending_[head_].volume);

  head_ = (head_ + 1) % MAX_PENDING;
}
```
There you go — a queue with no dynamic allocation, no copying elements around, and the cache-friendliness of a simple array.
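To see the index arithmetic in isolation, here is the same technique distilled into a tiny standalone class (the name and the int payload are just for illustration). One consequence worth noting: because head_ == tail_ means “empty”, one slot is always left unused, so the buffer holds at most SIZE - 1 items.

```cpp
#include <cassert>

// Minimal standalone ring buffer mirroring the chapter's index math.
class RingBuffer
{
public:
  static const int SIZE = 4;

  RingBuffer() : head_(0), tail_(0) {}

  bool isEmpty() const { return head_ == tail_; }
  bool isFull() const { return (tail_ + 1) % SIZE == head_; }

  void enqueue(int value)
  {
    assert(!isFull());
    items_[tail_] = value;
    tail_ = (tail_ + 1) % SIZE;  // wrap past the end
  }

  int dequeue()
  {
    assert(!isEmpty());
    int value = items_[head_];
    head_ = (head_ + 1) % SIZE;  // wrap past the end
    return value;
  }

private:
  int items_[SIZE];
  int head_;
  int tail_;
};
```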
If the maximum capacity bugs you, you can use a growable array. When the queuegets full, allocate a new array twice the size of the current array (or someother multiple), then copy the items over.
Even though you copy when the array grows, enqueuing an item still has constant amortized complexity.
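A sketch of that growable variant, assuming we track a count instead of a tail index so no slot is sacrificed. The subtle part is the copy: it has to walk the old array in queue order so the live items land contiguously in the new one, with the head back at index zero.

```cpp
#include <cassert>
#include <vector>

class GrowableQueue
{
public:
  GrowableQueue() : items_(4), head_(0), count_(0) {}

  void enqueue(int value)
  {
    // Grow before writing if every slot is in use.
    if (count_ == (int)items_.size()) grow();
    items_[(head_ + count_) % items_.size()] = value;
    count_++;
  }

  int dequeue()
  {
    assert(count_ > 0);
    int value = items_[head_];
    head_ = (head_ + 1) % items_.size();
    count_--;
    return value;
  }

  int size() const { return count_; }

private:
  void grow()
  {
    // Copy in queue order so the live items are contiguous again.
    std::vector<int> bigger(items_.size() * 2);
    for (int i = 0; i < count_; i++)
    {
      bigger[i] = items_[(head_ + i) % items_.size()];
    }
    items_.swap(bigger);
    head_ = 0;
  }

  std::vector<int> items_;
  int head_;
  int count_;
};
```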
Now that we’ve got a queue in place, we can move on to the other problems. The first is that multiple requests to play the same sound end up too loud. Since we know which requests are waiting to be processed now, all we need to do is merge a request if it matches an already pending one:
```cpp
void Audio::playSound(SoundId id, int volume)
{
  // Walk the pending requests.
  for (int i = head_; i != tail_; i = (i + 1) % MAX_PENDING)
  {
    if (pending_[i].id == id)
    {
      // Use the larger of the two volumes.
      pending_[i].volume = max(volume, pending_[i].volume);

      // Don't need to enqueue.
      return;
    }
  }

  // Previous code...
}
```
When we get two requests to play the same sound, we collapse them to a singlerequest for whichever is loudest. This “aggregation” is pretty rudimentary, butwe could use the same idea to do more interesting batching.
Note that we’re merging when the request isenqueued, not when it’sprocessed. That’s easier on our queue since we don’t waste slots on redundantrequests that will end up being collapsed later. It’s also simpler to implement.
It does, however, put the processing burden on the caller. A call to playSound() will walk the entire queue before it returns, which could be slow if the queue is large. It may make more sense to aggregate in update() instead.
Another way to avoid the O(n) cost of scanning the queue is to use a different data structure. If we use a hash table keyed on the SoundId, then we can check for duplicates in constant time.
There’s something important to keep in mind here. The window of “simultaneous” requests that we can aggregate is only as big as the queue. If we process requests more quickly and the queue size stays small, then we’ll have fewer opportunities to batch things together. Likewise, if processing lags behind and the queue gets full, we’ll find more things to collapse.
This pattern insulates the requester from knowing when the request gets processed, but when you treat the entire queue as a live data structure to be played with, then lag between making a request and processing it can visibly affect behavior. Make sure you’re OK with that before doing this.
Finally, the most pernicious problem. With our synchronous audio API, whatever thread called playSound() was the thread that processed the request. That’s often not what we want.
On today’s multi-core hardware, you need more than one thread if you want to get the most out of your chip. There are infinite ways to distribute code across threads, but a common strategy is to move each domain of the game onto its own thread — audio, rendering, AI, etc.
Straight-line code only runs on a single core at a time. If you don’t use threads, even if you do the asynchronous-style programming that’s in vogue, the best you’ll do is keep one core busy, which is a fraction of your CPU’s abilities.
Server programmers compensate for that by splitting their application into multiple independent processes. That lets the OS run them concurrently on different cores. Games are almost always a single process, so a bit of threading really helps.
We’re in good shape to do that now that we have three critical pieces:
The code for requesting a sound is decoupled from the code that plays it.
We have a queue for marshalling between the two.
The queue is encapsulated from the rest of the program.
All that’s left is to make the methods that modify the queue — playSound() and update() — thread-safe. Normally, I’d whip up some concrete code to do that, but since this is a book about architecture, I don’t want to get mired in the details of any specific API or locking mechanism.
At a high level, all we need to do is ensure that the queue isn’t modified concurrently. Since playSound() does a very small amount of work — basically just assigning a few fields — it can lock without blocking processing for long. In update(), we wait on something like a condition variable so that we don’t burn CPU cycles until there’s a request to process.
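For the curious, here is one way that could look using the C++ standard library. This is a sketch of the idea, not the book’s code: a mutex guards the queue, and a condition variable lets the audio thread sleep until a request arrives. A std::queue stands in for the ring buffer to keep the focus on the locking.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

struct PlayMessage
{
  int id;       // SoundId
  int volume;
};

class AudioQueue
{
public:
  // Locks only long enough to copy a few fields, then returns.
  void playSound(int id, int volume)
  {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      PlayMessage message = { id, volume };
      pending_.push(message);
    }
    // Wake the audio thread if it is waiting for work.
    hasWork_.notify_one();
  }

  // Called on the audio thread: sleeps until a request is available.
  PlayMessage waitForRequest()
  {
    std::unique_lock<std::mutex> lock(mutex_);
    hasWork_.wait(lock, [this] { return !pending_.empty(); });
    PlayMessage message = pending_.front();
    pending_.pop();
    return message;
  }

private:
  std::mutex mutex_;
  std::condition_variable hasWork_;
  std::queue<PlayMessage> pending_;
};
```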
Many games use event queues as a key part of their communication structure, and you can spend a ton of time designing all sorts of complex routing and filtering for messages. But before you go off and build something like the Los Angeles telephone switchboard, I encourage you to start simple. Here are a few starter questions to consider:
I’ve used “event” and “message” interchangeably so far because it mostly doesn’t matter. You get the same decoupling and aggregation abilities regardless of what you’re stuffing in the queue, but there are some conceptual differences.
If you queue events:
An “event” or “notification” describes something that already happened, like “monster died”. You queue it so that other objects can respond to the event, sort of like an asynchronous Observer pattern.
You are likely to allow multiple listeners. Since the queue contains things that already happened, the sender probably doesn’t care who receives it. From its perspective, the event is in the past and is already forgotten.
The scope of the queue tends to be broader. Event queues are often used tobroadcast events to any and all interested parties. To allow maximum flexibility for which parties can be interested, these queues tend to be more globally visible.
If you queue messages:
A “message” or “request” describes an action that we want to happen in the future, like “play sound”. You can think of this as an asynchronous API to a service.
Another word for “request” is “command”, as in the Command pattern, and queues can be used there too.
You are more likely to have a single listener. In the example, the queued messages are requests specifically for the audio API to play a sound. If other random parts of the game engine started stealing messages off the queue, it wouldn’t do much good.
I say “more likely” here, because you can enqueue messages without caring which code processes it, as long as it gets processed how you expect. In that case, you’re doing something akin to a service locator.
In our example, the queue is encapsulated and only the Audio class can read from it. In a user interface’s event system, you can register listeners to your heart’s content. You sometimes hear the terms “single-cast” and “broadcast” to distinguish these, and both styles are useful.
A single-cast queue:
This is the natural fit when a queue is part of a class’s API. Like in our audio example, from the caller’s perspective, they just see a playSound() method they can call.
The queue becomes an implementation detail of the reader. All the sender knows is that it sent a message.
The queue is more encapsulated. All other things being equal, more encapsulation is usually better.
You don’t have to worry about contention between listeners. With multiple listeners, you have to decide if they all get every item (broadcast) or if each item in the queue is parceled out to one listener (something more like a work queue).
In either case, the listeners may end up doing redundant work or interfering with each other, and you have to think carefully about the behavior you want. With a single listener, that complexity disappears.
A broadcast queue:
This is how most “event” systems work. If you have ten listeners when an event comes in, all ten of them see the event.
Events can get dropped on the floor. A corollary to the previous point is that if you have zero listeners, all zero of them see the event. In most broadcast systems, if there are no listeners at the point in time that an event is processed, the event gets discarded.
You may need to filter events. Broadcast queues are often widely visible to much of the program, and you can end up with a bunch of listeners. Multiply lots of events times lots of listeners, and you end up with a ton of event handlers to invoke.
To cut that down to size, most broadcast event systems let a listener winnow down the set of events they receive. For example, they may say they only want to receive mouse events or events within a certain region of the UI.
A work queue:
Like a broadcast queue, here you have multiple listeners too. The difference is that each item in the queue only goes to one of them. This is a common pattern for parceling out jobs to a pool of concurrently running threads.
This is the flip side of the previous design choice. This pattern works with all of the possible read/write configurations: one-to-one, one-to-many, many-to-one, or many-to-many.
You sometimes hear “fan-in” used to describe many-to-one communication systemsand “fan-out” for one-to-many.
With one writer:
This style is most similar to the synchronous Observer pattern. You have one privileged object that generates events that others can then receive.
You implicitly know where the event is coming from. Since there’s only one object that can add to the queue, any listener can safely assume that’s the sender.
You usually allow multiple readers. You can have a one-sender-one-receiver queue, but that starts to feel less like the communication system this pattern is about and more like a vanilla queue data structure.
With multiple writers:
This is how our audio engine example works. Since playSound() is a public method, any part of the codebase can add a request to the queue. “Global” or “central” event buses work like this too.
You have to be more careful of cycles. Since anything can potentially put something onto the queue, it’s easier to accidentally enqueue something in the middle of handling an event. If you aren’t careful, that may trigger a feedback loop.
You’ll likely want some reference to the sender in the event itself. When a listener gets an event, it doesn’t know who sent it, since it could be anyone. If that’s something they need to know, you’ll want to pack that into the event object so that the listener can use it.
With a synchronous notification, execution doesn’t return to the sender until all of the receivers have finished processing the message. That means the message itself can safely live in a local variable on the stack. With a queue, the message outlives the call that enqueues it.
If you’re using a garbage collected language, you don’t need to worry about thistoo much. Stuff the message in the queue, and it will stick around in memory aslong as it’s needed. In C or C++, it’s up to you to ensure the object lives longenough.
Pass ownership:
This is the traditional way to do things when managing memory manually. When a message gets queued, the queue claims it and the sender no longer owns it. When it gets processed, the receiver takes ownership and is responsible for deallocating it.
In C++, unique_ptr<T> gives you these exact semantics out of the box.
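A sketch of what those semantics look like in practice. The Message type and queue wrapper are illustrative; the point is that std::move hands the pointer off at each step, so exactly one owner is responsible for freeing the message at any time.

```cpp
#include <cassert>
#include <memory>
#include <queue>
#include <utility>

struct Message
{
  int value;
};

class OwningQueue
{
public:
  void send(std::unique_ptr<Message> message)
  {
    pending_.push(std::move(message));  // the queue now owns it
  }

  std::unique_ptr<Message> receive()
  {
    std::unique_ptr<Message> message = std::move(pending_.front());
    pending_.pop();
    return message;  // the receiver now owns it
  }

private:
  std::queue<std::unique_ptr<Message>> pending_;
};
```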
Share ownership:
These days, now that even C++ programmers are more comfortable with garbage collection, shared ownership is more acceptable. With this, the message sticks around as long as anything has a reference to it and is automatically freed when forgotten.
Likewise, the C++ type for this is shared_ptr<T>.
The queue owns it:
Another option is to have messages always live on the queue. Instead of allocating the message itself, the sender requests a “fresh” one from the queue. The queue returns a reference to a message already in memory inside the queue, and the sender fills it in. When the message gets processed, the receiver refers to the same message in the queue.
In other words, the backing store for the queue is an object pool.
I’ve mentioned this a few times already, but in many ways, this pattern is the asynchronous cousin to the well-known Observer pattern.
Like many patterns, event queues go by a number of aliases. One established term is “message queue”. It’s usually referring to a higher-level manifestation. Where our event queues are within an application, message queues are usually used for communicating between them.
Another term is “publish/subscribe”, sometimes abbreviated to “pubsub”. Like “message queue”, it usually refers to larger distributed systems unlike the humble coding pattern we’re focused on.
A finite state machine, similar to the Gang of Four’s State pattern, requires a stream of inputs. If you want it to respond to those asynchronously, it makes sense to queue them.
When you have a bunch of state machines sending messages to each other, each with a little queue of pending inputs (called a mailbox), then you’ve re-invented the actor model of computation.
The Go programming language’s built-in “channel” type is essentially an event or message queue.