(2.14) Introduce asynchronous file writing for stream snapshots #7492


Open

sciascid wants to merge 1 commit into main from raft-two-step-snapshot

Conversation

@sciascid (Contributor) commented Oct 30, 2025 · edited

The monitorStream function can block for a long time when creating and
installing a snapshot of a stream's state. This can lead to increased
tail latency.

This commit extends the RaftNode interface with a new InstallSnapshotAsync
method. This method performs snapshot writing and WAL compaction in a
separate goroutine, making the process non-blocking. The existing
InstallSnapshot method is now a synchronous wrapper around the new
asynchronous implementation.

Signed-off-by: Daniele Sciascia <daniele@nats.io>

@sciascid (Contributor, Author) commented Oct 30, 2025 · edited

This pull request aims to reduce the (tail) latency of publishing to a stream.
Below are results from publishing to a single stream on a 3-node cluster deployed on a single machine. The table shows throughput in msg/s and several latency measures (min, 50th percentile, 90th percentile, and so on) in microseconds. In each experiment the client runs for 60 seconds.

Under low load, with a single client publishing one message at a time, latency ranges from 61 microseconds up to ~20 milliseconds for unlucky messages. Under high load, with a single client publishing 500 messages at a time, latency goes from ~1 ms all the way to ~40 ms.

| batch | throughput | min | p50 | p90 | p99 | p99.9 | max |
|------:|-----------:|----:|----:|----:|----:|------:|------:|
| 1 | 10131 | 61 | 96 | 109 | 131 | 208 | 20364 |
| 500 | 284923 | 1003 | 1456 | 1969 | 13688 | 21280 | 39995 |

One cause of the high tail latency is the periodic snapshotting of the stream. Taking a stream snapshot involves writing a file and syncing it to disk, and doing it safely requires more than one sync call (710407c).
During this time, the monitorStream goroutine is blocked and can't process incoming messages. The above patch "moves" the writing and syncing to disk off the stream's main thread. By doing so, tail latency improves significantly:

| batch | throughput | min | p50 | p90 | p99 | p99.9 | max |
|------:|-----------:|----:|----:|----:|----:|------:|------:|
| 1 | 10238 | 58 | 95 | 109 | 130 | 213 | 7654 |
| 500 | 313891 | 1073 | 1484 | 1990 | 3434 | 4653 | 13250 |

In addition, under high load, there was ~10% improvement in throughput.

I used the "lats" benchmark client: https://github.com/synadia-labs/lats


@sciascid force-pushed the raft-two-step-snapshot branch 7 times, most recently from f55eda9 to a7e89d1 on November 3, 2025 14:50
@sciascid marked this pull request as ready for review November 3, 2025 14:52
@sciascid requested a review from a team as a code owner November 3, 2025 14:52
@MauriceVanVeen (Member) left a comment


Could you add a couple of tests, for example to raft_test, to cover the use of InstallSnapshotAsync, as well as when InstallSnapshot is called while already snapshotting?

// InstallSnapshot installs a snapshot for the current applied index. This is a
// synchronous call that will block until the snapshot is installed, and will
// release all of the log entries up to the applied index.
func (n *raft) InstallSnapshot(data []byte) error {


Does the performance change due to this now creating both a new ch for every call and spinning up a goroutine? Specifically for the paths that don't use async snapshots, like meta and consumer.

@sciascid (Contributor, Author) commented

Good question. I did it this way because the goroutine that is spawned acquires a lock on raft only after it has written the snapshot file. My thinking was that we can get some of the benefits also when calling InstallSnapshot. And now that I think about it, it might work very nicely if combined with some of the optimizations in #7355.
I believe there is not too much overhead in spawning a goroutine and creating the channel, compared to what the rest of the method is doing (creating the snapshot file + a few syncs + compaction). I could measure the overhead by replacing the doSnapshotAsync call with doSnapshot calls, running the same benchmark, and comparing it to the baseline.
We could also consider using InstallSnapshotAsync with meta and consumers. I haven't done so because I don't have a way to benchmark those yet.

@sciascid (Contributor, Author) commented

I tried to evaluate the overhead as explained above. Basically, in monitorStream I replaced doSnapshotAsync with doSnapshot. I got the following results:
overhead.pdf

As expected, the overhead is small, and sometimes we even get some benefits in terms of latency. Alternatively, we could keep a goroutine ready to go, linked to an IPQueue.
Or, if we want to err on the safe side, I could change the patch so that nothing changes in the case of a regular InstallSnapshot.


Think it's fine for the scope of this PR. But maybe indeed we'll need to do some more profiling (also including the other perf improvements for 2.14) and see whether we should keep a goroutine ready to go, linked to an IPQueue, like you mention.

@sciascid (Contributor, Author) commented

I was a little skeptical of the throughput improvement, and in fact I think it was partially due to the experiment running on a single machine: three servers saturate my cores, and there is no network latency between the servers. So I did some experiments on EC2: three servers deployed in eu-central-1, each in a different AZ. I repeated the experiments with a batched client and a pipelined client, with and without this optimization. In this setting, there is still a sharp reduction in tail latency, but only a slight increase (if any) in throughput.

batch.pdf
pipeline.pdf

@sciascid force-pushed the raft-two-step-snapshot branch 3 times, most recently from 2c3a58e to 16a5caa on November 5, 2025 15:06
@MauriceVanVeen (Member) left a comment


LGTM

Marked this PR for 2.14, assuming we want to wait and bundle this improvement with the other planned Raft improvements?


@MauriceVanVeen changed the title from "Introduce asynchronous file writing for stream snapshots" to "(2.14) Introduce asynchronous file writing for stream snapshots" on Nov 5, 2025
@sciascid force-pushed the raft-two-step-snapshot branch 2 times, most recently from 0562a89 to c6657a4 on November 5, 2025 15:21
// This shouldn't happen for streams like it can for pull
// consumers on idle streams but better to be safe than sorry!
ne, nb := n.Size()
if curState == lastState && ne < compactNumMin && nb < compactSizeMin {
@neilalexander (Member) commented Nov 13, 2025 · edited


I'm wondering if this does the right thing, given that there are also the firstNeedsUpdate and lastNeedsUpdate bools. If either of those is true, then First and Last may be untrustworthy for this comparison. My feeling is that they shouldn't be set for a filtered state of _EMPTY_, but it's worth double-checking?

@sciascid (Contributor, Author) commented

I tried to make sure that the optimization in this patch makes no changes to the logic of when to create a snapshot. I also verified that, under the same workload, this patch results in roughly the same number of snapshot operations as the original code.
Having said that, I do think that checking the store's state is unnecessary, and I did play with changing these conditions a bit. However, I'd prefer to do that in a separate patch.

@sciascid (Contributor, Author) commented

Just elaborating on my previous comment. The primary reason for taking snapshots of the stream is to trim raft's log. The logic implemented here involves the size and number of entries in the log, plus this curState and lastState comparison. But none of this tells us anything about how much of raft's log we can actually get rid of. A better strategy would be the following: compact only if we can get rid of at least one entire block, and don't bother compacting partial blocks during normal operation.


Reviewers

@neilalexander left review comments
@MauriceVanVeen approved these changes

