Raft batching #7355
Draft
sciascid wants to merge 5 commits into main from raft-batching
Conversation
sciascid (Contributor, Author) commented Sep 25, 2025 (edited)
An easy way to collect batch sizes. For performance testing only; will be removed.
This is the baseline for performance testing Raft's batching capabilities. The behavior of Raft's batching mechanism is easier to observe if disk writes are synchronous, i.e. we want to write() + fsync() the Raft log, so that producers can easily keep the proposal queue busy. To do so one can set "sync_interval = always". However, that results in disastrous performance: when the leader receives acks for a "big" batch of log entries, the upper layer will write() and fsync() all entries in the batch individually. So this commit disables "sync always" on stream writes. This *should* work in principle because the data is already in the Raft log. Alternatively, one could implement "group commit" for streams, i.e. fsync() only once after processing a batch of entries. For performance testing only at this point.
This commit removes a "pathological" case from the current Raft batching mechanism: if the proposal queue contains more entries than one batch can fit, then Raft will send a full batch, followed by a small batch containing the leftovers. However, it was observed that it is quite possible that while the first batch was being stored and sent, clients had already pushed more entries into the proposal queue in the meantime. With this fix the server will compose and send a full batch, then the leftovers are handled as follows: if more proposals were pushed into the proposal queue, the leftovers are carried over to the next iteration, so that they are batched together with the proposals that were pushed in the meantime. If there are no more proposals, the leftovers are sent right away. For performance testing only at this point.
This is an attempt to reduce contention between Propose() and sendAppendEntry(). Change Propose() to acquire a read lock on Raft, and avoid locking Raft during storeToWAL() (which potentially does IO and may take a long time). This works as long as sendAppendEntry() is called from Raft's goroutine only, unless the entry does not need to be stored to the Raft log. So the rest of the changes enforce the above requirement:

* Change EntryLeaderTransfer so that it is not stored to the Raft log.
* Push EntryPeerState and EntrySnapshot entries to the proposal queue.
* Make sure EntrySnapshot entries skip the leader check, and make sure those are not batched together with other entries.

For performance testing only at this point.
Limit batch size based on the configured max_payload.

Changes used to evaluate and improve batching at the Raft level.
These are proofs of concept, not necessarily complete nor sufficiently tested;
for performance evaluation only!