[Suggestion] Compression for stream replication#14978

the-mikedavis asked this question in Ideas

RabbitMQ series

4.2.x

Operating system (distribution) used

macOS

How is RabbitMQ deployed?

Community Docker image

What would you like to suggest for a future version of RabbitMQ?

Streaming incurs considerable cross-AZ replication cost. Stream replication in RabbitMQ (and Apache Kafka) makes identical copies on replica members for the sake of durability and availability. In the Kafka world, blog posts and "disk-less Kafka" KIPs cite cross-AZ replication as a major cost. (Aiven blog, KIP-1150 "Disk-less Topics", KIP-1176 "Slack's KIP", Confluent blog.)

We could add an opt-in feature to compress replication traffic, making it possible to reduce this cost. The option would trade lower traffic costs for lower max throughput and higher latencies. Osiris replica-readers can compress each chunk before sending it to the Osiris replica. The necessary changes to rabbitmq/osiris are small, and we'd need even smaller accompanying changes in the server. Here's a proof-of-concept change: rabbitmq/osiris@ab18236. It is based on Zstandard compression at level 3 (the Erlang/OTP default, a reasonable tradeoff). Zstd is considered to be on the current Pareto front for compression algorithms: it balances the least costly tradeoff (CPU time) with the most reward (compression factor). Zstd has been included in the Erlang standard library since OTP 28.
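The per-chunk idea can be sketched roughly as follows. This is a minimal illustration, not the Osiris implementation: zlib stands in for zstd, and the framing byte and function names are hypothetical.

```python
import zlib

# Hypothetical one-byte frame tags; the real replication protocol framing differs.
MAGIC_COMPRESSED = b"\x01"
MAGIC_PLAIN = b"\x00"

def encode_chunk(chunk: bytes, level: int = 3) -> bytes:
    """Compress a chunk before replication, falling back to plain framing
    if compression does not actually shrink it (tiny or random payloads)."""
    compressed = zlib.compress(chunk, level)
    if len(compressed) < len(chunk):
        return MAGIC_COMPRESSED + compressed
    return MAGIC_PLAIN + chunk

def decode_chunk(frame: bytes) -> bytes:
    """Replica side: inspect the tag and decompress if needed."""
    tag, body = frame[:1], frame[1:]
    return zlib.decompress(body) if tag == MAGIC_COMPRESSED else body

# Repetitive JSON-ish records compress well at the chunk level:
chunk = b'{"id": 1, "status": "ok", "payload": "abc"}' * 100
frame = encode_chunk(chunk)
assert decode_chunk(frame) == chunk
assert len(frame) < len(chunk)
```

The fall-back-to-plain path matters because already-compressed or very small chunks can grow under compression.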

Initial testing

For realistic-ish bodies we use the --json-body feature of perf-test. (Sidenote: this would be a nice addition to stream-perf-test as well - it could be contributed separately.)

```
% perf-test -sq -u sq -x 1 -y 0 --json-body --time 60
# compression
id: test-150059-817, sending rate avg: 54157 msg/s
# baseline
id: test-145319-916, sending rate avg: 75478 msg/s
```

Max potential throughput drops in this scenario by 28.24%. Keep in mind this is AMQP 0-9-1 without publisher confirms.

In another scenario we keep the ingress consistent and, by measuring bytes sent for replication, we see an overall reduction for replication of 32.5%.

```
% perf-test -sq -u sq -x 1 -y 0 --json-body --size 1000 --rate 30000 --time 60
# compression
1_545_106_008 bytes
# baseline
2_288_388_489 bytes
```

I measured this by adding a counter to osiris_log for bytes replicated.
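As a quick arithmetic check, the 32.5% figure follows directly from the two byte counters above:

```python
# Sanity check of the reported replication-byte reduction.
baseline = 2_288_388_489
compressed = 1_545_106_008
reduction = 1 - compressed / baseline
print(f"{reduction:.1%}")  # → 32.5%
```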

While not shown here, commit and end-to-end latencies seem to increase by a small but significant factor, roughly 30% (eyeballing it). Further testing is needed.

Configuration

```
# rabbitmq.conf example:
stream.compression.algorithm = zstd
# prefer lower CPU cost at the expense of larger size:
stream.compression.level = -7
```

The compression algorithm, and perhaps also the compression arguments, could be configured in Cuttlefish config. Users serious about throughput would not enable compression, but lower-throughput users aiming for cost efficiency could consider it. The configuration applies per writer node and can be changed - a cluster could even run a mixed set of compression configurations (during the rollout of a config change, for example).
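The level knob is exactly this CPU-versus-size tradeoff. Using zlib's levels as a rough stand-in (zstd's negative levels have no zlib analogue, so this is illustrative only):

```python
import zlib

# Repetitive JSON-ish payload, similar in spirit to --json-body bodies.
payload = b'{"device": "sensor-7", "temp": 21.5, "unit": "C"}' * 200

fast = zlib.compress(payload, 1)  # cheapest CPU, worst ratio
best = zlib.compress(payload, 9)  # most CPU, best ratio

# Higher levels spend more CPU to produce output no larger than lower levels.
assert len(best) <= len(fast) < len(payload)
print(len(payload), len(fast), len(best))
```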

Algorithms

zstd is a nice choice for a new feature like this since it's now included in the standard library and can be tuned. By default we could also include the algorithms available from zlib. And we could consider making the interface pluggable so that other algorithms could be introduced via NIFs. LZ4 and Snappy are popular algorithms that trade faster compression for a lower compression yield - they are not provided as BIFs in Erlang but could be considered as plugins backed by NIFs.
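A pluggable interface could be as simple as a codec registry keyed by algorithm name. The sketch below is hypothetical (the names and shape are not an actual osiris API); an LZ4 or Snappy NIF would register its compress/decompress pair here, with zlib shown as the built-in:

```python
import zlib
from typing import Callable, Dict, Tuple

# name -> (compress, decompress); purely illustrative registry shape.
Codec = Tuple[Callable[[bytes], bytes], Callable[[bytes], bytes]]
CODECS: Dict[str, Codec] = {}

def register_codec(name: str, compress, decompress) -> None:
    CODECS[name] = (compress, decompress)

# Built-ins: the standard-library algorithm and an identity passthrough.
register_codec("zlib", zlib.compress, zlib.decompress)
register_codec("none", lambda b: b, lambda b: b)

def replicate(chunk: bytes, algorithm: str) -> bytes:
    """Encode a chunk with the configured algorithm before sending."""
    compress, _ = CODECS[algorithm]
    return compress(chunk)

chunk = b'{"k": "v"}' * 50
assert zlib.decompress(replicate(chunk, "zlib")) == chunk
assert replicate(chunk, "none") == chunk
```

Both ends would of course need to agree on the codec name, which is why carrying it in the frame (or negotiating it per link) matters for mixed-configuration clusters.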

Replies: 1 comment · 4 replies

I have a fundamental aversion to putting any additional compute/latency into the replication/consensus part of any data system. Although there may be some (artificial?) use cases where compression can seem beneficial, we have to look at the overall stability of the entire broker cluster, not just a single link. I'd rather not have the option than risk users enabling it (for "efficiency") and then experiencing hard-to-debug cluster issues due to undue computational work affecting scheduling.

There is some benefit in perhaps doing compression for a very few selected WAN links over the stream protocol, but that is a different aspect altogether.

If costs are a concern there are cloud providers that do not charge for "AZ" transfers...

4 replies
@kjnilsson

Oh, in addition, streams support compressed sub-batches that can be used for stream-protocol-only use cases, which offloads compression compute work onto client applications (which can then be scaled to cover the additional compute).

@the-mikedavis

the-mikedavis (Maintainer, Author) · Nov 21, 2025

Great points, thanks for weighing in! Yeah, I imagine you would not want to use replication compression if you were already using client-side compression. The two together probably wouldn't work well; the replication compression would probably just be wasteful. And client-side compression saves you traffic in publishing, consuming, and on disk. I was thinking it would be mostly useful for messaging protocols which don't have a way to use the stream protocol's batching+compression.

I will do some testing at scale when I get the chance and try to figure out whether replication compression makes sense on a busy Rabbit cluster or does more harm than good. As you say, I could foresee users shooting themselves in the foot with this if it's more intensive than I am thinking.

Adding the option in Osiris could be interesting even if it is not configurable in RabbitMQ directly. Some sort of shovel-related tool that transmits the stream data outside of the cluster might be able to take advantage of it. (I think I saw some feature in the Kafka world like that but I can't find it in my notes now :/)

@gomoripeti

At first I also thought that the stream client already supports compression. AMQP clients could also compress the payload, but if the payloads are small (especially in the case of MQTT) and this proposal works at the stream-chunk level, compressing many messages together, then broker-side compression seems to have an advantage too.
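That advantage for small payloads is easy to demonstrate. Again using stdlib zlib as a stand-in for zstd, compressing many small MQTT-sized payloads individually pays per-message overhead and finds little redundancy, while compressing them together as one chunk (as the proposal does) exploits redundancy across messages:

```python
import zlib

# 500 small, structurally similar payloads (~50 bytes each), made-up values.
messages = [
    f'{{"sensor": "s{i % 8}", "temp": {20 + i % 5}, "unit": "C"}}'.encode()
    for i in range(500)
]

# Client-side style: each message compressed on its own.
per_message = sum(len(zlib.compress(m)) for m in messages)

# Chunk-level style: all messages compressed together.
whole_chunk = len(zlib.compress(b"".join(messages)))

assert whole_chunk < per_message
print(per_message, whole_chunk)
```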

I was going to propose a hookable solution, so that compression would not have to go into upstream RabbitMQ, but there are already a lot of callback points in RabbitMQ. Only making it configurable in osiris instead is a good idea (ofc if tests show this whole thing makes sense).

@kjnilsson

> Adding the option in Osiris could be interesting even if it is not configurable in RabbitMQ directly. Some sort of shovel-related tool that transmits the stream data outside of the cluster might be able to take advantage of it. (I think I saw some feature in the Kafka world like that but I can't find it in my notes now :/)

I don't think such a tool would use the replication "protocol"; rather, it would work over the stream protocol, so more in line with what I suggested above, where it would make more sense to compress over the stream protocol.
