[Messenger] Added support for auto trimming of redis streams #31825


Merged

Conversation

@Toflar (Contributor) commented Jun 3, 2019 (edited)

Q             | A
------------- | ---
Branch?       | 4.4
Bug fix?      | no
New feature?  | yes
BC breaks?    | no
Deprecations? | no
Tests pass?   | yes
Fixed tickets |
License       | MIT
Doc PR        | symfony/symfony-docs#11870

Right now the Redis stream will just grow indefinitely. However, there are means to have it delete old entries from time to time.
Note: I could not use the `XADD mystream MAXLEN ~ 1000 *` notation because the PHP redis extension does not support the `MAXLEN` option afaics, so I went for an extra `XTRIM` command.
I explicitly enabled the approximate flag because it makes absolutely no sense for us to hardcode the limit, although we could make this configurable too (but I don't think we should).
The whole idea of this PR is to enable occasional trimming of the stream so it doesn't grow forever, so when you configure something like `20000`, trimming may well only happen at `25000`, depending on your settings.
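The "configure 20000, trim at 25000" effect can be sketched in plain PHP. This is a toy model, not the actual implementation: with approximate (`MAXLEN ~`) trimming, Redis only drops whole internal macro nodes, so the stream can settle somewhat above the configured maximum. The node size here is a made-up illustration parameter; real node boundaries are decided by Redis internally.

```php
<?php
// Toy model of approximate (MAXLEN ~) trimming: entries are only removed
// in whole-node chunks, and never below the configured maximum length.
function approximateLengthAfterTrim(int $length, int $maxLen, int $nodeSize): int
{
    // Drop whole nodes only while doing so keeps at least $maxLen entries.
    while ($length - $nodeSize >= $maxLen) {
        $length -= $nodeSize;
    }

    return $length;
}

// Configured max of 20000, but with 1000-entry nodes the stream settles at 20500.
echo approximateLengthAfterTrim(25500, 20000, 1000); // 20500
```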

Ping @soyuka @alexander-schranz @chalasr :)

@Tobion (Contributor) commented Jun 3, 2019 (edited)

So this is similar to TTL in RabbitMQ: https://www.rabbitmq.com/ttl.html
Can't we use a Redis TTL for this as well? Having random trims of random old items seems strange to me.

If we can implement it for Redis and Doctrine, we could add a generic `ExpirationStamp` that could work for all transports.

@nicolas-grekas added this to the next milestone on Jun 3, 2019
@Toflar (Contributor, Author)

There is no such thing as TTL in Redis Streams :)

@jderusse (Member)

Redis stream IDs are timestamps, so you may run a script to remove older items:

local t = redis.call('TIME')
local keys = redis.call('XRANGE', ARGV[1], '-', (t[1] - ARGV[2]) * 1000)
for i = 1, #keys, 1 do
    redis.call('XDEL', ARGV[1], keys[i][1])
end
return #keys

with `ARGV[1]` the name of the stream and `ARGV[2]` the TTL in seconds.

Note: I'm not a Redis expert, so I don't know the performance impact of such a script.
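The script's logic can be illustrated without a Redis server. Below is a pure-PHP model of the same expiry rule, assuming entries keyed by stream IDs of the form `<milliseconds>-<sequence>`; the function name and array representation are illustrative assumptions, not part of any library:

```php
<?php
// Pure-PHP model of the Lua script above: stream IDs start with a
// millisecond timestamp, so entries older than (now - ttl) can be
// selected by ID alone (like XRANGE does) and then deleted (like XDEL).
function expireOldEntries(array $entries, int $nowSeconds, int $ttlSeconds): array
{
    $cutoffMillis = ($nowSeconds - $ttlSeconds) * 1000;
    foreach (array_keys($entries) as $id) {
        [$millis] = explode('-', $id);
        if ((int) $millis <= $cutoffMillis) { // XRANGE's end bound is inclusive
            unset($entries[$id]);
        }
    }

    return $entries;
}
```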

@Toflar (Contributor, Author)

There's a reason why TTL support is not built into Redis Streams (yet). You can read more about it here: https://github.com/antirez/redis/issues/4450
The max-length feature is the only way to trim a stream as of today, so I think this one has to be Redis-specific.


@Tobion (Contributor)

I don't see the need to implement that as a core feature. If messages need to have a TTL, then people should use the tools that support it, e.g. RabbitMQ. If it's purely about local development, where you might not run the workers, then it would be enough to have a script that deletes old items from Redis, which you could call manually, from a cron job, or however people prefer.
In its current form it just deletes arbitrary data. Do you want to enable that in prod? What's the use case?

@jderusse (Member) commented Jun 4, 2019 (edited)

I don't see the need to implement that as a core feature.

@Tobion with the Redis implementation, consumed messages are not deleted. This PR is not about providing a feature to delete messages, but about cleaning up the server.

There are 2 options:

  • using a MAXLEN (fast, but not reliable, because Redis may delete unconsumed messages when you push too many messages into the queue)
  • using a TTL (slow, may block the Redis thread during garbage collection, and can consume a lot of memory with a high-throughput producer)

@Toflar I'm aware of that issue. When using a Redis stream as message storage there is a trade-off between performance and reliability.

In my experience, I've already had a situation where consumers were down for a few hours and the queue grew 100,000 times bigger than usual. In such a situation, capping with MAXLEN would have lost messages, or would have required setting an extremely high MAXLEN (which consumes a lot of space even when the situation is fine).

Maybe we can consider combining the 2 solutions: counting outdated elements with `XRANGE` and using `XLEN` to define a new `MAXLEN`, i.e.:

local t = redis.call('TIME')
local keys = redis.call('XRANGE', ARGV[1], '-', (t[1] - ARGV[2]) * 1000, 'COUNT', ARGV[3])
if #keys > 0 then
    local l = redis.call('XLEN', ARGV[1])
    redis.call('XTRIM', ARGV[1], 'MAXLEN', '~', l - #keys)
    return #keys
else
    return 0
end

what do you think?

@Toflar (Contributor, Author)

Do you want to enable that in prod? What's the use-case?

The use case is that I already have Redis running and Streams are pretty much explicitly built for message handling. Why would I want to manage yet another tool if Redis can serve me well? 😄

In my experience I already had situation where consumers were out during few hours and queue grown 100 000 times bigger than usual. In such situation, capping with MAXLEN would have lost messages or would requires to set a extremely high MAXLEN (which will consume a lot of space even when the situation is ok)

Hmmm, so I see why my PR does not seem practical under certain circumstances.
So maybe we have to take a different path here and provide an explicit cleanup command?

These are the use cases I see:

  • Purge all messages, no matter the age and status
  • Purge messages that are older than `<dateinterval>`, no matter the status
  • Purge messages that are older than `<dateinterval>`, but only handled ones, not pending ones
  • Purge messages of any age, but only handled ones

And all of those optionally restricted to receivers.

So I don't know but maybe something like

  • bin/console messenger:cleanup <receivers>
  • bin/console messenger:cleanup --older-than=P7D <receivers>
  • bin/console messenger:cleanup --older-than=P7D --handled <receivers>
  • bin/console messenger:cleanup --handled <receivers>

(I did not really think about naming and defaults, I'm just thinking aloud here.)
And then we'll have an interface, and every transport can implement it and optimize for the best strategy in its case. Also, this would ensure things happen in a separate cleanup process that (in the best case) does not affect production performance.

Does this seem like a better approach?
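The cleanup-interface idea could be sketched like this. The interface and class names are hypothetical, and the in-memory transport only stands in for a real one to illustrate the four use cases; this is not the actual Symfony API:

```php
<?php
// Hypothetical sketch of a transport-level cleanup contract; every
// transport could implement it with its own optimal strategy.
interface CleanableTransportInterface
{
    /** Returns the number of purged messages. */
    public function cleanup(?\DateInterval $olderThan = null, bool $handledOnly = false): int;
}

// Toy in-memory stand-in used only to illustrate the use cases above.
final class InMemoryTransport implements CleanableTransportInterface
{
    /** @var list<array{sentAt: \DateTimeImmutable, handled: bool}> */
    private array $messages = [];

    public function add(\DateTimeImmutable $sentAt, bool $handled): void
    {
        $this->messages[] = ['sentAt' => $sentAt, 'handled' => $handled];
    }

    public function count(): int
    {
        return \count($this->messages);
    }

    public function cleanup(?\DateInterval $olderThan = null, bool $handledOnly = false): int
    {
        $cutoff = null === $olderThan ? null : (new \DateTimeImmutable())->sub($olderThan);
        $purged = 0;
        foreach ($this->messages as $i => $message) {
            $oldEnough = null === $cutoff || $message['sentAt'] < $cutoff;
            $statusMatches = !$handledOnly || $message['handled'];
            if ($oldEnough && $statusMatches) {
                unset($this->messages[$i]);
                ++$purged;
            }
        }
        $this->messages = array_values($this->messages);

        return $purged;
    }
}
```

A hypothetical `messenger:cleanup --older-than=P7D --handled <receivers>` invocation would then map to `$transport->cleanup(new \DateInterval('P7D'), true)` on each selected transport.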

@jderusse (Member)

Hmm, I'm not sure about a dedicated command, because running a periodic small cleanup (like you did with the trimProbability) or a big one every X minutes will, in both cases, block the Redis thread and block every consumer. BUT the more often you call the trim algorithm, the smaller the subset of messages to remove, and the shorter the blocking time window.

FYI, I've made a small bench with very simple data (I don't know if the result would be the same with larger messages): it takes ~50ms to remove a small subset of ~10,000 entries (that's why I was thinking about removing the oldest 2,000 messages every 1,000 XADDs).

But thinking about it, I really wonder if such complexity is worth it. Maybe I'm just exposing an edge case, and your first idea is the good one. We could warn users in the documentation and recommend using a real message bus for high-throughput cases.

@alexander-schranz (Contributor) commented Jun 4, 2019 (edited)

@Toflar

Note: I could not use the XADD mystream MAXLEN ~ 1000 * notation because the PHP redis extension does not support the MAXLEN option afaics so I went for the extra XTRIM command.

The 4th parameter of the `xadd` function is the maxlen, e.g.:

$redis->xadd("new3", "*", ["key" => "value"], 10);

It's just missing in the phpdocs (my fault 🙈).

@Toflar (Contributor, Author)

Ah 😄 Well, that can be easily changed. Anything you want to add to the discussion?

@alexander-schranz (Contributor) commented Jun 11, 2019 (edited)

@Toflar I think by default we should not auto-trim, because a default value is difficult to set here, but it would be good to have a way to configure this. Is the performance better when using `xadd`? The 5th argument is a boolean flag representing the approximate (`~`) operator when set to true, so the trim_probability could be removed and the Redis-internal mode used instead.

@Toflar (Contributor, Author)

I think by default we should not autotrim because a default value is difficult to set here

Agreed.

Is the performance better when using `xadd`?

I have absolutely no clue. I don't know which is better: `xadd` with `MAXLEN ~` all the time, or an `xtrim` every now and then.

@alexander-schranz (Contributor)

In a simple bench, `xadd` seems to be the better choice.

Test values: adding 100000 messages with a trim size of 100:

normal xAdd: 4.6s
xAdd with approximate flag: 4.7s
xAdd with fixed size: 5.2s
xAdd with xtrim: 8.9s

So I think we should go with:

$this->connection->xadd($this->stream, '*', ..., $this->maxlength, (bool) $this->maxlength);

As it's optional, I think it's OK to implement this for the Redis transport?

/cc @sroze @chalasr

@Toflar (Contributor, Author)

Thanks for benchmarking 👍 I'll update the PR accordingly, but I'd like to wait for feedback from Sam and Robin.

@chalasr (Member)

Thanks for raising the issue @Toflar.
👍 for passing an approximate max length to `xadd`.

@sroze (Contributor) left a comment

Looks good to me 👍

@sroze force-pushed the redis-messenger-transport-max-length branch from 1942fec to 7fe06bc on July 2, 2019 20:18
@sroze (Contributor)

Thank you@Toflar.

@sroze merged commit 7fe06bc into symfony:4.4 on Jul 2, 2019
sroze added a commit that referenced this pull request on Jul 2, 2019:
7fe06bc [Messenger] Added support for auto trimming of redis streams (this PR was squashed before being merged into the 4.4 branch, closes #31825; its description is quoted above)
@Toflar deleted the redis-messenger-transport-max-length branch on July 3, 2019 06:21
javiereguiluz added a commit to javiereguiluz/symfony-docs that referenced this pull request on Jul 3, 2019:
d1aa071 Documented redis transport new stream_max_entries option (submitted for the master branch but merged into the 4.4 branch instead, closes symfony#11870)
@nicolas-grekas modified the milestones: next → 4.4 on Oct 27, 2019
This was referenced Nov 12, 2019

Reviewers

@chalasr approved these changes
@sroze approved these changes
@alexander-schranz approved these changes

Milestone

4.4

8 participants

@Toflar @Tobion @jderusse @alexander-schranz @chalasr @sroze @nicolas-grekas @carsonbot
