feat: add WaitUntilEmpty to LogSender #12159
Conversation
spikecurtis commented Feb 15, 2024 • edited
This stack of pull requests is managed by Graphite. Learn more about stacking.
```go
	case <-nevermind:
		return
	}
}()
```
Why are we duplicating logic from `SendLoop` here? Since this method doesn't attempt to send, it's quite pointless unless the signaling is happening from a running `SendLoop` anyway.
Edit: Ah, nevermind, just realized this is only here to handle the user-provided context.
I think this could be greatly simplified:
```go
func (l *LogSender) WaitUntilEmpty(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-l.allSent:
		return nil
	}
}
```
The problem with an `allSent` channel is how to arrange when it should become readable. Closing the channel won't work, because you can't "unclose" it if more data gets queued. Writing to the channel won't work if there is more than one caller to `WaitUntilEmpty`.
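To make that concrete, here is a small self-contained Go program (illustrative only, not code from this PR) showing both failure modes: a closed channel can't be re-armed, and a single send wakes only one of several waiters.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Failure mode 1: close() cannot be undone, so an allSent channel
	// cannot be "re-armed" once more logs are queued and drained again.
	allSent := make(chan struct{})
	close(allSent)
	func() {
		defer func() { fmt.Println("re-close panics:", recover()) }()
		close(allSent) // panic: close of closed channel
	}()

	// Failure mode 2: a single send wakes only one of two concurrent
	// WaitUntilEmpty-style waiters; the other stays blocked.
	signal := make(chan struct{})
	results := make(chan string, 2)
	for i := 0; i < 2; i++ {
		go func(id int) {
			select {
			case <-signal:
				results <- fmt.Sprintf("waiter %d woke up", id)
			case <-time.After(100 * time.Millisecond):
				results <- fmt.Sprintf("waiter %d is still blocked", id)
			}
		}(i)
	}
	signal <- struct{}{} // only one receiver gets this
	fmt.Println(<-results)
	fmt.Println(<-results)
}
```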
True, it would work better/simpler if the send loop was channel-based as well. In that case, one approach could be this:
```go
func (l *LogSender) WaitUntilEmpty(ctx context.Context) error {
	wait := make(chan struct{})
	l.waitUntilEmpty <- wait
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-wait:
		return nil
	}
}

// SendLoop
var waiters []chan struct{}
for {
	select {
	case <-tick:
	case wait := <-l.waitUntilEmpty:
		waiters = append(waiters, wait)
	}
	// ...
	if len(l.queues) == 0 {
		for _, wait := range waiters {
			close(wait)
		}
		waiters = nil
	}
}
```
But it's not quite as nice when retrofitted into the mutex-style loop.
The problem there is that it requires `SendLoop` to actually be running in order for `WaitUntilEmpty()` to return, and it might not be running even though we are already empty.
Channels are great for communicating between goroutines. Here what we really want is to know when a condition is satisfied, regardless of other running goroutines, and for that `sync.Cond` is your friend.
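As a rough illustration of that idea (a sketch, not the code that landed in this PR), a `sync.Cond`-based `WaitUntilEmpty` could look like the following. The field names `L`, `cond`, and `queues` are assumptions, `context.AfterFunc` requires Go 1.21+, and the trailing emptiness check mirrors the hunk discussed further down.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// LogSender here is a minimal stand-in; the field names are assumptions and
// need not match the PR's implementation.
type LogSender struct {
	L      sync.Mutex
	cond   *sync.Cond // created with sync.NewCond(&l.L)
	queues map[string][]string
}

// WaitUntilEmpty blocks until all queues have drained or ctx is canceled.
// Whatever code drains the queues must call cond.Broadcast after doing so.
func (l *LogSender) WaitUntilEmpty(ctx context.Context) error {
	// Wake all waiters on cancellation so cond.Wait can't block forever.
	// Taking the lock first prevents the Broadcast from slipping in between
	// a waiter's condition check and its call to Wait.
	stop := context.AfterFunc(ctx, func() {
		l.L.Lock()
		defer l.L.Unlock()
		l.cond.Broadcast()
	})
	defer stop()

	l.L.Lock()
	defer l.L.Unlock()
	for len(l.queues) > 0 && ctx.Err() == nil {
		l.cond.Wait()
	}
	if len(l.queues) == 0 {
		return nil
	}
	return ctx.Err()
}

func main() {
	l := &LogSender{queues: map[string][]string{"agent": {"hello"}}}
	l.cond = sync.NewCond(&l.L)

	// Simulate a send loop draining the queue a little later.
	go func() {
		time.Sleep(50 * time.Millisecond)
		l.L.Lock()
		delete(l.queues, "agent")
		l.L.Unlock()
		l.cond.Broadcast()
	}()

	fmt.Println(l.WaitUntilEmpty(context.Background())) // <nil>
}
```

Because the waiter checks `len(l.queues)` before ever calling `Wait`, it returns immediately when the sender is already empty, even if no send loop is running.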
I had a suggestion for changing `WaitUntilEmpty`, but I'll leave it up to you whether or not to implement it. It works as-is, too, although I feel mutexes are starting to make this service overly complex (vs. using channel-based communication).
```go
	if len(l.queues) == 0 {
		return nil
	}
	return ctx.Err()
```
Suggested change:
```diff
-	if len(l.queues) == 0 {
-		return nil
-	}
 	return ctx.Err()
```
We don't actually need this check; this way we give priority to the context cancellation, even if we happen to be done at the same time (this can be preferable in some cases).
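A toy comparison of the two orderings (hypothetical helper functions, not the PR's code) when the queues are empty and the context is already canceled:

```go
package main

import (
	"context"
	"fmt"
)

// preferEmpty reports success whenever the queues are empty, even if the
// context has already been canceled.
func preferEmpty(ctx context.Context, queues int) error {
	if queues == 0 {
		return nil
	}
	return ctx.Err()
}

// preferCancel lets cancellation win when both conditions hold at once.
func preferCancel(ctx context.Context, queues int) error {
	return ctx.Err()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	fmt.Println(preferEmpty(ctx, 0))  // <nil>
	fmt.Println(preferCancel(ctx, 0)) // context canceled
}
```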
We'll need this to be able to tell when all outstanding logs have been sent, as part of graceful shutdown.
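For example, a graceful-shutdown path might call the new method roughly like this; the `logWaiter` interface, `flushLogs` helper, and `fakeSender` are hypothetical, and only the `WaitUntilEmpty(ctx)` signature comes from this PR.

```go
package main

import (
	"context"
	"log"
	"time"
)

// logWaiter stands in for the LogSender from this PR; only the
// WaitUntilEmpty signature discussed above is assumed.
type logWaiter interface {
	WaitUntilEmpty(ctx context.Context) error
}

// flushLogs gives outstanding logs a bounded amount of time to be sent
// before the process exits.
func flushLogs(ls logWaiter) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := ls.WaitUntilEmpty(ctx); err != nil {
		log.Printf("gave up waiting for logs to flush: %v", err)
	}
}

// fakeSender pretends its queue is already empty, just to make the example
// runnable.
type fakeSender struct{}

func (fakeSender) WaitUntilEmpty(ctx context.Context) error { return nil }

func main() {
	flushLogs(fakeSender{})
}
```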