concurrency bug fixes / improvements #4663
Conversation
This is the right thing to do because otherwise it is not possible to create new ideStates in a single instance of the executable. This will be useful if the hls executable is supposed to talk to multiple clients and lives beyond a single client disconnecting.
Previously, when a client disconnected without sending a shutdown message, the handlers were GC'd and the race that was supposed to free resources for the HieDB & co. would throw a hard error about the MVar being unreachable. We would like to instead finish gracefully, because finishing the race as soon as the MVar is GC'd is the right thing to do anyway.
 -- Rethrows any exceptions.
 untilMVar :: MonadUnliftIO m => MVar () -> m () -> m ()
-untilMVar mvar io = void $
-  waitAnyCancel =<< traverse async [ io, readMVar mvar ]
+untilMVar mvar io = race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
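For context, a self-contained, IO-specialised sketch of the new definition with its imports spelled out (the real function is MonadUnliftIO-polymorphic and would use the unliftio counterparts; this sketch assumes only base and the async package):

```haskell
import Control.Concurrent.Async (race_)
import Control.Concurrent.MVar  (MVar, readMVar)
import Control.Exception        (BlockedIndefinitelyOnMVar (..), catch)

-- Finish as soon as either the MVar is filled or the io action returns.
-- If the MVar becomes unreachable (e.g. the client vanished without ever
-- sending a shutdown notification), the RTS raises BlockedIndefinitelyOnMVar
-- in the blocked thread; we treat that the same as the MVar being filled.
untilMVar :: MVar () -> IO () -> IO ()
untilMVar mvar io =
  race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
```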
This seems a little bit roundabout. Is this really preferable over
-untilMVar mvar io = race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
+untilMVar mvar io = race_ (readMVar mvar) (io `finally` putMVar mvar ())
Also, I am not quite sure I understand whether the io thread dying without putting the MVar is a bug of its own that needs fixing. What does dying mean here; does the thread crash for some reason?
The io is not the thing putting the MVar. The MVar is put by the shutdown notification. If none is sent but the connection dies anyway, then the MVar gets GC'd and the thread that tries to read the MVar gets a BlockedIndefinitelyOnMVar exception. That's why your proposed change wouldn't work.
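To illustrate the failure mode, here is a minimal, self-contained sketch (not HLS code): a thread blocks on an MVar that nothing will ever put, the RTS notices during GC and raises BlockedIndefinitelyOnMVar, and the catch turns that into a graceful finish instead of a hard error.

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, readMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), catch)

main :: IO ()
main = do
  mvar <- newEmptyMVar :: IO (MVar ())
  -- Nothing else references this MVar, so no putMVar can ever happen.
  -- The RTS detects this and raises BlockedIndefinitelyOnMVar in the
  -- blocked thread; catching it lets us finish gracefully.
  readMVar mvar `catch` \BlockedIndefinitelyOnMVar ->
    putStrLn "MVar unreachable; finishing gracefully instead of erroring"
```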
The "real" fix is to put the MVar as a bracket around the server dying, too.
Maybe the server shouldn't crash but shut down gracefully if the connection is dropped?
I'm not sure it crashes. It may just be that the thread dies before it receives a shutdown notification. That's very well possible if the client doesn't or can't implement graceful shutdown.
But the server can handle the connection termination gracefully, right?
Yeah. The thread just drops, which makes the putMVar drop too, which triggers a BlockedIndefinitelyOnMVar exception.
Right, but can we avoid relying on the RTS for this case?
Well, I'm assuming that the io will also finish, so not relying on the RTS isn't a problem; it just finishes later most of the time. But that's what I outlined above: the ideal fix is to hand the MVar somewhere else, where it can be put as part of some bracketing operation. But since that ideal fix doesn't give us anything but conceptual advantages, I don't know if it's necessary for now.
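A rough sketch of what that ideal fix could look like, with purely illustrative names (runServerLoop and shutdownVar are not the actual HLS API): whoever owns the server thread guarantees the MVar is filled whenever the loop ends, so the race never has to rely on the RTS noticing that the MVar became unreachable.

```haskell
import Control.Concurrent.Async (race_)
import Control.Concurrent.MVar  (MVar, readMVar, tryPutMVar)
import Control.Exception        (finally)
import Control.Monad            (void)

-- Fill the shutdown MVar whenever the server loop ends, for any reason
-- (shutdown notification, dropped connection, exception), so waiters on
-- readMVar always wake up without RTS deadlock detection.
runServerLoop :: MVar () -> IO () -> IO ()
runServerLoop shutdownVar serverLoop =
  serverLoop `finally` void (tryPutMVar shutdownVar ())

-- With that guarantee in place, untilMVar could stay a plain race:
untilMVar :: MVar () -> IO () -> IO ()
untilMVar mvar io = race_ (readMVar mvar) io
```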
fendor commented Jul 17, 2025 (edited)
The second commit seems to be neither a bug fix nor a concurrency improvement.
It doesn't appear to be one as of now ;) Later when we create multiple clients per run of the executable, it's important that we can create multiple ide states, too.
fendor commented Jul 17, 2025 (edited)
Right, as of now it is a random change :P As you know, I don't think the complexity that might be introduced by handling multiple clients at the same time should be handled within HLS. Perhaps it could work if the complexity were encapsulated in a separate module / executable.
Well, this complexity is necessary if we ever want a single executable with multiple clients, which I think is actually required to make it feasible at all, mainly wrt memory footprint.
I'll create a ticket.
@fendor so whether or not I create the ticket, are these changes controversial?
No, these changes are not controversial.
Hi. I have two concurrency bug fixes / improvements that pave the way towards a multi-client haskell-language-server.
[fix] don't bake ide state mvar into setup and getIdeState
This is the right thing to do because otherwise it is not possible to
create new ideStates in a single instance of the executable. This will
be useful if the hls executable is supposed to talk to multiple clients
and lives beyond a single client disconnecting.
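Purely as an illustration of what "not baking the MVar in" means (placeholder names and types, not the actual ghcide/HLS signatures): callers allocate a fresh MVar per client connection and pass it around, so one run of the executable can host several IdeStates.

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)

-- Placeholder type; the real IdeState lives in ghcide.
data IdeState = IdeState

-- One fresh slot per client connection, instead of a single MVar captured
-- by the setup code.
newIdeStateSlot :: IO (MVar IdeState)
newIdeStateSlot = newEmptyMVar

-- Fill the slot once the server for this client has built its IdeState ...
publishIdeState :: MVar IdeState -> IdeState -> IO ()
publishIdeState = putMVar

-- ... and everything that needs it reads from the slot it was handed.
getIdeState :: MVar IdeState -> IO IdeState
getIdeState = readMVar
```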
[fix] don't throw hard errors when no shutdown message is handled
Previously, when a client disconnected without sending a shutdown message,
the handlers were GC'd and the race that was supposed to free resources
for the HieDB & co. would throw a hard error about the MVar being
unreachable. We would like to instead finish gracefully, because finishing
the race as soon as the MVar is GC'd is the right thing to do anyway.