concurrency bug fixes / improvements #4663


Open

MangoIV wants to merge 2 commits into haskell:master from MangoIV:mangoiv/concurrency-bug-fixes

Conversation

MangoIV (Contributor):

Hi. I have two concurrency bug fixes / improvements that pave the way towards a multi-client haskell-language-server.

  1. [fix] don't bake ide state mvar into setup and getIdeState
    This is the right thing to do because otherwise it is not possible to
    create new ideStates in a single instance of the executable. This will
    be useful if the hls executable is supposed to talk to multiple clients
    and lives beyond a single client disconnecting.

  2. [fix] don't throw hard errors when no shutdown message is handled
    Previously, when a client sent no shutdown message and then
    disconnected, causing the handlers to be GC'd, the race that was
    supposed to free resources for the HieDB & co. would throw a hard
    error about the MVar being unreachable. We would like to finish
    gracefully instead, because finishing the race as soon as the MVar is
    GC'd is the right thing to do anyway.

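The failure mode described in the second fix can be reproduced in a few lines. This is a minimal, self-contained sketch (base only, not HLS code): a thread blocks on an MVar that nothing will ever fill, the RTS eventually notices the MVar is unreachable and raises BlockedIndefinitelyOnMVar, and catching that exception turns the hard error into a graceful finish.

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, readMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), catch)

main :: IO ()
main = do
  mv <- newEmptyMVar :: IO (MVar ())
  -- Nothing will ever put this MVar. Once the RTS sees that no other
  -- thread can reach it, the blocked readMVar receives a
  -- BlockedIndefinitelyOnMVar exception; catching it turns the hard
  -- error into a graceful finish.
  readMVar mv `catch` \BlockedIndefinitelyOnMVar -> putStrLn "finished gracefully"
```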
@MangoIV requested a review from wz1000 as a code owner, July 16, 2025 13:13
  -- Rethrows any exceptions.
  untilMVar :: MonadUnliftIO m => MVar () -> m () -> m ()
- untilMVar mvar io = void $
-   waitAnyCancel =<< traverse async [io, readMVar mvar]
+ untilMVar mvar io = race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
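The patched definition can be exercised in isolation. Below is a monomorphic sketch (the real untilMVar is lifted over MonadUnliftIO) assuming the async package for race_; the forked thread stands in for a client's shutdown notification, and the forever loop stands in for the server's io action.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Async (race_)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), catch)
import Control.Monad (forever)

-- Monomorphic sketch of the patched untilMVar: run io until the MVar is
-- filled, and treat the MVar becoming unreachable like a normal
-- shutdown signal instead of a hard error.
untilMVar :: MVar () -> IO () -> IO ()
untilMVar mvar io =
  race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io

main :: IO ()
main = do
  stop <- newEmptyMVar
  -- Simulated shutdown notification arriving after 10 ms.
  _ <- forkIO (threadDelay 10000 >> putMVar stop ())
  untilMVar stop (forever (threadDelay 1000))  -- stand-in for the server loop
  putStrLn "server loop stopped"
```

race_ cancels the losing side, so the simulated server loop is torn down as soon as the MVar is filled (or becomes unreachable).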
Collaborator:

This seems a little roundabout; is this really preferable over

Suggested change:
- untilMVar mvar io = race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
+ untilMVar mvar io = race_ (readMVar mvar) (io `finally` putMVar mvar ())

Also, I am not quite sure I understand whether the io thread dying without putting the MVar is a bug of its own that needs fixing. What does dying mean here: does the thread crash for some reason?

MangoIV (Contributor, Author):

The io is not the thing putting the MVar. The MVar is put by the shutdown notification. If none is sent but the connection dies anyway, the MVar gets GC'd and the thread that tries to read it gets a BlockedIndefinitelyOnMVar exception. That's why your proposed change wouldn't work.

MangoIV (Contributor, Author):

The "real" fix is to also put the MVar as part of a bracket around the server dying.

Collaborator:

Maybe the server shouldn't crash but shut down gracefully if the connection is dropped?

MangoIV (Contributor, Author):

I'm not sure it crashes. It may just be that the thread dies before it receives a shutdown notification. That's very well possible if the client doesn't or can't implement graceful shutdown.

Collaborator:

But the server can handle the connection termination gracefully, right?

MangoIV (Contributor, Author):

Yes. The thread just drops, which makes the putMVar drop, too, which triggers a BlockedIndefinitelyOnMVar exception.

Collaborator:

Right, but can we avoid relying on the RTS for this case?

MangoIV (Contributor, Author):

Well, I'm assuming that the io will also finish, so avoiding the reliance on the RTS isn't a problem; it would just finish later most of the time. But that's what I outlined above: the ideal fix is to hand the MVar somewhere else where it can be put as part of some bracketing operation. Since that ideal fix only gives us conceptual advantages, I don't know if it's necessary for now.
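The bracketing idea discussed here could look something like the following sketch. It is hypothetical, not code from this PR: withShutdownSignal is an illustrative name, and the point is only that whoever runs the connection also guarantees the shutdown MVar is filled when the connection ends for any reason, so the waiting side never depends on the RTS's unreachability check.

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, tryPutMVar, tryReadMVar)
import Control.Exception (finally)
import Control.Monad (void)

-- Hypothetical sketch of the "ideal fix": fill the shutdown MVar when
-- the connection ends for any reason (shutdown message, disconnect, or
-- exception). tryPutMVar keeps this from blocking if a shutdown
-- notification already filled the MVar.
withShutdownSignal :: MVar () -> IO () -> IO ()
withShutdownSignal shutdownVar runConnection =
  runConnection `finally` void (tryPutMVar shutdownVar ())

main :: IO ()
main = do
  mv <- newEmptyMVar
  withShutdownSignal mv (putStrLn "connection running")
  -- Even though no shutdown message ever put the MVar, it is now full.
  tryReadMVar mv >>= print
```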

fendor (Collaborator) commented Jul 17, 2025 (edited):

The second commit seems to be neither a bug fix nor a concurrency improvement.
However, as the change is benign, it seems fine to me.

MangoIV (Contributor, Author):

It doesn't appear to be one as of now ;)

Later when we create multiple clients per run of the executable, it's important that we can create multiple ide states, too.

fendor (Collaborator) commented Jul 17, 2025 (edited):

Right, as of now it is a random change :P

As you know, I don't think the complexity that might be introduced by handling multiple clients at the same time should live within HLS. Perhaps it would be fine if the complexity were encapsulated in a separate module / executable.
In any case, we can discuss this in detail once you've created a ticket :)

MangoIV (Contributor, Author):

Well, this complexity is necessary if we ever want a single executable serving multiple clients, which I think is actually required to make it feasible at all, mainly wrt memory footprint.

MangoIV (Contributor, Author):

I'll create a ticket.

MangoIV (Contributor, Author):

@fendor so whether or not I create the ticket, are these changes controversial?

fendor (Collaborator):

No, these changes are not controversial.

Reviewers

fendor left review comments.
Awaiting requested review from wz1000 (code owner).

At least 1 approving review is required to merge this pull request.

2 participants: @MangoIV, @fendor
