Commit 03ed951

fix: Disable compression for websocket dRPC transport (#145)
There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and `dRPC` feel like they own the `byte[]` being sent between them. This can lead to data races in which both `dRPC` and the websocket are writing.

This is just tracking some experimentation to fix that race condition.

## Run results: ##

- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45

  ```
  status code 412: The provided project history is running. Wait for it to complete importing!
  ```

- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176

  ```
  workspacehistory_test.go:122:
  	Error Trace:	workspacehistory_test.go:122
  	Error:      	Condition never satisfied
  	Test:       	TestWorkspaceHistory/CreateHistory
  ```

- Run 5: peer failure
- Run 6: Pass ✅
- Run 7: peer failure

## Open Questions: ##

### Is `dRPC` or `websocket` at fault for the data race?

It looks like this condition specifically happens when `dRPC` decides to `SendError`. This constructs a new byte payload from [`MarshalError`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/error.go#L15) - so `dRPC` has created this buffer and owns it.

From `dRPC`'s perspective, the call stack looks like this:

- [`sendPacket`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcstream/stream.go#L253)
- [`writeFrame`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/writer.go#L65)
- [`AppendFrame`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/packet.go#L128)

with the data race finally happening here:

```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
	...
	out := buf
	out = append(out, control) // <---------
```

This should be fine, since `dRPC` created this buffer, and it is taking the byte buffer constructed from `MarshalError` and tacking a bunch of headers onto it to create a proper frame.

Once `dRPC` is done writing, it _hangs onto the buffer and resets it here_: https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/writer.go#L73

However, once the websocket implementation gets the buffer, it runs a `statelessDeflate` [here](https://github.com/nhooyr/websocket/blob/8dee580a7f74cf1713400307b4eee514b927870f/write.go#L180), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](https://github.com/klauspost/compress/blob/a1a9cfc821f00faf2f5231beaa96244344d50391/flate/stateless.go#L94), which is where we get our race.
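To make the aliasing concrete, here is a minimal, hypothetical sketch of the transport's write path (the type and method names are placeholders, not the actual `coder/coder` wiring): dRPC's frame buffer is handed straight to the websocket, which may deflate it in place, while dRPC keeps reusing the same backing array for its next frame. Copying per write would also break the aliasing, at the cost of an allocation per frame.

```go
package transport // hypothetical sketch, not the actual coder/coder wiring

import (
	"context"

	"nhooyr.io/websocket"
)

// wsTransport adapts a websocket connection to the io.Writer-style
// interface that dRPC writes frames into.
type wsTransport struct {
	ctx  context.Context
	conn *websocket.Conn
}

// Write hands dRPC's frame buffer straight to the websocket. With compression
// enabled, the websocket library may deflate p in place while dRPC's writer
// keeps the same backing array for its next frame - that is the data race.
func (t *wsTransport) Write(p []byte) (int, error) {
	if err := t.conn.Write(t.ctx, websocket.MessageBinary, p); err != nil {
		return 0, err
	}
	return len(p), nil
}

// writeCopied breaks the aliasing by copying before handing the bytes over,
// at the cost of an allocation per frame.
func (t *wsTransport) writeCopied(p []byte) (int, error) {
	buf := append([]byte(nil), p...)
	if err := t.conn.Write(t.ctx, websocket.MessageBinary, buf); err != nil {
		return 0, err
	}
	return len(p), nil
}
```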
In the case where the `byte[]` isn't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and the websocket is compressing on the fly).

### Why does cloning on `Read` fail?

We get a bunch of errors like:

```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```

# UPDATE:

We decided we could disable websocket compression, which avoids the race because the in-place `deflate` operation is no longer run. Trying that out now:

- Run 1: ✅
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3: ✅
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: ✅
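The actual fix is the server-side change in the diff below. Because permessage-deflate has to be negotiated by both ends, refusing it in `Accept` should be enough to keep the in-place deflate out of this transport. As a purely illustrative sketch (the function name and URL are placeholders, not code from this repo), a client built on the same library could also opt out explicitly:

```go
// dialWithoutCompression is a hypothetical client-side counterpart to the
// server change below; the URL is a placeholder, not a real coder route.
func dialWithoutCompression(ctx context.Context) (*websocket.Conn, error) {
	conn, _, err := websocket.Dial(ctx, "wss://example.invalid/provisionerd/serve", &websocket.DialOptions{
		// Mirror the server: never negotiate permessage-deflate, so nothing
		// ever mutates the dRPC buffers in place.
		CompressionMode: websocket.CompressionDisabled,
	})
	if err != nil {
		return nil, err
	}
	return conn, nil
}
```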
1 parent 2a76b60 commit 03ed951

File tree

1 file changed: 4 additions, 1 deletion


coderd/provisionerdaemons.go

Lines changed: 4 additions & 1 deletion
```diff
@@ -48,7 +48,10 @@ func (api *api) provisionerDaemons(rw http.ResponseWriter, r *http.Request) {
 
 // Serves the provisioner daemon protobuf API over a WebSocket.
 func (api *api) provisionerDaemonsServe(rw http.ResponseWriter, r *http.Request) {
-	conn, err := websocket.Accept(rw, r, nil)
+	conn, err := websocket.Accept(rw, r, &websocket.AcceptOptions{
+		// Need to disable compression to avoid a data-race
+		CompressionMode: websocket.CompressionDisabled,
+	})
 	if err != nil {
 		httpapi.Write(rw, http.StatusBadRequest, httpapi.Response{
 			Message: fmt.Sprintf("accept websocket: %s", err),
```

0 commit comments

