Refactored lsp--create-filter-function for lower latency and higher efficiency #4796
base: master
Conversation
Force-pushed from 4387762 to 009f023.
The CI is failing, which indicates there might be an edge case where this change fails to work. Can you make the CI pass?
eval-exec commented May 27, 2025 (edited)
Thanks for pointing it out. I’ll take a look and fix it later.
Force-pushed from 2807ef2 to 5876bbf.
@kiennq Hello, CI passed; requesting code review. Thank you.
lsp-mode.el Outdated
;; Copy and decode body
(with-current-buffer json-body-buffer
  (erase-buffer)
  (insert-buffer-substring input-buffer body-start body-end)
Do we need this? Can't we do that in the input-buffer, i.e., move the cursor to the end of the headers and just call lsp-json-read-buffer? That would save one copy into the body buffer.
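If I read the suggestion right, it would look roughly like this, reusing the input-buffer/body-start/body-end bindings from the diff above (a sketch only; it leaves aside the coding-system decode that the copy step currently performs):

;; Sketch of the reviewer's suggestion: parse the JSON body in place
;; instead of copying it into a scratch buffer first.  `input-buffer',
;; `body-start' and `body-end' are the bindings from the surrounding code.
(with-current-buffer input-buffer
  (save-excursion
    (save-restriction
      ;; Narrow so the JSON reader stops at the message boundary.
      (narrow-to-region body-start body-end)
      (goto-char body-start)
      (lsp-json-read-buffer))))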
lsp-mode.el Outdated
while (let* ((header-end (search-forward "\r\n\r\n" nil t))
             (content-length-start (search-backward "Content-Length:" nil t)))
        (when header-end
          (let* ((headers (buffer-substring content-length-start (- header-end 4)))
Use buffer-substring-no-properties.
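i.e., something along these lines, against the bindings shown in the diff above (illustration only):

;; Sketch of the suggestion: `buffer-substring-no-properties' returns the
;; header text as a bare string, without copying text properties along
;; with it.
(buffer-substring-no-properties content-length-start (- header-end 4))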
(lsp--on-notification workspace json-data))
('request (lsp--on-request workspace json-data)))))))
(defun lsp--create-filter-function (workspace)
Can we have a benchmark to see whether this is really faster than the old one?
The old implementation already uses a temp buffer at the final step as an optimization; this change expands on that so that we no longer keep a list of input chunks but concatenate them right away.
Another thing about lsp--parser-on-message is that we can dispatch it on a timer, so we don't have to wait for a handler to finish before parsing the next message. The previous approach parses all available messages first and then dispatches them one by one. The new approach, on the other hand, tries to execute lsp--parser-on-message as soon as data is available, which can lead to a long waiting period, especially around something like waiting for a completion result.
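A minimal sketch of the timer idea, assuming the existing (json-data workspace) argument order of lsp--parser-on-message:

;; Sketch: schedule the handler on a zero-delay timer instead of calling it
;; synchronously inside the process filter, so the filter can keep parsing
;; the next message while earlier handlers are still pending.
(run-at-time 0 nil #'lsp--parser-on-message json-data workspace)

Whether this pays off depends on the workload: it avoids blocking the filter on slow handlers, at the cost of deferring each handler to the next pass through the command loop.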
Force-pushed from e580c8b to dc49d69.
Signed-off-by: Eval EXEC <execvy@gmail.com>
Force-pushed from dc49d69 to a2d1a6e.
- I added a new entry to CHANGELOG.md
- I updated documentation if applicable (docs folder)

This PR rewrites lsp--create-filter-function to use a buffer-based approach for processing incoming LSP messages. The new implementation accumulates raw input in a dedicated buffer, parses headers to determine message boundaries, and only decodes and processes complete messages. This design invokes lsp--parser-on-message immediately for each complete message instead of batching, improving responsiveness. The new logic closely follows the LSP protocol framing, ensuring correctness and better performance, especially with large or fragmented messages.
No changes to external API or user-facing behavior.
All message parsing and dispatching are now handled more efficiently and reliably.
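For context, the LSP base protocol frames each message as a header block terminated by a blank line (CRLF CRLF), with Content-Length giving the byte length of the JSON body that follows. The sketch below is not the PR's code (the name my/lsp-split-first-message is made up for illustration); it only shows the boundary check a filter function has to perform before handing a body to the JSON parser:

;; Illustration of LSP base-protocol framing, independent of the PR's
;; implementation.  Given the accumulated raw input as a string, return
;; the first complete body and the remaining input, or nil if no complete
;; message has arrived yet.
(defun my/lsp-split-first-message (input)
  "Return (BODY . REST) for the first complete LSP message in INPUT, or nil.
INPUT is the raw header+body text as received from the server."
  (when (string-match "Content-Length: *\\([0-9]+\\)\r\n\\(?:[^\r]*\r\n\\)*\r\n" input)
    (let* ((body-start (match-end 0))
           (len (string-to-number (match-string 1 input)))
           (body-end (+ body-start len)))
      ;; Only split once the whole body is available; otherwise wait for
      ;; more process output.
      (when (>= (length input) body-end)
        (cons (substring input body-start body-end)
              (substring input body-end))))))

For example, on the input "Content-Length: 2" followed by a blank line and "{}", it returns ("{}" . ""); on a truncated body it returns nil, which is why the filter must keep accumulating chunks until a complete message is present.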