Suggestions for identifying/reproducing performance bottlenecks? #491

Unanswered
tfh-cri asked this question in Q&A

I've been using pylsp for a while along with Emacs/emacs-lsp to work on a Django project, and periodically, though seemingly more and more often, the entire stack randomly hangs, with pylsp sitting at 100% on at least one core for 60s+; sometimes it never seems to return at all.

I'm looking for suggestions on how to instrument or otherwise debug what's going on, since on its own that's not a particularly useful bug report.

There's nothing particularly useful in the *pylsp::stderr* output at the default WARN level, and anything lower gets pretty spammy, since this problem doesn't occur all that regularly.

I can sometimes attach py-spy to the pylsp process and get something along the lines of:

 %Own   %Total  OwnTime  TotalTime  Function (filename)
27.00%  27.00%   27.95s    27.95s   _enqueue_output (jedi/inference/compiled/subprocess/__init__.py)
27.00%  36.00%   21.93s    29.50s   _check_fs (jedi/inference/references.py)
13.00%  14.00%   18.24s    20.62s   join (posixpath.py)
 5.00%   8.00%   15.84s    17.48s   parse_parts (pathlib.py)
11.00%  11.00%   11.91s    11.91s   _worker (concurrent/futures/thread.py)
 5.00%  16.00%    9.28s    25.10s   <setcomp> (jedi/inference/references.py)
 5.00%   6.00%    7.46s     9.08s   _walk (os.py)
 6.00%   6.00%    3.56s     4.16s   read (parso/file_io.py)
 5.00%  64.00%    3.45s    78.25s   recurse_find_python_folders_and_files (jedi/inference/references.py)
 3.00%  11.00%    3.34s    20.82s   _parse_args (pathlib.py)
 1.00%  24.00%    2.68s    35.39s   <listcomp> (jedi/file_io.py)
 2.00%   2.00%    2.41s     2.41s   _get_sep (posixpath.py)
 4.00%  19.00%    2.34s    25.89s   __new__ (pathlib.py)
 3.00%  14.00%    2.33s    23.15s   _from_parts (pathlib.py)
 1.00%  20.00%    2.26s    28.15s   __init__ (parso/file_io.py)
 7.00%   7.00%    1.85s     1.85s   name (pathlib.py)
 3.00%   3.00%    1.64s     1.64s   splitroot (pathlib.py)
 1.00%   1.00%    1.39s     1.39s   islink (posixpath.py)
 1.00%   1.00%    1.28s     1.28s   parse (ast.py)
 0.00%   2.00%    1.21s     1.99s   suffix (pathlib.py)
 0.00%  30.00%   0.890s    45.40s   walk (jedi/file_io.py)
 2.00%   3.00%   0.550s     2.99s   detect_encoding (parso/utils.py)
 0.00%   3.00%   0.410s     3.40s   python_bytes_to_unicode (parso/utils.py)
...

but I'm not sure whether that's actually indicative of a specific problem/cause in jedi, and even if it is, what might be triggering it.
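Since most of the time above lands in jedi's project-wide reference search (recurse_find_python_folders_and_files and _check_fs), one thing I'm considering is driving jedi directly against the project under cProfile, outside pylsp entirely, to see if the slowness reproduces. A minimal sketch; the paths and the line/column position are hypothetical placeholders, and the position needs to sit on an actual name in the file:

```python
import cProfile
import pstats

import jedi

# Hypothetical paths -- substitute the real project root and a file
# that tends to trigger the hang.
PROJECT_ROOT = "/path/to/django-project"
SOURCE_FILE = PROJECT_ROOT + "/app/tests/test_blah.py"

project = jedi.Project(PROJECT_ROOT)
with open(SOURCE_FILE) as f:
    script = jedi.Script(code=f.read(), path=SOURCE_FILE, project=project)

profiler = cProfile.Profile()
profiler.enable()
# get_references() walks the project tree, which is where py-spy
# shows the time going (jedi/inference/references.py); the position
# here (line=1, column=8) is a hypothetical value.
script.get_references(line=1, column=8)
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(25)
```

If that reliably shows the same hot spots, it would at least give a standalone reproduction that doesn't involve Emacs or pylsp.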

I've not encountered much trouble with other projects, so I assume it's something data-specific to this codebase, but it's an internal work app I can't share.

It might just be recency bias, but it seems to happen more often when editing pytest test_blah.py-style files, so plausibly something is getting a little carried away in the jedi code that handles magic pytest fixtures?
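If that's the case, timing fixture inference directly might confirm or rule it out. A rough sketch along the same lines, again with hypothetical paths, and with line/column pointing at a fixture parameter such as `def test_blah(my_fixture):`:

```python
import time

import jedi

# Hypothetical paths/positions -- line/column should point at a
# fixture parameter in a test function in the real file.
PROJECT_ROOT = "/path/to/django-project"
TEST_FILE = PROJECT_ROOT + "/app/tests/test_blah.py"

project = jedi.Project(PROJECT_ROOT)
with open(TEST_FILE) as f:
    script = jedi.Script(code=f.read(), path=TEST_FILE, project=project)

start = time.perf_counter()
# infer() on a fixture parameter should go through jedi's pytest
# fixture handling while resolving the definition.
definitions = script.infer(line=10, column=20)
print(f"infer() took {time.perf_counter() - start:.2f}s -> {definitions}")
```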

Any ideas on where to go next?


Replies: 0 comments

Category: Q&A
Labels: None yet
1 participant: @tfh-cri
