Add incremental git log limit for commits#4909


Draft

KarlK90 wants to merge 2 commits into jesseduffield:master from KarlK90:fix/commit-chunk-fetches

Conversation

KarlK90

PR Description

The existing git log logic fetched the first 300 commits of a repo and displayed them in the local and sub-commit views. Once a user selected a commit beyond a threshold of 200 commits, the whole repository was loaded. This is problematic with large repos, e.g. the Linux kernel with currently ~138k commits, as lazygit slows down substantially with such a large number of commits in memory.

This commit replaces the current "all or only the first 300 commits" logic with an incremental fetching approach:

  1. The first 400 commits of the repo are loaded by default.
  2. If the user selects a commit beyond a threshold (current limit - 100), the git log limit is increased by 400 if there are more commits available. Whether more commits are available is currently checked by comparing the previous log limit with the real commit count in the model; if the commit count is less than the limit, it is assumed that we reached the end of the commit log. Ideally it would be better to call `git rev-list --count xyz` in the right places and compare with that result, but this requires more changes.

Adding a "paginated implementation" by utilizing `git log --skip=x --max-count=y` and appending commits to the model instead of replacing the whole collection would be nice, but this requires deeper changes to keep everything consistent.

Please check if the PR fulfills these requirements

  • Cheatsheets are up-to-date (run `go generate ./...`)
  • Code has been formatted (see here)
  • Tests have been added/updated (see here for the integration test guide)
  • Text is internationalised (see here)
  • If a new UserConfig entry was added, make sure it can be hot-reloaded (see here)
  • Docs have been updated if necessary
  • You've read through your own file changes for silly mistakes etc

@stefanhaller
Collaborator

I like this change, although I do have to say that it doesn't address my own problem with large repos. There are two situations in which we turn off the 300 limit: scrolling to the end, or searching. For me, it's almost always searching (in which case we should remove the limit entirely, rather than add another batch of commits, and your branch does that, which is good), so I don't gain anything from the incremental limit.

But I guess the change is still good to have for those who do scroll to the end of the list sometimes. Which makes me wonder: if people want to force loading all commits for some reason, there is no longer an obvious way to do that (previously they could press `>` to do that), short of pressing `/`, `esc`. I wonder if this could be a problem (maybe not).

I rebased the branch so that it includes the recent change to support running tests with go 1.25, and force-pushed (hope you don't mind). I also pushed a fixup commit that addresses a crash: `ArgIf` doesn't do short-circuiting, i.e. it evaluates both arguments before it calls the function, so this always crashes when `opts.LogLimit` is nil. This makes me wonder if it would be better to model the limit as a plain int, with -1 meaning there's no limit. Yeah I know, magic numbers, but it's not our fault that go doesn't have a proper optional type (like std::optional in C++), so this is simple and pragmatic.

Last question: what is the reason to change our previous limit of 300 to 400? I worry that this might be a (probably very slight, but still) performance regression at startup, and fast startup times are really crucial.

@stefanhaller
Collaborator

> Whether more commits are available is currently checked by comparing the previous log limit with the real commit count in the model; if the commit count is less than the limit, it is assumed that we reached the end of the commit log.

I forgot to respond to this. If I understand this right, the only problem with this is that when you have a repo whose number of commits happens to be an exact multiple of 400, then when you get to the end of the list there will be one more, unnecessary refresh; is that right? This wouldn't bother me too much, given how unlikely that is, and I don't think it's worth calling `git rev-list --count` to improve this.

KarlK90 and others added 2 commits on September 22, 2025 09:49
@KarlK90 force-pushed the fix/commit-chunk-fetches branch from 99a367b to 43f31ea on September 22, 2025 07:49
Reviewers
No reviews
Assignees
No one assigned
Labels
None yet
Projects
None yet
Milestone
No milestone

2 participants
@KarlK90 @stefanhaller
