chore: remove excess join in GetQuotaConsumedForUser query #15338


Merged: Emyrk merged 5 commits into main from stevenmasley/get_quota on Nov 5, 2024

Conversation

@Emyrk (Member) commented Nov 1, 2024 (edited)

Follow-up to #15261.

The filter is applied in the original workspace query, so we do not need to join workspaces twice. Also switched from created_at to build_number for determining the last build.

The plans below were run against the dev.coder.com database. Interestingly, the seq scan on workspaces goes away; it was happening on the second join. Because of this, our SIReadLock on the workspaces table becomes finer-grained: it goes from workspaces/relation/SIReadLock to workspaces/tuple/SIReadLock: page=0 tuple=1.
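To make the latest-build selection concrete, here is a minimal sketch in SQLite using a hypothetical miniature schema (not the real coder tables or the actual GetQuotaConsumedForUser query): the last build per workspace is picked by MAX(build_number), which is monotonic per workspace, rather than created_at, which can collide.

```python
import sqlite3

# Hypothetical miniature schema; the real coder tables have many more columns.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE workspaces (id INTEGER PRIMARY KEY, deleted INTEGER);
CREATE TABLE workspace_builds (
    workspace_id INTEGER,
    build_number INTEGER,
    created_at   TEXT,
    daily_cost   INTEGER
);
INSERT INTO workspaces VALUES (1, 0);
-- Two builds share a created_at timestamp; build_number still orders them.
INSERT INTO workspace_builds VALUES
    (1, 1, '2024-11-01T10:00:00', 5),
    (1, 2, '2024-11-01T10:00:00', 8);
""")

# Latest build per workspace via MAX(build_number), with workspaces
# joined exactly once and the deleted filter applied in that one join.
rows = cur.execute("""
    SELECT wb.workspace_id, wb.daily_cost
    FROM workspace_builds wb
    JOIN (
        SELECT workspace_id, MAX(build_number) AS max_build
        FROM workspace_builds
        GROUP BY workspace_id
    ) latest ON wb.workspace_id = latest.workspace_id
            AND wb.build_number = latest.max_build
    JOIN workspaces w ON w.id = wb.workspace_id AND NOT w.deleted
""").fetchall()
print(rows)  # [(1, 8)] -- build 2 wins despite the timestamp tie
```

Ordering by created_at would have to break the tie arbitrarily here; build_number has a unique index per workspace (workspace_builds_workspace_id_build_number_key above), so it is unambiguous.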

Before

Plan

SIReadLock Locks:

7866-<nil> [granted] workspace_builds/tuple/SIReadLock: page=0 tuple=1
7866-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
7866-<nil> [granted] workspaces/relation/SIReadLock:
7866-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1
Raw locks
7867-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
7866-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
7867-<nil> [granted] pg_locks/relation/AccessShareLock:
7866-<nil> [granted] workspace_builds/relation/AccessShareLock:
7866-<nil> [granted] workspace_builds/tuple/SIReadLock: page=0 tuple=1
7866-<nil> [granted] workspace_builds_job_id_key/relation/AccessShareLock:
7866-<nil> [granted] workspace_builds_pkey/relation/AccessShareLock:
7866-<nil> [granted] workspace_builds_workspace_id_build_number_key/relation/AccessShareLock:
7866-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
7866-<nil> [granted] workspaces/relation/AccessShareLock:
7866-<nil> [granted] workspaces/relation/SIReadLock:
7866-<nil> [granted] workspaces_owner_id_lower_idx/relation/AccessShareLock:
7866-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1
7866-<nil> [granted] workspaces_pkey/relation/AccessShareLock:

After

Plan

SIReadLock Locks:

8052-<nil> [granted] workspace_builds/tuple/SIReadLock: page=0 tuple=1
8052-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
8052-<nil> [granted] workspaces/tuple/SIReadLock: page=0 tuple=1
8052-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1
Raw locks
8053-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
8052-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
8053-<nil> [granted] pg_locks/relation/AccessShareLock:
8052-<nil> [granted] workspace_builds/tuple/SIReadLock: page=0 tuple=1
8052-<nil> [granted] workspace_builds/relation/AccessShareLock:
8052-<nil> [granted] workspace_builds_job_id_key/relation/AccessShareLock:
8052-<nil> [granted] workspace_builds_pkey/relation/AccessShareLock:
8052-<nil> [granted] workspace_builds_workspace_id_build_number_key/relation/AccessShareLock:
8052-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
8052-<nil> [granted] workspaces/relation/AccessShareLock:
8052-<nil> [granted] workspaces/tuple/SIReadLock: page=0 tuple=1
8052-<nil> [granted] workspaces_owner_id_lower_idx/relation/AccessShareLock:
8052-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1
8052-<nil> [granted] workspaces_pkey/relation/AccessShareLock:

Comment on lines 28 to +33

    INNER JOIN
        workspaces ON wb.workspace_id = workspaces.id
    WHERE
        -- Only return workspaces that match the user + organization.
        -- Quotas are calculated per user per organization.
        NOT workspaces.deleted AND
@Emyrk (Member, Author):

This workspaces.deleted filter being placed in the join subquery prevents the workspaces seq scan, from what I can tell.

The alternative is to place it in the final query as NOT workspaces.deleted.

This is equivalent though, because we are doing an INNER JOIN on workspace_builds and workspaces. If the workspace is deleted, its builds are completely omitted from latest_builds.
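The equivalence claim can be checked mechanically. A toy SQLite example (hypothetical two-column schema, not the real query) shows that for an INNER JOIN, filtering deleted workspaces in the ON clause and filtering in the outer WHERE produce identical results:

```python
import sqlite3

# Hypothetical minimal schema for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE workspaces (id INTEGER PRIMARY KEY, deleted INTEGER);
CREATE TABLE workspace_builds (workspace_id INTEGER, build_number INTEGER);
INSERT INTO workspaces VALUES (1, 0), (2, 1);  -- workspace 2 is deleted
INSERT INTO workspace_builds VALUES (1, 1), (2, 1);
""")

# Filter placed inside the join condition...
inside = cur.execute("""
    SELECT wb.workspace_id
    FROM workspace_builds wb
    JOIN workspaces w ON w.id = wb.workspace_id AND NOT w.deleted
""").fetchall()

# ...versus in the final WHERE clause.
outside = cur.execute("""
    SELECT wb.workspace_id
    FROM workspace_builds wb
    JOIN workspaces w ON w.id = wb.workspace_id
    WHERE NOT w.deleted
""").fetchall()

print(inside == outside, inside)  # True [(1,)]
```

For an INNER JOIN the two placements are semantically interchangeable (this would not hold for an outer join); they differ only in what the planner is nudged to do, which is the point of the comment above.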

@Emyrk (Member, Author) commented Nov 1, 2024 (edited)

@spikecurtis You did ask in the other thread whether a Bitmap Heap Scan locks the entire workspaces table. Experimentally, it only locks the rows it fetches. I added a second workspace to my test and got these SIReadLocks; you can see both tuples.

8167-<nil> [granted] workspace_builds/tuple/SIReadLock: page=0 tuple=1
8167-<nil> [granted] workspace_builds/tuple/SIReadLock: page=0 tuple=2
8167-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
8167-<nil> [granted] workspaces/tuple/SIReadLock: page=0 tuple=1
8167-<nil> [granted] workspaces/tuple/SIReadLock: page=0 tuple=2
8167-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1

I do not know at what scale these fine-grained locks become more coarse.

I imagine adding indices would just take a page lock on the index? Not really sure, but I am going to hold off on adding more indices at this time.

@spikecurtis (Contributor) left a comment:

LGTM!

@Emyrk merged commit 2d00b50 into main on Nov 5, 2024. 29 checks passed.
@Emyrk deleted the stevenmasley/get_quota branch on November 5, 2024 at 14:12.

Reviewers

@spikecurtis approved these changes

Assignees

@Emyrk

2 participants: @Emyrk, @spikecurtis
