
chore: add global caching to rbac #7439


Merged
ammario merged 11 commits into main from rbac-global-cache on May 8, 2023

Conversation

@ammario (Member) commented May 5, 2023 (edited)

I'm assuming all calls to Authorize are pure in this implementation.
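The purity assumption is what makes a global cache sound: for a fixed (subject, action, object) input, Authorize always returns the same result, so it can be memoized. A minimal sketch of that idea, with illustrative names (`Authorizer` and `cachedAuthorizer` are not the PR's actual types):

```go
package main

import "sync"

// Authorizer is an illustrative stand-in for the RBAC interface;
// the real signature in the PR differs.
type Authorizer interface {
	Authorize(subject, action, object string) error
}

// cachedAuthorizer memoizes results globally. This is only safe
// because Authorize is assumed to be pure: same inputs, same answer.
type cachedAuthorizer struct {
	inner Authorizer
	mu    sync.Mutex
	cache map[string]error
}

func (c *cachedAuthorizer) Authorize(subject, action, object string) error {
	// NUL-separate the fields so distinct inputs cannot collide.
	key := subject + "\x00" + action + "\x00" + object
	c.mu.Lock()
	if err, ok := c.cache[key]; ok {
		c.mu.Unlock()
		return err
	}
	c.mu.Unlock()
	err := c.inner.Authorize(subject, action, object)
	c.mu.Lock()
	c.cache[key] = err
	c.mu.Unlock()
	return err
}
```

Note the cache stores the error value itself, so denials are memoized just like approvals.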

@ammario (Member, Author)

Thoughts on this approach, @Emyrk?

@ammario linked an issue May 5, 2023 that may be closed by this pull request
@Emyrk (Member) left a comment (edited)

I'm not opposed to this. I'm curious what kind of savings this would yield. We should not use the full JSON payload as a key, though.


The per-request caching was intentional, as a single request often performs the same RBAC check many times. An example is fetching a workspace: you also have to fetch the workspace build and the provisioner job, all of which do the same RBAC check on the workspace.

Going global means we might lose these per-request savings if more than 4096 entries get cached within the lifetime of a single request. Probably unlikely? So it's probably ok?

Just something to think about, since the per-request cache guarantees duplicate hits. It might be overkill to have both a per-request cache and a global cache.

Comment on lines 638 to 640
_ = enc.Encode(subject)
_ = enc.Encode(action)
_ = enc.Encode(object)
@Emyrk (Member)

This is unfortunate. The JSON payload is quite large, and using it as a key would be even more verbose.

It would be better to "hash" the fields somehow to reduce this.

I intentionally made the file astvalue.go to avoid json.Marshal-ing this data when going to the Rego evaluator.


We can use the role names for the hash and ignore the permissions. Whatever "hash" function we create will be faster than a json.Marshal, I bet.
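The role-name hashing idea could be sketched like this: hash the role and group names with FNV, length-prefixing each field, instead of encoding the full JSON payload. Since role names are stable identifiers, the permissions behind them need not be hashed, and the key still changes whenever the user's roles or groups change. `Subject` and `cacheKey` here are illustrative names, not the code that landed in the PR:

```go
package main

import (
	"encoding/binary"
	"hash/fnv"
)

// Subject is an illustrative stand-in; the real RBAC subject carries
// roles and groups whose names determine the cache key.
type Subject struct {
	ID     string
	Roles  []string
	Groups []string
}

// cacheKey hashes role/group names rather than the full JSON payload.
func cacheKey(subj Subject, action, object string) uint64 {
	h := fnv.New64a()
	write := func(s string) {
		// Length-prefix each field so ("ab","c") and ("a","bc")
		// cannot produce the same byte stream.
		var n [8]byte
		binary.LittleEndian.PutUint64(n[:], uint64(len(s)))
		h.Write(n[:])
		h.Write([]byte(s))
	}
	write(subj.ID)
	for _, r := range subj.Roles {
		write(r)
	}
	for _, g := range subj.Groups {
		write(g)
	}
	write(action)
	write(object)
	return h.Sum64()
}
```

An FNV hash over a handful of short strings avoids both the reflection cost of json.Marshal and the storage cost of a verbose key.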

@Emyrk (Member)

The main thing is to make sure the unique hash changes if the user changes roles/groups. Let me know if you want me to write this; I can get you a much smaller key.

@ammario (Member, Author)

@Emyrk — I have a benchmark that just calls /users/me, and the cache takes us from 3.3 ms/req to 0.7 ms/req, so quite significant. I agree that hashing the key is a good idea, and since that would reduce storage overhead, I think we can just make the cache really big, like 64k entries.
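Even at 64k entries the cache needs a hard bound, since unique (subject, action, object) keys accumulate indefinitely on a busy deployment. A minimal bounded-cache sketch under assumed names (a production version would use a real LRU rather than arbitrary eviction):

```go
package main

// boundedCache caps memory by evicting an arbitrary entry once the
// configured limit (e.g. 64k entries) is reached. The bound, not the
// eviction order, is what prevents unbounded growth.
type boundedCache struct {
	max  int
	data map[uint64]error
}

func newBoundedCache(max int) *boundedCache {
	return &boundedCache{max: max, data: make(map[uint64]error, max)}
}

func (c *boundedCache) Set(key uint64, v error) {
	if len(c.data) >= c.max {
		// Evict one arbitrary entry; Go map iteration order is random,
		// so this approximates random eviction.
		for k := range c.data {
			delete(c.data, k)
			break
		}
	}
	c.data[key] = v
}

func (c *boundedCache) Get(key uint64) (error, bool) {
	v, ok := c.data[key]
	return v, ok
}
```

With hashed uint64 keys, 64k entries cost on the order of a megabyte, which is why a large bound becomes affordable once the verbose JSON keys are gone.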

@Emyrk (Member)

From a correctness POV, as long as our unique key contains all the information required to match a unique RBAC check, then it will be correct in all future calls too.

So this can work for sure 👍.

@ammario (Member, Author)

@Emyrk very happy to hear that :)

@Emyrk (Member)

@ammario If we go to 64k, then I have no issues with dropping the request cache. Can I write a hash function and commit it to this branch?

@ammario (Member, Author)

@Emyrk go for it


Emyrk and others added 5 commits May 5, 2023 14:19
This is useful for caching purposes. Order matters for the hash key for slices, so a slice with the same elements in a different order is a different hash key. Although they are functionally equivalent, our slices have deterministic orders in the database, so our subjects/objects will match. There is no need to spend the time or memory to sort these lists.
@ammario marked this pull request as ready for review May 5, 2023 19:42
@ammario enabled auto-merge (squash) May 5, 2023 20:34
@ammario requested a review from Emyrk May 5, 2023 20:34
@ammario mentioned this pull request May 7, 2023
@Emyrk (Member) left a comment

👍

@ammario merged commit 8899dd8 into main May 8, 2023
@ammario deleted the rbac-global-cache branch May 8, 2023 13:59
@github-actions bot locked and limited conversation to collaborators May 8, 2023

Reviewers

@Emyrk approved these changes

Assignees

@ammario


Development

Successfully merging this pull request may close these issues.

OPA is CPU inefficient

2 participants: @ammario, @Emyrk
