test(Makefile): fix postgresql memory usage #16170

Merged
mafredri merged 1 commit into main from mafredri/test-fix-pg-memory-usage on Jan 17, 2025

Conversation

mafredri (Member)

Since setting work_mem will raise the memory consumption of _all_
connections, let's keep it in range of what we're actually giving the
container and how many total connections we allow.
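The budgeting argument above can be sketched numerically. This is a hedged sketch: the figures (1000 connections, 8MB work_mem, a 16GB container) are the ones discussed in this thread, not values read from the Makefile itself.

```shell
# Worst-case planning: every connection may use up to work_mem per sort/hash
# operation, so the product bounds what the container must be able to absorb.
MAX_CONNECTIONS=1000
WORK_MEM_MB=8
CONTAINER_MEM_MB=16384
WORST_CASE_MB=$((MAX_CONNECTIONS * WORK_MEM_MB))
echo "worst case: ${WORST_CASE_MB} MB of ${CONTAINER_MEM_MB} MB container limit"
```

With the pre-fix setting of work_mem on the order of 1GB, the same product would exceed the container limit many times over, which is the failure mode this PR avoids.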

@johnstcn (Member) left a comment


Nice find!

Makefile (outdated):

    test-postgres-docker:
        docker rm -f test-postgres-docker-${POSTGRES_VERSION} || true
        # Make sure to not overallocate work_mem and max_connections as each
        # connection will _always_ allocate this much:
Contributor

Got a reference for this?

Contributor

Also, YIKES
Good catch

Member (Author)

Thanks for double checking, I misremembered this part and it's not in fact pre-allocated.

> Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files.

It's still a good idea to limit this: we don't want 1000 connections each trying to (potentially) use 1GB of memory. Since we're capping the container at 16GB, there's a chance of PostgreSQL being killed unless we leave enough headroom. This is especially volatile because the whole DB is kept in memory (tmpfs), so writes there also count against the container's memory. And since that happens outside of PostgreSQL's own memory handling, even if PG could otherwise keep its usage in check, it seems very risky.

I'll update the comment.

@dannykopping (Contributor) commented on Jan 17, 2025 (edited)

This appears to have been set in #7522 without any explanation.
Do we have any telemetry we can use to tune this correctly?

Do we need 1000 parallel connections in the first place?
How do we know 8MB will be enough?

Let's try to make some data-driven decisions here.

Member

@dannykopping Our current visibility into the CI environment is very limited. I think the only way to determine this right now is to run tests locally and observe behaviour.

Contributor

The effect of this being too small is, as the docs describe, writing data to disk (in this case memory due to tmpfs).

Keep in mind that temp is not always backed by RAM. I think on Windows this is not the case.

Member (Author)

> Keep in mind that temp is not always backed by RAM. I think on Windows this is not the case.

I think that's fine. Even if a few operations were written to disk, it wouldn't slow us down more than using the database in general (this is why fsync and synchronous_commit are off too, although I don't have any data on how much those help on Windows).
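The settings mentioned across this thread (connection cap, work_mem, tmpfs, fsync/synchronous_commit off) could be wired together roughly as below. This is a hedged sketch, not the repo's actual Makefile target: the invocation is built as a string and printed rather than executed, so its shape can be inspected without a Docker daemon, and all values are the figures discussed above.

```shell
# Hedged sketch: the flags discussed in this thread, assembled into a single
# docker invocation. Values (16g limit, 1000 connections, 8MB work_mem) come
# from the discussion above; the exact Makefile wiring may differ.
POSTGRES_VERSION=16
CMD="docker run --rm --name test-postgres-docker-${POSTGRES_VERSION} \
--memory=16g --tmpfs /tmp:size=16g \
postgres:${POSTGRES_VERSION} \
-c max_connections=1000 \
-c work_mem=8MB \
-c fsync=off \
-c synchronous_commit=off"
# Printed rather than executed so the shape is inspectable without Docker:
echo "$CMD"
```

Note the interaction the thread highlights: `--memory` bounds the container, while `--tmpfs /tmp` means database writes consume that same budget, so the `-c` settings must leave headroom for both.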

Member (Author)

Comparing the pg tests of this branch with main, they run at about the same speed. So we're not losing anything in terms of that by lowering the work_mem.

Contributor

OK :shipit: 👍

Member (Author)

I ran `while docker exec --user root test-postgres-docker-16 bash -c "du -chs /tmp 2>/dev/null | grep total"; do :; done` while running our test suite on dogfood.

The database stayed <4GB at worst (three runs: 2.7GB, 3.4GB, 2.9GB). So it can get quite big, but the configuration defined here should be enough.

In CI this size is unlikely as we have less parallelism.

GOMAXPROCS=8 resulted in <700MB DB.

I also ran against main and the DB went up to 3.5GB.

@mafredri force-pushed the mafredri/test-fix-pg-memory-usage branch from 82def17 to 170acca on January 17, 2025 12:26
@mafredri merged commit 7f46e3b into main on Jan 17, 2025
34 checks passed
@mafredri deleted the mafredri/test-fix-pg-memory-usage branch on January 17, 2025 14:07
@github-actions bot locked and limited conversation to collaborators on Jan 17, 2025
Reviewers

@dannykopping approved these changes
@johnstcn approved these changes

Assignees

@mafredri

3 participants: @mafredri, @dannykopping, @johnstcn
