test(Makefile): fix postgresql memory usage #16170
Conversation
Nice find!
Makefile (outdated)
test-postgres-docker:
	docker rm -f test-postgres-docker-${POSTGRES_VERSION} || true
	# Make sure to not overallocate work_mem and max_connections as each
	# connection will _always_ allocate this much:
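For illustration, a hedged sketch of how such a throwaway test container is typically started with these settings; the image tag, memory limit, and parameter values below are assumptions for the example, not necessarily what this Makefile uses:

  docker run --name test-postgres-docker-16 --detach --rm \
    --memory=16g \
    --tmpfs /var/lib/postgresql/data \
    --env POSTGRES_PASSWORD=postgres \
    --publish 5432:5432 \
    postgres:16 \
    -c max_connections=1000 \
    -c work_mem=8MB \
    -c fsync=off \
    -c synchronous_commit=off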
Got a reference for this?
Also, YIKES
Good catch
Thanks for double-checking; I misremembered this part and it's not, in fact, pre-allocated. From the PostgreSQL docs on work_mem:
Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files.
It's still a good idea to limit this, because we don't want 1000 connections each trying to (potentially) use 1GB of memory. Since we're limiting the container to 16GB, PostgreSQL could be killed unless we leave enough headroom. This is especially volatile since the whole DB is kept in memory (tmpfs), and writes there could kill the container. And because that happens outside of PostgreSQL's memory handling, even if PG could otherwise keep its own usage in check, it seems very risky.
I'll update the comment.
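To make the envelope concrete (illustrative numbers, assuming one work_mem-sized sort buffer per connection; a complex query can use several such buffers at once):

  1000 connections x 1GB work_mem -> up to ~1000GB of sort buffers (hopeless on a 16GB container)
  1000 connections x 8MB work_mem -> up to ~8GB, leaving headroom for the tmpfs-backed data directory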
dannykopping Jan 17, 2025 • edited
This appears to have been set in #7522 without any explanation.
Do we have any telemetry we can use to tune this correctly?
Do we need 1000 parallel connections in the first place?
How do we know 8MB will be enough?
Let's try to make some data-driven decisions here.
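One hedged way to get data without CI telemetry: pg_stat_database tracks how often queries spilled to temp files, which is the main symptom of work_mem being too small. The container and user names below are assumptions for the example:

  docker exec test-postgres-docker-16 psql -U postgres -c "SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes FROM pg_stat_database WHERE datname IS NOT NULL;"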
@dannykopping Our current visibility into the CI environment is very limited. I think the only way to determine this right now is to run tests locally and observe behaviour.
The effect of this being too small is, as the docs describe, writing data to disk (in this case memory due to tmpfs).
Keep in mind that temp is not always backed by RAM. I think on Windows this is not the case.
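As a hedged illustration of what "too small" looks like in practice: EXPLAIN ANALYZE reports whether a sort fit in work_mem ("Sort Method: quicksort Memory: ...") or spilled to temp files ("Sort Method: external merge Disk: ..."). The container name is assumed, and the deliberately tiny setting is only there to force a spill:

  docker exec test-postgres-docker-16 psql -U postgres -c "SET work_mem = '64kB'; EXPLAIN ANALYZE SELECT g FROM generate_series(1, 100000) AS g ORDER BY g DESC;"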
Keep in mind that temp is not always backed by RAM. I think on Windows this is not the case.
I think that's fine. Even if a few operations would be written to disk, it wouldn't slow us down more than using the database in general (this is why fsync and synchronous_commit are off too, although I don't have any data on how much those help in Windows).
Comparing the pg tests of this branch with main, they run at about the same speed. So we're not losing anything on that front by lowering the work_mem.
OK 👍
I ran the following loop while running our test suite on dogfood:
  while docker exec --user root test-postgres-docker-16 bash -c "du -chs /tmp 2>/dev/null | grep total"; do :; done
The database stayed under 4GB at worst (three runs: 2.7GB, 3.4GB, 2.9GB). So it can get quite big, but that should fit within the configuration defined here.
In CI this size is unlikely as we have less parallelism.
GOMAXPROCS=8 resulted in <700MB DB.
I also ran against main and the DB went up to 3.5GB.
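A complementary hedged check is to watch the container's actual memory footprint while the suite runs, since tmpfs pages count against the container's memory limit (container name assumed as above):

  while docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}' test-postgres-docker-16; do sleep 5; done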
Since setting work mem will raise the memory consumption of _all_ connections, let's keep it in range of what we're actually giving the container and how many total connections we allow.