Implement Batch query package #2942
base: master
Conversation
The MIT License (MIT)
Copyright (c) 2013-2020 Ankush Chadda
Is it okay to have my name in the copyright? I am not aware if this is the correct thing or not.
yea absolutely okay. you can get boilerplate MIT license text from github or other places 👍
whoah this is really awesome! I'm excited to help get this over the finish line & release it.
@charmander any thoughts on this? I think it looks p great!
Some initial comments on things we should change. And needs some tests around errors, error handling, and ensuring the client is still usable after an error (you typically need to flush after an error I believe? Could be wrong there...tests will uncover these things though 😄 )
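A minimal sketch of the kind of error-recovery test being described here, written against the plain pg Client API rather than the batch query type from this PR; the mocha-style runner, the table name, and the failing statement are assumptions:

```ts
import assert from 'assert'
import { Client } from 'pg'

// Sketch only: checks that a client remains usable after a query error.
// Connection settings are taken from the usual PG* environment variables.
describe('error recovery', () => {
  it('can run another query after a failed one', async () => {
    const client = new Client()
    await client.connect()
    try {
      // This statement fails (relation does not exist) and should reject.
      await assert.rejects(client.query('SELECT * FROM table_that_does_not_exist'))
      // The same client should still be able to execute further queries.
      const result = await client.query('SELECT 1 AS one')
      assert.strictEqual(result.rows[0].one, 1)
    } finally {
      await client.end()
    }
  })
})
```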
} catch (err) {
  process.nextTick(() => {
    throw err
  })
}
why catch & throw on next tick? Doesn't this create an uncatchable error which will crash a node process?
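For readers following along, the concern is that a throw inside a process.nextTick callback runs on a later tick, outside any surrounding try/catch, so it surfaces as an uncaughtException and brings the process down unless a handler is registered. A standalone illustration (not code from this PR):

```ts
try {
  process.nextTick(() => {
    // Runs on a later tick, after the try/catch has already finished,
    // so the catch block below can never see this error.
    throw new Error('boom')
  })
} catch (err) {
  // Never reached.
  console.log('caught', err)
}

// Without a handler like this, the error becomes an uncaughtException
// and terminates the Node process.
process.on('uncaughtException', (err) => {
  console.error('uncaught:', err.message)
})
```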
      assert.strictEqual(response.rowCount, 1)
    }
  })
})
would be good to see some tests for errors / error handling / error recovery
added test-error-handling.ts
If it’s going to be a separate package instead of an API change to pg’s default query type – why in this repo?
@brianc @charmander please have a look at the PR and provide feedback
iamkhush commented May 2, 2023 (edited)
I had to make changes in types/pg to get the build running again.
abenhamdine commented May 16, 2023 (edited)
Wow, so happy to see that!
abenhamdine May 16, 2023 (edited)
IMHO, it would be great to add explanations about:

- the benefits of batched queries vs non-batched (see the sketch below)
- the difference with pipelining (which is not implemented yet)
- perhaps also some hints on the underlying mechanism (multiple BIND etc.) for future contributors

I also wonder if any advice about the number of parameters is needed? Is there a hard limit?
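To make the first bullet concrete, here is a rough sketch of the non-batched baseline with the standard pg Client API; the comments describe what batching changes at the protocol level (one Parse, then a Bind/Execute pair per row, then a single Sync). The table and column names are illustrative assumptions, and the batch API itself is deliberately not shown since its exact shape is defined by this PR.

```ts
import { Client } from 'pg'

// Non-batched baseline: each row is its own parameterized query, i.e. its own
// Parse/Bind/Execute/Sync exchange and its own network round trip.
async function insertOneByOne(client: Client, rows: Array<[string, number]>) {
  for (const [name, value] of rows) {
    await client.query('INSERT INTO items (name, value) VALUES ($1, $2)', [name, value])
  }
}

// A batched version sends the statement text once (one Parse) and then one
// Bind/Execute pair per row followed by a single Sync, which removes the
// per-row parsing and the per-row round trips.
```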
I think this important question remains unanswered. In other words: are the maintainers of this repo willing to take responsibility for this new package?
@abenhamdine I have followed what @brianc advised me here.
Moving it to draft to add TypeScript definitions.
hillac commented Jul 25, 2024
Is this still alive / any help needed to get this done? I would like to use this myself.
Hey @hillac, happy to continue the work if the maintainers would confirm what was asked above.
hillac commented Mar 13, 2025 (edited)
Given this is a fully separate package, I can just publish and use this on its own without having to maintain my own fork of node-postgres, right? I might do that.

Edit: I tried it out on some micro-benchmarks. For updates I saw no real advantage compared to building a single big parameterized query, and for more than 50 rows the big parameterized query is significantly faster. For inserts I'm seeing worse results compared to a big parameterized query.
@hillac could you show the code where it didn't work for you? I'm curious, since the bench.ts file showed triple the QPS I had seen otherwise.
hillac commented Mar 24, 2025 (edited)
https://github.com/hillac/pg-batch-bench

I should change it to do the inserts and updates multiple times to get some percentiles and means, but you can pretty clearly see a trend that just building a large parameterized query is faster. The batch is much nicer to use, though. It might be a bit faster for single inserts, but that could just be a JIT warmup thing; once again, it needs better statistics. (Edit: added some more iterations of inserting 1, 2 and 3 rows and batch is definitely slower, so the earlier result was likely just a JIT warmup issue.)

I think your benchmark compared multiple round-trip single inserts/updates, which is slower. This benchmark does a single query with all the rows in it.
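For reference, a minimal sketch of the "single big parameterized query" approach that the benchmark compares against (the table and column names are assumptions):

```ts
import { Client } from 'pg'

// Builds one multi-row INSERT of the form
//   INSERT INTO items (name, value) VALUES ($1, $2), ($3, $4), ...
// so every row travels in a single statement and a single round trip.
async function insertAllAtOnce(client: Client, rows: Array<[string, number]>) {
  const placeholders = rows
    .map((_, i) => `($${i * 2 + 1}, $${i * 2 + 2})`)
    .join(', ')
  const values = rows.flat()
  await client.query(`INSERT INTO items (name, value) VALUES ${placeholders}`, values)
}
```

One practical caveat with this approach: the wire protocol limits a single Bind message to 65535 parameters, so very large row sets have to be chunked. That is also relevant to the earlier question about a hard limit on the number of parameters.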
Attempts to fix #2257