# Postgres.js

The Fastest full-featured PostgreSQL client for Node.js, Deno, Bun and Cloudflare



## Getting started

*Good UX with Postgres.js*

### Installation

```bash
$ npm install postgres
```

### Usage

Create your `sql` database instance:

```js
// db.js
import postgres from 'postgres'

const sql = postgres({ /* options */ }) // will use psql environment variables

export default sql
```

Simply import for use elsewhere:

```js
// users.js
import sql from './db.js'

async function getUsersOver(age) {
  const users = await sql`
    select
      name,
      age
    from users
    where age > ${ age }
  `
  // users = Result [{ name: "Walter", age: 80 }, { name: "Murray", age: 68 }, ...]
  return users
}

async function insertUser({ name, age }) {
  const users = await sql`
    insert into users
      (name, age)
    values
      (${ name }, ${ age })
    returning name, age
  `
  // users = Result [{ name: "Murray", age: 68 }]
  return users
}
```

### ESM dynamic imports

The library can also be used with ESM dynamic imports, as shown here:

```js
const { default: postgres } = await import('postgres')
```


## Connection

### postgres([url], [options])

You can use either a `postgres://` url connection string or the options to define your database connection properties. Options in the object will override any present in the url. Options will fall back to the same environment variables as psql.

```js
const sql = postgres('postgres://username:password@host:port/database', {
  host     : '',   // Postgres ip address[s] or domain name[s]
  port     : 5432, // Postgres server port[s]
  database : '',   // Name of database to connect to
  username : '',   // Username of database user
  password : '',   // Password of database user
  ...and more
})
```

More options can be found in the Connection details section.

## Queries

### await sql`...` -> Result[]

Postgres.js utilizes Tagged template functions to process query parameters before interpolation. Using tagged template literals benefits developers by:

1. Enforcing safe query generation
2. Giving the sql`` function powerful utility and query building features.

Any generic value will be serialized according to an inferred type, and replaced by a PostgreSQL protocol placeholder `$1, $2, ...`. The parameters are then sent separately to the database which handles escaping & casting.

All queries will return a `Result` array, with objects mapping column names to each row.

```js
const xs = await sql`
  insert into users (
    name, age
  ) values (
    'Murray', 68
  )
  returning *
`

// xs = [{ user_id: 1, name: 'Murray', age: 68 }]
```

Please note that queries are first executed when awaited – or instantly by using `.execute()`.

### Query parameters

Parameters are automatically extracted and handled by the database so that SQL injection isn't possible. No special handling is necessary, simply use tagged template literals as usual.

```js
const name = 'Mur', age = 60

const users = await sql`
  select
    name,
    age
  from users
  where
    name like ${ name + '%' }
    and age > ${ age }
`

// users = [{ name: 'Murray', age: 68 }]
```

Be careful with quotation marks here. Because Postgres infers column types, you do not need to wrap your interpolated parameters in quotes like `'${name}'`. This will cause an error because the tagged template replaces `${name}` with `$1` in the query string, leaving Postgres to do the interpolation. If you wrap that in a string, Postgres will see `'$1'` and interpret it as a string as opposed to a parameter.
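For example, a minimal sketch (the `users` table is assumed):

```js
const name = 'Murray'

// Works: the parameter is sent separately, no quotes needed
await sql`select * from users where name = ${ name }`

// Breaks: Postgres receives the literal string '$1' instead of a parameter
await sql`select * from users where name = '${ name }'`
```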

### Dynamic column selection

```js
const columns = ['name', 'age']

await sql`
  select
    ${ sql(columns) }
  from users
`

// Which results in:
select "name", "age" from users
```

### Dynamic inserts

```js
const user = {
  name: 'Murray',
  age: 68
}

await sql`
  insert into users ${
    sql(user, 'name', 'age')
  }
`

// Which results in:
insert into users ("name", "age") values ($1, $2)

// The columns can also be given with an array
const columns = ['name', 'age']

await sql`
  insert into users ${
    sql(user, columns)
  }
`
```

You can omit column names and simply execute `sql(user)` to get all the fields from the object as columns. Be careful not to allow users to supply columns that you do not want to be inserted.

### Multiple inserts in one query

If you need to insert multiple rows at the same time it's also much faster to do it with a single `insert`. Simply pass an array of objects to `sql()`.

```js
const users = [
  { name: 'Murray', age: 68, garbage: 'ignore' },
  { name: 'Walter', age: 80 }
]

await sql`insert into users ${ sql(users, 'name', 'age') }`

// Is translated to:
insert into users ("name", "age") values ($1, $2), ($3, $4)

// Here you can also omit column names which will use object keys as columns
await sql`insert into users ${ sql(users) }`

// Which results in:
insert into users ("name", "age") values ($1, $2), ($3, $4)
```

### Dynamic columns in updates

This is also useful for update queries:

```js
const user = {
  id: 1,
  name: 'Murray',
  age: 68
}

await sql`
  update users set ${
    sql(user, 'name', 'age')
  }
  where user_id = ${ user.id }
`

// Which results in:
update users set "name" = $1, "age" = $2 where user_id = $3

// The columns can also be given with an array
const columns = ['name', 'age']

await sql`
  update users set ${
    sql(user, columns)
  }
  where user_id = ${ user.id }
`
```

### Multiple updates in one query

To create multiple updates in a single query, it is necessary to use arrays instead of objects to ensure that the order of the items corresponds with the column names.

```js
const users = [
  [1, 'John', 34],
  [2, 'Jane', 27],
]

await sql`
  update users set name = update_data.name, age = (update_data.age)::int
  from (values ${ sql(users) }) as update_data (id, name, age)
  where users.id = (update_data.id)::int
  returning users.id, users.name, users.age
`
```

### Dynamic values and `where in`

Value lists can also be created dynamically, making `where in` queries simple too.

```js
const users = await sql`
  select
    *
  from users
  where age in ${ sql([68, 75, 23]) }
`
```

or

```js
const [{ a, b, c }] = await sql`
  select
    *
  from (values ${ sql(['a', 'b', 'c']) }) as x(a, b, c)
`
```

## Building queries

Postgres.js features a simple dynamic query builder by conditionally appending/omitting query fragments. It works by nesting sql`` fragments within other sql`` calls or fragments. This allows you to build dynamic queries safely without risking sql injections through usual string concatenation.

### Partial queries

```js
const olderThan = x => sql`and age > ${ x }`

const filterAge = true

await sql`
  select
   *
  from users
  where name is not null ${
    filterAge
      ? olderThan(50)
      : sql``
  }
`

// Which results in:
select * from users where name is not null

// Or
select * from users where name is not null and age > 50
```

### Dynamic filters

```js
await sql`
  select
    *
  from users ${
    id
      ? sql`where user_id = ${ id }`
      : sql``
  }
`

// Which results in:
select * from users

// Or
select * from users where user_id = $1
```

### Dynamic ordering

```js
const id = 1
const order = {
  username: 'asc',
  created_at: 'desc'
}

await sql`
  select
    *
  from ticket
  where account = ${ id }
  order by ${
    Object.entries(order).flatMap(([column, order], i) => [
      i ? sql`,` : sql``,
      sql`${ sql(column) } ${ order === 'desc' ? sql`desc` : sql`asc` }`
    ])
  }
`
```

### SQL functions

Using keywords or calling functions dynamically is also possible by using sql`` fragments.

```js
const date = null

await sql`
  update users set updated_at = ${ date || sql`now()` }
`

// Which results in:
update users set updated_at = now()
```

### Table names

Dynamic identifiers like table names and column names are also supported like so:

```js
const table = 'users',
      column = 'id'

await sql`
  select ${ sql(column) } from ${ sql(table) }
`

// Which results in:
select "id" from "users"
```

### Quick primer on interpolation

Here's a quick overview of all the ways to do interpolation in a query template string:

| Interpolation syntax | Usage | Example |
| ------------- | ------------- | ------------- |
| ${ sql`` } | for keywords or sql fragments | await sql`SELECT * FROM users ${ sql`order by age desc` }` |
| ${ sql(string) } | for identifiers | await sql`SELECT * FROM ${ sql('table_name') }` |
| ${ sql([] or {}, ...) } | for helpers | await sql`INSERT INTO users ${ sql({ name: 'Peter' }) }` |
| ${ 'somevalue' } | for values | await sql`SELECT * FROM users WHERE age = ${ 42 }` |

## Advanced query methods

### Cursors

#### await sql``.cursor([rows = 1], [fn])

Use cursors if you need to throttle the number of rows being returned from a query. You can use a cursor either as an async iterable or with a callback function. For a callback function, new results won't be requested until the promise / async callback function has resolved.

##### callback function

```js
await sql`
  select
    *
  from generate_series(1,4) as x
`.cursor(async ([row]) => {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
})
```

##### for await...of

```js
const cursor = sql`select * from generate_series(1,4) as x`.cursor()

for await (const [row] of cursor) {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
}
```

A single row will be returned by default, but you can also request batches by setting the number of rows desired in each batch as the first argument to `.cursor`:

```js
await sql`
  select
    *
  from generate_series(1,1000) as x
`.cursor(10, async rows => {
  // rows = [{ x: 1 }, { x: 2 }, ... ]
  await Promise.all(rows.map(row =>
    http.request('https://example.com/wat', { row })
  ))
})
```

If an error is thrown inside the callback function no more rows will be requested and the outer promise will reject with the thrown error.

You can close the cursor early either by calling `break` in the `for await...of` loop, or by returning the token `sql.CLOSE` from the callback function.

```js
await sql`
  select * from generate_series(1,1000) as x
`.cursor(row => {
  return Math.random() > 0.9 && sql.CLOSE // or sql.END
})
```

### Instant iteration

#### await sql``.forEach(fn)

If you want to handle rows returned by a query one by one, you can use `.forEach`, which returns a promise that resolves once there are no more rows.

```js
await sql`
  select created_at, name from events
`.forEach(row => {
  // row = { created_at: '2019-11-22T14:22:00Z', name: 'connected' }
})

// No more rows
```

### Query Descriptions

#### await sql``.describe() -> Result[]

Rather than executing a given query, `.describe` will return information utilized in the query process. This information can include the query identifier, column types, etc.

This is useful for debugging and analyzing your Postgres queries. Furthermore, `.describe` will give you access to the final generated query string that would be executed.
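A minimal sketch (assuming a `users` table; the exact shape of the returned description object may vary between versions):

```js
const description = await sql`
  select name, age from users where age > ${ 50 }
`.describe()

// Inspect the generated query without executing it
console.log(description.string)  // the final query string sent to Postgres
console.log(description.types)   // oids of the expected input parameters
console.log(description.columns) // column definitions of the result
```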

### Rows as Array of Values

#### sql``.values()

Using `.values` will return rows as an array of values for each column, instead of objects.

This can be useful to receive identically named columns, or for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
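A small sketch of the difference, assuming a `users` table:

```js
const rows = await sql`select name, age from users`.values()

// rows = [['Murray', 68], ['Walter', 80]]
// instead of [{ name: 'Murray', age: 68 }, { name: 'Walter', age: 80 }]
```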

### Rows as Raw Array of Buffers

#### sql``.raw()

Using `.raw` will return rows as an array with `Buffer` values for each column, instead of objects.

This can be useful for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
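For illustration (a sketch; the exact buffer contents depend on the wire format Postgres sends):

```js
const rows = await sql`select name, age from users`.raw()

// rows = [[Buffer, Buffer], ...] where each Buffer holds the raw bytes
// received from Postgres for that column; parse them as needed
```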

### Queries in Files

#### await sql.file(path, [args], [options]) -> Result[]

Using a file for a query is also supported, with optional parameters to use if the file includes `$1, $2, etc`.

```js
const result = await sql.file('query.sql', ['Murray', 68])
```

### Multiple statements in one query

#### await sql``.simple()

The postgres wire protocol supports "simple" and "extended" queries. "Simple" queries support multiple statements, but do not support any dynamic parameters. "Extended" queries support parameters but only one statement. To use "simple" queries you can use `sql``.simple()`. That will create it as a simple query.

```js
await sql`select 1; select 2;`.simple()
```

### Copy to/from as Streams

Postgres.js supports `COPY ...` queries, which are exposed as Node.js streams.

#### await sql`copy ... from stdin`.writable() -> Writable

```js
import { pipeline } from 'node:stream/promises'
import { Readable } from 'node:stream'

// Stream of users with the default tab-delimited cells and newline-delimited rows
const userStream = Readable.from([
  'Murray\t68\n',
  'Walter\t80\n'
])

const query = await sql`copy users (name, age) from stdin`.writable()
await pipeline(userStream, query)
```

#### await sql`copy ... to stdout`.readable() -> Readable

##### Using Stream Pipeline

```js
import { pipeline } from 'node:stream/promises'
import { createWriteStream } from 'node:fs'

const readableStream = await sql`copy users (name, age) to stdout`.readable()
await pipeline(readableStream, createWriteStream('output.tsv'))
// output.tsv content: `Murray\t68\nWalter\t80\n`
```

##### Using for await...of

```js
const readableStream = await sql`
  copy (
    select name, age
    from users
    where age = 68
  ) to stdout
`.readable()

for await (const chunk of readableStream) {
  // chunk.toString() === `Murray\t68\n`
}
```

NOTE This is a low-level API which does not provide any type safety. To make this work, you must match your `copy` query parameters correctly to your Node.js stream read or write code. Ensure Node.js stream backpressure is handled correctly to avoid memory exhaustion.

### Canceling Queries in Progress

Postgres.js supports canceling queries in progress. It works by opening a new connection with a protocol-level startup message to cancel the current query running on a specific connection. That means there is no guarantee that the query will be canceled, and due to the possible race conditions it might even result in canceling another query. This is fine for long-running queries, but in the case of high load and fast queries it might be better to simply ignore results instead of canceling.

```js
const query = sql`select pg_sleep(100)`.execute()
setTimeout(() => query.cancel(), 100)
const result = await query
```

### Execute

#### await sql``.execute()

The lazy Promise implementation in Postgres.js is what allows it to distinguish Nested Fragments from the main outer query. This also means that queries are always executed at the earliest in the following tick. If you have a specific need to execute the query in the same tick, you can call `.execute()`.
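A minimal sketch:

```js
// Send the query in the same tick instead of waiting for the promise to be awaited
const query = sql`select 1 as x`.execute()

// The result can still be awaited as usual
const [{ x }] = await query
```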

### Unsafe raw string queries

*Advanced unsafe use cases*

#### await sql.unsafe(query, [args], [options]) -> Result[]

If you know what you're doing, you can use `unsafe` to pass any string you'd like to postgres. Please note that this can lead to SQL injection if you're not careful.

```js
sql.unsafe('select ' + danger + ' from users where id = ' + dragons)
```

By default, `sql.unsafe` assumes the `query` string is sufficiently dynamic that prepared statements do not make sense, and so defaults them to off. If you'd like to re-enable prepared statements, you can pass `{ prepare: true }`.
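For example (a sketch; the query here is static only to keep the example short):

```js
// Re-enable prepared statements for an unsafe query
await sql.unsafe('select * from users where id = $1', [42], { prepare: true })
```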

You can also nest `sql.unsafe` within a safe `sql` expression. This is useful if only part of your fragment has unsafe elements.

```js
const triggerName = 'friend_created'
const triggerFnName = 'on_friend_created'
const eventType = 'insert'
const schema_name = 'app'
const table_name = 'friends'

await sql`
  create or replace trigger ${ sql(triggerName) }
  after ${ sql.unsafe(eventType) } on ${ sql.unsafe(`${schema_name}.${table_name}`) }
  for each row
  execute function ${ sql(triggerFnName) }()
`

await sql`
  create role friend_service with login password ${ sql.unsafe(`'${password}'`) }
`
```

## Transactions

### BEGIN / COMMIT: await sql.begin([options = ''], fn) -> fn()

Use `sql.begin` to start a new transaction. Postgres.js will reserve a connection for the transaction and supply a scoped `sql` instance for all transaction uses in the callback function. `sql.begin` will resolve with the returned value from the callback function.

`BEGIN` is automatically sent with the optional options, and if anything fails `ROLLBACK` will be called so the connection can be released and execution can continue.

```js
const [user, account] = await sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
    returning *
  `

  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${ user.user_id }
    )
    returning *
  `

  return [user, account]
})
```

Do note that you can often achieve the same result using WITH queries (Common Table Expressions) instead of using transactions.
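For instance, the two inserts above could be expressed as one statement with a CTE (a sketch assuming the same `users` and `accounts` tables):

```js
const [account] = await sql`
  with new_user as (
    insert into users (name) values ('Murray') returning *
  )
  insert into accounts (user_id)
  select user_id from new_user
  returning *
`
```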

It's also possible to pipeline the requests in a transaction if needed by returning an array with queries from the callback function like this:

```js
const result = await sql.begin(sql => [
  sql`update ...`,
  sql`update ...`,
  sql`insert ...`
])
```

### SAVEPOINT: await sql.savepoint([name], fn) -> fn()

```js
sql.begin('read write', async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
  `

  const [account] = (await sql.savepoint(sql =>
    sql`
      insert into accounts (
        user_id
      ) values (
        ${ user.user_id }
      )
    `
  ).catch(err => {
    // Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
  })) || []

  return [user, account]
})
.then(([user, account]) => {
  // great success - COMMIT succeeded
})
.catch(() => {
  // not so good - ROLLBACK was called
})
```

### PREPARE TRANSACTION: await sql.prepare([name]) -> fn()

Indicates that the transaction should be prepared using the `PREPARE TRANSACTION [NAME]` statement instead of being committed.

```js
sql.begin('read write', async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
  `

  await sql.prepare('tx1')
})
```

## Data Transformation

Postgres.js allows for transformation of the data passed to or returned from a query by using the `transform` option.

Built-in transformation functions are:

- For camelCase - `postgres.camel`, `postgres.toCamel`, `postgres.fromCamel`
- For PascalCase - `postgres.pascal`, `postgres.toPascal`, `postgres.fromPascal`
- For Kebab-Case - `postgres.kebab`, `postgres.toKebab`, `postgres.fromKebab`

These built-in transformations will only convert to/from snake_case. For example, using `{ transform: postgres.toCamel }` will convert the column names to camelCase only if the column names are in snake_case to begin with. `{ transform: postgres.fromCamel }` will convert camelCase only to snake_case.

By default, using `postgres.camel`, `postgres.pascal` and `postgres.kebab` will perform a two-way transformation - both the data passed to the query and the data returned by the query will be transformed:

```js
// Transform the column names to and from camel case
const sql = postgres({ transform: postgres.camel })

await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER, b_test TEXT)`
await sql`INSERT INTO camel_case ${ sql([{ aTest: 1, bTest: 1 }]) }`

const data = await sql`SELECT ${ sql('aTest', 'bTest') } FROM camel_case`

console.log(data) // [ { aTest: 1, bTest: '1' } ]
```

To only perform half of the transformation (eg. only the transformation to or from camel case), use the other transformation functions:

```js
// Transform the column names only to camel case
// (for the results that are returned from the query)
postgres({ transform: postgres.toCamel })

await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER)`
await sql`INSERT INTO camel_case ${ sql([{ a_test: 1 }]) }`

const data = await sql`SELECT a_test FROM camel_case`

console.log(data) // [ { aTest: 1 } ]
```

```js
// Transform the column names only from camel case
// (for interpolated inserts, updates, and selects)
const sql = postgres({ transform: postgres.fromCamel })

await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER)`
await sql`INSERT INTO camel_case ${ sql([{ aTest: 1 }]) }`

const data = await sql`SELECT ${ sql('aTest') } FROM camel_case`

console.log(data) // [ { a_test: 1 } ]
```

Note that Postgres.js does not rewrite the static parts of the tagged template strings. So to transform column names in your queries, the `sql()` helper must be used - eg. `${ sql('columnName') }` as in the examples above.

### Transform undefined Values

By default, Postgres.js will throw the error `UNDEFINED_VALUE: Undefined values are not allowed` when undefined values are passed. This can be changed with the `undefined` transform option:

```js
// Transform undefined values to null
const sql = postgres({
  transform: {
    undefined: null
  }
})

await sql`CREATE TABLE IF NOT EXISTS transform_undefined (a_test INTEGER)`
await sql`INSERT INTO transform_undefined ${ sql([{ a_test: undefined }]) }`

const data = await sql`SELECT a_test FROM transform_undefined`

console.log(data) // [ { a_test: null } ]
```

To combine with the built-in transform functions, spread the transform in the `transform` object:

```js
// Transform the column names to and from camel case, and undefined to null
const sql = postgres({
  transform: {
    ...postgres.camel,
    undefined: null
  }
})

await sql`CREATE TABLE IF NOT EXISTS transform_undefined (a_test INTEGER)`
await sql`INSERT INTO transform_undefined ${ sql([{ aTest: undefined }]) }`

const data = await sql`SELECT ${ sql('aTest') } FROM transform_undefined`

console.log(data) // [ { aTest: null } ]
```

### Custom Transform Functions

To specify your own transformation functions, you can use the `column`, `value` and `row` options inside of `transform`, each an object possibly including `to` and `from` keys:

- `to`: The function to transform the outgoing query column name, i.e. `SELECT ${ sql('aName') }` to `SELECT a_name` when using `postgres.toCamel`.
- `from`: The function to transform the incoming query result column name, see example below.

Both parameters are optional; if not provided, the default transformation function will be used.

```js
// Implement your own functions, look at postgres.toCamel, etc
// as a reference:
// https://github.com/porsager/postgres/blob/4241824ffd7aa94ffb482e54ca9f585d9d0a4eea/src/types.js#L310-L328
function transformColumnToDatabase() { /* ... */ }
function transformColumnFromDatabase() { /* ... */ }

const sql = postgres({
  transform: {
    column: {
      to: transformColumnToDatabase,
      from: transformColumnFromDatabase,
    },
    value: { /* ... */ },
    row: { /* ... */ }
  }
})
```

## Listen & notify

When you call `.listen`, a dedicated connection will be created to ensure that you receive notifications instantly. This connection will be used for any further calls to `.listen`. The connection will automatically reconnect according to a backoff reconnection pattern to not overload the database server.

### Listen: await sql.listen(channel, onnotify, [onlisten]) -> { state }

`.listen` takes the channel name, a function to handle each notify, and an optional function to run every time listen is registered and ready (happens on initial connect and reconnects). It returns a promise which resolves once the `LISTEN` query to Postgres completes, or if there is already a listener active.

```js
await sql.listen('news', payload => {
  const json = JSON.parse(payload)
  console.log(json.this) // logs 'is'
})
```

The optional `onlisten` function is great to use for a very simple queue mechanism:

```js
await sql.listen(
  'jobs',
  (x) => run(JSON.parse(x)),
  () => sql`select unfinished_jobs()`.forEach(run)
)

function run(job) {
  // And here you do the work you please
}
```

### Notify: await sql.notify(channel, payload) -> Result[]

Notify can be done as usual in SQL, or by using the `sql.notify` method.

```js
sql.notify('news', JSON.stringify({ no: 'this', is: 'news' }))
```

## Realtime subscribe

Postgres.js implements the logical replication protocol of PostgreSQL to support subscription to real-time updates of `insert`, `update` and `delete` operations.

NOTE To make this work you must create the proper publications in your database, enable logical replication by setting `wal_level = logical` in `postgresql.conf` and connect using either a replication or superuser.

### Quick start

#### Create a publication (eg. in a migration)

```sql
CREATE PUBLICATION alltables FOR ALL TABLES
```

#### Subscribe to updates

```js
const sql = postgres({ publications: 'alltables' })

const { unsubscribe } = await sql.subscribe(
  'insert:events',
  (row, { command, relation, key, old }) => {
    // Callback function for each row change
    // tell about new event row over eg. websockets or do something else
  },
  () => {
    // Callback on initial connect and potential reconnects
  }
)
```

### Subscribe pattern

You can subscribe to specific operations, tables, or even rows with primary keys.

#### operation:schema.table=primary_key

- `operation` is one of `* | insert | update | delete` and defaults to `*`
- `schema` defaults to `public`
- `table` is a specific table name and defaults to `*`
- `primary_key` can be used to only subscribe to specific rows

#### Examples

```js
sql.subscribe('*',              () => /* everything */ )
sql.subscribe('insert',         () => /* all inserts */ )
sql.subscribe('*:users',        () => /* all operations on the public.users table */ )
sql.subscribe('delete:users',   () => /* all deletes on the public.users table */ )
sql.subscribe('update:users=1', () => /* all updates on the users row with a primary key = 1 */ )
```

## Numbers, bigint, numeric

`Number` in javascript is only able to represent 2^53 - 1 safely, which means that types in PostgreSQL like `bigint` and `numeric` won't fit into `Number`.

Since Node.js v10.4 we can use `BigInt` to match the PostgreSQL type `bigint`, which is returned for eg. `count(*)`. Unfortunately, it doesn't work with `JSON.stringify` out of the box, so Postgres.js will return it as a string.

If you want to use `BigInt` you can add this custom type:

```js
const sql = postgres({
  types: {
    bigint: postgres.BigInt
  }
})
```

There is currently no guaranteed way to handle `numeric` / `decimal` types in native Javascript. These [and similar] types will be returned as a `string`. The best way in this case is to use custom types.
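For example, here is a sketch of a custom type that parses `numeric` into a JavaScript number; oid 1700 is `numeric` in `pg_catalog.pg_type`, and the conversion trades precision for convenience, so only use it when values fit safely in a `Number`:

```js
const sql = postgres({
  types: {
    numeric: {
      to        : 1700,          // oid of numeric
      from      : [1700],
      serialize : x => '' + x,   // numeric is sent to the db as text
      parse     : x => Number(x) // may lose precision for large values!
    }
  }
})
```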

## Result Array

The `Result` array returned from queries is a custom array allowing for easy destructuring or passing on directly to `JSON.stringify` or general Array usage. It includes the following properties.

### .count

The `count` property is the number of affected rows returned by the database. This is useful for insert, update and delete operations to know the number of rows since `.length` will be 0 in these cases if not using `RETURNING ...`.
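A small illustration:

```js
const result = await sql`update users set age = age + 1 where age > ${ 60 }`

result.length // 0, since no rows are returned without `returning ...`
result.count  // the number of rows the update affected
```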

### .command

The `command` run by the query - eg. one of `SELECT`, `UPDATE`, `INSERT`, `DELETE`.

### .columns

The `columns` returned by the query, useful to determine types or map to the result values when using `.values()`.

```js
{
  name  : String,    // Column name
  type  : oid,       // PostgreSQL oid column type
  parser: Function   // The function used by Postgres.js for parsing
}
```

### .statement

The `statement` contains information about the statement implicitly created by Postgres.js.

```js
{
  name    : String,   // The auto generated statement name
  string  : String,   // The actual query string executed
  types   : [oid],    // An array of oids expected as input parameters
  columns : [Column]  // Array of columns - same as Result.columns
}
```

### .state

This is the state `{ pid, secret }` of the connection that executed the query.

## Connection details

### All Postgres options

```js
const sql = postgres('postgres://username:password@host:port/database', {
  host                 : '',            // Postgres ip address[es] or domain name[s]
  port                 : 5432,          // Postgres server port[s]
  path                 : '',            // unix socket path (usually '/tmp')
  database             : '',            // Name of database to connect to
  username             : '',            // Username of database user
  password             : '',            // Password of database user
  ssl                  : false,         // true, prefer, require, tls.connect options
  max                  : 10,            // Max number of connections
  max_lifetime         : null,          // Max lifetime in seconds (more info below)
  idle_timeout         : 0,             // Idle connection timeout in seconds
  connect_timeout      : 30,            // Connect timeout in seconds
  prepare              : true,          // Automatic creation of prepared statements
  types                : [],            // Array of custom types, see more below
  onnotice             : fn,            // Default console.log, set false to silence NOTICE
  onparameter          : fn,            // (key, value) when server param change
  debug                : fn,            // Is called with (connection, query, params, types)
  socket               : fn,            // fn returning custom socket to use
  transform            : {
    undefined          : undefined,     // Transforms undefined values (eg. to null)
    column             : fn,            // Transforms incoming column names
    value              : fn,            // Transforms incoming row values
    row                : fn             // Transforms entire rows
  },
  connection           : {
    application_name   : 'postgres.js', // Default application_name
    ...                                 // Other connection parameters, see
                                        // https://www.postgresql.org/docs/current/runtime-config-client.html
  },
  target_session_attrs : null,          // Use 'read-write' with multiple hosts to
                                        // ensure only connecting to primary
  fetch_types          : true,          // Automatically fetches types on connect
                                        // on initial connection.
})
```

Note that `max_lifetime = 60 * (30 + Math.random() * 30)` by default. This resolves to an interval between 30 and 60 minutes to optimize for the benefits of prepared statements and working nicely with Linux's OOM killer.

### Dynamic passwords

When clients need to use alternative authentication schemes such as access tokens or connections to databases with rotating passwords, provide either a synchronous or asynchronous function that will resolve the dynamic password value at connection time.

```js
const sql = postgres(url, {
  // Other connection config
  ...
  // Password function for the database user
  password: async () => await signer.getAuthToken(),
})
```

### SSL

Although vulnerable to MITM attacks, a common configuration for the `ssl` option for some cloud providers is to set `rejectUnauthorized` to `false` (if `NODE_ENV` is `production`):

```js
const sql =
  process.env.NODE_ENV === 'production'
    ? // "Unless you're using a Private or Shield Heroku Postgres database, Heroku Postgres does not currently support verifiable certificates"
      // https://help.heroku.com/3DELT3RK/why-can-t-my-third-party-utility-connect-to-heroku-postgres-with-ssl
      postgres({ ssl: { rejectUnauthorized: false } })
    : postgres()
```

For more information regarding `ssl` with `postgres`, check out the Node.js documentation for tls.

### Multi-host connections - High Availability (HA)

Multiple connection strings can be passed to `postgres()` in the form of `postgres('postgres://localhost:5432,localhost:5433', ...)`. This works the same as the native `psql` command. Read more at multiple host URIs.

Connections will be attempted in order of the specified hosts/ports. On a successful connection, all retries will be reset. This ensures that hosts can come up and down seamlessly.

If you specify `target_session_attrs: 'primary'` or `PGTARGETSESSIONATTRS=primary`, Postgres.js will only connect to the primary host, allowing for zero downtime failovers.
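A minimal sketch (hostnames are placeholders):

```js
const sql = postgres('postgres://primary.example:5432,replica.example:5432/mydb', {
  target_session_attrs: 'primary' // only connect to the writable primary
})
```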

### The Connection Pool

Connections are created lazily once a query is created. This means that simply doing `const sql = postgres(...)` won't have any effect other than instantiating a new `sql` instance.

No connection will be made until a query is made.

For example:

```js
const sql = postgres() // no connections are opened

await sql`...` // one connection is now opened
await sql`...` // previously opened connection is reused

// two connections are opened now
await Promise.all([
  sql`...`,
  sql`...`
])
```

When there is a high amount of concurrent queries, `postgres` will open as many connections as needed up until the `max` number of connections is reached. By default `max` is 10. This can be changed by setting `max` in the `postgres()` call. Example - `postgres('connectionURL', { max: 20 })`.

This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction usingsql.begin(), and automatically returned to the pool once your transaction is done.

Any query which was already sent over the wire will be rejected if the connection is lost. It'll automatically defer to the error handling you have for that query, and since connections are lazy it'll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic "onerror" handler that tries to get things back to normal, and also simpler application code since you don't have to handle errors out of context.

There are no guarantees about queries executing in order unless using a transaction with `sql.begin()` or setting `max: 1`. Of course doing a series of queries, one awaiting the other, will work as expected, but that's just due to the nature of js async/promise handling, so it's not necessary for this library to be concerned with ordering.

Since this library automatically creates prepared statements, it also has a default max lifetime for connections to prevent memory bloat on the database itself. This is a random interval for each connection between 45 and 90 minutes. This allows multiple connections to independently come up and down without affecting the service.

### Connection timeout

By default, connections will not close until `.end()` is called. However, it may be useful to have them close automatically when:

- re-instantiating multiple sql`` instances
- using Postgres.js in a Serverless environment (Lambda, etc.)
- using Postgres.js with a database service that automatically closes connections after some time (see ECONNRESET issue)

This can be done using the `idle_timeout` or `max_lifetime` options. These configuration options specify the number of seconds to wait before automatically closing an idle connection and the maximum time a connection can exist, respectively.

For example, to close a connection that has either been idle for 20 seconds or existed for more than 30 minutes:

```js
const sql = postgres({
  idle_timeout: 20,
  max_lifetime: 60 * 30
})
```

### Cloudflare Workers support

Postgres.js has built-in support for the TCP socket API in Cloudflare Workers, which is on track to be standardized and adopted in Node.js and other JavaScript runtimes, such as Deno.

You can use Postgres.js directly in a Worker or, to benefit from connection pooling and query caching, via the Hyperdrive service available to Workers, by passing the Hyperdrive `connectionString` when creating a new `postgres` client as follows:

```ts
// Requires Postgres.js 3.4.0 or later
import postgres from 'postgres'

interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext) {
    // The Postgres.js library accepts a connection string directly
    const sql = postgres(env.HYPERDRIVE.connectionString)
    const results = await sql`SELECT * FROM users LIMIT 10`
    return Response.json(results)
  }
}
```

In `wrangler.toml` you will need to enable the `nodejs_compat` compatibility flag to allow Postgres.js to operate in the Workers environment:

```toml
compatibility_flags = ["nodejs_compat"]
```

### Auto fetching of array types

Postgres.js will automatically fetch table/array-type information when it first connects to a database.

If you have revoked access to `pg_catalog`, this feature will no longer work and will need to be disabled.

You can disable this feature by setting `fetch_types` to `false`.
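For example:

```js
const sql = postgres({
  fetch_types: false // don't query type information on connect
})
```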

### Environmental variables

It is also possible to connect to the database without a connection string or any options. Postgres.js will fall back to the common environment variables used by `psql` as in the table below:

```js
const sql = postgres()
```

| Option | Environment Variables |
| --- | --- |
| `host` | `PGHOST` |
| `port` | `PGPORT` |
| `database` | `PGDATABASE` |
| `username` | `PGUSERNAME` or `PGUSER` |
| `password` | `PGPASSWORD` |
| `application_name` | `PGAPPNAME` |
| `idle_timeout` | `PGIDLE_TIMEOUT` |
| `connect_timeout` | `PGCONNECT_TIMEOUT` |

### Prepared statements

Prepared statements will automatically be created for any queries where it can be inferred that the query is static. This can be disabled by using the `prepare: false` option. For instance, this is useful when using PGBouncer in transaction mode.

Update: since 1.21.0 PGBouncer supports protocol-level named prepared statements when configured properly.
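For example:

```js
// Disable automatic prepared statements, eg. for PGBouncer in transaction mode
const sql = postgres({ prepare: false })
```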

## Custom Types

You can add ergonomic support for custom types, or simply use `sql.typed(value, type)` inline, where type is the PostgreSQL `oid` for the type and the correctly serialized string. (`oid` values for types can be found in the `pg_catalog.pg_type` table.)

Adding Query helpers is the cleanest approach, which can be done like this:

```js
const sql = postgres({
  types: {
    rect: {
      // The pg_types oid to pass to the db along with the serialized value.
      to        : 1337,

      // An array of pg_types oids to handle when parsing values coming from the db.
      from      : [1337],

      // Function that transforms values before sending them to the db.
      serialize : ({ x, y, width, height }) => [x, y, width, height],

      // Function that transforms values coming from the db.
      parse     : ([x, y, width, height]) => ({ x, y, width, height })
    }
  }
})

// Now you can use sql.typed.rect() as specified above
const [custom] = await sql`
  insert into rectangles (
    name,
    rect
  ) values (
    'wat', ${ sql.typed.rect({ x: 13, y: 37, width: 42, height: 80 }) }
  )
  returning *
`

// custom = { name: 'wat', rect: { x: 13, y: 37, width: 42, height: 80 } }
```

## Custom socket

Easily do in-process ssh tunneling to your database by providing a custom socket for Postgres.js to use. The function (optionally async) must return a socket-like duplex stream.

Here's a sample using ssh2:

```js
import ssh2 from 'ssh2'

const sql = postgres({
  ...options,
  socket: ({ host: [host], port: [port] }) => new Promise((resolve, reject) => {
    const ssh = new ssh2.Client()
    ssh
      .on('error', reject)
      .on('ready', () =>
        ssh.forwardOut('127.0.0.1', 12345, host, port, (err, socket) =>
          err ? reject(err) : resolve(socket)
        )
      )
      .connect(sshOptions)
  })
})
```

## Teardown / Cleanup

To ensure proper teardown and cleanup on server restarts use `await sql.end()` before `process.exit()`.

Calling `sql.end()` will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a `{ timeout }` option is provided, any pending queries will be rejected once the timeout (in seconds) is reached and the connections will be destroyed.

Sample shutdown using Prexit:

```js
import prexit from 'prexit'

prexit(async () => {
  await sql.end({ timeout: 5 })
  await new Promise(r => server.close(r))
})
```

## Reserving connections

### await sql.reserve()

The `reserve` method pulls out a connection from the pool, and returns a client that wraps the single connection. This can be used for running queries on an isolated connection.

```js
const reserved = await sql.reserve()
await reserved`select * from users`
await reserved.release()
```

### reserved.release()

Once you have finished with the reserved connection, call `release` to add it back to the pool.

## Error handling

Errors are all thrown to related queries and never globally. Errors coming from the database itself are always in the native Postgres format, and the same goes for any Node.js errors, eg. coming from the underlying connection.

Query errors will contain a stored error with the origin of the query to aid in tracing errors.

Query errors will also contain the `query` string and the `parameters`. These are not enumerable to avoid accidentally leaking confidential information in logs. To log these it is required to specifically access `error.query` and `error.parameters`, or set `debug: true` in options.
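For example:

```js
try {
  await sql`select * from users where id = ${ id }`
} catch (error) {
  // query and parameters are non-enumerable, so access them explicitly
  console.error(error.query, error.parameters)
}
```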

There are also the following errors specifically for this library.

### UNSAFE_TRANSACTION

> Only use sql.begin or max: 1

To ensure statements in a transaction run on the same connection (which is required for them to run inside the transaction), you must use `sql.begin(...)` or only allow a single connection in options (`max: 1`).

### UNDEFINED_VALUE

> Undefined values are not allowed

Postgres.js won't accept `undefined` as values in tagged template queries since it becomes ambiguous what to do with the value. If you want to set something to null, use `null` explicitly.

### MESSAGE_NOT_SUPPORTED

> X (X) is not supported

Whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.

### MAX_PARAMETERS_EXCEEDED

> Max number of parameters (65534) exceeded

The postgres protocol doesn't allow more than 65534 (16-bit) parameters. If you run into this issue there are various workarounds, such as using `sql([...])` to escape values instead of passing them as parameters.
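A sketch of that workaround; the values are inlined as an escaped list instead of being sent as individual parameters:

```js
const ids = Array.from({ length: 100000 }, (_, i) => i)

// `sql(ids)` escapes and inlines the values, avoiding the parameter limit
const users = await sql`select * from users where id in ${ sql(ids) }`
```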

### SASL_SIGNATURE_MISMATCH

> Message type X not supported

When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man-in-the-middle attacks. If you receive this error the connection was canceled because the server did not reply with the expected signature.

### NOT_TAGGED_CALL

> Query not called as a tagged template literal

Making queries has to be done using the `sql` function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.

### AUTH_TYPE_NOT_IMPLEMENTED

> Auth type X not implemented

Postgres supports many different authentication types. This one is not supported.

### CONNECTION_CLOSED

> write CONNECTION_CLOSED host:port

This error is thrown if the connection was closed without an error. This should not happen during normal operations, so please create an issue if this was unexpected.

### CONNECTION_ENDED

> write CONNECTION_ENDED host:port

This error is thrown if the user has called `sql.end()` and performed a query afterward.

### CONNECTION_DESTROYED

> write CONNECTION_DESTROYED host:port

This error is thrown for any queries that were pending when the timeout to `sql.end({ timeout: X })` was reached.

### CONNECT_TIMEOUT

> write CONNECT_TIMEOUT host:port

This error is thrown if the startup phase of the connection (tcp, protocol negotiation, and auth) took more than the default 30 seconds or what was specified using `connect_timeout` or `PGCONNECT_TIMEOUT`.

## TypeScript support

`postgres` has TypeScript support. You can pass a row list type for your queries in this way:

```ts
interface User {
  id: number
  name: string
}

const users = await sql<User[]>`SELECT * FROM users`
users[0].id      // ok => number
users[1].name    // ok => string
users[0].invalid // fails: `invalid` does not exist on `User`
```

However, be sure to check the array length to avoid accessing properties of `undefined` rows:

```ts
const users = await sql<User[]>`SELECT * FROM users WHERE id = ${ id }`
if (!users.length)
  throw new Error('Not found')
return users[0]
```

You can also prefer destructuring when you only care about a fixed number of rows. In this case, we recommend you to prefer using tuples to handle `undefined` properly:

```ts
const [user]: [User?] = await sql`SELECT * FROM users WHERE id = ${ id }`
if (!user) // => User | undefined
  throw new Error('Not found')
return user // => User

// NOTE:
const [first, second]: [User?] = await sql`SELECT * FROM users WHERE id = ${ id }`
// fails: `second` does not exist on `[User?]`
const [first, second] = await sql<[User?]>`SELECT * FROM users WHERE id = ${ id }`
// doesn't fail: `second: User | undefined`
```

We do our best to type all the public API; however, types are not always updated when features are added or changed. Feel free to open an issue if you have trouble with types.

## Migration tools

Postgres.js doesn't come with any migration solution since it's way out of scope, but here are some modules that support Postgres.js for migrations:

## Thank you

A really big thank you to @JAForbes who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.

Thanks to @ACXgit for initial tests and dogfooding.

Also thanks to Ryan Dahl for letting me have the `postgres` npm package name.
