Getting started with Spanner and PGAdapter

Objectives

This tutorial walks you through the following steps using Spanner with the PGAdapter local proxy for PostgreSQL drivers:

  • Create a Spanner instance and database.
  • Write, read, and execute SQL queries on data in the database.
  • Update the database schema.
  • Update data using a read-write transaction.
  • Add a secondary index to the database.
  • Use the index to read and execute SQL queries on data.
  • Retrieve data using a read-only transaction.

Costs

This tutorial uses Spanner, which is a billable component of Google Cloud. For information on the cost of using Spanner, see Pricing.

Before you begin

Complete the steps described in Set up, which cover creating and setting a default Google Cloud project, enabling billing, enabling the Cloud Spanner API, and setting up OAuth 2.0 to get authentication credentials to use the Cloud Spanner API.

In particular, make sure that you run gcloud auth application-default login to set up your local development environment with authentication credentials.

Note: If you don't plan to keep the resources that you create in this tutorial, consider creating a new Google Cloud project instead of selecting an existing project. After you finish the tutorial, you can delete the project, removing all resources associated with the project.

Prepare your local PGAdapter environment

You can use PostgreSQL drivers in combination with PGAdapter to connect to Spanner. PGAdapter is a local proxy that translates the PostgreSQL network protocol to the Spanner gRPC protocol.

Tip: See Latency Comparisons for benchmarks comparing the latency of PGAdapter and the regular Spanner client libraries.

PGAdapter requires either Java or Docker to run.

  1. Install Java (JDK) or Docker on your development machine if neither of them is already installed.

  2. Clone the sample app repository to your local machine:

    git clone https://github.com/GoogleCloudPlatform/pgadapter.git
  3. Change to the directory that contains the Spanner sample code:

    psql

    cd pgadapter/samples/snippets/psql-snippets

    Java

    cd pgadapter/samples/snippets/java-snippets
    mvn package -DskipTests

    Go

    cd pgadapter/samples/snippets/golang-snippets

    Node.js

    cd pgadapter/samples/snippets/nodejs-snippets
    npm install

    Python

    cd pgadapter/samples/snippets/python-snippets
    python -m venv ./venv
    pip install -r requirements.txt
    cd samples

    C#

    cd pgadapter/samples/snippets/dotnet-snippets

    PHP

    cd pgadapter/samples/snippets/php-snippets
    composer install
    cd samples

Create an instance

Tip: You can skip this step if you are using PGAdapter with the Emulator. PGAdapter automatically creates the instance on the emulator when you connect to PGAdapter.

When you first use Spanner, you must create an instance, which is an allocation of resources that are used by Spanner databases. When you create an instance, you choose an instance configuration, which determines where your data is stored, and also the number of nodes to use, which determines the amount of serving and storage resources in your instance.

See Create an instance to learn how to create a Spanner instance using any of the following methods (a sample gcloud command follows the list below). You can name your instance test-instance to use it with other topics in this document that reference an instance named test-instance.

  • The Google Cloud CLI
  • The Google Cloud console
  • A client library (C++, C#, Go, Java, Node.js, PHP, Python, or Ruby)
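For example, a minimal gcloud CLI sketch might look like the following. This command isn't part of the sample repository, and the instance configuration and node count are assumptions; adjust them to your own requirements:

gcloud spanner instances create test-instance \
    --config=regional-us-central1 \
    --description="Test Instance" \
    --nodes=1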

Look through sample files

The samples repository contains a sample that shows how to use Spanner with PGAdapter.

Take a look through the samples/snippets folder, which shows how to use Spanner. The code shows how to create and use a new database. The data uses the example schema shown in the Schema and data model page.

Start PGAdapter

Start PGAdapter on your local development machine and point it to the instance that you created.

The following commands assume that you have executed gcloud auth application-default login.

Java Application

wget https://storage.googleapis.com/pgadapter-jar-releases/pgadapter.tar.gz \
    && tar -xzvf pgadapter.tar.gz
java -jar pgadapter.jar -i test-instance

Docker

docker pull gcr.io/cloud-spanner-pg-adapter/pgadapter
docker run \
    --name pgadapter \
    --rm -d -p 5432:5432 \
    -v "$HOME/.config/gcloud":/gcloud:ro \
    --env CLOUDSDK_CONFIG=/gcloud \
    gcr.io/cloud-spanner-pg-adapter/pgadapter \
    -i test-instance -x

Note: Replace $HOME/.config/gcloud/ with %APPDATA%\gcloud if you are on Windows, or run gcloud info --format='value(config.paths.global_config_dir)' to find your current gcloud configuration folder.

Emulator

docker pull gcr.io/cloud-spanner-pg-adapter/pgadapter-emulator
docker run \
    --name pgadapter-emulator \
    --rm -d \
    -p 5432:5432 \
    -p 9010:9010 \
    -p 9020:9020 \
    gcr.io/cloud-spanner-pg-adapter/pgadapter-emulator

This starts PGAdapter with an embedded Spanner emulator. This embedded emulator automatically creates any Spanner instance or database that you connect to, without the need to manually create them beforehand.
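Because of this, a plain connection is all that's needed to bootstrap a test database on the emulator. For example, a minimal sketch with psql, assuming PGAdapter's default port and an arbitrary database name:

psql -h localhost -p 5432 -d example-db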

We recommend that you run PGAdapter in production as either a sidecar container or as an in-process dependency. For more information on deploying PGAdapter in production, see Choose a method for running PGAdapter.

Create a database

Tip: You can skip this step if you are using PGAdapter with the Emulator. PGAdapter automatically creates the database on the emulator when you connect to PGAdapter.

gcloud spanner databases create example-db --instance=test-instance \
    --database-dialect=POSTGRESQL

You should see:

Creating database...done.

Create tables

The following code creates two tables in the database.

Note: The subsequent code samples use these two tables. If you don't execute this code, then create the tables by using the Google Cloud console or the gcloud CLI. For more information, see the example schema.

psql

#!/bin/bash

# Set the connection variables for psql.
# The following statements use the existing value of the variable if it has
# already been set, and otherwise assigns a default value.
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Create two tables in one batch.
psql << SQL
-- Create the singers table
CREATE TABLE singers (
  singer_id   bigint not null primary key,
  first_name  character varying(1024),
  last_name   character varying(1024),
  singer_info bytea,
  full_name   character varying(2048)
      GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED
);

-- Create the albums table. This table is interleaved in the parent table
-- "singers".
CREATE TABLE albums (
  singer_id   bigint not null,
  album_id    bigint not null,
  album_title character varying(1024),
  primary key (singer_id, album_id)
)
-- The 'interleave in parent' clause is a Spanner-specific extension to
-- open-source PostgreSQL.
INTERLEAVE IN PARENT singers ON DELETE CASCADE;
SQL

echo "Created Singers & Albums tables in database: [${PGDATABASE}]"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

class CreateTables {
  static void createTables(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      try (Statement statement = connection.createStatement()) {
        // Create two tables in one batch.
        statement.addBatch(
            "create table singers ("
                + "  singer_id   bigint primary key not null,"
                + "  first_name  varchar(1024),"
                + "  last_name   varchar(1024),"
                + "  singer_info bytea,"
                + "  full_name   varchar(2048) generated always as (\n"
                + "      case when first_name is null then last_name\n"
                + "          when last_name  is null then first_name\n"
                + "          else first_name || ' ' || last_name\n"
                + "      end) stored"
                + ")");
        statement.addBatch(
            "create table albums ("
                + "  singer_id     bigint not null,"
                + "  album_id      bigint not null,"
                + "  album_title   varchar,"
                + "  primary key (singer_id, album_id)"
                + ") interleave in parent singers on delete cascade");
        statement.executeBatch();
        System.out.println("Created Singers & Albums tables in database: [" + database + "]");
      }
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func CreateTables(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	// Create two tables in one batch on Spanner.
	br := conn.SendBatch(ctx, &pgx.Batch{QueuedQueries: []*pgx.QueuedQuery{
		{SQL: "create table singers (" +
			"  singer_id   bigint primary key not null," +
			"  first_name  character varying(1024)," +
			"  last_name   character varying(1024)," +
			"  singer_info bytea," +
			"  full_name   character varying(2048) generated " +
			"  always as (first_name || ' ' || last_name) stored" +
			")"},
		{SQL: "create table albums (" +
			"  singer_id     bigint not null," +
			"  album_id      bigint not null," +
			"  album_title   character varying(1024)," +
			"  primary key (singer_id, album_id)" +
			") interleave in parent singers on delete cascade"},
	}})
	cmd, err := br.Exec()
	if err != nil {
		return err
	}
	if cmd.String() != "CREATE" {
		return fmt.Errorf("unexpected command tag: %v", cmd.String())
	}
	if err := br.Close(); err != nil {
		return err
	}
	fmt.Printf("Created Singers & Albums tables in database: [%s]\n", database)
	return nil
}

Node.js

import { Client } from 'pg';

async function createTables(host: string, port: number, database: string): Promise<void> {
  // Connect to Spanner through PGAdapter.
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Create two tables in one batch.
  await connection.query("start batch ddl");
  await connection.query("create table singers (" +
      "  singer_id   bigint primary key not null," +
      "  first_name  character varying(1024)," +
      "  last_name   character varying(1024)," +
      "  singer_info bytea," +
      "  full_name   character varying(2048) generated " +
      "  always as (first_name || ' ' || last_name) stored" +
      ")");
  await connection.query("create table albums (" +
      "  singer_id     bigint not null," +
      "  album_id      bigint not null," +
      "  album_title   character varying(1024)," +
      "  primary key (singer_id, album_id)" +
      ") interleave in parent singers on delete cascade");
  await connection.query("run batch");
  console.log(`Created Singers & Albums tables in database: [${database}]`);

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def create_tables(host: string, port: int, database: string):
    # Connect to Cloud Spanner using psycopg3 through PGAdapter.
    with psycopg.connect("host={host} port={port} "
                         "dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        # Enable autocommit to execute DDL statements, as psycopg otherwise
        # tries to use a read/write transaction.
        conn.autocommit = True
        # Use a pipeline to execute multiple DDL statements in one batch.
        with conn.pipeline():
            conn.execute("create table singers ("
                         "  singer_id   bigint primary key not null,"
                         "  first_name  character varying(1024),"
                         "  last_name   character varying(1024),"
                         "  singer_info bytea,"
                         "  full_name   character varying(2048) generated "
                         "  always as (first_name || ' ' || last_name) stored"
                         ")")
            conn.execute("create table albums ("
                         "  singer_id     bigint not null,"
                         "  album_id      bigint not null,"
                         "  album_title   character varying(1024),"
                         "  primary key (singer_id, album_id)"
                         ") interleave in parent singers on delete cascade")
    print("Created Singers & Albums tables in database: [{database}]"
          .format(database=database))

C#

using Npgsql;

namespace dotnet_snippets;

public static class CreateTablesSample
{
    public static void CreateTables(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Create two tables in one batch.
        var batch = connection.CreateBatch();
        batch.BatchCommands.Add(new NpgsqlBatchCommand(
            "create table singers ("
            + "  singer_id   bigint primary key not null,"
            + "  first_name  varchar(1024),"
            + "  last_name   varchar(1024),"
            + "  singer_info bytea,"
            + "  full_name   varchar(2048) generated always as (\n"
            + "      case when first_name is null then last_name\n"
            + "          when last_name  is null then first_name\n"
            + "          else first_name || ' ' || last_name\n"
            + "      end) stored"
            + ")"));
        batch.BatchCommands.Add(new NpgsqlBatchCommand(
            "create table albums ("
            + "  singer_id     bigint not null,"
            + "  album_id      bigint not null,"
            + "  album_title   varchar,"
            + "  primary key (singer_id, album_id)"
            + ") interleave in parent singers on delete cascade"));
        batch.ExecuteNonQuery();
        Console.WriteLine($"Created Singers & Albums tables in database: [{database}]");
    }
}

PHP

function create_tables(string $host, string $port, string $database): void
{
    // Connect to Spanner through PGAdapter using the PostgreSQL PDO driver.
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Create two tables in one batch.
    $connection->exec("start batch ddl");
    $connection->exec("create table singers ("
        . "  singer_id   bigint primary key not null,"
        . "  first_name  character varying(1024),"
        . "  last_name   character varying(1024),"
        . "  singer_info bytea,"
        . "  full_name   character varying(2048) generated "
        . "  always as (first_name || ' ' || last_name) stored"
        . ")");
    $connection->exec("create table albums ("
        . "  singer_id     bigint not null,"
        . "  album_id      bigint not null,"
        . "  album_title   character varying(1024),"
        . "  primary key (singer_id, album_id)"
        . ") interleave in parent singers on delete cascade");
    $connection->exec("run batch");
    print("Created Singers & Albums tables in database: [{$database}]\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./create_tables.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar createtables example-db

Go

go run sample_runner.go createtables example-db

Node.js

npm start createtables example-db

Python

python create_tables.py example-db

C#

dotnet run createtables example-db

PHP

php create_tables.php example-db

The next step is to write data to your database.

Create a connection

Before you can do reads or writes, you must create a connection to PGAdapter. All of your interactions with Spanner must go through a Connection. The database name is specified in the connection string.

psql

#!/bin/bash

# Set the connection variables for psql.
# The following statements use the existing value of the variable if it has
# already been set, and otherwise assigns a default value.
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Connect to Cloud Spanner using psql through PGAdapter
# and execute a simple query.
psql -c "select 'Hello world!' as hello"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

class CreateConnection {
  static void createConnection(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      try (ResultSet resultSet =
          connection.createStatement().executeQuery("select 'Hello world!' as hello")) {
        while (resultSet.next()) {
          System.out.printf(
              "Greeting from Cloud Spanner PostgreSQL: %s\n", resultSet.getString(1));
        }
      }
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func CreateConnection(host string, port int, database string) error {
	ctx := context.Background()
	// Connect to Cloud Spanner using pgx through PGAdapter.
	// 'sslmode=disable' is optional, but adding it reduces the connection time,
	// as pgx will then skip first trying to create an SSL connection.
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	row := conn.QueryRow(ctx, "select 'Hello world!' as hello")
	var msg string
	if err := row.Scan(&msg); err != nil {
		return err
	}
	fmt.Printf("Greeting from Cloud Spanner PostgreSQL: %s\n", msg)
	return nil
}

Node.js

import { Client } from 'pg';

async function createConnection(host: string, port: number, database: string): Promise<void> {
  // Connect to Spanner through PGAdapter.
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  const result = await connection.query("select 'Hello world!' as hello");
  console.log(`Greeting from Cloud Spanner PostgreSQL: ${result.rows[0]['hello']}`);

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def create_connection(host: string, port: int, database: string):
    # Connect to Cloud Spanner using psycopg3 through PGAdapter.
    # 'sslmode=disable' is optional, but adding it reduces the connection time,
    # as psycopg3 will then skip first trying to create an SSL connection.
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("select 'Hello world!' as hello")
            print("Greeting from Cloud Spanner PostgreSQL:", cur.fetchone()[0])

C#

using Npgsql;

namespace dotnet_snippets;

public static class CreateConnectionSample
{
    public static void CreateConnection(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        using var cmd = new NpgsqlCommand("select 'Hello World!' as hello", connection);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            var greeting = reader.GetString(0);
            Console.WriteLine($"Greeting from Cloud Spanner PostgreSQL: {greeting}");
        }
    }
}

PHP

function create_connection(string $host, string $port, string $database): void
{
    // Connect to Spanner through PGAdapter using the PostgreSQL PDO driver.
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Execute a query on Spanner through PGAdapter.
    $statement = $connection->query("select 'Hello world!' as hello");
    $rows = $statement->fetchAll();
    printf("Greeting from Cloud Spanner PostgreSQL: %s\n", $rows[0][0]);

    // Clean up resources.
    $rows = null;
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./create_connection.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar createconnection example-db

Go

go run sample_runner.go createconnection example-db

Node.js

npm start createconnection example-db

Python

python create_connection.py example-db

C#

dotnet run createconnection example-db

PHP

php create_connection.php example-db

Write data with DML

You can insert data using Data Manipulation Language (DML) in a read-write transaction.

These samples show how to execute a DML statement on Spanner using a PostgreSQL driver.

psql

#!/bin/bash

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql -c "INSERT INTO singers (singer_id, first_name, last_name) VALUES
             (12, 'Melissa', 'Garcia'),
             (13, 'Russel', 'Morales'),
             (14, 'Jacqueline', 'Long'),
             (15, 'Dylan', 'Shaw')"
echo "4 records inserted"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;

class WriteDataWithDml {
  static class Singer {
    private final long singerId;
    private final String firstName;
    private final String lastName;

    Singer(final long id, final String first, final String last) {
      this.singerId = id;
      this.firstName = first;
      this.lastName = last;
    }
  }

  static void writeDataWithDml(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Add 4 rows in one statement.
      // JDBC always uses '?' as a parameter placeholder.
      try (PreparedStatement preparedStatement =
          connection.prepareStatement(
              "INSERT INTO singers (singer_id, first_name, last_name) VALUES "
                  + "(?, ?, ?), "
                  + "(?, ?, ?), "
                  + "(?, ?, ?), "
                  + "(?, ?, ?)")) {
        final List<Singer> singers =
            Arrays.asList(
                new Singer(/* SingerId = */ 12L, "Melissa", "Garcia"),
                new Singer(/* SingerId = */ 13L, "Russel", "Morales"),
                new Singer(/* SingerId = */ 14L, "Jacqueline", "Long"),
                new Singer(/* SingerId = */ 15L, "Dylan", "Shaw"));

        // Note that JDBC parameters start at index 1.
        int paramIndex = 0;
        for (Singer singer : singers) {
          preparedStatement.setLong(++paramIndex, singer.singerId);
          preparedStatement.setString(++paramIndex, singer.firstName);
          preparedStatement.setString(++paramIndex, singer.lastName);
        }

        int updateCount = preparedStatement.executeUpdate();
        System.out.printf("%d records inserted.\n", updateCount);
      }
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func WriteDataWithDml(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	tag, err := conn.Exec(ctx,
		"INSERT INTO singers (singer_id, first_name, last_name) "+
			"VALUES ($1, $2, $3), ($4, $5, $6), "+
			"       ($7, $8, $9), ($10, $11, $12)",
		12, "Melissa", "Garcia",
		13, "Russel", "Morales",
		14, "Jacqueline", "Long",
		15, "Dylan", "Shaw")
	if err != nil {
		return err
	}
	fmt.Printf("%v records inserted\n", tag.RowsAffected())
	return nil
}

Node.js

import { Client } from 'pg';

async function writeDataWithDml(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  const result = await connection.query(
      "INSERT INTO singers (singer_id, first_name, last_name) " +
      "VALUES ($1, $2, $3), ($4, $5, $6), " +
      "       ($7, $8, $9), ($10, $11, $12)",
      [12, "Melissa", "Garcia",
       13, "Russel", "Morales",
       14, "Jacqueline", "Long",
       15, "Dylan", "Shaw"])
  console.log(`${result.rowCount} records inserted`);

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def write_data_with_dml(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("INSERT INTO singers (singer_id, first_name, last_name)"
                        " VALUES (%s, %s, %s), (%s, %s, %s), "
                        "        (%s, %s, %s), (%s, %s, %s)",
                        (12, "Melissa", "Garcia",
                         13, "Russel", "Morales",
                         14, "Jacqueline", "Long",
                         15, "Dylan", "Shaw",))
            print("%d records inserted" % cur.rowcount)

C#

using Npgsql;

namespace dotnet_snippets;

public static class WriteDataWithDmlSample
{
    readonly struct Singer
    {
        public Singer(long singerId, string firstName, string lastName)
        {
            SingerId = singerId;
            FirstName = firstName;
            LastName = lastName;
        }

        public long SingerId { get; }
        public string FirstName { get; }
        public string LastName { get; }
    }

    public static void WriteDataWithDml(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Add 4 rows in one statement.
        using var cmd = new NpgsqlCommand(
            "INSERT INTO singers (singer_id, first_name, last_name) VALUES "
            + "($1, $2, $3), "
            + "($4, $5, $6), "
            + "($7, $8, $9), "
            + "($10, $11, $12)", connection);
        List<Singer> singers =
        [
            new Singer(/* SingerId = */ 12L, "Melissa", "Garcia"),
            new Singer(/* SingerId = */ 13L, "Russel", "Morales"),
            new Singer(/* SingerId = */ 14L, "Jacqueline", "Long"),
            new Singer(/* SingerId = */ 15L, "Dylan", "Shaw")
        ];
        foreach (var singer in singers)
        {
            cmd.Parameters.Add(new NpgsqlParameter { Value = singer.SingerId });
            cmd.Parameters.Add(new NpgsqlParameter { Value = singer.FirstName });
            cmd.Parameters.Add(new NpgsqlParameter { Value = singer.LastName });
        }
        var updateCount = cmd.ExecuteNonQuery();
        Console.WriteLine($"{updateCount} records inserted.");
    }
}

PHP

function write_data_with_dml(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    $sql = "INSERT INTO singers (singer_id, first_name, last_name)"
        . " VALUES (?, ?, ?), (?, ?, ?), "
        . "        (?, ?, ?), (?, ?, ?)";
    $statement = $connection->prepare($sql);
    $statement->execute([
        12, "Melissa", "Garcia",
        13, "Russel", "Morales",
        14, "Jacqueline", "Long",
        15, "Dylan", "Shaw"
    ]);
    printf("%d records inserted\n", $statement->rowCount());
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./write_data_with_dml.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar writeusingdml example-db

Go

go run sample_runner.go writeusingdml example-db

Node.js

npm start writeusingdml example-db

Python

python write_data_with_dml.py example-db

C#

dotnet run writeusingdml example-db

PHP

php write_data_with_dml.php example-db

You should see the following response:

4 records inserted.

Note: There are limits to commit size. See CRUD limit for more information.

Write data with a DML batch

PGAdapter supports executing DML batches. Sending multiple DML statements in one batch reduces the number of round-trips to Spanner and improves the performance of your application.

Tip: You can also execute DML batches with the START BATCH DML command, as shown in the sketch below.
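As an illustration of that alternative (this variant isn't part of the sample app, and the singer values are hypothetical), a minimal psql sketch could look like this:

psql << SQL
-- Start buffering DML statements on PGAdapter.
START BATCH DML;
INSERT INTO singers (singer_id, first_name, last_name) VALUES (19, 'Alice', 'Henderson');
INSERT INTO singers (singer_id, first_name, last_name) VALUES (20, 'Bruce', 'Allison');
-- Send the buffered statements to Spanner as a single batch.
RUN BATCH;
SQL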

psql

#!/bin/bash

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Create a prepared insert statement and execute this prepared
# insert statement three times in one SQL string. The single
# SQL string with three insert statements will be executed as
# a single DML batch on Spanner.
psql -c "PREPARE insert_singer AS
           INSERT INTO singers (singer_id, first_name, last_name)
           VALUES (\$1, \$2, \$3)" \
     -c "EXECUTE insert_singer (16, 'Sarah', 'Wilson');
         EXECUTE insert_singer (17, 'Ethan', 'Miller');
         EXECUTE insert_singer (18, 'Maya', 'Patel');"
echo "3 records inserted"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;

class WriteDataWithDmlBatch {
  static class Singer {
    private final long singerId;
    private final String firstName;
    private final String lastName;

    Singer(final long id, final String first, final String last) {
      this.singerId = id;
      this.firstName = first;
      this.lastName = last;
    }
  }

  static void writeDataWithDmlBatch(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Add multiple rows in one DML batch.
      // JDBC always uses '?' as a parameter placeholder.
      try (PreparedStatement preparedStatement =
          connection.prepareStatement(
              "INSERT INTO singers (singer_id, first_name, last_name) VALUES (?, ?, ?)")) {
        final List<Singer> singers =
            Arrays.asList(
                new Singer(/* SingerId = */ 16L, "Sarah", "Wilson"),
                new Singer(/* SingerId = */ 17L, "Ethan", "Miller"),
                new Singer(/* SingerId = */ 18L, "Maya", "Patel"));
        for (Singer singer : singers) {
          // Note that JDBC parameters start at index 1.
          int paramIndex = 0;
          preparedStatement.setLong(++paramIndex, singer.singerId);
          preparedStatement.setString(++paramIndex, singer.firstName);
          preparedStatement.setString(++paramIndex, singer.lastName);
          preparedStatement.addBatch();
        }
        int[] updateCounts = preparedStatement.executeBatch();
        System.out.printf("%d records inserted.\n", Arrays.stream(updateCounts).sum());
      }
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func WriteDataWithDmlBatch(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	sql := "INSERT INTO singers (singer_id, first_name, last_name) " +
		"VALUES ($1, $2, $3)"
	batch := &pgx.Batch{}
	batch.Queue(sql, 16, "Sarah", "Wilson")
	batch.Queue(sql, 17, "Ethan", "Miller")
	batch.Queue(sql, 18, "Maya", "Patel")
	br := conn.SendBatch(ctx, batch)
	_, err = br.Exec()
	if err := br.Close(); err != nil {
		return err
	}
	if err != nil {
		return err
	}
	fmt.Printf("%v records inserted\n", batch.Len())
	return nil
}

Node.js

import { Client } from 'pg';

async function writeDataWithDmlBatch(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // node-postgres does not support PostgreSQL pipeline mode, so we must use the
  // `start batch dml` / `run batch` statements to execute a DML batch.
  const sql = "INSERT INTO singers (singer_id, first_name, last_name) VALUES ($1, $2, $3)";
  await connection.query("start batch dml");
  await connection.query(sql, [16, "Sarah", "Wilson"]);
  await connection.query(sql, [17, "Ethan", "Miller"]);
  await connection.query(sql, [18, "Maya", "Patel"]);
  const result = await connection.query("run batch");

  // RUN BATCH returns the update counts as an array of strings, with one element for each
  // DML statement in the batch. This calculates the total number of affected rows from that array.
  const updateCount = result.rows[0]["UPDATE_COUNTS"]
      .map((s: string) => parseInt(s))
      .reduce((c: number, current: number) => c + current, 0);
  console.log(`${updateCount} records inserted`);

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def write_data_with_dml_batch(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.executemany("INSERT INTO singers "
                            "(singer_id, first_name, last_name) "
                            "VALUES (%s, %s, %s)",
                            [(16, "Sarah", "Wilson",),
                             (17, "Ethan", "Miller",),
                             (18, "Maya", "Patel",), ])
            print("%d records inserted" % cur.rowcount)

C#

using Npgsql;

namespace dotnet_snippets;

public static class WriteDataWithDmlBatchSample
{
    readonly struct Singer
    {
        public Singer(long singerId, string firstName, string lastName)
        {
            SingerId = singerId;
            FirstName = firstName;
            LastName = lastName;
        }

        public long SingerId { get; }
        public string FirstName { get; }
        public string LastName { get; }
    }

    public static void WriteDataWithDmlBatch(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Add multiple rows in one DML batch.
        const string sql = "INSERT INTO singers (singer_id, first_name, last_name) VALUES ($1, $2, $3)";
        List<Singer> singers =
        [
            new Singer(/* SingerId = */ 16L, "Sarah", "Wilson"),
            new Singer(/* SingerId = */ 17L, "Ethan", "Miller"),
            new Singer(/* SingerId = */ 18L, "Maya", "Patel")
        ];
        using var batch = new NpgsqlBatch(connection);
        foreach (var singer in singers)
        {
            batch.BatchCommands.Add(new NpgsqlBatchCommand
            {
                CommandText = sql,
                Parameters =
                {
                    new NpgsqlParameter { Value = singer.SingerId },
                    new NpgsqlParameter { Value = singer.FirstName },
                    new NpgsqlParameter { Value = singer.LastName }
                }
            });
        }
        var updateCount = batch.ExecuteNonQuery();
        Console.WriteLine($"{updateCount} records inserted.");
    }
}

PHP

function write_data_with_dml_batch(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Use START BATCH DML / RUN BATCH to run a batch of DML statements.
    // Create a prepared statement for the DML that should be executed.
    $sql = "INSERT INTO singers (singer_id, first_name, last_name) VALUES (?, ?, ?)";
    $statement = $connection->prepare($sql);

    // Start a DML batch.
    $connection->exec("START BATCH DML");
    $statement->execute([16, "Sarah", "Wilson"]);
    $statement->execute([17, "Ethan", "Miller"]);
    $statement->execute([18, "Maya", "Patel"]);

    // Run the DML batch. Use the 'query(..)' method, as the update counts are returned as a row
    // containing an array with the update count of each statement in the batch.
    $statement = $connection->query("RUN BATCH");
    $result = $statement->fetchAll();
    $update_count = $result[0][0];
    printf("%s records inserted\n", $update_count);
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./write_data_with_dml_batch.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar writeusingdmlbatch example-db

Go

go run sample_runner.go writeusingdmlbatch example-db

Node.js

npm start writeusingdmlbatch example-db

Python

python write_data_with_dml_batch.py example-db

C#

dotnet run writeusingdmlbatch example-db

PHP

php write_data_with_dml_batch.php example-db

You should see:

3 records inserted.

Write data with mutations

You can also insert data using mutations.

PGAdapter translates the PostgreSQL COPY command to mutations. Using COPY is the most efficient way to quickly insert data in your Spanner database.

COPY operations are atomic by default. Atomic operations on Spanner are bound by the commit size limit. See CRUD limit for more information.

These examples show how to execute a non-atomic COPY operation. This lets the COPY operation exceed the commit size limit. The session setting that enables this mode is shown in the sketch below and in the driver samples that follow.
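For example, in psql the mode can be enabled with a session variable before the COPY runs; a minimal sketch, assuming the tab-separated singers_data.txt file from the sample repository (psql runs multiple -c commands in the same session, so the setting should carry over to the COPY):

psql -c "set spanner.autocommit_dml_mode='partitioned_non_atomic'" \
     -c "COPY singers (singer_id, first_name, last_name) FROM STDIN" \
  < singers_data.txt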

Tip: See COPY support for more information on how to copy data directly from PostgreSQL to Spanner. You can also import data directly from MySQL using the COPY command.

psql

#!/bin/bash

# Get the source directory of this script.
directory=${BASH_SOURCE%/*}/

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Copy data to Spanner from a tab-separated text file using the COPY command.
psql -c "COPY singers (singer_id, first_name, last_name) FROM STDIN" \
  < "${directory}singers_data.txt"
psql -c "COPY albums FROM STDIN" \
  < "${directory}albums_data.txt"
echo "Copied singers and albums"

Java

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

class WriteDataWithCopy {
  static void writeDataWithCopy(String host, int port, String database)
      throws SQLException, IOException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Unwrap the PostgreSQL JDBC connection interface to get access to
      // a CopyManager.
      PGConnection pgConnection = connection.unwrap(PGConnection.class);
      CopyManager copyManager = pgConnection.getCopyAPI();

      // Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
      // will succeed even if it exceeds Spanner's mutation limit per transaction.
      connection
          .createStatement()
          .execute("set spanner.autocommit_dml_mode='partitioned_non_atomic'");
      long numSingers =
          copyManager.copyIn(
              "COPY singers (singer_id, first_name, last_name) FROM STDIN",
              WriteDataWithCopy.class.getResourceAsStream("singers_data.txt"));
      System.out.printf("Copied %d singers\n", numSingers);
      long numAlbums =
          copyManager.copyIn(
              "COPY albums FROM STDIN",
              WriteDataWithCopy.class.getResourceAsStream("albums_data.txt"));
      System.out.printf("Copied %d albums\n", numAlbums);
    }
  }
}

Go

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5"
)

func WriteDataWithCopy(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	// Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
	// will succeed even if it exceeds Spanner's mutation limit per transaction.
	conn.Exec(ctx, "set spanner.autocommit_dml_mode='partitioned_non_atomic'")
	file, err := os.Open("samples/singers_data.txt")
	if err != nil {
		return err
	}
	tag, err := conn.PgConn().CopyFrom(ctx, file,
		"copy singers (singer_id, first_name, last_name) from stdin")
	if err != nil {
		return err
	}
	fmt.Printf("Copied %v singers\n", tag.RowsAffected())

	file, err = os.Open("samples/albums_data.txt")
	if err != nil {
		return err
	}
	tag, err = conn.PgConn().CopyFrom(ctx, file, "copy albums from stdin")
	if err != nil {
		return err
	}
	fmt.Printf("Copied %v albums\n", tag.RowsAffected())
	return nil
}

Node.js

import { Client } from 'pg';
import { pipeline } from 'node:stream/promises'
import fs from 'node:fs'
import { from as copyFrom } from 'pg-copy-streams'
import path from "path";

async function writeDataWithCopy(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
  // will succeed even if it exceeds Spanner's mutation limit per transaction.
  await connection.query("set spanner.autocommit_dml_mode='partitioned_non_atomic'");
  // Copy data from a csv file to Spanner using the COPY command.
  // Note that even though the command says 'from stdin', the actual input comes from a file.
  const copySingersStream = copyFrom('copy singers (singer_id, first_name, last_name) from stdin');
  const ingestSingersStream = connection.query(copySingersStream);
  const sourceSingersStream = fs.createReadStream(path.join(__dirname, 'singers_data.txt'));
  await pipeline(sourceSingersStream, ingestSingersStream);
  console.log(`Copied ${copySingersStream.rowCount} singers`);

  const copyAlbumsStream = copyFrom('copy albums from stdin');
  const ingestAlbumsStream = connection.query(copyAlbumsStream);
  const sourceAlbumsStream = fs.createReadStream(path.join(__dirname, 'albums_data.txt'));
  await pipeline(sourceAlbumsStream, ingestAlbumsStream);
  console.log(`Copied ${copyAlbumsStream.rowCount} albums`);

  // Close the connection.
  await connection.end();
}

Python

import os
import string
import psycopg

def write_data_with_copy(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        script_dir = os.path.dirname(os.path.abspath(__file__))
        singers_file_path = os.path.join(script_dir, "singers_data.txt")
        albums_file_path = os.path.join(script_dir, "albums_data.txt")

        conn.autocommit = True
        block_size = 1024
        with conn.cursor() as cur:
            with open(singers_file_path, "r") as f:
                with cur.copy("COPY singers (singer_id, first_name, last_name) "
                              "FROM STDIN") as copy:
                    while data := f.read(block_size):
                        copy.write(data)
            print("Copied %d singers" % cur.rowcount)

            with open(albums_file_path, "r") as f:
                with cur.copy("COPY albums "
                              "FROM STDIN") as copy:
                    while data := f.read(block_size):
                        copy.write(data)
            print("Copied %d albums" % cur.rowcount)

C#

using Npgsql;

namespace dotnet_snippets;

public static class WriteDataWithCopySample
{
    public static void WriteDataWithCopy(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
        // will succeed even if it exceeds Spanner's mutation limit per transaction.
        using var cmd = new NpgsqlCommand(
            "set spanner.autocommit_dml_mode='partitioned_non_atomic'", connection);
        cmd.ExecuteNonQuery();

        var singerCount = 0;
        using var singerReader = new StreamReader("singers_data.txt");
        using (var singerWriter = connection.BeginTextImport(
                   "COPY singers (singer_id, first_name, last_name) FROM STDIN"))
        {
            while (singerReader.ReadLine() is { } line)
            {
                singerWriter.WriteLine(line);
                singerCount++;
            }
        }
        Console.WriteLine($"Copied {singerCount} singers");

        var albumCount = 0;
        using var albumReader = new StreamReader("albums_data.txt");
        using (var albumWriter = connection.BeginTextImport("COPY albums FROM STDIN"))
        {
            while (albumReader.ReadLine() is { } line)
            {
                albumWriter.WriteLine(line);
                albumCount++;
            }
        }
        Console.WriteLine($"Copied {albumCount} albums");
    }
}

PHP

function write_data_with_copy(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    $dir = dirname(__FILE__);
    $connection->pgsqlCopyFromFile("singers",
        sprintf("%s/singers_data.txt", $dir),
        "\t", "\\\\N", "singer_id, first_name, last_name");
    print("Copied 5 singers\n");
    $connection->pgsqlCopyFromFile("albums", sprintf("%s/albums_data.txt", $dir));
    print("Copied 5 albums\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./write_data_with_copy.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar write example-db

Go

go run sample_runner.go write example-db

Node.js

npm start write example-db

Python

python write_data_with_copy.py example-db

C#

dotnet run write example-db

PHP

php write_data_with_copy.php example-db

You should see:

Copied 5 singers
Copied 5 albums

Note: There are limits to commit size. See CRUD limit for more information.

Query data using SQL

Spanner supports a SQL interface for reading data, which you can access on the command line using the Google Cloud CLI or programmatically using a PostgreSQL driver.

On the command line

Execute the following SQL statement to read the values of all columns from the Albums table:

gcloud spanner databases execute-sql example-db --instance=test-instance \
    --sql='SELECT singer_id, album_id, album_title FROM albums'
Note: For the GoogleSQL reference, see Query syntax in GoogleSQL, and for the PostgreSQL reference, see PostgreSQL lexical structure and syntax.

The result shows:

SingerId  AlbumId  AlbumTitle
1         1        Total Junk
1         2        Go, Go, Go
2         1        Green
2         2        Forever Hold Your Peace
2         3        Terrified

Use a PostgreSQL driver

In addition to executing a SQL statement on the command line, you can issue the same SQL statement programmatically using a PostgreSQL driver.

psql

#!/bin/bash

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql -c "SELECT singer_id, album_id, album_title
         FROM albums"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

class QueryData {
  static void queryData(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      try (ResultSet resultSet =
          connection
              .createStatement()
              .executeQuery("SELECT singer_id, album_id, album_title FROM albums")) {
        while (resultSet.next()) {
          System.out.printf(
              "%d %d %s\n",
              resultSet.getLong("singer_id"),
              resultSet.getLong("album_id"),
              resultSet.getString("album_title"));
        }
      }
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func QueryData(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	rows, err := conn.Query(ctx,
		"SELECT singer_id, album_id, album_title "+
			"FROM albums")
	defer rows.Close()
	if err != nil {
		return err
	}
	for rows.Next() {
		var singerId, albumId int64
		var title string
		err = rows.Scan(&singerId, &albumId, &title)
		if err != nil {
			return err
		}
		fmt.Printf("%v %v %v\n", singerId, albumId, title)
	}
	return rows.Err()
}

Node.js

import { Client } from 'pg';

async function queryData(host: string, port: number, database: string): Promise<void> {
  // Connect to Spanner through PGAdapter.
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  const result = await connection.query(
      "SELECT singer_id, album_id, album_title " +
      "FROM albums");
  for (const row of result.rows) {
    console.log(`${row["singer_id"]} ${row["album_id"]} ${row["album_title"]}`);
  }

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def query_data(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("SELECT singer_id, album_id, album_title "
                        "FROM albums")
            for album in cur:
                print(album)

C#

using Npgsql;

namespace dotnet_snippets;

public static class QueryDataSample
{
    public static void QueryData(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        using var cmd = new NpgsqlCommand(
            "SELECT singer_id, album_id, album_title FROM albums", connection);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine(
                $"{reader.GetInt64(0)} {reader.GetInt64(1)} {reader.GetString(2)}");
        }
    }
}

PHP

function query_data(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    $statement = $connection->query("SELECT singer_id, album_id, album_title "
        . "FROM albums "
        . "ORDER BY singer_id, album_id");
    $rows = $statement->fetchAll();
    foreach ($rows as $album) {
        printf("%s\t%s\t%s\n",
            $album["singer_id"], $album["album_id"], $album["album_title"]);
    }
    $rows = null;
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./query_data.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar query example-db

Go

go run sample_runner.go query example-db

Node.js

npm start query example-db

Python

python query_data.py example-db

C#

dotnet run query example-db

PHP

php query_data.php example-db

You should see the following result:

1 1 Total Junk
1 2 Go, Go, Go
2 1 Green
2 2 Forever Hold Your Peace
2 3 Terrified

Query using a SQL parameter

If your application has a frequently executed query, you can improve its performance by parameterizing it. The resulting parametric query can be cached and reused, which reduces compilation costs. For more information, see Use query parameters to speed up frequently executed queries.

Here is an example of using a parameter in the WHERE clause to query records containing a specific value for last_name.

psql

#!/bin/bash

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Create a prepared statement to use a query parameter.
# Using a prepared statement for executing the same SQL string multiple
# times increases the execution speed of the statement.
psql -c "PREPARE select_singer AS
         SELECT singer_id, first_name, last_name
         FROM singers
         WHERE last_name = \$1" \
     -c "EXECUTE select_singer ('Garcia')"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class QueryDataWithParameter {
  static void queryDataWithParameter(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      try (PreparedStatement statement =
          connection.prepareStatement(
              "SELECT singer_id, first_name, last_name "
                  + "FROM singers "
                  + "WHERE last_name = ?")) {
        statement.setString(1, "Garcia");
        try (ResultSet resultSet = statement.executeQuery()) {
          while (resultSet.next()) {
            System.out.printf(
                "%d %s %s\n",
                resultSet.getLong("singer_id"),
                resultSet.getString("first_name"),
                resultSet.getString("last_name"));
          }
        }
      }
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func QueryDataWithParameter(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	rows, err := conn.Query(ctx,
		"SELECT singer_id, first_name, last_name "+
			"FROM singers "+
			"WHERE last_name = $1", "Garcia")
	defer rows.Close()
	if err != nil {
		return err
	}
	for rows.Next() {
		var singerId int64
		var firstName, lastName string
		err = rows.Scan(&singerId, &firstName, &lastName)
		if err != nil {
			return err
		}
		fmt.Printf("%v %v %v\n", singerId, firstName, lastName)
	}
	return rows.Err()
}

Node.js

import { Client } from 'pg';

async function queryWithParameter(host: string, port: number, database: string): Promise<void> {
  // Connect to Spanner through PGAdapter.
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  const result = await connection.query(
      "SELECT singer_id, first_name, last_name " +
      "FROM singers " +
      "WHERE last_name = $1", ["Garcia"]);
  for (const row of result.rows) {
    console.log(`${row["singer_id"]} ${row["first_name"]} ${row["last_name"]}`);
  }

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def query_data_with_parameter(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("SELECT singer_id, first_name, last_name "
                        "FROM singers "
                        "WHERE last_name = %s", ("Garcia",))
            for singer in cur:
                print(singer)

C#

using Npgsql;

namespace dotnet_snippets;

public static class QueryDataWithParameterSample
{
    public static void QueryDataWithParameter(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        using var cmd = new NpgsqlCommand(
            "SELECT singer_id, first_name, last_name "
            + "FROM singers "
            + "WHERE last_name = $1", connection);
        cmd.Parameters.Add(new NpgsqlParameter { Value = "Garcia" });
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine(
                $"{reader["singer_id"]} {reader["first_name"]} {reader["last_name"]}");
        }
    }
}

PHP

function query_data_with_parameter(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    $statement = $connection->prepare(
        "SELECT singer_id, first_name, last_name "
        . "FROM singers "
        . "WHERE last_name = ?");
    $statement->execute(["Garcia"]);
    $rows = $statement->fetchAll();
    foreach ($rows as $singer) {
        printf("%s\t%s\t%s\n",
            $singer["singer_id"], $singer["first_name"], $singer["last_name"]);
    }
    $rows = null;
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./query_data_with_parameter.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar querywithparameter example-db

Go

go run sample_runner.go querywithparameter example-db

Node.js

npm start querywithparameter example-db

Python

python query_data_with_parameter.py example-db

C#

dotnet run querywithparameter example-db

PHP

php query_data_with_parameter.php example-db

You should see the following result:

12 Melissa Garcia

Update the database schema

Assume you need to add a new column called MarketingBudget to the Albums table. Adding a new column to an existing table requires an update to your database schema. Spanner supports schema updates to a database while the database continues to serve traffic. Schema updates don't require taking the database offline, and they don't lock entire tables or columns; you can continue writing data to the database during the schema update. Read more about supported schema updates and schema change performance in Make schema updates.

Add a column

You can add a column on the command line using the Google Cloud CLI or programmatically using a PostgreSQL driver.

On the command line

Use the following ALTER TABLE command to add the new column to the table:

gcloud spanner databases ddl update example-db --instance=test-instance \
    --ddl='ALTER TABLE albums ADD COLUMN marketing_budget BIGINT'

You should see:

Schema updating...done.

Use a PostgreSQL driver

Execute the DDL statement using a PostgreSQL driver to modify the schema:

psql

#!/bin/bash

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql -c "ALTER TABLE albums ADD COLUMN marketing_budget bigint"
echo "Added marketing_budget column"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class AddColumn {
  static void addColumn(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      connection
          .createStatement()
          .execute("alter table albums add column marketing_budget bigint");
      System.out.println("Added marketing_budget column");
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func AddColumn(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	_, err = conn.Exec(ctx,
		"ALTER TABLE albums "+
			"ADD COLUMN marketing_budget bigint")
	if err != nil {
		return err
	}
	fmt.Println("Added marketing_budget column")
	return nil
}

Node.js

import { Client } from 'pg';

async function addColumn(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  await connection.query("ALTER TABLE albums " +
      "ADD COLUMN marketing_budget bigint");
  console.log("Added marketing_budget column");

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def add_column(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host, port=port,
                                                  database=database)) as conn:
        # DDL can only be executed when autocommit=True.
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("ALTER TABLE albums "
                        "ADD COLUMN marketing_budget bigint")
            print("Added marketing_budget column")

C#

using Npgsql;

namespace dotnet_snippets;

public static class AddColumnSample
{
    public static void AddColumn(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        using var cmd = connection.CreateCommand();
        cmd.CommandText = "alter table albums add column marketing_budget bigint";
        cmd.ExecuteNonQuery();
        Console.WriteLine("Added marketing_budget column");
    }
}

PHP

function add_column(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);
    $connection->exec("ALTER TABLE albums ADD COLUMN marketing_budget bigint");
    print("Added marketing_budget column\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./add_column.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar addmarketingbudget example-db

Go

go run sample_runner.go addmarketingbudget example-db

Node.js

npm start addmarketingbudget example-db

Python

python add_column.py example-db

C#

dotnet run addmarketingbudget example-db

PHP

php add_column.php example-db

You should see:

Added marketing_budget column

Execute a DDL batch

We recommend executing multiple schema modifications in one batch. You can execute multiple DDL statements in one batch by using the built-in batching feature of your PostgreSQL driver, by submitting all the DDL statements as one SQL string separated by semicolons, or by using the START BATCH DDL and RUN BATCH statements.

psql

#!/bin/bash

export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Use a single SQL command to batch multiple statements together.
# Executing multiple DDL statements as one batch is more efficient
# than executing each statement individually.
# Separate the statements with semicolons.
psql << SQL
CREATE TABLE venues (
  venue_id    bigint not null primary key,
  name        varchar(1024),
  description jsonb
);
CREATE TABLE concerts (
  concert_id bigint not null primary key,
  venue_id   bigint not null,
  singer_id  bigint not null,
  start_time timestamptz,
  end_time   timestamptz,
  constraint fk_concerts_venues foreign key
    (venue_id) references venues (venue_id),
  constraint fk_concerts_singers foreign key
    (singer_id) references singers (singer_id)
);
SQL

echo "Added venues and concerts tables"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

class DdlBatch {
  static void ddlBatch(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      try (Statement statement = connection.createStatement()) {
        // Create two new tables in one batch.
        statement.addBatch(
            "CREATE TABLE venues ("
                + "  venue_id    bigint not null primary key,"
                + "  name        varchar(1024),"
                + "  description jsonb"
                + ")");
        statement.addBatch(
            "CREATE TABLE concerts ("
                + "  concert_id bigint not null primary key ,"
                + "  venue_id   bigint not null,"
                + "  singer_id  bigint not null,"
                + "  start_time timestamptz,"
                + "  end_time   timestamptz,"
                + "  constraint fk_concerts_venues foreign key"
                + "    (venue_id) references venues (venue_id),"
                + "  constraint fk_concerts_singers foreign key"
                + "    (singer_id) references singers (singer_id)"
                + ")");
        statement.executeBatch();
      }
      System.out.println("Added venues and concerts tables");
    }
  }
}

Go

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func DdlBatch(host string, port int, database string) error {
	ctx := context.Background()
	connString := fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",
		host, port, database)
	conn, err := pgx.Connect(ctx, connString)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	// Executing multiple DDL statements as one batch is
	// more efficient than executing each statement
	// individually.
	br := conn.SendBatch(ctx, &pgx.Batch{QueuedQueries: []*pgx.QueuedQuery{
		{SQL: "CREATE TABLE venues (" +
			"  venue_id    bigint not null primary key," +
			"  name        varchar(1024)," +
			"  description jsonb" +
			")"},
		{SQL: "CREATE TABLE concerts (" +
			"  concert_id bigint not null primary key ," +
			"  venue_id   bigint not null," +
			"  singer_id  bigint not null," +
			"  start_time timestamptz," +
			"  end_time   timestamptz," +
			"  constraint fk_concerts_venues foreign key" +
			"    (venue_id) references venues (venue_id)," +
			"  constraint fk_concerts_singers foreign key" +
			"    (singer_id) references singers (singer_id)" +
			")"},
	}})
	if _, err := br.Exec(); err != nil {
		return err
	}
	if err := br.Close(); err != nil {
		return err
	}
	fmt.Println("Added venues and concerts tables")
	return nil
}

Node.js

import { Client } from 'pg';

async function ddlBatch(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Executing multiple DDL statements as one batch is
  // more efficient than executing each statement
  // individually.
  await connection.query("start batch ddl");
  await connection.query("CREATE TABLE venues (" +
      "  venue_id    bigint not null primary key," +
      "  name        varchar(1024)," +
      "  description jsonb" +
      ")");
  await connection.query("CREATE TABLE concerts (" +
      "  concert_id bigint not null primary key ," +
      "  venue_id   bigint not null," +
      "  singer_id  bigint not null," +
      "  start_time timestamptz," +
      "  end_time   timestamptz," +
      "  constraint fk_concerts_venues foreign key" +
      "    (venue_id) references venues (venue_id)," +
      "  constraint fk_concerts_singers foreign key" +
      "    (singer_id) references singers (singer_id)" +
      ")");
  await connection.query("run batch");
  console.log("Added venues and concerts tables");

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def ddl_batch(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host,
                                                  port=port,
                                                  database=database)) as conn:
        # DDL can only be executed when autocommit=True.
        conn.autocommit = True
        # Use a pipeline to batch multiple statements together.
        # Executing multiple DDL statements as one batch is
        # more efficient than executing each statement
        # individually.
        with conn.pipeline():
            # The following statements are buffered on PGAdapter
            # until the pipeline ends.
            conn.execute("CREATE TABLE venues ("
                         "  venue_id    bigint not null primary key,"
                         "  name        varchar(1024),"
                         "  description jsonb"
                         ")")
            conn.execute("CREATE TABLE concerts ("
                         "  concert_id bigint not null primary key,"
                         "  venue_id   bigint not null,"
                         "  singer_id  bigint not null,"
                         "  start_time timestamptz,"
                         "  end_time   timestamptz,"
                         "  constraint fk_concerts_venues foreign key"
                         "    (venue_id) references venues (venue_id),"
                         "  constraint fk_concerts_singers foreign key"
                         "    (singer_id) references singers (singer_id)"
                         ")")
    print("Added venues and concerts tables")

C#

using Npgsql;

namespace dotnet_snippets;

public static class DdlBatchSample
{
    public static void DdlBatch(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Create two new tables in one batch.
        var batch = connection.CreateBatch();
        batch.BatchCommands.Add(new NpgsqlBatchCommand(
            "CREATE TABLE venues ("
            + "  venue_id    bigint not null primary key,"
            + "  name        varchar(1024),"
            + "  description jsonb"
            + ")"));
        batch.BatchCommands.Add(new NpgsqlBatchCommand(
            "CREATE TABLE concerts ("
            + "  concert_id bigint not null primary key,"
            + "  venue_id   bigint not null,"
            + "  singer_id  bigint not null,"
            + "  start_time timestamptz,"
            + "  end_time   timestamptz,"
            + "  constraint fk_concerts_venues foreign key"
            + "    (venue_id) references venues (venue_id),"
            + "  constraint fk_concerts_singers foreign key"
            + "    (singer_id) references singers (singer_id)"
            + ")"));
        batch.ExecuteNonQuery();
        Console.WriteLine("Added venues and concerts tables");
    }
}

PHP

function ddl_batch(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Executing multiple DDL statements as one batch is
    // more efficient than executing each statement
    // individually.
    $connection->exec("start batch ddl");
    $connection->exec("CREATE TABLE venues ("
        . "  venue_id    bigint not null primary key,"
        . "  name        varchar(1024),"
        . "  description jsonb"
        . ")");
    $connection->exec("CREATE TABLE concerts ("
        . "  concert_id bigint not null primary key,"
        . "  venue_id   bigint not null,"
        . "  singer_id  bigint not null,"
        . "  start_time timestamptz,"
        . "  end_time   timestamptz,"
        . "  constraint fk_concerts_venues foreign key"
        . "    (venue_id) references venues (venue_id),"
        . "  constraint fk_concerts_singers foreign key"
        . "    (singer_id) references singers (singer_id)"
        . ")");
    $connection->exec("run batch");
    print("Added venues and concerts tables\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./ddl_batch.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar ddlbatch example-db

Go

go run sample_runner.go ddlbatch example-db

Node.js

npm start ddlbatch example-db

Python

python ddl_batch.py example-db

C#

dotnet run ddlbatch example-db

PHP

php ddl_batch.php example-db

You should see:

Added venues and concerts tables

Write data to the new column

The following code writes data to the new column. It sets marketing_budget to 100000 for the row keyed by albums (1, 1) and to 500000 for the row keyed by albums (2, 2).

PGAdapter translates the PostgreSQL COPY command to mutations. COPY commands are by default translated to Insert mutations. Execute set spanner.copy_upsert=true to translate COPY commands to InsertOrUpdate mutations. This can be used to update existing data in Spanner.
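As an illustration, the contrast between the two modes looks roughly like the following minimal sketch, assuming the albums table from the earlier steps already contains the rows being copied:

# Default mode: COPY is translated to Insert mutations, so copying a row
# that already exists fails with an "already exists" error.
psql -c "COPY albums (singer_id, album_id, marketing_budget) FROM STDIN
         WITH (DELIMITER ';')" <<DATA
1;1;100000
DATA

# With copy_upsert enabled, the same COPY is translated to InsertOrUpdate
# mutations and overwrites the existing row instead of failing.
psql -c "set spanner.copy_upsert=true" \
     -c "COPY albums (singer_id, album_id, marketing_budget) FROM STDIN
         WITH (DELIMITER ';')" <<DATA
1;1;100000
DATA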

Tip: For a full list of commands that can be used to access Spanner features with PGAdapter, see PGAdapter session management commands.

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Instruct PGAdapter to use insert-or-update for COPY statements.
# This enables us to use COPY to update data.
psql -c "set spanner.copy_upsert=true" \
     -c "COPY albums (singer_id, album_id, marketing_budget) FROM STDIN
         WITH (DELIMITER ';')" \
<<DATA
1;1;100000
2;2;500000
DATA

echo "Copied albums using upsert"

Java

import java.io.IOException;
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

class UpdateDataWithCopy {
  static void updateDataWithCopy(String host, int port, String database)
      throws SQLException, IOException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Unwrap the PostgreSQL JDBC connection interface to get access to
      // a CopyManager.
      PGConnection pgConnection = connection.unwrap(PGConnection.class);
      CopyManager copyManager = pgConnection.getCopyAPI();

      // Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
      // will succeed even if it exceeds Spanner's mutation limit per transaction.
      connection
          .createStatement()
          .execute("set spanner.autocommit_dml_mode='partitioned_non_atomic'");

      // Instruct PGAdapter to use insert-or-update for COPY statements.
      // This enables us to use COPY to update existing data.
      connection.createStatement().execute("set spanner.copy_upsert=true");

      // COPY uses mutations to insert or update existing data in Spanner.
      long numAlbums =
          copyManager.copyIn(
              "COPY albums (singer_id, album_id, marketing_budget) FROM STDIN",
              new StringReader("1\t1\t100000\n" + "2\t2\t500000\n"));
      System.out.printf("Updated %d albums\n", numAlbums);
    }
  }
}

Go

import("context""fmt""io""github.com/jackc/pgx/v5")funcUpdateDataWithCopy(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)//Enablenon-atomicmode.ThismakestheCOPYoperationnon-atomic,//andallowsittoexceedtheSpannermutationlimit.if_,err:=conn.Exec(ctx,"set spanner.autocommit_dml_mode='partitioned_non_atomic'");err!=nil{returnerr}//InstructPGAdaptertouseinsert-or-updateforCOPYstatements.//ThisenablesustouseCOPYtoupdatedata.if_,err:=conn.Exec(ctx,"set spanner.copy_upsert=true");err!=nil{returnerr}//Createapipethatcanbeusedtowritethedatamanuallythatwewanttocopy.reader,writer:=io.Pipe()//Writethedatatothepipeusingaseparategoroutine.Thisallowsustostreamthedata//totheCOPYoperationrow-by-row.gofunc()error{for_,record:=range[]string{"1\t1\t100000\n","2\t2\t500000\n"}{if_,err:=writer.Write([]byte(record));err!=nil{returnerr}}iferr:=writer.Close();err!=nil{returnerr}returnnil}()tag,err:=conn.PgConn().CopyFrom(ctx,reader,"COPY albums (singer_id, album_id, marketing_budget) FROM STDIN")iferr!=nil{returnerr}fmt.Printf("Updated %v albums\n",tag.RowsAffected())returnnil}

Node.js

import { Client } from 'pg';
import { pipeline } from 'node:stream/promises';
import { from as copyFrom } from 'pg-copy-streams';
import { Readable } from "stream";

async function updateDataWithCopy(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
  // will succeed even if it exceeds Spanner's mutation limit per transaction.
  await connection.query("set spanner.autocommit_dml_mode='partitioned_non_atomic'");
  // Instruct PGAdapter to use insert-or-update for COPY statements.
  // This enables us to use COPY to update existing data.
  await connection.query("set spanner.copy_upsert=true");

  // Copy data to Spanner using the COPY command.
  const copyStream = copyFrom('COPY albums (singer_id, album_id, marketing_budget) FROM STDIN');
  const ingestStream = connection.query(copyStream);
  // Create a source stream and attach the source to the destination.
  const sourceStream = new Readable();
  const operation = pipeline(sourceStream, ingestStream);
  // Manually push data to the source stream to write data to Spanner.
  sourceStream.push("1\t1\t100000\n");
  sourceStream.push("2\t2\t500000\n");
  // Push a 'null' to indicate the end of the stream.
  sourceStream.push(null);
  // Wait for the copy operation to finish.
  await operation;
  console.log(`Updated ${copyStream.rowCount} albums`);

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def update_data_with_copy(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host,
                                                  port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            # Instruct PGAdapter to use insert-or-update for COPY statements.
            # This enables us to use COPY to update data.
            cur.execute("set spanner.copy_upsert=true")

            # COPY uses mutations to insert or update existing data in Spanner.
            with cur.copy("COPY albums (singer_id, album_id, marketing_budget) "
                          "FROM STDIN") as copy:
                copy.write_row((1, 1, 100000))
                copy.write_row((2, 2, 500000))
            print("Updated %d albums" % cur.rowcount)

C#

using Npgsql;

namespace dotnet_snippets;

public static class UpdateDataWithCopySample
{
    public static void UpdateDataWithCopy(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Enable 'partitioned_non_atomic' mode. This ensures that the COPY operation
        // will succeed even if it exceeds Spanner's mutation limit per transaction.
        using var cmd = connection.CreateCommand();
        cmd.CommandText = "set spanner.autocommit_dml_mode='partitioned_non_atomic'";
        cmd.ExecuteNonQuery();
        // Instruct PGAdapter to use insert-or-update for COPY statements.
        // This enables us to use COPY to update existing data.
        cmd.CommandText = "set spanner.copy_upsert=true";
        cmd.ExecuteNonQuery();

        // COPY uses mutations to insert or update existing data in Spanner.
        using (var albumWriter = connection.BeginTextImport(
                   "COPY albums (singer_id, album_id, marketing_budget) FROM STDIN"))
        {
            albumWriter.WriteLine("1\t1\t100000");
            albumWriter.WriteLine("2\t2\t500000");
        }
        Console.WriteLine($"Updated 2 albums");
    }
}

PHP

function update_data_with_copy(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Instruct PGAdapter to use insert-or-update for COPY statements.
    // This enables us to use COPY to update data.
    $connection->exec("set spanner.copy_upsert=true");

    // COPY uses mutations to insert or update existing data in Spanner.
    $connection->pgsqlCopyFromArray(
        "albums",
        ["1\t1\t100000", "2\t2\t500000"],
        "\t",
        "\\\\N",
        "singer_id, album_id, marketing_budget",
    );
    print("Updated 2 albums\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./update_data_with_copy.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar update example-db

Go

go run sample_runner.go update example-db

Node.js

npm start update example-db

Python

python update_data_with_copy.py example-db

C#

dotnet run update example-db

PHP

php update_data_with_copy.php example-db

You should see:

Updated 2 albums

You can also execute a SQL query to fetch the values that you just wrote.

Here's the code to execute the query:

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql -c "SELECT singer_id, album_id, marketing_budget
         FROM albums
         ORDER BY singer_id, album_id"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

class QueryDataWithNewColumn {
  static void queryDataWithNewColumn(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      try (ResultSet resultSet =
          connection
              .createStatement()
              .executeQuery(
                  "SELECT singer_id, album_id, marketing_budget "
                      + "FROM albums "
                      + "ORDER BY singer_id, album_id")) {
        while (resultSet.next()) {
          System.out.printf(
              "%d %d %s\n",
              resultSet.getLong("singer_id"),
              resultSet.getLong("album_id"),
              resultSet.getString("marketing_budget"));
        }
      }
    }
  }
}

Go

import("context""database/sql""fmt""github.com/jackc/pgx/v5")funcQueryDataWithNewColumn(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)rows,err:=conn.Query(ctx,"SELECT singer_id, album_id, marketing_budget "+"FROM albums "+"ORDER BY singer_id, album_id")deferrows.Close()iferr!=nil{returnerr}forrows.Next(){varsingerId,albumIdint64varmarketingBudgetsql.NullStringerr=rows.Scan(&singerId, &albumId, &marketingBudget)iferr!=nil{returnerr}varbudgetstringifmarketingBudget.Valid{budget=marketingBudget.String}else{budget="NULL"}fmt.Printf("%v %v %v\n",singerId,albumId,budget)}returnrows.Err()}

Node.js

import { Client } from 'pg';

async function queryDataWithNewColumn(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  const result = await connection.query(
      "SELECT singer_id, album_id, marketing_budget "
      + "FROM albums "
      + "ORDER BY singer_id, album_id");
  for (const row of result.rows) {
    console.log(`${row["singer_id"]} ${row["album_id"]} ${row["marketing_budget"]}`);
  }

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def query_data_with_new_column(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host,
                                                  port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute("SELECT singer_id, album_id, marketing_budget "
                        "FROM albums "
                        "ORDER BY singer_id, album_id")
            for album in cur:
                print(album)

C#

using Npgsql;

namespace dotnet_snippets;

public static class QueryDataWithNewColumnSample
{
    public static void QueryWithNewColumnData(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        using var cmd = new NpgsqlCommand(
            "SELECT singer_id, album_id, marketing_budget "
            + "FROM albums "
            + "ORDER BY singer_id, album_id", connection);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader["singer_id"]} {reader["album_id"]} {reader["marketing_budget"]}");
        }
    }
}

PHP

function query_data_with_new_column(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    $statement = $connection->query(
        "SELECT singer_id, album_id, marketing_budget "
        . "FROM albums "
        . "ORDER BY singer_id, album_id");
    $rows = $statement->fetchAll();
    foreach ($rows as $album) {
        printf("%s\t%s\t%s\n",
            $album["singer_id"], $album["album_id"], $album["marketing_budget"]);
    }
    $rows = null;
    $statement = null;
    $connection = null;
}

Run the query with this command:

psql

PGDATABASE=example-db ./query_data_with_new_column.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar querymarketingbudget example-db

Go

go run sample_runner.go querymarketingbudget example-db

Node.js

npm start querymarketingbudget example-db

Python

python query_data_with_new_column.py example-db

C#

dotnet run querymarketingbudget example-db

PHP

php query_data_with_new_column.php example-db

You should see:

1 1 100000
1 2 null
2 1 null
2 2 500000
2 3 null

Update data

You can update data using DML in a read-write transaction.

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql <<SQL
-- Transfer marketing budget from one album to another.
-- We do it in a transaction to ensure that the transfer is atomic.
-- Begin a read/write transaction.
begin;

-- Increase the marketing budget of album 1 if album 2 has enough budget.
-- The condition that album 2 has enough budget is guaranteed for the
-- duration of the transaction, as read/write transactions in Spanner use
-- external consistency as the default isolation level.
update albums set marketing_budget = marketing_budget + 200000
where singer_id = 1
  and album_id  = 1
  and exists (
    select album_id
    from albums
    where singer_id = 2
      and album_id  = 2
      and marketing_budget > 200000
  );

-- Decrease the marketing budget of album 2.
update albums set marketing_budget = marketing_budget - 200000
where singer_id = 2
  and album_id  = 2
  and marketing_budget > 200000;

-- Commit the transaction to make the changes to both marketing budgets
-- durably stored in the database and visible to other transactions.
commit;
SQL

echo "Transferred marketing budget from Album 2 to Album 1"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class UpdateDataWithTransaction {
  static void writeWithTransactionUsingDml(String host, int port, String database)
      throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Set AutoCommit=false to enable transactions.
      connection.setAutoCommit(false);

      // Transfer marketing budget from one album to another. We do it in a
      // transaction to ensure that the transfer is atomic. There is no need
      // to explicitly start the transaction. The first statement on the
      // connection will start a transaction when AutoCommit=false.
      String selectMarketingBudgetSql =
          "SELECT marketing_budget from albums WHERE singer_id = ? and album_id = ?";
      long album2Budget = 0;
      try (PreparedStatement selectMarketingBudgetStatement =
          connection.prepareStatement(selectMarketingBudgetSql)) {
        // Bind the query parameters to SingerId=2 and AlbumId=2.
        selectMarketingBudgetStatement.setLong(1, 2);
        selectMarketingBudgetStatement.setLong(2, 2);
        try (ResultSet resultSet = selectMarketingBudgetStatement.executeQuery()) {
          while (resultSet.next()) {
            album2Budget = resultSet.getLong("marketing_budget");
          }
        }
        // The transaction will only be committed if this condition still holds
        // at the time of commit. Otherwise, the transaction will be aborted.
        final long transfer = 200000;
        if (album2Budget >= transfer) {
          long album1Budget = 0;
          // Re-use the existing PreparedStatement for selecting the
          // marketing_budget to get the budget for Album 1.
          // Bind the query parameters to SingerId=1 and AlbumId=1.
          selectMarketingBudgetStatement.setLong(1, 1);
          selectMarketingBudgetStatement.setLong(2, 1);
          try (ResultSet resultSet = selectMarketingBudgetStatement.executeQuery()) {
            while (resultSet.next()) {
              album1Budget = resultSet.getLong("marketing_budget");
            }
          }
          // Transfer part of the marketing budget of Album 2 to Album 1.
          album1Budget += transfer;
          album2Budget -= transfer;
          String updateSql =
              "UPDATE albums "
                  + "SET marketing_budget = ? "
                  + "WHERE singer_id = ? and album_id = ?";
          try (PreparedStatement updateStatement = connection.prepareStatement(updateSql)) {
            // Update Album 1.
            int paramIndex = 0;
            updateStatement.setLong(++paramIndex, album1Budget);
            updateStatement.setLong(++paramIndex, 1);
            updateStatement.setLong(++paramIndex, 1);
            // Create a DML batch by calling addBatch
            // on the current PreparedStatement.
            updateStatement.addBatch();
            // Update Album 2 in the same DML batch.
            paramIndex = 0;
            updateStatement.setLong(++paramIndex, album2Budget);
            updateStatement.setLong(++paramIndex, 2);
            updateStatement.setLong(++paramIndex, 2);
            updateStatement.addBatch();
            // Execute both DML statements in one batch.
            updateStatement.executeBatch();
          }
        }
      }
      // Commit the current transaction.
      connection.commit();
      System.out.println("Transferred marketing budget from Album 2 to Album 1");
    }
  }
}

Go

import("context""fmt""github.com/jackc/pgx/v5")funcWriteWithTransactionUsingDml(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)//Transfermarketingbudgetfromonealbumtoanother.Wedoitina//transactiontoensurethatthetransferisatomic.tx,err:=conn.Begin(ctx)iferr!=nil{returnerr}constselectSql="SELECT marketing_budget "+"from albums "+"WHERE singer_id = $1 and album_id = $2"//Getthemarketing_budgetofsinger2/album2.row:=tx.QueryRow(ctx,selectSql,2,2)varbudget2int64iferr:=row.Scan(&budget2);err!=nil{tx.Rollback(ctx)returnerr}consttransfer=20000//Thetransactionwillonlybecommittedifthisconditionstillholds//atthetimeofcommit.Otherwise,thetransactionwillbeaborted.ifbudget2 >=transfer{//Getthemarketing_budgetofsinger1/album1.row:=tx.QueryRow(ctx,selectSql,1,1)varbudget1int64iferr:=row.Scan(&budget1);err!=nil{tx.Rollback(ctx)returnerr}//TransferpartofthemarketingbudgetofAlbum2toAlbum1.budget1+=transferbudget2-=transferconstupdateSql="UPDATE albums "+"SET marketing_budget = $1 "+"WHERE singer_id = $2 and album_id = $3"//StartaDMLbatchandexecuteitaspartofthecurrenttransaction.batch:= &pgx.Batch{}batch.Queue(updateSql,budget1,1,1)batch.Queue(updateSql,budget2,2,2)br:=tx.SendBatch(ctx,batch)_,err=br.Exec()iferr:=br.Close();err!=nil{tx.Rollback(ctx)returnerr}}//Committhecurrenttransaction.tx.Commit(ctx)fmt.Println("Transferred marketing budget from Album 2 to Album 1")returnnil}

Node.js

import { Client } from 'pg';

async function writeWithTransactionUsingDml(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Transfer marketing budget from one album to another. We do it in a
  // transaction to ensure that the transfer is atomic. node-postgres
  // requires you to explicitly start the transaction by executing 'begin'.
  await connection.query("begin");
  const selectMarketingBudgetSql = "SELECT marketing_budget "
      + "from albums "
      + "WHERE singer_id = $1 and album_id = $2";
  // Get the marketing_budget of singer 2 / album 2.
  const album2BudgetResult = await connection.query(selectMarketingBudgetSql, [2, 2]);
  let album2Budget = album2BudgetResult.rows[0]["marketing_budget"];
  const transfer = 200000;
  // The transaction will only be committed if this condition still holds
  // at the time of commit. Otherwise, the transaction will be aborted.
  if (album2Budget >= transfer) {
    // Get the marketing budget of singer 1 / album 1.
    const album1BudgetResult = await connection.query(selectMarketingBudgetSql, [1, 1]);
    let album1Budget = album1BudgetResult.rows[0]["marketing_budget"];
    // Transfer part of the marketing budget of Album 2 to Album 1.
    album1Budget += transfer;
    album2Budget -= transfer;
    const updateSql = "UPDATE albums "
        + "SET marketing_budget = $1 "
        + "WHERE singer_id = $2 and album_id = $3";
    // Start a DML batch. This batch will become part of the current transaction.
    // TODO: Enable when https://github.com/googleapis/java-spanner/pull/3114 has been merged
    // await connection.query("start batch dml");
    // Update the marketing budget of both albums.
    await connection.query(updateSql, [album1Budget, 1, 1]);
    await connection.query(updateSql, [album2Budget, 2, 2]);
    // TODO: Enable when https://github.com/googleapis/java-spanner/pull/3114 has been merged
    // await connection.query("run batch");
  }
  // Commit the current transaction.
  await connection.query("commit");
  console.log("Transferred marketing budget from Album 2 to Album 1");

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def update_data_with_transaction(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host,
                                                  port=port,
                                                  database=database)) as conn:
        # Set autocommit=False to use transactions.
        # The first statement that is executed starts the transaction.
        conn.autocommit = False
        with conn.cursor() as cur:
            # Transfer marketing budget from one album to another.
            # We do it in a transaction to ensure that the transfer is atomic.
            # There is no need to explicitly start the transaction. The first
            # statement on the connection will start a transaction when
            # AutoCommit=false.
            select_marketing_budget_sql = ("SELECT marketing_budget "
                                           "from albums "
                                           "WHERE singer_id = %s "
                                           "and album_id = %s")
            # Get the marketing budget of Album #2.
            cur.execute(select_marketing_budget_sql, (2, 2))
            album2_budget = cur.fetchone()[0]
            transfer = 200000
            if album2_budget > transfer:
                # Get the marketing budget of Album #1.
                cur.execute(select_marketing_budget_sql, (1, 1))
                album1_budget = cur.fetchone()[0]
                # Transfer the marketing budgets and write the update back
                # to the database.
                album1_budget += transfer
                album2_budget -= transfer
                update_sql = ("update albums "
                              "set marketing_budget = %s "
                              "where singer_id = %s "
                              "and   album_id = %s")
                # Use a pipeline to execute two DML statements in one batch.
                with conn.pipeline():
                    cur.execute(update_sql, (album1_budget, 1, 1,))
                    cur.execute(update_sql, (album2_budget, 2, 2,))
            else:
                print("Insufficient budget to transfer")
        # Commit the transaction.
        conn.commit()
        print("Transferred marketing budget from Album 2 to Album 1")

C#

using Npgsql;

namespace dotnet_snippets;

public static class WriteWithTransactionSample
{
    public static void WriteWithTransactionUsingDml(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Transfer marketing budget from one album to another.
        // We do it in a transaction to ensure that the transfer is atomic.
        var transaction = connection.BeginTransaction();
        using var cmd = connection.CreateCommand();
        cmd.Transaction = transaction;

        // Get the marketing budget of Album (2, 2).
        cmd.CommandText = "SELECT marketing_budget FROM albums WHERE singer_id=$1 AND album_id=$2";
        cmd.Parameters.Add(new NpgsqlParameter { Value = 2L });
        cmd.Parameters.Add(new NpgsqlParameter { Value = 2L });
        var album2Budget = (long?)cmd.ExecuteScalar() ?? 0L;

        // The transaction will only be committed if this condition still holds
        // at the time of commit. Otherwise, the transaction will be aborted.
        const long transfer = 200000;
        if (album2Budget >= transfer)
        {
            // Get the marketing budget of Album (1, 1).
            cmd.Parameters.Clear();
            cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
            var album1Budget = (long?)cmd.ExecuteScalar() ?? 0L;

            // Transfer part of the marketing budget of Album 2 to Album 1.
            album1Budget += transfer;
            album2Budget -= transfer;
            cmd.CommandText = "UPDATE albums SET marketing_budget=$1 WHERE singer_id=$2 AND album_id=$3";

            // Update Album (1, 1).
            cmd.Parameters.Clear();
            cmd.Parameters.Add(new NpgsqlParameter { Value = album1Budget });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
            cmd.ExecuteNonQuery();

            // Update Album (2, 2).
            cmd.Parameters.Clear();
            cmd.Parameters.Add(new NpgsqlParameter { Value = album2Budget });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 2L });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 2L });
            cmd.ExecuteNonQuery();
        }

        // Commit the current transaction.
        transaction.Commit();
        Console.WriteLine("Transferred marketing budget from Album 2 to Album 1");
    }
}

PHP

function update_data_with_transaction(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Start a read/write transaction.
    $connection->beginTransaction();

    // Transfer marketing budget from one album to another.
    // We do it in a transaction to ensure that the transfer is atomic.
    // Create a prepared statement that we can use to execute the same
    // SQL string multiple times with different parameter values.
    $select_marketing_budget_statement = $connection->prepare(
        "SELECT marketing_budget "
        . "from albums "
        . "WHERE singer_id = ? "
        . "and album_id = ?");

    // Get the marketing budget of Album #2.
    $select_marketing_budget_statement->execute([2, 2]);
    $album2_budget = $select_marketing_budget_statement->fetchAll()[0][0];
    $select_marketing_budget_statement->closeCursor();
    $transfer = 200000;
    if ($album2_budget > $transfer) {
        // Get the marketing budget of Album #1.
        $select_marketing_budget_statement->execute([1, 1]);
        $album1_budget = $select_marketing_budget_statement->fetchAll()[0][0];
        $select_marketing_budget_statement->closeCursor();

        // Transfer the marketing budgets and write the update back
        // to the database.
        $album1_budget += $transfer;
        $album2_budget -= $transfer;

        // PHP PDO also supports named query parameters.
        $update_statement = $connection->prepare(
            "update albums "
            . "set marketing_budget = :budget "
            . "where singer_id = :singer_id "
            . "and   album_id = :album_id");
        // Start a DML batch. This batch will become part of the current transaction.
        // $connection->exec("start batch dml");
        // Update the marketing budget of both albums.
        $update_statement->execute(["budget" => $album1_budget, "singer_id" => 1, "album_id" => 1]);
        $update_statement->execute(["budget" => $album2_budget, "singer_id" => 2, "album_id" => 2]);
        // $connection->exec("run batch");
    } else {
        print("Insufficient budget to transfer\n");
    }
    // Commit the transaction.
    $connection->commit();
    print("Transferred marketing budget from Album 2 to Album 1\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./update_data_with_transaction.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar writewithtransactionusingdml example-db

Go

go run sample_runner.go writewithtransactionusingdml example-db

Node.js

npm start writewithtransactionusingdml example-db

Python

python update_data_with_transaction.py example-db

C#

dotnet run writewithtransactionusingdml example-db

PHP

php update_data_with_transaction.php example-db

You should see:

Transferred marketing budget from Album 2 to Album 1

Transaction tags and request tags

Use transaction tags and request tags to troubleshoot transactions and queries in Spanner. You can set transaction tags and request tags with the SPANNER.TRANSACTION_TAG and SPANNER.STATEMENT_TAG session variables.
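Once set, the tags are recorded with Spanner's built-in statistics, where you can use them to find the statements and transactions they were attached to. As a hedged sketch of such a lookup (assuming the SPANNER_SYS query statistics tables and their REQUEST_TAG column as documented by Spanner; verify the exact table and column names for your database version):

-- Find recent executions of statements that carried a specific request tag.
select interval_end, text, execution_count
from spanner_sys.query_stats_top_minute
where request_tag = 'query-marketing-budget';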

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql <<SQL
-- Start a transaction.
begin;

-- Set the TRANSACTION_TAG session variable to set a transaction tag
-- for the current transaction. This can only be executed at the start
-- of the transaction.
set spanner.transaction_tag='example-tx-tag';

-- Set the STATEMENT_TAG session variable to set the request tag
-- that should be included with the next SQL statement.
set spanner.statement_tag='query-marketing-budget';

select marketing_budget
from albums
where singer_id = 1
  and album_id  = 1;

-- Reduce the marketing budget by 10% if it is more than 1,000.
-- Set a statement tag for the update statement.
set spanner.statement_tag='reduce-marketing-budget';

update albums
set marketing_budget = marketing_budget - (marketing_budget * 0.1)::bigint
where singer_id = 1
  and album_id  = 1
  and marketing_budget > 1000;

commit;
SQL

echo "Reduced marketing budget"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class Tags {
  static void tags(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Set AutoCommit=false to enable transactions.
      connection.setAutoCommit(false);
      // Set the TRANSACTION_TAG session variable to set a transaction tag
      // for the current transaction.
      connection.createStatement().execute("set spanner.transaction_tag='example-tx-tag'");

      // Set the STATEMENT_TAG session variable to set the request tag
      // that should be included with the next SQL statement.
      connection.createStatement().execute("set spanner.statement_tag='query-marketing-budget'");
      long marketingBudget = 0L;
      long singerId = 1L;
      long albumId = 1L;
      try (PreparedStatement statement =
          connection.prepareStatement(
              "select marketing_budget from albums where singer_id=? and album_id=?")) {
        statement.setLong(1, singerId);
        statement.setLong(2, albumId);
        try (ResultSet albumResultSet = statement.executeQuery()) {
          while (albumResultSet.next()) {
            marketingBudget = albumResultSet.getLong(1);
          }
        }
      }
      // Reduce the marketing budget by 10% if it is more than 1,000.
      final long maxMarketingBudget = 1000L;
      final float reduction = 0.1f;
      if (marketingBudget > maxMarketingBudget) {
        marketingBudget -= (long) (marketingBudget * reduction);
        connection.createStatement().execute("set spanner.statement_tag='reduce-marketing-budget'");
        try (PreparedStatement statement =
            connection.prepareStatement(
                "update albums set marketing_budget=? where singer_id=? AND album_id=?")) {
          int paramIndex = 0;
          statement.setLong(++paramIndex, marketingBudget);
          statement.setLong(++paramIndex, singerId);
          statement.setLong(++paramIndex, albumId);
          statement.executeUpdate();
        }
      }
      // Commit the current transaction.
      connection.commit();
      System.out.println("Reduced marketing budget");
    }
  }
}

Go

import("context""fmt""github.com/jackc/pgx/v5")funcTags(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)tx,err:=conn.Begin(ctx)iferr!=nil{returnerr}//SettheTRANSACTION_TAGsessionvariabletosetatransactiontag//forthecurrenttransaction._,_=tx.Exec(ctx,"set spanner.transaction_tag='example-tx-tag'")//SettheSTATEMENT_TAGsessionvariabletosettherequesttag//thatshouldbeincludedwiththenextSQLstatement._,_=tx.Exec(ctx,"set spanner.statement_tag='query-marketing-budget'")row:=tx.QueryRow(ctx,"select marketing_budget "+"from albums "+"where singer_id=$1 and album_id=$2",1,1)varbudgetint64iferr:=row.Scan(&budget);err!=nil{tx.Rollback(ctx)returnerr}//Reducethemarketingbudgetby10%ifitismorethan1,000.ifbudget >1000{budget=int64(float64(budget)-float64(budget)*0.1)_,_=tx.Exec(ctx,"set spanner.statement_tag='reduce-marketing-budget'")if_,err:=tx.Exec(ctx,"update albums set marketing_budget=$1 where singer_id=$2 AND album_id=$3",budget,1,1);err!=nil{tx.Rollback(ctx)returnerr}}//Committhecurrenttransaction.tx.Commit(ctx)fmt.Println("Reduced marketing budget")returnnil}

Node.js

import { Client } from 'pg';

async function tags(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  await connection.query("begin");
  // Set the TRANSACTION_TAG session variable to set a transaction tag
  // for the current transaction.
  await connection.query("set spanner.transaction_tag='example-tx-tag'");

  // Set the STATEMENT_TAG session variable to set the request tag
  // that should be included with the next SQL statement.
  await connection.query("set spanner.statement_tag='query-marketing-budget'");
  const budgetResult = await connection.query(
      "select marketing_budget "
      + "from albums "
      + "where singer_id=$1 and album_id=$2", [1, 1]);
  let budget = budgetResult.rows[0]["marketing_budget"];

  // Reduce the marketing budget by 10% if it is more than 1,000.
  if (budget > 1000) {
    budget = budget - budget * 0.1;
    await connection.query("set spanner.statement_tag='reduce-marketing-budget'");
    await connection.query(
        "update albums set marketing_budget=$1 "
        + "where singer_id=$2 AND album_id=$3", [budget, 1, 1]);
  }
  // Commit the current transaction.
  await connection.query("commit");
  console.log("Reduced marketing budget");

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def tags(host: string, port: int, database: string):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host,
                                                  port=port,
                                                  database=database)) as conn:
        # Set autocommit=False to enable transactions.
        conn.autocommit = False
        with conn.cursor() as cur:
            # Set the TRANSACTION_TAG session variable to set a transaction tag
            # for the current transaction.
            cur.execute("set spanner.transaction_tag='example-tx-tag'")

            # Set the STATEMENT_TAG session variable to set the request tag
            # that should be included with the next SQL statement.
            cur.execute("set spanner.statement_tag='query-marketing-budget'")
            singer_id = 1
            album_id = 1
            cur.execute("select marketing_budget "
                        "from albums "
                        "where singer_id = %s "
                        "  and album_id  = %s", (singer_id, album_id,))
            marketing_budget = cur.fetchone()[0]

            # Reduce the marketing budget by 10% if it is more than 1,000.
            max_marketing_budget = 1000
            reduction = 0.1
            if marketing_budget > max_marketing_budget:
                # Make sure the marketing_budget remains an int.
                marketing_budget -= int(marketing_budget * reduction)
                # Set a statement tag for the update statement.
                cur.execute("set spanner.statement_tag='reduce-marketing-budget'")
                cur.execute("update albums set marketing_budget = %s "
                            "where singer_id = %s "
                            "  and album_id  = %s",
                            (marketing_budget, singer_id, album_id,))
            else:
                print("Marketing budget already less than or equal to 1,000")
        # Commit the transaction.
        conn.commit()
        print("Reduced marketing budget")

C#

using Npgsql;
using System.Data;

namespace dotnet_snippets;

public static class TagsSample
{
    public static void Tags(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Start a transaction with isolation level Serializable.
        // Spanner only supports this isolation level. Trying to use a lower
        // isolation level (including the default isolation level READ COMMITTED)
        // will result in an error.
        var transaction = connection.BeginTransaction(IsolationLevel.Serializable);
        // Create a command that uses the current transaction.
        using var cmd = connection.CreateCommand();
        cmd.Transaction = transaction;

        // Set the TRANSACTION_TAG session variable to set a transaction tag
        // for the current transaction.
        cmd.CommandText = "set spanner.transaction_tag='example-tx-tag'";
        cmd.ExecuteNonQuery();

        // Set the STATEMENT_TAG session variable to set the request tag
        // that should be included with the next SQL statement.
        cmd.CommandText = "set spanner.statement_tag='query-marketing-budget'";
        cmd.ExecuteNonQuery();

        // Get the marketing_budget of Album (1, 1).
        cmd.CommandText = "select marketing_budget from albums where singer_id=$1 and album_id=$2";
        cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
        cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
        var marketingBudget = (long?)cmd.ExecuteScalar();

        // Reduce the marketing budget by 10% if it is more than 1,000.
        if (marketingBudget > 1000L)
        {
            marketingBudget -= (long)(marketingBudget * 0.1);
            // Set the statement tag to use for the update statement.
            cmd.Parameters.Clear();
            cmd.CommandText = "set spanner.statement_tag='reduce-marketing-budget'";
            cmd.ExecuteNonQuery();
            cmd.CommandText = "update albums set marketing_budget=$1 where singer_id=$2 AND album_id=$3";
            cmd.Parameters.Add(new NpgsqlParameter { Value = marketingBudget });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
            cmd.Parameters.Add(new NpgsqlParameter { Value = 1L });
            cmd.ExecuteNonQuery();
        }

        // Commit the current transaction.
        transaction.Commit();
        Console.WriteLine("Reduced marketing budget");
    }
}

PHP

function tags(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Start a read/write transaction.
    $connection->beginTransaction();

    // Set the TRANSACTION_TAG session variable to set a transaction tag
    // for the current transaction.
    $connection->exec("set spanner.transaction_tag='example-tx-tag'");

    // Set the STATEMENT_TAG session variable to set the request tag
    // that should be included with the next SQL statement.
    $connection->exec("set spanner.statement_tag='query-marketing-budget'");
    $singer_id = 1;
    $album_id = 1;
    $statement = $connection->prepare(
        "select marketing_budget "
        . "from albums "
        . "where singer_id = ? "
        . "  and album_id  = ?");
    $statement->execute([$singer_id, $album_id]);
    $marketing_budget = $statement->fetchAll()[0][0];
    $statement->closeCursor();

    // Reduce the marketing budget by 10% if it is more than 1,000.
    $max_marketing_budget = 1000;
    $reduction = 0.1;
    if ($marketing_budget > $max_marketing_budget) {
        // Make sure the marketing_budget remains an int.
        $marketing_budget -= intval($marketing_budget * $reduction);
        // Set a statement tag for the update statement.
        $connection->exec("set spanner.statement_tag='reduce-marketing-budget'");
        $update_statement = $connection->prepare(
            "update albums set marketing_budget = :budget "
            . "where singer_id = :singer_id "
            . "  and album_id  = :album_id");
        $update_statement->execute([
            "budget" => $marketing_budget,
            "singer_id" => $singer_id,
            "album_id" => $album_id,
        ]);
    } else {
        print("Marketing budget already less than or equal to 1,000\n");
    }
    // Commit the transaction.
    $connection->commit();
    print("Reduced marketing budget\n");
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./tags.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar tags example-db

Go

go run sample_runner.go tags example-db

Node.js

npm start tags example-db

Python

python tags.py example-db

C#

dotnet run tags example-db

PHP

php tags.php example-db
Tip: For a full list of commands that can be used to access Spanner features with PGAdapter, see PGAdapter session management commands.

Retrieve data using read-only transactions

Suppose you want to execute more than one read at the same timestamp. Read-only transactions observe a consistent prefix of the transaction commit history, so your application always gets consistent data. Set the connection to read-only or use the SET TRANSACTION READ ONLY SQL statement to execute a read-only transaction.
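The samples below use the SET TRANSACTION READ ONLY statement. The connection-level alternative is a session setting; a minimal psql sketch, assuming the spanner.readonly session variable described in the PGAdapter session management commands referenced below:

-- Mark the whole connection as read-only (assumed session variable;
-- verify the name against the session management documentation).
set spanner.readonly=true;
-- Every subsequent transaction on this connection then executes as a
-- read-only transaction.
begin;
select count(*) from albums;
commit;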

Tip: PGAdapter supports multiple additional SQL statements for executing specific types of transactions and batches, and for accessing specific Spanner features. For a full list of supported statements, see PGAdapter session management commands.

The following shows how to run a query and perform a read in the same read-only transaction:

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

psql <<SQL
-- Begin a transaction.
begin;

-- Change the current transaction to a read-only transaction.
-- This statement can only be executed at the start of a transaction.
set transaction read only;

-- The following two queries use the same read-only transaction.
select singer_id, album_id, album_title
from albums
order by singer_id, album_id;

select singer_id, album_id, album_title
from albums
order by album_title;

-- Read-only transactions must also be committed or rolled back to mark
-- the end of the transaction. There is no semantic difference between
-- rolling back or committing a read-only transaction.
commit;
SQL

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

class ReadOnlyTransaction {
  static void readOnlyTransaction(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Set AutoCommit=false to enable transactions.
      connection.setAutoCommit(false);
      // This SQL statement instructs the JDBC driver to use
      // a read-only transaction.
      connection.createStatement().execute("set transaction read only");

      try (ResultSet resultSet =
          connection
              .createStatement()
              .executeQuery(
                  "SELECT singer_id, album_id, album_title "
                      + "FROM albums "
                      + "ORDER BY singer_id, album_id")) {
        while (resultSet.next()) {
          System.out.printf(
              "%d %d %s\n",
              resultSet.getLong("singer_id"),
              resultSet.getLong("album_id"),
              resultSet.getString("album_title"));
        }
      }
      try (ResultSet resultSet =
          connection
              .createStatement()
              .executeQuery(
                  "SELECT singer_id, album_id, album_title "
                      + "FROM albums "
                      + "ORDER BY album_title")) {
        while (resultSet.next()) {
          System.out.printf(
              "%d %d %s\n",
              resultSet.getLong("singer_id"),
              resultSet.getLong("album_id"),
              resultSet.getString("album_title"));
        }
      }
      // End the read-only transaction by calling commit().
      connection.commit();
    }
  }
}

Go

import("context""fmt""github.com/jackc/pgx/v5")funcReadOnlyTransaction(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)//Startaread-onlytransactionbysupplyingadditionaltransactionoptions.tx,err:=conn.BeginTx(ctx,pgx.TxOptions{AccessMode:pgx.ReadOnly})albumsOrderedById,err:=tx.Query(ctx,"SELECT singer_id, album_id, album_title FROM albums ORDER BY singer_id, album_id")deferalbumsOrderedById.Close()iferr!=nil{returnerr}foralbumsOrderedById.Next(){varsingerId,albumIdint64vartitlestringerr=albumsOrderedById.Scan(&singerId, &albumId, &title)iferr!=nil{returnerr}fmt.Printf("%v %v %v\n",singerId,albumId,title)}albumsOrderedTitle,err:=tx.Query(ctx,"SELECT singer_id, album_id, album_title FROM albums ORDER BY album_title")deferalbumsOrderedTitle.Close()iferr!=nil{returnerr}foralbumsOrderedTitle.Next(){varsingerId,albumIdint64vartitlestringerr=albumsOrderedTitle.Scan(&singerId, &albumId, &title)iferr!=nil{returnerr}fmt.Printf("%v %v %v\n",singerId,albumId,title)}//Endtheread-onlytransactionbycallingCommit().returntx.Commit(ctx)}

Node.js

import { Client } from 'pg';

async function readOnlyTransaction(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Start a transaction.
  await connection.query("begin");
  // This SQL statement instructs PGAdapter to make it a read-only transaction.
  await connection.query("set transaction read only");

  const albumsOrderById = await connection.query(
      "SELECT singer_id, album_id, album_title "
      + "FROM albums "
      + "ORDER BY singer_id, album_id");
  for (const row of albumsOrderById.rows) {
    console.log(`${row["singer_id"]} ${row["album_id"]} ${row["album_title"]}`);
  }
  const albumsOrderByTitle = await connection.query(
      "SELECT singer_id, album_id, album_title "
      + "FROM albums "
      + "ORDER BY album_title");
  for (const row of albumsOrderByTitle.rows) {
    console.log(`${row["singer_id"]} ${row["album_id"]} ${row["album_title"]}`);
  }

  // End the read-only transaction by executing commit.
  await connection.query("commit");

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def read_only_transaction(host: string, port: int, database: string):
    with (psycopg.connect("host={host} port={port} dbname={database} "
                          "sslmode=disable".format(host=host,
                                                   port=port,
                                                   database=database)) as conn):
        # Set autocommit=False to enable transactions.
        conn.autocommit = False
        with conn.cursor() as cur:
            # Change the current transaction to a read-only transaction.
            # This statement can only be executed at the start of a transaction.
            cur.execute("set transaction read only")

            # The following two queries use the same read-only transaction.
            cur.execute("select singer_id, album_id, album_title "
                        "from albums "
                        "order by singer_id, album_id")
            for album in cur:
                print(album)

            cur.execute("select singer_id, album_id, album_title "
                        "from albums "
                        "order by album_title")
            for album in cur:
                print(album)
        # Read-only transactions must also be committed or rolled back to mark
        # the end of the transaction. There is no semantic difference between
        # rolling back or committing a read-only transaction.
        conn.commit()

C#

using Npgsql;
using System.Data;

namespace dotnet_snippets;

public static class ReadOnlyTransactionSample
{
    public static void ReadOnlyTransaction(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Start a read-only transaction.
        // You must specify Serializable as the isolation level, as the npgsql driver
        // will otherwise automatically set the isolation level to read-committed.
        var transaction = connection.BeginTransaction(IsolationLevel.Serializable);
        using var cmd = connection.CreateCommand();
        cmd.Transaction = transaction;

        // This SQL statement instructs the npgsql driver to use
        // a read-only transaction.
        cmd.CommandText = "set transaction read only";
        cmd.ExecuteNonQuery();

        cmd.CommandText = "SELECT singer_id, album_id, album_title "
                          + "FROM albums "
                          + "ORDER BY singer_id, album_id";
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine($"{reader["singer_id"]} {reader["album_id"]} {reader["album_title"]}");
            }
        }
        cmd.CommandText = "SELECT singer_id, album_id, album_title "
                          + "FROM albums "
                          + "ORDER BY album_title";
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine($"{reader["singer_id"]} {reader["album_id"]} {reader["album_title"]}");
            }
        }

        // End the read-only transaction by calling commit().
        transaction.Commit();
    }
}

PHP

function read_only_transaction(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Start a transaction.
    $connection->beginTransaction();
    // Change the current transaction to a read-only transaction.
    // This statement can only be executed at the start of a transaction.
    $connection->exec("set transaction read only");

    // The following two queries use the same read-only transaction.
    $statement = $connection->query(
        "select singer_id, album_id, album_title "
        . "from albums "
        . "order by singer_id, album_id");
    $rows = $statement->fetchAll();
    foreach ($rows as $album) {
        printf("%s\t%s\t%s\n",
            $album["singer_id"], $album["album_id"], $album["album_title"]);
    }

    $statement = $connection->query(
        "select singer_id, album_id, album_title "
        . "from albums "
        . "order by album_title");
    $rows = $statement->fetchAll();
    foreach ($rows as $album) {
        printf("%s\t%s\t%s\n",
            $album["singer_id"], $album["album_id"], $album["album_title"]);
    }

    // Read-only transactions must also be committed or rolled back to mark
    // the end of the transaction. There is no semantic difference between
    // rolling back or committing a read-only transaction.
    $connection->commit();
    $rows = null;
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./read_only_transaction.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar readonlytransaction example-db

Go

go run sample_runner.go readonlytransaction example-db

Node.js

npm start readonlytransaction example-db

Python

python read_only_transaction.py example-db

C#

dotnet run readonlytransaction example-db

PHP

php read_only_transaction.php example-db

You should see output similar to:

1 1 Total Junk
1 2 Go, Go, Go
2 1 Green
2 2 Forever Hold Your Peace
2 3 Terrified
2 2 Forever Hold Your Peace
1 2 Go, Go, Go
2 1 Green
2 3 Terrified
1 1 Total Junk

Partitioned queries and Data Boost

The partitionQuery API divides a query into smaller pieces, or partitions, and uses multiple machines to fetch the partitions in parallel. Each partition is identified by a partition token. The PartitionQuery API has higher latency than the standard query API, because it is only intended for bulk operations such as exporting or scanning the whole database.
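Conceptually, the flow is: partition the query, then execute each partition, possibly from different machines. A hedged psql sketch, assuming the partition and run partition statements described on the Data Boost and partitioned query statements page referenced at the end of this section:

-- Sketch of partitioning a query manually.
begin;
set transaction read only;
-- Returns one row per partition, each containing a partition token.
partition select singer_id, first_name, last_name from singers;
-- Each returned token can then be executed, potentially elsewhere:
-- run partition '<partition-token>';
commit;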

Data Boost lets you execute analytics queries and data exports with near-zero impact to existing workloads on the provisioned Spanner instance. Data Boost only supports partitioned queries.

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# 'set spanner.data_boost_enabled=true' enables Data Boost for
# all partitioned queries on this connection.
# 'run partitioned query' is a shortcut for partitioning the query
# that follows and executing each of the partitions that is returned
# by Spanner.
psql -c "set spanner.data_boost_enabled=true" \
     -c "run partitioned query
         select singer_id, first_name, last_name
         from singers"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

class DataBoost {
  static void dataBoost(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // This enables Data Boost for all partitioned queries on this connection.
      connection.createStatement().execute("set spanner.data_boost_enabled=true");

      // Run a partitioned query. This query will use Data Boost.
      try (ResultSet resultSet =
          connection
              .createStatement()
              .executeQuery(
                  "run partitioned query "
                      + "select singer_id, first_name, last_name "
                      + "from singers")) {
        while (resultSet.next()) {
          System.out.printf(
              "%d %s %s\n",
              resultSet.getLong("singer_id"),
              resultSet.getString("first_name"),
              resultSet.getString("last_name"));
        }
      }
    }
  }
}

Go

import("context""fmt""github.com/jackc/pgx/v5")funcDataBoost(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)//ThisenablesDataBoostforallpartitionedqueriesonthisconnection._,_=conn.Exec(ctx,"set spanner.data_boost_enabled=true")//Runapartitionedquery.ThisquerywilluseDataBoost.rows,err:=conn.Query(ctx,"run partitioned query select singer_id, first_name, last_name from singers")deferrows.Close()iferr!=nil{returnerr}forrows.Next(){varsingerIdint64varfirstName,lastNamestringerr=rows.Scan(&singerId, &firstName, &lastName)iferr!=nil{returnerr}fmt.Printf("%v %v %v\n",singerId,firstName,lastName)}returnnil}

Node.js

import { Client } from 'pg';

async function dataBoost(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // This enables Data Boost for all partitioned queries on this connection.
  await connection.query("set spanner.data_boost_enabled=true");

  // Run a partitioned query. This query will use Data Boost.
  const singers = await connection.query(
      "run partitioned query "
      + "select singer_id, first_name, last_name "
      + "from singers");
  for (const row of singers.rows) {
    console.log(`${row["singer_id"]} ${row["first_name"]} ${row["last_name"]}`);
  }

  // Close the connection.
  await connection.end();
}

Python

import string
import psycopg

def data_boost(host: string, port: int, database: string):
    with (psycopg.connect("host={host} port={port} dbname={database} "
                          "sslmode=disable".format(host=host,
                                                   port=port,
                                                   database=database)) as conn):
        # Set autocommit=True so each query uses a separate transaction.
        conn.autocommit = True
        with conn.cursor() as cur:
            # This enables Data Boost for all partitioned queries on this
            # connection.
            cur.execute("set spanner.data_boost_enabled=true")

            # Run a partitioned query. This query will use Data Boost.
            cur.execute("run partitioned query "
                        "select singer_id, first_name, last_name "
                        "from singers")
            for singer in cur:
                print(singer)

C#

using Npgsql;

namespace dotnet_snippets;

public static class DataBoostSample
{
    public static void DataBoost(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        using var cmd = connection.CreateCommand();
        // This enables Data Boost for all partitioned queries on this connection.
        cmd.CommandText = "set spanner.data_boost_enabled=true";
        cmd.ExecuteNonQuery();

        // Run a partitioned query. This query will use Data Boost.
        cmd.CommandText = "run partitioned query "
                          + "select singer_id, first_name, last_name "
                          + "from singers";
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader["singer_id"]} {reader["first_name"]} {reader["last_name"]}");
        }
    }
}

PHP

function data_boost(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // This enables Data Boost for all partitioned queries on this
    // connection.
    $connection->exec("set spanner.data_boost_enabled=true");

    // Run a partitioned query. This query will use Data Boost.
    $statement = $connection->query(
        "run partitioned query "
        . "select singer_id, first_name, last_name "
        . "from singers");
    $rows = $statement->fetchAll();
    foreach ($rows as $singer) {
        printf("%s\t%s\t%s\n",
            $singer["singer_id"], $singer["first_name"], $singer["last_name"]);
    }
    $rows = null;
    $statement = null;
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./data_boost.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar databoost example-db

Go

go run sample_runner.go databoost example-db

Node.js

npm start databoost example-db

Python

python data_boost.py example-db

C#

dotnet run databoost example-db

PHP

php data_boost.php example-db

For more information on running partitioned queries and using Data Boost with PGAdapter, see Data Boost and partitioned query statements.

Partitioned DML

Partitioned Data Manipulation Language (DML) is designed for the following types of bulk updates and deletes:

  • Periodic cleanup and garbage collection (see the sketch after this list).
  • Backfilling new columns with default values.
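For example, a periodic cleanup could look like the following sketch. The events table and its expired_at column are illustrative only and are not part of this tutorial's schema:

# Illustrative only: delete expired rows from a hypothetical events table
# using Partitioned DML.
psql -c "set spanner.autocommit_dml_mode='partitioned_non_atomic'" \
     -c "delete from events where expired_at < current_timestamp"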

psql

#!/bin/bash
export PGHOST="${PGHOST:-localhost}"
export PGPORT="${PGPORT:-5432}"
export PGDATABASE="${PGDATABASE:-example-db}"

# Change the DML mode that is used by this connection to Partitioned
# DML. Partitioned DML is designed for bulk updates and deletes.
# See https://cloud.google.com/spanner/docs/dml-partitioned for more
# information.
psql -c "set spanner.autocommit_dml_mode='partitioned_non_atomic'" \
     -c "update albums
         set marketing_budget=0
         where marketing_budget is null"

echo "Updated albums using Partitioned DML"

Java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class PartitionedDml {
  static void partitionedDml(String host, int port, String database) throws SQLException {
    String connectionUrl = String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
    try (Connection connection = DriverManager.getConnection(connectionUrl)) {
      // Enable Partitioned DML on this connection.
      connection
          .createStatement()
          .execute("set spanner.autocommit_dml_mode='partitioned_non_atomic'");
      // Back-fill a default value for the MarketingBudget column.
      long lowerBoundUpdateCount =
          connection
              .createStatement()
              .executeUpdate("update albums set marketing_budget=0 where marketing_budget is null");
      System.out.printf("Updated at least %d albums\n", lowerBoundUpdateCount);
    }
  }
}

Go

import("context""fmt""github.com/jackc/pgx/v5")funcPartitionedDML(hoststring,portint,databasestring)error{ctx:=context.Background()connString:=fmt.Sprintf("postgres://uid:pwd@%s:%d/%s?sslmode=disable",host,port,database)conn,err:=pgx.Connect(ctx,connString)iferr!=nil{returnerr}deferconn.Close(ctx)//EnablePartitionedDMLonthisconnection.if_,err:=conn.Exec(ctx,"set spanner.autocommit_dml_mode='partitioned_non_atomic'");err!=nil{returnerr}//Back-filladefaultvaluefortheMarketingBudgetcolumn.tag,err:=conn.Exec(ctx,"update albums set marketing_budget=0 where marketing_budget is null")iferr!=nil{returnerr}fmt.Printf("Updated at least %v albums\n",tag.RowsAffected())returnnil}

Node.js

import { Client } from 'pg';

async function partitionedDml(host: string, port: number, database: string): Promise<void> {
  const connection = new Client({
    host: host,
    port: port,
    database: database,
  });
  await connection.connect();

  // Enable Partitioned DML on this connection.
  await connection.query("set spanner.autocommit_dml_mode='partitioned_non_atomic'");
  // Back-fill a default value for the MarketingBudget column.
  const lowerBoundUpdateCount = await connection.query(
      "update albums "
      + "set marketing_budget=0 "
      + "where marketing_budget is null");
  console.log(`Updated at least ${lowerBoundUpdateCount.rowCount} albums`);

  // Close the connection.
  await connection.end();
}

Python

import psycopg

def execute_partitioned_dml(host: str, port: int, database: str):
    with psycopg.connect("host={host} port={port} dbname={database} "
                         "sslmode=disable".format(host=host,
                                                  port=port,
                                                  database=database)) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            # Change the DML mode that is used by this connection to Partitioned
            # DML. Partitioned DML is designed for bulk updates and deletes.
            # See https://cloud.google.com/spanner/docs/dml-partitioned for more
            # information.
            cur.execute("set spanner.autocommit_dml_mode='partitioned_non_atomic'")
            # The following statement will use Partitioned DML.
            cur.execute("update albums "
                        "set marketing_budget=0 "
                        "where marketing_budget is null")
            print("Updated at least %d albums" % cur.rowcount)

C#

using Npgsql;

namespace dotnet_snippets;

public static class PartitionedDmlSample
{
    public static void PartitionedDml(string host, int port, string database)
    {
        var connectionString = $"Host={host};Port={port};Database={database};SSL Mode=Disable";
        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Enable Partitioned DML on this connection.
        using var cmd = connection.CreateCommand();
        cmd.CommandText = "set spanner.autocommit_dml_mode='partitioned_non_atomic'";
        cmd.ExecuteNonQuery();
        // Back-fill a default value for the MarketingBudget column.
        cmd.CommandText = "update albums set marketing_budget=0 where marketing_budget is null";
        var lowerBoundUpdateCount = cmd.ExecuteNonQuery();
        Console.WriteLine($"Updated at least {lowerBoundUpdateCount} albums");
    }
}

PHP

function execute_partitioned_dml(string $host, string $port, string $database): void
{
    $dsn = sprintf("pgsql:host=%s;port=%s;dbname=%s", $host, $port, $database);
    $connection = new PDO($dsn);

    // Change the DML mode that is used by this connection to Partitioned
    // DML. Partitioned DML is designed for bulk updates and deletes.
    // See https://cloud.google.com/spanner/docs/dml-partitioned for more
    // information.
    $connection->exec("set spanner.autocommit_dml_mode='partitioned_non_atomic'");

    // The following statement will use Partitioned DML.
    $rowcount = $connection->exec(
        "update albums "
        . "set marketing_budget=0 "
        . "where marketing_budget is null"
    );
    printf("Updated at least %d albums\n", $rowcount);
    $connection = null;
}

Run the sample with the following command:

psql

PGDATABASE=example-db ./partitioned_dml.sh

Java

java -jar target/pgadapter-snippets/pgadapter-samples.jar partitioneddml example-db

Go

go run sample_runner.go partitioneddml example-db

Node.js

npm start partitioneddml example-db

Python

python partitioned_dml.py example-db

C#

dotnet run partitioneddml example-db

PHP

php partitioned_dml.php example-db
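
Note that spanner.autocommit_dml_mode is a session setting: once set, subsequent autocommit DML statements on that connection run as Partitioned DML. If you want to reuse the same connection for regular, atomic DML afterwards, switch the mode back. A minimal psql sketch, assuming 'transactional' (the default mode) is what you want to restore:

-- Switch the connection back to the default, atomic DML mode.
set spanner.autocommit_dml_mode='transactional';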

Clean up

To avoid incurring additional charges to your Cloud Billing account for the resources used in this tutorial, drop the database and delete the instance that you created.

Delete the database

If you delete an instance, all databases within it are automatically deleted. This step shows how to delete a database without deleting an instance (you would still incur charges for the instance).

On the command line

gcloud spanner databases delete example-db --instance=test-instance
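
To verify the result, you can list the databases that remain on the instance, for example with:

gcloud spanner databases list --instance=test-instance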

Using the Google Cloud console

  1. Go to the Spanner Instances page in the Google Cloud console.

    Go to the Instances page

  2. Click the instance.

  3. Click the database that you want to delete.

  4. In the Database details page, click Delete.

  5. Confirm that you want to delete the database and click Delete.

Delete the instance

Deleting an instance automatically drops all databases created in that instance.

On the command line

gcloud spanner instances delete test-instance
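
To verify that the instance is gone, you can list the Spanner instances in your project:

gcloud spanner instances list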

Using the Google Cloud console

  1. Go to the Spanner Instances page in the Google Cloud console.

    Go to the Instances page

  2. Click your instance.

  3. Click Delete.

  4. Confirm that you want to delete the instance and click Delete.
