spanner
README
Cloud Spanner
Example Usage
First, create a spanner.Client to use throughout your application:
```go
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
    log.Fatal(err)
}

// Simple reads and writes.
_, err = client.Apply(ctx, []*spanner.Mutation{
    spanner.Insert("Users",
        []string{"name", "email"},
        []interface{}{"alice", "a@example.com"}),
})
if err != nil {
    log.Fatal(err)
}

row, err := client.Single().ReadRow(ctx, "Users",
    spanner.Key{"alice"}, []string{"email"})
if err != nil {
    log.Fatal(err)
}
```

Session Leak
A Client object of the client library has a limit on the maximum number of sessions. For example, the default value of MaxOpened, the maximum number of sessions allowed by the session pool in the Golang client library, is 400. You can configure these values when creating a Client by passing a custom SessionPoolConfig as part of ClientConfig. When all the sessions are checked out of the session pool, every new transaction has to wait until a session is returned to the pool. If a session is never returned to the pool (a session leak), transactions will wait indefinitely and your application will be blocked.
Common Root Causes
The most common reasons for session leaks in the Golang client library are:
- Not stopping a RowIterator that is returned by Query, Read, and other methods. Always use RowIterator.Stop() to ensure that the RowIterator is closed.
- Not closing a ReadOnlyTransaction when you no longer need it. Always call ReadOnlyTransaction.Close() after use, to ensure that the ReadOnlyTransaction is closed.
As shown in the example below, the txn.Close() statement releases the session after it is complete. If you fail to call txn.Close(), the session is not released back to the pool. The recommended way is to use defer as shown below.
```go
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
    log.Fatal(err)
}
txn := client.ReadOnlyTransaction()
defer txn.Close()
```

Debugging and Resolving Session Leaks
Logging inactive transactions
This option logs warnings when you have exhausted more than 95% of your session pool. It is enabled by default. This could mean one of two things: either you need to increase the max sessions in your session pool (because the number of queries run using the client-side database object is greater than your session pool can serve), or you may have a session leak. To help debug which transactions may be causing the leak, the logs will also contain stack traces of transactions that have been running longer than expected, if TrackSessionHandles under SessionPoolConfig is enabled.
```go
sessionPoolConfig := spanner.SessionPoolConfig{
    TrackSessionHandles: true,
    InactiveTransactionRemovalOptions: spanner.InactiveTransactionRemovalOptions{
        ActionOnInactiveTransaction: spanner.Warn,
    },
}
client, err := spanner.NewClientWithConfig(
    ctx, database,
    spanner.ClientConfig{SessionPoolConfig: sessionPoolConfig},
)
if err != nil {
    log.Fatal(err)
}
defer client.Close()

// Example log message warning about long-running transactions:
// session <session-info> checked out of pool at <session-checkout-time> is long running due to possible session leak for goroutine
// <Stack Trace of transaction>
```

Automatically clean inactive transactions
When the option to automatically clean inactive transactions is enabled, the client library will automatically detectproblematic transactions that are running for a very long time (thus causing session leaks) and close them.The session will be removed from the pool and be replaced by a new session. To dig deeper into which transactions are beingclosed, you can check the logs to see the stack trace of the transactions which might be causing these leaks and furtherdebug them.
```go
sessionPoolConfig := spanner.SessionPoolConfig{
    TrackSessionHandles: true,
    InactiveTransactionRemovalOptions: spanner.InactiveTransactionRemovalOptions{
        ActionOnInactiveTransaction: spanner.WarnAndClose,
    },
}
client, err := spanner.NewClientWithConfig(
    ctx, database,
    spanner.ClientConfig{SessionPoolConfig: sessionPoolConfig},
)
if err != nil {
    log.Fatal(err)
}
defer client.Close()

// Example log message for when a transaction is recycled:
// session <session-info> checked out of pool at <session-checkout-time> is long running and will be removed due to possible session leak for goroutine
// <Stack Trace of transaction>
```

Metrics
The Cloud Spanner client supports client-side metrics that you can use along with server-side metrics to optimize performance and troubleshoot performance issues if they occur.
Client-side metrics are measured from the time a request leaves your application to the time your application receives the response. In contrast, server-side metrics are measured from the time Spanner receives a request until the last byte of data is sent to the client.
These metrics are enabled by default. You can opt out of using client-side metrics with the following code:
```go
client, err := spanner.NewClientWithConfig(
    ctx, database,
    spanner.ClientConfig{DisableNativeMetrics: true},
)
if err != nil {
    log.Fatal(err)
}
defer client.Close()
```

You can also disable these metrics by setting SPANNER_DISABLE_BUILTIN_METRICS to true.
Note: Exporting client-side metrics requires the monitoring.timeSeries.create IAM permission. To grant this, ask your administrator to assign the Monitoring Metric Writer (roles/monitoring.metricWriter) IAM role to your application's service account.
Documentation

Overview
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances.
See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API.

See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.
Creating a Client
To start working with this package, create a client that refers to the database of interest:
```go
ctx := context.Background()
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
    // TODO: Handle error.
}
defer client.Close()
```

Remember to close the client after use to free up the sessions in the session pool.
To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual:
```go
// Set SPANNER_EMULATOR_HOST environment variable.
err := os.Setenv("SPANNER_EMULATOR_HOST", "localhost:9010")
if err != nil {
    // TODO: Handle error.
}

// Create client as usual.
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
    // TODO: Handle error.
}
```

Simple Reads and Writes
Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back:
```go
_, err := client.Apply(ctx, []*spanner.Mutation{
    spanner.Insert("Users",
        []string{"name", "email"},
        []interface{}{"alice", "a@example.com"}),
})
if err != nil {
    // TODO: Handle error.
}
row, err := client.Single().ReadRow(ctx, "Users",
    spanner.Key{"alice"}, []string{"email"})
if err != nil {
    // TODO: Handle error.
}
```

All the methods used above are discussed in more detail below.
Keys
Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key:
```go
key1 := spanner.Key{"alice"}
```

KeyRanges
The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type:
```go
kr1 := spanner.KeyRange{Start: key1, End: key2}
```

By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions:
```go
// Include both keys.
kr2 := spanner.KeyRange{Start: key1, End: key2, Kind: spanner.ClosedClosed}
```

KeySets
A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets:
```go
ks1 := spanner.KeySets(key1, key2, kr1, kr2)
```
AllKeys returns a KeySet that refers to all the keys in a table:
```go
ks2 := spanner.AllKeys()
```
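The KeyRange boundary kinds described above can be illustrated with plain string comparison. This is a self-contained sketch; `inRange` is a hypothetical helper written for illustration, not part of the spanner package:

```go
package main

import "fmt"

// inRange reports whether key k falls inside [start, end) or
// [start, end], depending on closedEnd. This mimics the default
// ClosedOpen kind versus ClosedClosed on single-column string keys.
func inRange(k, start, end string, closedEnd bool) bool {
	if k < start {
		return false
	}
	if closedEnd {
		return k <= end // ClosedClosed: end key included
	}
	return k < end // ClosedOpen (default): end key excluded
}

func main() {
	for _, k := range []string{"alice", "bob", "carol"} {
		fmt.Printf("%s in [alice, carol): %v\n", k, inRange(k, "alice", "carol", false))
	}
}
```

With the default kind, "alice" and "bob" fall in the range but "carol" does not; switching to a closed end includes "carol" as well.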
Transactions
All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state.
Single Reads
The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method.
When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read:
```go
row, err := client.Single().ReadRow(ctx, "Accounts",
    spanner.Key{"alice"}, []string{"balance"})
```

Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns:
```go
iter := client.Single().Read(ctx, "Accounts", keyset1, columns)
```
Read returns a RowIterator. You can call the Do method on the iterator and pass a callback:
```go
err := iter.Do(func(row *Row) error {
    // TODO: use row
    return nil
})
```

RowIterator also follows the standard pattern for the Google Cloud Client Libraries:
```go
defer iter.Stop()
for {
    row, err := iter.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: use row
}
```

Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.)
To read rows with an index, use ReadUsingIndex.
Statements
The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map:
```go
stmt := spanner.NewStatement("SELECT First, Last FROM SINGERS WHERE Last >= @start")
stmt.Params["start"] = "Dylan"
```

You can also construct a Statement directly with a struct literal, providing your own map of parameters.
Use the Query method to run the statement and obtain an iterator:
```go
iter := client.Single().Query(ctx, stmt)
```
Rows
Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value.
You can extract by column position or name:
```go
err := row.Column(0, &name)
err = row.ColumnByName("balance", &balance)
```

You can extract all the columns at once:
```go
err = row.Columns(&name, &balance)
```
Or you can define a Go struct that corresponds to your columns, and extract into that:
```go
var s struct {
    Name    string
    Balance int64
}
err = row.ToStruct(&s)
```

For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString:
```go
var ns spanner.NullString
if err := row.Column(0, &ns); err != nil {
    // TODO: Handle error.
}
if ns.Valid {
    fmt.Println(ns.StringVal)
} else {
    fmt.Println("column is NULL")
}
```

Multiple Reads
To perform more than one read in a transaction, use ReadOnlyTransaction:
```go
txn := client.ReadOnlyTransaction()
defer txn.Close()
iter := txn.Query(ctx, stmt1)
// ...
iter = txn.Query(ctx, stmt2)
// ...
```
You must call Close when you are done with the transaction.
Timestamps and Timestamp Bounds
Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp.
By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use
```go
client.Single().
    WithTimestampBound(spanner.MaxStaleness(1 * time.Minute)).
    Query(ctx, stmt)
```
See the documentation of TimestampBound for more details.
Mutations
To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties.
One takes lists of columns and values along with the table name:
```go
m1 := spanner.Insert("Users",
    []string{"name", "email"},
    []interface{}{"alice", "a@example.com"})
```

One takes a map from column names to values:
```go
m2 := spanner.InsertMap("Users", map[string]interface{}{
    "name":  "alice",
    "email": "a@example.com",
})
```

And the third accepts a struct value, and determines the columns from the struct field names:
```go
type User struct{ Name, Email string }
u := User{Name: "alice", Email: "a@example.com"}
m3, err := spanner.InsertStruct("Users", u)
```

Writes
To apply a list of mutations to the database, use Apply:
```go
_, err := client.Apply(ctx, []*spanner.Mutation{m1, m2, m3})
```

If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction:
```go
_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
    var balance int64
    row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
    if err != nil {
        // The transaction function will be called again if the error code
        // of this error is Aborted. The backend may automatically abort
        // any read/write transaction if it detects a deadlock or other
        // problems.
        return err
    }
    if err := row.Column(0, &balance); err != nil {
        return err
    }
    if balance <= 10 {
        return errors.New("insufficient funds in account")
    }
    balance -= 10
    m := spanner.Update("Accounts",
        []string{"user", "balance"},
        []interface{}{"alice", balance})
    // The buffered mutation will be committed. If the commit
    // fails with an Aborted error, this function will be called
    // again.
    return txn.BufferWrite([]*spanner.Mutation{m})
})
```

Structs
Cloud Spanner STRUCT values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value.
A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType; otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the `spanner:""` tag annotation against the corresponding field in the Go struct's type definition.
The spanner tag supports the following options:
| Tag | Description |
|-----|-------------|
| `spanner:"column_name"` | Set column name to `column_name` |
| `spanner:"->"` | Read-only field (excluded from writes, included in reads) |
| `spanner:"column_name;->"` | Set column name and mark as read-only |
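As an illustration of how such tags can be inspected via reflection, here is a sketch using the standard reflect package; `parseSpannerTag` is a hypothetical helper for this example, not the library's internal parser:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// parseSpannerTag splits a `spanner` struct tag into a column name
// and a read-only marker, following the "name;->" option syntax
// described in the table above. Illustrative only.
func parseSpannerTag(tag string) (name string, readOnly bool) {
	for _, part := range strings.Split(tag, ";") {
		if part == "->" {
			readOnly = true
		} else {
			name = part
		}
	}
	return name, readOnly
}

type Account struct {
	Owner   string `spanner:"owner"`
	Balance int64  `spanner:"balance;->"`
}

func main() {
	t := reflect.TypeOf(Account{})
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		name, ro := parseSpannerTag(f.Tag.Get("spanner"))
		fmt.Printf("%s -> column %q, read-only: %v\n", f.Name, name, ro)
	}
}
```

Reflection sees exactly the tag strings in the table, so the split-on-";" rule is all that is needed to separate the column name from the read-only marker.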
A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields, and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored.
NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values, respectively, of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values.
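The Go-side distinctions involved can be sketched in plain Go (the `Address` type is hypothetical; the mapping to Spanner NULL and array values is as described above):

```go
package main

import "fmt"

type Address struct {
	City string
}

func main() {
	var nullStruct *Address    // nil pointer: would map to a NULL STRUCT
	var nullArray []*Address   // nil slice: would map to a NULL ARRAY<STRUCT>
	emptyArray := []*Address{} // empty slice: an empty, non-NULL ARRAY<STRUCT>
	// A slice of pointers can mix values and NULL elements.
	mixed := []*Address{{City: "Paris"}, nil}

	fmt.Println(nullStruct == nil) // true
	fmt.Println(nullArray == nil)  // true
	fmt.Println(emptyArray == nil) // false: empty but not NULL
	fmt.Println(len(mixed), mixed[1] == nil)
}
```

The key point is that Go distinguishes a nil slice from an empty one, which is what lets the client represent NULL arrays and empty arrays as different values.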
DML and Partitioned DML
Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also call ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.)
For large databases, it may be more efficient to partition the DML statement. Use Client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned.
Tracing
This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
Index
- Constants
- Variables
- func DisableGfeLatencyAndHeaderMissingCountViews() deprecated
- func EnableGfeHeaderMissingCountView() error deprecated
- func EnableGfeLatencyAndHeaderMissingCountViews() error deprecated
- func EnableGfeLatencyView() error deprecated
- func EnableOpenTelemetryMetrics()
- func EnableStatViews() error deprecated
- func ErrCode(err error) codes.Code
- func ErrDesc(err error) string
- func ExtractRetryDelay(err error) (time.Duration, bool)
- func IsOpenTelemetryMetricsEnabled() bool
- func NumericString(r *big.Rat) string
- func SelectAll(rows rowIterator, destination interface{}, options ...DecodeOptions) error
- func ToSpannerError(err error) error
- func UseNumberWithJSONDecoderEncoder(useNumber bool)
- type AckOption
- type ActionOnInactiveTransactionKind deprecated
- type ApplyOption
- func ApplyAtLeastOnce() ApplyOption
- func ApplyCommitOptions(co CommitOptions) ApplyOption
- func ExcludeTxnFromChangeStreams() ApplyOption
- func IsolationLevel(isolationLevel sppb.TransactionOptions_IsolationLevel) ApplyOption
- func Priority(priority sppb.RequestOptions_Priority) ApplyOption
- func TransactionTag(tag string) ApplyOption
- type BatchReadOnlyTransaction
- func (t *BatchReadOnlyTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
- func (t *BatchReadOnlyTransaction) Cleanup(ctx context.Context)
- func (t *BatchReadOnlyTransaction) Close()
- func (t *BatchReadOnlyTransaction) Execute(ctx context.Context, p *Partition) *RowIterator
- func (t *BatchReadOnlyTransaction) PartitionQuery(ctx context.Context, statement Statement, opt PartitionOptions) ([]*Partition, error)
- func (t *BatchReadOnlyTransaction) PartitionQueryWithOptions(ctx context.Context, statement Statement, opt PartitionOptions, ...) ([]*Partition, error)
- func (t *BatchReadOnlyTransaction) PartitionRead(ctx context.Context, table string, keys KeySet, columns []string, ...) ([]*Partition, error)
- func (t *BatchReadOnlyTransaction) PartitionReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string, ...) ([]*Partition, error)
- func (t *BatchReadOnlyTransaction) PartitionReadUsingIndexWithOptions(ctx context.Context, table, index string, keys KeySet, columns []string, ...) ([]*Partition, error)
- func (t *BatchReadOnlyTransaction) PartitionReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, ...) ([]*Partition, error)
- func (t *BatchReadOnlyTransaction) Query(ctx context.Context, statement Statement) *RowIterator
- func (t *BatchReadOnlyTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator
- func (t *BatchReadOnlyTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator
- func (t *BatchReadOnlyTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
- func (t *BatchReadOnlyTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
- func (t *BatchReadOnlyTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)
- func (t *BatchReadOnlyTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, ...) (*Row, error)
- func (t *BatchReadOnlyTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)
- func (t *BatchReadOnlyTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, ...) (ri *RowIterator)
- type BatchReadOnlyTransactionID
- type BatchWriteOptions
- type BatchWriteResponseIterator
- type BeginTransactionOption
- type Client
- func NewClient(ctx context.Context, database string, opts ...option.ClientOption) (*Client, error)
- func NewClientWithConfig(ctx context.Context, database string, config ClientConfig, ...) (c *Client, err error)
- func NewMultiEndpointClient(ctx context.Context, database string, gmeCfg *grpcgcp.GCPMultiEndpointOptions, ...) (*Client, *grpcgcp.GCPMultiEndpoint, error)
- func NewMultiEndpointClientWithConfig(ctx context.Context, database string, config ClientConfig, ...) (c *Client, gme *grpcgcp.GCPMultiEndpoint, err error)
- func (c *Client) Apply(ctx context.Context, ms []*Mutation, opts ...ApplyOption) (commitTimestamp time.Time, err error)
- func (c *Client) BatchReadOnlyTransaction(ctx context.Context, tb TimestampBound) (*BatchReadOnlyTransaction, error)
- func (c *Client) BatchReadOnlyTransactionFromID(tid BatchReadOnlyTransactionID) *BatchReadOnlyTransaction
- func (c *Client) BatchWrite(ctx context.Context, mgs []*MutationGroup) *BatchWriteResponseIterator
- func (c *Client) BatchWriteWithOptions(ctx context.Context, mgs []*MutationGroup, opts BatchWriteOptions) *BatchWriteResponseIterator
- func (c *Client) ClientID() string
- func (c *Client) Close()
- func (c *Client) DatabaseName() string
- func (c *Client) PartitionedUpdate(ctx context.Context, statement Statement) (count int64, err error)
- func (c *Client) PartitionedUpdateWithOptions(ctx context.Context, statement Statement, opts QueryOptions) (count int64, err error)
- func (c *Client) ReadOnlyTransaction() *ReadOnlyTransaction
- func (c *Client) ReadWriteTransaction(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error) (commitTimestamp time.Time, err error)
- func (c *Client) ReadWriteTransactionWithOptions(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error, ...) (resp CommitResponse, err error)
- func (c *Client) Single() *ReadOnlyTransaction
- type ClientConfig
- type CommitOptions
- type CommitResponse
- type DecodeOptions
- type Decoder
- type Encoder
- type Error deprecated
- type GenericColumnValue
- type InactiveTransactionRemovalOptions deprecated
- type Interval
- type Key
- type KeyRange
- type KeyRangeKind
- type KeySet
- type LossOfPrecisionHandlingOption
- type Mutation
- func Ack(queue string, key Key, opts ...AckOption) *Mutation
- func Delete(table string, ks KeySet) *Mutation
- func Insert(table string, cols []string, vals []interface{}) *Mutation
- func InsertMap(table string, in map[string]interface{}) *Mutation
- func InsertOrUpdate(table string, cols []string, vals []interface{}) *Mutation
- func InsertOrUpdateMap(table string, in map[string]interface{}) *Mutation
- func InsertOrUpdateStruct(table string, in interface{}) (*Mutation, error)
- func InsertStruct(table string, in interface{}) (*Mutation, error)
- func Replace(table string, cols []string, vals []interface{}) *Mutation
- func ReplaceMap(table string, in map[string]interface{}) *Mutation
- func ReplaceStruct(table string, in interface{}) (*Mutation, error)
- func Send(queue string, key Key, payload interface{}, opts ...SendOption) *Mutation
- func Update(table string, cols []string, vals []interface{}) *Mutation
- func UpdateMap(table string, in map[string]interface{}) *Mutation
- func UpdateStruct(table string, in interface{}) (*Mutation, error)
- func WrapMutation(proto *sppb.Mutation) (*Mutation, error)
- type MutationGroup
- type NullBool
- type NullDate
- type NullFloat32
- func (n NullFloat32) GormDataType() string
- func (n NullFloat32) IsNull() bool
- func (n NullFloat32) MarshalJSON() ([]byte, error)
- func (n *NullFloat32) Scan(value interface{}) error
- func (n NullFloat32) String() string
- func (n *NullFloat32) UnmarshalJSON(payload []byte) error
- func (n NullFloat32) Value() (driver.Value, error)
- type NullFloat64
- func (n NullFloat64) GormDataType() string
- func (n NullFloat64) IsNull() bool
- func (n NullFloat64) MarshalJSON() ([]byte, error)
- func (n *NullFloat64) Scan(value interface{}) error
- func (n NullFloat64) String() string
- func (n *NullFloat64) UnmarshalJSON(payload []byte) error
- func (n NullFloat64) Value() (driver.Value, error)
- type NullInt64
- func (n NullInt64) GormDataType() string
- func (n NullInt64) IsNull() bool
- func (n NullInt64) MarshalJSON() ([]byte, error)
- func (n *NullInt64) Scan(value interface{}) error
- func (n NullInt64) String() string
- func (n *NullInt64) UnmarshalJSON(payload []byte) error
- func (n NullInt64) Value() (driver.Value, error)
- type NullInterval
- func (n NullInterval) GormDataType() string
- func (n NullInterval) IsNull() bool
- func (n NullInterval) MarshalJSON() ([]byte, error)
- func (n *NullInterval) Scan(value interface{}) error
- func (n NullInterval) String() string
- func (n *NullInterval) UnmarshalJSON(payload []byte) error
- func (n NullInterval) Value() (driver.Value, error)
- type NullJSON
- type NullNumeric
- func (n NullNumeric) GormDataType() string
- func (n NullNumeric) IsNull() bool
- func (n NullNumeric) MarshalJSON() ([]byte, error)
- func (n *NullNumeric) Scan(value interface{}) error
- func (n NullNumeric) String() string
- func (n *NullNumeric) UnmarshalJSON(payload []byte) error
- func (n NullNumeric) Value() (driver.Value, error)
- type NullProtoEnum
- type NullProtoMessage
- type NullRow
- type NullString
- func (n NullString) GormDataType() string
- func (n NullString) IsNull() bool
- func (n NullString) MarshalJSON() ([]byte, error)
- func (n *NullString) Scan(value interface{}) error
- func (n NullString) String() string
- func (n *NullString) UnmarshalJSON(payload []byte) error
- func (n NullString) Value() (driver.Value, error)
- type NullTime
- type NullUUID
- type NullableValue
- type PGJsonB
- type PGNumeric
- func (n PGNumeric) GormDataType() string
- func (n PGNumeric) IsNull() bool
- func (n PGNumeric) MarshalJSON() ([]byte, error)
- func (n *PGNumeric) Scan(value interface{}) error
- func (n PGNumeric) String() string
- func (n *PGNumeric) UnmarshalJSON(payload []byte) error
- func (n PGNumeric) Value() (driver.Value, error)
- type Partition
- type PartitionOptions
- type QueryOptions
- type ReadOnlyTransaction
- func (t *ReadOnlyTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
- func (t *ReadOnlyTransaction) Close()
- func (t *ReadOnlyTransaction) Query(ctx context.Context, statement Statement) *RowIterator
- func (t *ReadOnlyTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator
- func (t *ReadOnlyTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator
- func (t *ReadOnlyTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
- func (t *ReadOnlyTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
- func (t *ReadOnlyTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)
- func (t *ReadOnlyTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, ...) (*Row, error)
- func (t *ReadOnlyTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)
- func (t *ReadOnlyTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, ...) (ri *RowIterator)
- func (t *ReadOnlyTransaction) Timestamp() (time.Time, error)
- func (t *ReadOnlyTransaction) WithBeginTransactionOption(option BeginTransactionOption) *ReadOnlyTransaction
- func (t *ReadOnlyTransaction) WithTimestampBound(tb TimestampBound) *ReadOnlyTransaction
- type ReadOptions
- type ReadWriteStmtBasedTransaction
- func NewReadWriteStmtBasedTransaction(ctx context.Context, c *Client) (*ReadWriteStmtBasedTransaction, error)
- func NewReadWriteStmtBasedTransactionWithCallbackForOptions(ctx context.Context, c *Client, opts TransactionOptions, ...) (*ReadWriteStmtBasedTransaction, error)
- func NewReadWriteStmtBasedTransactionWithOptions(ctx context.Context, c *Client, options TransactionOptions) (*ReadWriteStmtBasedTransaction, error)
- func (t *ReadWriteStmtBasedTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
- func (t *ReadWriteStmtBasedTransaction) Commit(ctx context.Context) (time.Time, error)
- func (t *ReadWriteStmtBasedTransaction) CommitWithReturnResp(ctx context.Context) (CommitResponse, error)
- func (t *ReadWriteStmtBasedTransaction) Query(ctx context.Context, statement Statement) *RowIterator
- func (t *ReadWriteStmtBasedTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator
- func (t *ReadWriteStmtBasedTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator
- func (t *ReadWriteStmtBasedTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
- func (t *ReadWriteStmtBasedTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
- func (t *ReadWriteStmtBasedTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)
- func (t *ReadWriteStmtBasedTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, ...) (*Row, error)
- func (t *ReadWriteStmtBasedTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)
- func (t *ReadWriteStmtBasedTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, ...) (ri *RowIterator)
- func (t *ReadWriteStmtBasedTransaction) ResetForRetry(ctx context.Context) (*ReadWriteStmtBasedTransaction, error)
- func (t *ReadWriteStmtBasedTransaction) Rollback(ctx context.Context)
- type ReadWriteTransaction
- func (t *ReadWriteTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
- func (t *ReadWriteTransaction) BatchUpdate(ctx context.Context, stmts []Statement) (_ []int64, err error)
- func (t *ReadWriteTransaction) BatchUpdateWithOptions(ctx context.Context, stmts []Statement, opts QueryOptions) (_ []int64, err error)
- func (t *ReadWriteTransaction) BufferWrite(ms []*Mutation) error
- func (t *ReadWriteTransaction) Query(ctx context.Context, statement Statement) *RowIterator
- func (t *ReadWriteTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator
- func (t *ReadWriteTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator
- func (t *ReadWriteTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
- func (t *ReadWriteTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
- func (t *ReadWriteTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)
- func (t *ReadWriteTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, ...) (*Row, error)
- func (t *ReadWriteTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)
- func (t *ReadWriteTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, ...) (ri *RowIterator)
- func (t *ReadWriteTransaction) Update(ctx context.Context, stmt Statement) (rowCount int64, err error)
- func (t *ReadWriteTransaction) UpdateWithOptions(ctx context.Context, stmt Statement, opts QueryOptions) (rowCount int64, err error)
- type Row
- func (r *Row) Column(i int, ptr interface{}) error
- func (r *Row) ColumnByName(name string, ptr interface{}) error
- func (r *Row) ColumnIndex(name string) (int, error)
- func (r *Row) ColumnName(i int) string
- func (r *Row) ColumnNames() []string
- func (r *Row) ColumnType(i int) *sppb.Type
- func (r *Row) ColumnValue(i int) *proto3.Value
- func (r *Row) Columns(ptrs ...interface{}) error
- func (r *Row) Size() int
- func (r *Row) String() string
- func (r *Row) ToStruct(p interface{}) error
- func (r *Row) ToStructLenient(p interface{}) error
- type RowIterator
- type SendOption
- type SessionPoolConfig (deprecated)
- type Statement
- type TimestampBound
- type TransactionOptions
- type TransactionOutcomeUnknownError
Examples¶
- Client.Apply
- Client.BatchReadOnlyTransaction
- Client.ReadOnlyTransaction
- Client.ReadWriteTransaction
- Client.Single
- Delete
- Delete (KeyRange)
- GenericColumnValue.Decode
- Insert
- InsertMap
- InsertStruct
- KeySets
- NewClient
- NewClientWithConfig
- NewReadWriteStmtBasedTransaction
- NewStatement
- NewStatement (StructLiteral)
- ReadOnlyTransaction.Query
- ReadOnlyTransaction.Read
- ReadOnlyTransaction.ReadRow
- ReadOnlyTransaction.ReadUsingIndex
- ReadOnlyTransaction.ReadWithOptions
- ReadOnlyTransaction.Timestamp
- ReadOnlyTransaction.WithTimestampBound
- Row.ColumnByName
- Row.ColumnIndex
- Row.ColumnName
- Row.ColumnNames
- Row.Columns
- Row.Size
- Row.ToStruct
- Row.ToStructLenient
- RowIterator.Do
- RowIterator.Next
- Statement (ArrayOfStructParam)
- Statement (RegexpContains)
- Statement (StructParam)
- Update
- UpdateMap
- UpdateStruct
Constants¶
const (
    // Scope is the scope for Cloud Spanner Data API.
    Scope = "https://www.googleapis.com/auth/spanner.data"

    // AdminScope is the scope for Cloud Spanner Admin APIs.
    AdminScope = "https://www.googleapis.com/auth/spanner.admin"
)
const (
    // NumericPrecisionDigits is the maximum number of digits in a NUMERIC
    // value.
    NumericPrecisionDigits = 38

    // NumericScaleDigits is the maximum number of digits after the decimal
    // point in a NUMERIC value.
    NumericScaleDigits = 9
)
const (
    // GcpResourceNamePrefix is the prefix for Spanner GCP resource names span attribute.
    GcpResourceNamePrefix = "//spanner.googleapis.com/"
)

const OtInstrumentationScope = "cloud.google.com/go"

OtInstrumentationScope is the instrumentation name that will be associated with the emitted telemetry.
Variables¶
var (
    // OpenSessionCount is a measure of the number of sessions currently opened.
    // It is EXPERIMENTAL and subject to change or removal without notice.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get open_session_count metrics.
    OpenSessionCount = stats.Int64(
        statsPrefix+"open_session_count",
        "Number of sessions currently opened",
        stats.UnitDimensionless,
    )

    // OpenSessionCountView is a view of the last value of OpenSessionCount.
    // It is EXPERIMENTAL and subject to change or removal without notice.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get open_session_count metrics.
    OpenSessionCountView = &view.View{
        Measure:     OpenSessionCount,
        Aggregation: view.LastValue(),
        TagKeys:     tagCommonKeys,
    }

    // MaxAllowedSessionsCount is a measure of the maximum number of sessions
    // allowed. Configurable by the user.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get max_allowed_sessions metrics.
    MaxAllowedSessionsCount = stats.Int64(
        statsPrefix+"max_allowed_sessions",
        "The maximum number of sessions allowed. Configurable by the user.",
        stats.UnitDimensionless,
    )

    // MaxAllowedSessionsCountView is a view of the last value of
    // MaxAllowedSessionsCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get max_allowed_sessions metrics.
    MaxAllowedSessionsCountView = &view.View{
        Measure:     MaxAllowedSessionsCount,
        Aggregation: view.LastValue(),
        TagKeys:     tagCommonKeys,
    }

    // SessionsCount is a measure of the number of sessions in the pool,
    // including in-use, idle, and being prepared.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get num_sessions_in_pool metrics.
    SessionsCount = stats.Int64(
        statsPrefix+"num_sessions_in_pool",
        "The number of sessions currently in use.",
        stats.UnitDimensionless,
    )

    // SessionsCountView is a view of the last value of SessionsCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get num_sessions_in_pool metrics.
    SessionsCountView = &view.View{
        Measure:     SessionsCount,
        Aggregation: view.LastValue(),
        TagKeys:     append(tagCommonKeys, tagKeyType),
    }

    // MaxInUseSessionsCount is a measure of the maximum number of sessions
    // in use during the last 10 minute interval.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get max_in_use_sessions metrics.
    MaxInUseSessionsCount = stats.Int64(
        statsPrefix+"max_in_use_sessions",
        "The maximum number of sessions in use during the last 10 minute interval.",
        stats.UnitDimensionless,
    )

    // MaxInUseSessionsCountView is a view of the last value of
    // MaxInUseSessionsCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get max_in_use_sessions metrics.
    MaxInUseSessionsCountView = &view.View{
        Measure:     MaxInUseSessionsCount,
        Aggregation: view.LastValue(),
        TagKeys:     tagCommonKeys,
    }

    // GetSessionTimeoutsCount is a measure of the number of get sessions
    // timeouts due to pool exhaustion.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get get_session_timeouts metrics.
    GetSessionTimeoutsCount = stats.Int64(
        statsPrefix+"get_session_timeouts",
        "The number of get sessions timeouts due to pool exhaustion.",
        stats.UnitDimensionless,
    )

    // GetSessionTimeoutsCountView is a view of the last value of
    // GetSessionTimeoutsCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get get_session_timeouts metrics.
    GetSessionTimeoutsCountView = &view.View{
        Measure:     GetSessionTimeoutsCount,
        Aggregation: view.Count(),
        TagKeys:     tagCommonKeys,
    }

    // AcquiredSessionsCount is the number of sessions acquired from
    // the session pool.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get num_acquired_sessions metrics.
    AcquiredSessionsCount = stats.Int64(
        statsPrefix+"num_acquired_sessions",
        "The number of sessions acquired from the session pool.",
        stats.UnitDimensionless,
    )

    // AcquiredSessionsCountView is a view of the last value of
    // AcquiredSessionsCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get num_acquired_sessions metrics.
    AcquiredSessionsCountView = &view.View{
        Measure:     AcquiredSessionsCount,
        Aggregation: view.Count(),
        TagKeys:     tagCommonKeys,
    }

    // ReleasedSessionsCount is the number of sessions released by the user
    // and pool maintainer.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get num_released_sessions metrics.
    ReleasedSessionsCount = stats.Int64(
        statsPrefix+"num_released_sessions",
        "The number of sessions released by the user and pool maintainer.",
        stats.UnitDimensionless,
    )

    // ReleasedSessionsCountView is a view of the last value of
    // ReleasedSessionsCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get num_released_sessions metrics.
    ReleasedSessionsCountView = &view.View{
        Measure:     ReleasedSessionsCount,
        Aggregation: view.Count(),
        TagKeys:     tagCommonKeys,
    }

    // GFELatency is the latency between Google's network receiving an RPC and reading back the first byte of the response.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get gfe_latency metrics.
    GFELatency = stats.Int64(
        statsPrefix+"gfe_latency",
        "Latency between Google's network receiving an RPC and reading back the first byte of the response",
        stats.UnitMilliseconds,
    )

    // GFELatencyView is the view of the distribution of GFELatency values.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get gfe_latency metrics.
    GFELatencyView = &view.View{
        Name:        "cloud.google.com/go/spanner/gfe_latency",
        Measure:     GFELatency,
        Description: "Latency between Google's network receives an RPC and reads back the first byte of the response",
        Aggregation: view.Distribution(0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 13.0,
            16.0, 20.0, 25.0, 30.0, 40.0, 50.0, 65.0, 80.0, 100.0, 130.0, 160.0, 200.0, 250.0,
            300.0, 400.0, 500.0, 650.0, 800.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, 50000.0,
            100000.0),
        TagKeys: append(tagCommonKeys, tagKeyMethod),
    }

    // GFEHeaderMissingCount is the number of RPC responses received without the server-timing header,
    // which most likely means that the RPC never reached Google's network.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get gfe_header_missing_count metrics.
    GFEHeaderMissingCount = stats.Int64(
        statsPrefix+"gfe_header_missing_count",
        "Number of RPC responses received without the server-timing header, most likely means that the RPC never reached Google's network",
        stats.UnitDimensionless,
    )

    // GFEHeaderMissingCountView is the view of the count of GFEHeaderMissingCount.
    //
    // Deprecated: OpenCensus project is deprecated. Use OpenTelemetry to get gfe_header_missing_count metrics.
    GFEHeaderMissingCountView = &view.View{
        Name:        "cloud.google.com/go/spanner/gfe_header_missing_count",
        Measure:     GFEHeaderMissingCount,
        Description: "Number of RPC responses received without the server-timing header, most likely means that the RPC never reached Google's network",
        Aggregation: view.Count(),
        TagKeys:     append(tagCommonKeys, tagKeyMethod),
    }
)
var (
    // CommitTimestamp is a special value used to tell Cloud Spanner to insert
    // the commit timestamp of the transaction into a column. It can be used in
    // a Mutation, or directly used in InsertStruct or InsertMap. See
    // ExampleCommitTimestamp. This is just a placeholder and the actual value
    // stored in this variable has no meaning.
    CommitTimestamp = commitTimestamp
)
var DefaultRetryBackoff = gax.Backoff{
    Initial:    20 * time.Millisecond,
    Max:        32 * time.Second,
    Multiplier: 1.3,
}

DefaultRetryBackoff is used for retryers as a fallback value when the server did not return any retry information.
var DefaultSessionPoolConfig = SessionPoolConfig{
    MultiplexSessionCheckInterval: 10 * time.Minute,
}

DefaultSessionPoolConfig is the default configuration.

Deprecated: The session pool has been removed. Multiplexed sessions are now used for all operations. Only MultiplexSessionCheckInterval is still active.
var (
    // ErrRowNotFound is the "row not found" error.
    ErrRowNotFound = errors.New("row not found")
)
Functions¶
func DisableGfeLatencyAndHeaderMissingCountViews (deprecated; added in v1.29.0)

func DisableGfeLatencyAndHeaderMissingCountViews()

DisableGfeLatencyAndHeaderMissingCountViews disables the GFEHeaderMissingCount and GFELatency metrics.

Deprecated: OpenCensus project is deprecated. Use OpenTelemetry for capturing metrics.
func EnableGfeHeaderMissingCountView (deprecated; added in v1.29.0)

func EnableGfeHeaderMissingCountView() error

EnableGfeHeaderMissingCountView enables the GFEHeaderMissingCount metric.

Deprecated: OpenCensus project is deprecated. Use EnableOpenTelemetryMetrics to get GfeHeaderMissingCount metrics through OpenTelemetry instrumentation.
func EnableGfeLatencyAndHeaderMissingCountViews (deprecated; added in v1.29.0)

func EnableGfeLatencyAndHeaderMissingCountViews() error

EnableGfeLatencyAndHeaderMissingCountViews enables the GFEHeaderMissingCount and GFELatency metrics.

Deprecated: OpenCensus project is deprecated. Use EnableOpenTelemetryMetrics to get GfeLatency and GfeHeaderMissingCount metrics through OpenTelemetry instrumentation.
func EnableGfeLatencyView (deprecated; added in v1.29.0)

func EnableGfeLatencyView() error

EnableGfeLatencyView enables the GFELatency metric.

Deprecated: OpenCensus project is deprecated. Use EnableOpenTelemetryMetrics to get GfeLatency metrics through OpenTelemetry instrumentation.
func EnableOpenTelemetryMetrics (added in v1.57.0)

func EnableOpenTelemetryMetrics()

EnableOpenTelemetryMetrics enables OpenTelemetry metrics.
func EnableStatViews (deprecated; added in v1.5.0)

func EnableStatViews() error

EnableStatViews enables all views of metrics related to session management.

Deprecated: OpenCensus project is deprecated. Use EnableOpenTelemetryMetrics to get session metrics through OpenTelemetry instrumentation.
func ExtractRetryDelay (added in v1.8.0)
ExtractRetryDelay extracts retry backoff from a *spanner.Error if present.
func IsOpenTelemetryMetricsEnabled (added in v1.57.0)

func IsOpenTelemetryMetricsEnabled() bool

IsOpenTelemetryMetricsEnabled reports whether OpenTelemetry metrics are enabled.
func NumericString (added in v1.10.0)

NumericString returns a string representing a *big.Rat in a format compatible with Spanner SQL. It returns a floating-point literal with 9 digits after the decimal point.
func SelectAll (added in v1.56.0)

func SelectAll(rows rowIterator, destination interface{}, options ...DecodeOptions) error

SelectAll iterates all rows to the end. After iterating, it closes the rows and propagates any error that occurred, leaving the destination slice possibly partially filled. The destination must be a slice; for each row, SelectAll scans the data and appends it to the destination slice. SelectAll supports both kinds of slices, slices of pointers and slices of structs or primitives by value, for example:

type Singer struct {
    ID   string
    Name string
}

var singersByPtr []*Singer
var singersByValue []Singer

Both singersByPtr and singersByValue are valid destinations for the SelectAll function.

Add the option `spanner.WithLenient()` to instruct SelectAll to ignore additional columns in the rows that are not present in the destination struct. For example:

var singersByPtr []*Singer
err := spanner.SelectAll(rows, &singersByPtr, spanner.WithLenient())
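The append-per-row behavior can be sketched without Spanner at all. scanAll below is a hypothetical stand-in for SelectAll, using a plain slice of tuples in place of a RowIterator and a hard-coded scan in place of Row.ToStruct.

```go
package main

import "fmt"

type Singer struct {
	ID   string
	Name string
}

// scanAll appends one destination element per source row, mirroring the
// slice-filling contract of SelectAll for a []Singer destination.
func scanAll(rows [][2]string, dest *[]Singer) error {
	for _, r := range rows {
		// In the real library this step is a reflective column scan;
		// here it is a direct field assignment.
		*dest = append(*dest, Singer{ID: r[0], Name: r[1]})
	}
	return nil
}

func main() {
	var singers []Singer
	if err := scanAll([][2]string{{"1", "Marc"}, {"2", "Catalina"}}, &singers); err != nil {
		panic(err)
	}
	fmt.Println(len(singers), singers[1].Name) // → 2 Catalina
}
```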
func ToSpannerError (added in v1.12.0)

ToSpannerError converts a general Go error to *spanner.Error. If the given error is already a *spanner.Error, the original error is returned.

Spanner errors are normally created by the Spanner client library from the returned APIError of an RPC. This method can also be used to create Spanner errors for use in tests. The recommended way to create test errors is to call this method with a status error, e.g.:

ToSpannerError(status.New(codes.NotFound, "Table not found").Err())
func UseNumberWithJSONDecoderEncoder (added in v1.54.0)

func UseNumberWithJSONDecoderEncoder(useNumber bool)

UseNumberWithJSONDecoderEncoder specifies whether Cloud Spanner JSON numbers are decoded as Number (preserving precision) or float64 (risking loss). It defaults to the same behavior as the standard Go library, which means decoding to float64. Call this method to enable lossless precision.

NOTE 1: Calling this method affects the behavior of all clients created by this library, both existing and future instances.

NOTE 2: This method sets a global variable that is used by the client to encode and decode JSON numbers. Access to the global variable is not synchronized. You should only call this method when there are no goroutines encoding or decoding Cloud Spanner JSON values. It is recommended to only call this method during the initialization of your application, preferably before you create any Cloud Spanner clients, and/or in tests when no queries are being executed.
Types¶
type AckOption (added in v1.88.0)

type AckOption func(*Mutation)

AckOption specifies optional fields for the Ack mutation.

func WithIgnoreNotFound (added in v1.88.0)

WithIgnoreNotFound returns an AckOption that sets the field `ignoreNotFound`.
type ActionOnInactiveTransactionKind (deprecated; added in v1.52.0)

type ActionOnInactiveTransactionKind int
ActionOnInactiveTransactionKind describes the kind of action taken when there are inactive transactions.
Deprecated: This type is no longer used as the session pool has been removed.
const (
    // NoAction action does not perform any action on inactive transactions.
    //
    // Deprecated: This constant is no longer used as the session pool has been removed.
    NoAction ActionOnInactiveTransactionKind

    // Warn action logs inactive transactions. Any inactive transaction gets logged only once.
    //
    // Deprecated: This constant is no longer used as the session pool has been removed.
    Warn

    // Close action closes inactive transactions without logging.
    //
    // Deprecated: This constant is no longer used as the session pool has been removed.
    Close

    // WarnAndClose action logs and closes the inactive transactions.
    //
    // Deprecated: This constant is no longer used as the session pool has been removed.
    WarnAndClose
)
type ApplyOption
type ApplyOption func(*applyOption)
An ApplyOption is an optional argument to Apply.
func ApplyAtLeastOnce

func ApplyAtLeastOnce() ApplyOption
ApplyAtLeastOnce returns an ApplyOption that removes replay protection.
With this option, Apply may attempt to apply mutations more than once; if the mutations are not idempotent, this may lead to a failure being reported when the mutation was applied more than once. For example, an insert may fail with ALREADY_EXISTS even though the row did not exist before Apply was called. For this reason, most users of the library will prefer not to use this option. However, ApplyAtLeastOnce requires only a single RPC, whereas Apply's default replay protection may require an additional RPC. So this option may be appropriate for latency sensitive and/or high throughput blind writing.
func ApplyCommitOptions (added in v1.67.0)

func ApplyCommitOptions(co CommitOptions) ApplyOption
ApplyCommitOptions returns an ApplyOption that sets the commit options to use for the commit operation.
func ExcludeTxnFromChangeStreams (added in v1.61.0)

func ExcludeTxnFromChangeStreams() ApplyOption

ExcludeTxnFromChangeStreams returns an ApplyOption that excludes this commit operation from being recorded in change streams whose DDL option allow_txn_exclusion is set to true.
func IsolationLevel (added in v1.77.0)

func IsolationLevel(isolationLevel sppb.TransactionOptions_IsolationLevel) ApplyOption

IsolationLevel returns an ApplyOption that sets the isolation level for a read-write transaction.
func Priority (added in v1.17.0)

func Priority(priority sppb.RequestOptions_Priority) ApplyOption

Priority returns an ApplyOption that sets the RPC priority to use for the commit operation.
func TransactionTag (added in v1.22.0)

func TransactionTag(tag string) ApplyOption

TransactionTag returns an ApplyOption that will include the given tag as a transaction tag for a write-only transaction.
type BatchReadOnlyTransaction

type BatchReadOnlyTransaction struct {
    ReadOnlyTransaction
    ID BatchReadOnlyTransactionID
}

BatchReadOnlyTransaction is a ReadOnlyTransaction that allows for exporting arbitrarily large amounts of data from Cloud Spanner databases. BatchReadOnlyTransaction partitions a read/query request. The read/query request can then be executed independently over each partition while observing the same snapshot of the database. A BatchReadOnlyTransaction can also be shared across multiple clients by passing around the BatchReadOnlyTransactionID and then recreating the transaction using Client.BatchReadOnlyTransactionFromID.

Note: if a client is used only to run partitions, you can create it using a ClientConfig with both MinOpened and MaxIdle set to zero to avoid creating unnecessary sessions. You can also avoid excess gRPC channels by setting ClientConfig.NumChannels to the number of concurrently active BatchReadOnlyTransactions you expect to have.
func (*BatchReadOnlyTransaction) AnalyzeQuery

func (t *BatchReadOnlyTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
AnalyzeQuery returns the query plan for statement.
func (*BatchReadOnlyTransaction) Cleanup

func (t *BatchReadOnlyTransaction) Cleanup(ctx context.Context)

Cleanup cleans up all the resources used by this transaction and makes it unusable. Once this method is invoked, the transaction is no longer usable anywhere, including other clients/processes with which this transaction was shared.

Calling Cleanup is optional, but recommended. If Cleanup is not called, the transaction's resources will be freed when the session expires on the backend and is deleted. For more information about recycled sessions, see https://cloud.google.com/spanner/docs/sessions.
func (*BatchReadOnlyTransaction) Close

func (t *BatchReadOnlyTransaction) Close()
Close marks the txn as closed.
func (*BatchReadOnlyTransaction) Execute

func (t *BatchReadOnlyTransaction) Execute(ctx context.Context, p *Partition) *RowIterator

Execute runs a single Partition obtained from PartitionRead or PartitionQuery.
func (*BatchReadOnlyTransaction) PartitionQuery

func (t *BatchReadOnlyTransaction) PartitionQuery(ctx context.Context, statement Statement, opt PartitionOptions) ([]*Partition, error)

PartitionQuery returns a list of Partitions that can be used to execute a query against the database.
func (*BatchReadOnlyTransaction) PartitionQueryWithOptions (added in v1.3.0)

func (t *BatchReadOnlyTransaction) PartitionQueryWithOptions(ctx context.Context, statement Statement, opt PartitionOptions, qOpts QueryOptions) ([]*Partition, error)

PartitionQueryWithOptions returns a list of Partitions that can be used to execute a query against the database. The SQL query execution will be optimized based on the given query options.
func (*BatchReadOnlyTransaction) PartitionRead

func (t *BatchReadOnlyTransaction) PartitionRead(ctx context.Context, table string, keys KeySet, columns []string, opt PartitionOptions) ([]*Partition, error)

PartitionRead returns a list of Partitions that can be used to read rows from the database. These partitions can be executed across multiple processes, even across different machines. The partition size and count hints can be configured using PartitionOptions.
func (*BatchReadOnlyTransaction) PartitionReadUsingIndex

func (t *BatchReadOnlyTransaction) PartitionReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string, opt PartitionOptions) ([]*Partition, error)

PartitionReadUsingIndex returns a list of Partitions that can be used to read rows from the database using an index.
func (*BatchReadOnlyTransaction) PartitionReadUsingIndexWithOptions (added in v1.22.0)

func (t *BatchReadOnlyTransaction) PartitionReadUsingIndexWithOptions(ctx context.Context, table, index string, keys KeySet, columns []string, opt PartitionOptions, readOptions ReadOptions) ([]*Partition, error)

PartitionReadUsingIndexWithOptions returns a list of Partitions that can be used to read rows from the database using an index. Pass a ReadOptions to modify the read operation.
func (*BatchReadOnlyTransaction) PartitionReadWithOptions (added in v1.22.0)

func (t *BatchReadOnlyTransaction) PartitionReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opt PartitionOptions, readOptions ReadOptions) ([]*Partition, error)

PartitionReadWithOptions returns a list of Partitions that can be used to read rows from the database. These partitions can be executed across multiple processes, even across different machines. The partition size and count hints can be configured using PartitionOptions. Pass a ReadOptions to modify the read operation.
func (*BatchReadOnlyTransaction) Query

func (t *BatchReadOnlyTransaction) Query(ctx context.Context, statement Statement) *RowIterator

Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.

Query returns only row data, without a query plan or execution statistics. Use QueryWithStats to get rows along with the plan and statistics. Use AnalyzeQuery to get just the plan.
func (*BatchReadOnlyTransaction) QueryWithOptions (added in v1.3.0)

func (t *BatchReadOnlyTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator

QueryWithOptions executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The SQL query execution will be optimized based on the given query options.
func (*BatchReadOnlyTransaction) QueryWithStats

func (t *BatchReadOnlyTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator

QueryWithStats executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The RowIterator will also be populated with a query plan and execution statistics.
func (*BatchReadOnlyTransaction) Read

func (t *BatchReadOnlyTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
Read returns a RowIterator for reading multiple rows from the database.
func (*BatchReadOnlyTransaction) ReadRow

func (t *BatchReadOnlyTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)

ReadRow reads a single row from the database.

If no row is present with the given key, then ReadRow returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.

To check if the error is spanner.ErrRowNotFound:

if errors.Is(err, spanner.ErrRowNotFound) {
    ...
}

func (*BatchReadOnlyTransaction) ReadRowUsingIndex (added in v1.2.0)
func (t *BatchReadOnlyTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)

ReadRowUsingIndex reads a single row from the database using an index.

If no row is present with the given index, then ReadRowUsingIndex returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.

To check if the error is spanner.ErrRowNotFound:

if errors.Is(err, spanner.ErrRowNotFound) {
    ...
}

If more than one row is received with the given index, then ReadRowUsingIndex returns an error where spanner.ErrCode(err) is codes.FailedPrecondition.
func (*BatchReadOnlyTransaction) ReadRowWithOptions (added in v1.29.0)

func (t *BatchReadOnlyTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, opts *ReadOptions) (*Row, error)

ReadRowWithOptions reads a single row from the database. Pass a ReadOptions to modify the read operation.

If no row is present with the given key, then ReadRowWithOptions returns an error where spanner.ErrCode(err) is codes.NotFound.

To check if the error is spanner.ErrRowNotFound:

if errors.Is(err, spanner.ErrRowNotFound) {
    ...
}

func (*BatchReadOnlyTransaction) ReadUsingIndex
func (t *BatchReadOnlyTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)

ReadUsingIndex calls ReadWithOptions with ReadOptions{Index: index}.

func (*BatchReadOnlyTransaction) ReadWithOptions

func (t *BatchReadOnlyTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opts *ReadOptions) (ri *RowIterator)

ReadWithOptions returns a RowIterator for reading multiple rows from the database. Pass a ReadOptions to modify the read operation.
type BatchReadOnlyTransactionID

type BatchReadOnlyTransactionID struct {
    // contains filtered or unexported fields
}

BatchReadOnlyTransactionID is a unique identifier for a BatchReadOnlyTransaction. It can be used to re-create a BatchReadOnlyTransaction on a different machine or process by calling Client.BatchReadOnlyTransactionFromID.

func (BatchReadOnlyTransactionID) MarshalBinary

func (tid BatchReadOnlyTransactionID) MarshalBinary() (data []byte, err error)

MarshalBinary implements BinaryMarshaler.

func (*BatchReadOnlyTransactionID) UnmarshalBinary

func (tid *BatchReadOnlyTransactionID) UnmarshalBinary(data []byte) error
UnmarshalBinary implements BinaryUnmarshaler.
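Sharing an ID across machines relies on this encoding.BinaryMarshaler/BinaryUnmarshaler round trip. The sketch below uses a hypothetical fakeTxnID type (with JSON as an arbitrary wire format) to show the shape of that hand-off; the real type's fields and encoding are unexported and unspecified.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fakeTxnID is a hypothetical stand-in for BatchReadOnlyTransactionID.
type fakeTxnID struct {
	Session string
	TxnID   []byte
}

func (t fakeTxnID) MarshalBinary() ([]byte, error)     { return json.Marshal(t) }
func (t *fakeTxnID) UnmarshalBinary(data []byte) error { return json.Unmarshal(data, t) }

func main() {
	orig := fakeTxnID{Session: "projects/p/instances/i/databases/d/sessions/s", TxnID: []byte{1, 2}}

	// Serialize the ID, e.g. to hand to a worker over a queue or file.
	data, err := orig.MarshalBinary()
	if err != nil {
		panic(err)
	}

	// In the worker process, restore the ID and recreate the transaction
	// (the real code would then call Client.BatchReadOnlyTransactionFromID).
	var restored fakeTxnID
	if err := restored.UnmarshalBinary(data); err != nil {
		panic(err)
	}
	fmt.Println(restored.Session == orig.Session) // → true
}
```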
type BatchWriteOptions (added in v1.52.0)

type BatchWriteOptions struct {
    // Priority is the RPC priority to use for this request.
    Priority sppb.RequestOptions_Priority

    // The transaction tag to use for this request.
    TransactionTag string

    // If excludeTxnFromChangeStreams == true, modifications from all transactions
    // in this batch write request will not be recorded in allowed tracking
    // change streams with DDL option allow_txn_exclusion=true.
    ExcludeTxnFromChangeStreams bool

    // ClientContext contains client-owned context information to be passed with the batch write request.
    ClientContext *sppb.RequestOptions_ClientContext
}

BatchWriteOptions provides options for a BatchWriteRequest.
type BatchWriteResponseIterator (added in v1.52.0)

type BatchWriteResponseIterator struct {
    // contains filtered or unexported fields
}

BatchWriteResponseIterator is an iterator over BatchWriteResponse structures returned from the BatchWrite RPC.

func (*BatchWriteResponseIterator) Do (added in v1.52.0)

func (r *BatchWriteResponseIterator) Do(f func(r *sppb.BatchWriteResponse) error) error

Do calls the provided function once in sequence for each item in the iteration. If the function returns a non-nil error, Do immediately returns that error.

If there are no items in the iterator, Do will return nil without calling the provided function.

Do always calls Stop on the iterator.

func (*BatchWriteResponseIterator) Next (added in v1.52.0)

func (r *BatchWriteResponseIterator) Next() (*sppb.BatchWriteResponse, error)

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

func (*BatchWriteResponseIterator) Stop (added in v1.52.0)

func (r *BatchWriteResponseIterator) Stop()

Stop terminates the iteration. It should be called after you finish using the iterator.
type BeginTransactionOption (added in v1.83.0)

type BeginTransactionOption int

BeginTransactionOption determines how a transaction is started by the client. A transaction can be started by inlining the BeginTransaction option with the first statement in the transaction, or by executing an explicit BeginTransaction RPC. Inlining the BeginTransaction with the first statement requires one less round-trip to Spanner. Using an explicit BeginTransaction RPC can be more efficient if you want to execute multiple queries in parallel at the start of the transaction, as only one statement can include a BeginTransaction option, and all other queries have to wait for the first query to return at least one result before proceeding.

const (
    // DefaultBeginTransaction instructs the transaction to use the default for the type of transaction. The defaults are:
    //  * ReadWriteTransaction: InlinedBeginTransaction
    //  * ReadWriteStmtBasedTransaction: ExplicitBeginTransaction
    //  * ReadOnlyTransaction: ExplicitBeginTransaction
    DefaultBeginTransaction BeginTransactionOption = iota

    // InlinedBeginTransaction instructs the transaction to include the BeginTransaction with the first statement in the
    // transaction. This is more efficient if the transaction does not execute any statements in parallel at the start
    // of the transaction. This option is the default for ReadWriteTransaction.
    InlinedBeginTransaction

    // ExplicitBeginTransaction instructs the transaction to use a separate BeginTransaction RPC to start the transaction.
    // This can be more efficient if the transaction executes multiple statements in parallel at the start of the
    // transaction. This option is the default for ReadOnlyTransaction and ReadWriteStmtBasedTransaction.
    ExplicitBeginTransaction
)
type Client

type Client struct {
    // contains filtered or unexported fields
}

Client is a client for reading and writing data to a Cloud Spanner database. A client is safe to use concurrently, except for its Close method.
func NewClient

NewClient creates a client to a database. A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID. It uses a default configuration.
Example¶
package main

import (
    "context"

    "cloud.google.com/go/spanner"
)

func main() {
    ctx := context.Background()
    const myDB = "projects/my-project/instances/my-instance/databases/my-db"
    client, err := spanner.NewClient(ctx, myDB)
    if err != nil {
        // TODO: Handle error.
    }
    _ = client // TODO: Use client.
}

func NewClientWithConfig
func NewClientWithConfig(ctxcontext.Context, databasestring, configClientConfig, opts ...option.ClientOption) (c *Client, errerror)
NewClientWithConfig creates a client to a database. A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID.
Example¶
```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	const myDB = "projects/my-project/instances/my-instance/database/my-db"
	client, err := spanner.NewClientWithConfig(ctx, myDB, spanner.ClientConfig{
		NumChannels: 10,
	})
	if err != nil {
		// TODO: Handle error.
	}
	_ = client     // TODO: Use client.
	client.Close() // Close client when done.
}
```
func NewMultiEndpointClient ¶ added in v1.61.0
func NewMultiEndpointClient(ctx context.Context, database string, gmeCfg *grpcgcp.GCPMultiEndpointOptions, opts ...option.ClientOption) (*Client, *grpcgcp.GCPMultiEndpoint, error)
NewMultiEndpointClient is the same as NewMultiEndpointClientWithConfig with the default client configuration.
A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID.
func NewMultiEndpointClientWithConfig ¶ added in v1.61.0
func NewMultiEndpointClientWithConfig(ctx context.Context, database string, config ClientConfig, gmeCfg *grpcgcp.GCPMultiEndpointOptions, opts ...option.ClientOption) (c *Client, gme *grpcgcp.GCPMultiEndpoint, err error)
NewMultiEndpointClientWithConfig creates a client to a database using GCPMultiEndpoint.
The purposes of GCPMultiEndpoint are:
- Fallback to an alternative endpoint (host:port) when the original endpoint is completely unavailable.
- Be able to route a Cloud Spanner call to a specific group of endpoints.
- Be able to reconfigure endpoints in runtime.
The GRPCgcpConfig and DialFunc in the GCPMultiEndpointOptions are optional and will be configured automatically.
For GCPMultiEndpoint the number of channels is configured via MaxSize of the ChannelPool config in the GRPCgcpConfig.
The GCPMultiEndpoint returned can be used to update the endpoints at runtime.
A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID.
func (*Client) Apply ¶
func (c *Client) Apply(ctx context.Context, ms []*Mutation, opts ...ApplyOption) (commitTimestamp time.Time, err error)
Apply applies a list of mutations atomically to the database.
Example¶
```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	m := spanner.Update("Users", []string{"name", "email"}, []interface{}{"alice", "a@example.com"})
	_, err = client.Apply(ctx, []*spanner.Mutation{m})
	if err != nil {
		// TODO: Handle error.
	}
}
```
func (*Client) BatchReadOnlyTransaction ¶
func (c *Client) BatchReadOnlyTransaction(ctx context.Context, tb TimestampBound) (*BatchReadOnlyTransaction, error)
BatchReadOnlyTransaction returns a BatchReadOnlyTransaction that can be used for partitioned reads or queries from a snapshot of the database. This is useful in batch processing pipelines where one wants to divide the work of reading from the database across multiple machines.
Note: This transaction does not use the underlying session pool but creates a new session each time, and the session is reused across clients.
You should call Close() after the txn is no longer needed on the local client, and call Cleanup() when the txn is finished for all clients, to free the session.
Example¶
```go
package main

import (
	"context"
	"sync"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	var (
		client *spanner.Client
		txn    *spanner.BatchReadOnlyTransaction
		err    error
	)
	if client, err = spanner.NewClient(ctx, myDB); err != nil {
		// TODO: Handle error.
	}
	defer client.Close()
	if txn, err = client.BatchReadOnlyTransaction(ctx, spanner.StrongRead()); err != nil {
		// TODO: Handle error.
	}
	defer txn.Close()

	// Singer represents the elements in a row from the Singers table.
	type Singer struct {
		SingerID   int64
		FirstName  string
		LastName   string
		SingerInfo []byte
	}
	stmt := spanner.Statement{SQL: "SELECT * FROM Singers;"}
	partitions, err := txn.PartitionQuery(ctx, stmt, spanner.PartitionOptions{})
	if err != nil {
		// TODO: Handle error.
	}
	// Note: here we use multiple goroutines, but you should use separate
	// processes/machines.
	wg := sync.WaitGroup{}
	for i, p := range partitions {
		wg.Add(1)
		go func(i int, p *spanner.Partition) {
			defer wg.Done()
			iter := txn.Execute(ctx, p)
			defer iter.Stop()
			for {
				row, err := iter.Next()
				if err == iterator.Done {
					break
				} else if err != nil {
					// TODO: Handle error.
				}
				var s Singer
				if err := row.ToStruct(&s); err != nil {
					// TODO: Handle error.
				}
				_ = s // TODO: Process the row.
			}
		}(i, p)
	}
	wg.Wait()
}
```
func (*Client) BatchReadOnlyTransactionFromID ¶
func (c *Client) BatchReadOnlyTransactionFromID(tid BatchReadOnlyTransactionID) *BatchReadOnlyTransaction
BatchReadOnlyTransactionFromID reconstructs a BatchReadOnlyTransaction from a BatchReadOnlyTransactionID.
func (*Client) BatchWrite ¶ added in v1.52.0
func (c *Client) BatchWrite(ctx context.Context, mgs []*MutationGroup) *BatchWriteResponseIterator
BatchWrite applies a list of mutation groups in a collection of efficient transactions. The mutation groups are applied non-atomically in an unspecified order and thus, they must be independent of each other. Partial failure is possible, i.e., some mutation groups may have been applied successfully, while some may have failed. The results of individual batches are streamed into the response as the batches are applied.
BatchWrite requests are not replay protected, meaning that each mutation group may be applied more than once. Replays of non-idempotent mutations may have undesirable effects. For example, replays of an insert mutation may produce an already exists error or, if you use generated or commit timestamp-based keys, it may result in additional rows being added to the mutation's table. We recommend structuring your mutation groups to be idempotent to avoid this issue.
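Since this method has no example in this section, here is a hedged sketch of applying two independent mutation groups. It assumes the BatchWriteResponseIterator follows the same Next/Stop/iterator.Done pattern as RowIterator, and that the streamed response exposes the applied group indexes and a status; the table and database names are placeholders.

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Each group is applied in its own transaction, so the groups must be
	// independent of each other.
	mgs := []*spanner.MutationGroup{
		{Mutations: []*spanner.Mutation{
			spanner.Insert("Users", []string{"name", "email"}, []interface{}{"alice", "a@example.com"}),
		}},
		{Mutations: []*spanner.Mutation{
			spanner.Insert("Users", []string{"name", "email"}, []interface{}{"bob", "b@example.com"}),
		}},
	}
	iter := client.BatchWrite(ctx, mgs)
	defer iter.Stop()
	for {
		resp, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// One response per applied batch; inspect per-group status here.
		log.Printf("mutation groups %v: status %v", resp.GetIndexes(), resp.GetStatus())
	}
}
```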
func (*Client) BatchWriteWithOptions ¶ added in v1.52.0
func (c *Client) BatchWriteWithOptions(ctx context.Context, mgs []*MutationGroup, opts BatchWriteOptions) *BatchWriteResponseIterator
BatchWriteWithOptions is the same as BatchWrite. It accepts additional options to customize the request.
func (*Client) ClientID ¶ added in v1.57.0
ClientID returns the id of the Client. It is used internally for testing and is not recommended for customer applications.
func (*Client) DatabaseName ¶ added in v1.19.0
DatabaseName returns the full name of a database, e.g., "projects/spanner-cloud-test/instances/foo/databases/foodb".
func (*Client) PartitionedUpdate ¶
PartitionedUpdate executes a DML statement in parallel across the database, using separate, internal transactions that commit independently. The DML statement must be fully partitionable: it must be expressible as the union of many statements each of which accesses only a single row of the table. The statement should also be idempotent, because it may be applied more than once.
PartitionedUpdate returns an estimated count of the number of rows affected. The actual number of affected rows may be greater than the estimate.
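A brief hedged sketch of a partitioned DML call follows; the signature mirrors PartitionedUpdateWithOptions below minus the options, and the table, column, and database names are placeholders. Setting a column to a constant is both fully partitionable and idempotent, so it is a safe shape for this API.

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Fully partitionable (touches one row per sub-statement) and idempotent
	// (re-applying it yields the same state).
	stmt := spanner.Statement{SQL: "UPDATE Users SET active = false WHERE active = true"}
	count, err := client.PartitionedUpdate(ctx, stmt)
	if err != nil {
		log.Fatal(err)
	}
	// count is an estimate; the true number may be higher.
	log.Printf("at least %d rows updated", count)
}
```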
func (*Client) PartitionedUpdateWithOptions ¶ added in v1.3.0
func (c *Client) PartitionedUpdateWithOptions(ctx context.Context, statement Statement, opts QueryOptions) (count int64, err error)
PartitionedUpdateWithOptions executes a DML statement in parallel across the database, using separate, internal transactions that commit independently. The sql query execution will be optimized based on the given query options.
func (*Client) ReadOnlyTransaction ¶
func (c *Client) ReadOnlyTransaction() *ReadOnlyTransaction
ReadOnlyTransaction returns a ReadOnlyTransaction that can be used for multiple reads from the database. You must call Close() when the ReadOnlyTransaction is no longer needed to release resources on the server.
ReadOnlyTransaction will use a strong TimestampBound by default. Use ReadOnlyTransaction.WithTimestampBound to specify a different TimestampBound. A non-strong bound can be used to reduce latency, or "time-travel" to prior versions of the database; see the documentation of TimestampBound for details.
Example¶
```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	t := client.ReadOnlyTransaction()
	defer t.Close()
	// TODO: Read with t using Read, ReadRow, ReadUsingIndex, or Query.
}
```
func (*Client) ReadWriteTransaction ¶
func (c *Client) ReadWriteTransaction(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error) (commitTimestamp time.Time, err error)
ReadWriteTransaction executes a read-write transaction, with retries as necessary.
The function f will be called one or more times. It must not maintain any state between calls.
If the transaction cannot be committed or if f returns an ABORTED error, ReadWriteTransaction will call f again. It will continue to call f until the transaction can be committed or the Context times out or is cancelled. If f returns an error other than ABORTED, ReadWriteTransaction will abort the transaction and return the error.
To limit the number of retries, set a deadline on the Context rather than using a fixed limit on the number of attempts. ReadWriteTransaction will retry as needed until that deadline is met.
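The deadline advice above can be sketched with a context timeout; this is a minimal illustration (database name is a placeholder, and 30 seconds is an arbitrary budget):

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/spanner"
)

func main() {
	client, err := spanner.NewClient(context.Background(), "projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// ReadWriteTransaction retries f on ABORTED errors until this deadline
	// passes, instead of giving up after a fixed attempt count.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		// Buffer mutations or execute DML on txn here.
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```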
See https://godoc.org/cloud.google.com/go/spanner#ReadWriteTransaction for more details.
Example¶
```go
package main

import (
	"context"
	"errors"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		var balance int64
		row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			// This function will be called again if this is an IsAborted error.
			return err
		}
		if err := row.Column(0, &balance); err != nil {
			return err
		}
		if balance <= 10 {
			return errors.New("insufficient funds in account")
		}
		balance -= 10
		m := spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", balance})
		// The buffered mutation will be committed. If the commit fails with an
		// IsAborted error, this function will be called again.
		return txn.BufferWrite([]*spanner.Mutation{m})
	})
	if err != nil {
		// TODO: Handle error.
	}
}
```
func (*Client) ReadWriteTransactionWithOptions ¶ added in v1.12.0
func (c *Client) ReadWriteTransactionWithOptions(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error, options TransactionOptions) (resp CommitResponse, err error)
ReadWriteTransactionWithOptions executes a read-write transaction with configurable options, with retries as necessary.
ReadWriteTransactionWithOptions is a configurable ReadWriteTransaction.
See https://godoc.org/cloud.google.com/go/spanner#ReadWriteTransaction for more details.
func (*Client) Single ¶
func (c *Client) Single() *ReadOnlyTransaction
Single provides a read-only snapshot transaction optimized for the case where only a single read or query is needed. This is more efficient than using ReadOnlyTransaction() for a single read or query.
Single will use a strong TimestampBound by default. Use ReadOnlyTransaction.WithTimestampBound to specify a different TimestampBound. A non-strong bound can be used to reduce latency, or "time-travel" to prior versions of the database; see the documentation of TimestampBound for details.
Example¶
```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().Query(ctx, spanner.NewStatement("SELECT FirstName FROM Singers"))
	_ = iter // TODO: iterate using Next or Do.
}
```
type ClientConfig ¶
```go
type ClientConfig struct {
	// NumChannels is the number of gRPC channels.
	// If zero, a reasonable default is used based on the execution environment.
	//
	// Deprecated: The Spanner client now uses a pool of gRPC connections. Use
	// option.WithGRPCConnectionPool(numConns) instead to specify the number of
	// connections the client should use. The client will default to a
	// reasonable default if this option is not specified.
	NumChannels int

	// SessionPoolConfig is the configuration for session pool.
	SessionPoolConfig

	// SessionLabels for the sessions created by this client.
	// See https://cloud.google.com/spanner/docs/reference/rpc/google.spanner.v1#session
	// for more info.
	SessionLabels map[string]string

	// QueryOptions is the configuration for executing a sql query.
	QueryOptions QueryOptions

	// ReadOptions is the configuration for reading rows from a database.
	ReadOptions ReadOptions

	// ApplyOptions is the configuration for applying.
	ApplyOptions []ApplyOption

	// TransactionOptions is the configuration for a transaction.
	TransactionOptions TransactionOptions

	// BatchWriteOptions is the configuration for a BatchWrite request.
	BatchWriteOptions BatchWriteOptions

	// CallOptions is the configuration for providing custom retry settings that
	// override the default values.
	CallOptions *vkit.CallOptions

	// UserAgent is the prefix to the user agent header. This is used to supply information
	// such as application name or partner tool.
	//
	// Internal Use Only: This field is for internal tracking purpose only,
	// setting the value for this config is not required.
	//
	// Recommended format: “application-or-tool-ID/major.minor.version“.
	UserAgent string

	// DatabaseRole specifies the role to be assumed for all operations on the
	// database by this client.
	DatabaseRole string

	// DisableRouteToLeader specifies if all the requests of type read-write and PDML
	// need to be routed to the leader region.
	//
	// Default: false
	DisableRouteToLeader bool

	// Logger is the logger to use for this client. If it is nil, all logging
	// will be directed to the standard logger.
	Logger *log.Logger

	// Sets the compression to use for all gRPC calls. The compressor must be a valid name.
	// This will enable compression both from the client to the
	// server and from the server to the client.
	//
	// Supported values are:
	//  gzip: Enable gzip compression
	//  identity: Disable compression
	//
	// Default: identity
	Compression string

	// BatchTimeout specifies the timeout for a batch of sessions managed by sessionClient.
	BatchTimeout time.Duration

	// DirectedReadOptions set the DirectedReadOptions for all ReadRequests
	// and ExecuteSqlRequests for the Client, which indicate which replicas or regions
	// should be used for non-transactional reads or queries.
	DirectedReadOptions *sppb.DirectedReadOptions

	OpenTelemetryMeterProvider metric.MeterProvider

	// EnableEndToEndTracing indicates whether end to end tracing is enabled or not. If
	// it is enabled, trace spans will be created at Spanner layer. Enabling end to end
	// tracing requires OpenTelemetry to be set up. Simply enabling this option won't
	// generate traces at Spanner layer.
	//
	// Default: false
	EnableEndToEndTracing bool

	// DisableNativeMetrics indicates whether native metrics should be disabled or not.
	// If true, native metrics will not be emitted.
	//
	// Default: false
	DisableNativeMetrics bool

	// Default: false
	IsExperimentalHost bool

	// ClientContext is the default context for all requests made by the client.
	ClientContext *sppb.RequestOptions_ClientContext
}
```
ClientConfig has configurations for the client.
type CommitOptions ¶ added in v1.14.0
CommitOptions provides options for committing a transaction in a database.
type CommitResponse ¶ added in v1.12.0
```go
type CommitResponse struct {
	// CommitTs is the commit time for a transaction.
	CommitTs time.Time
	// CommitStats is the commit statistics for a transaction.
	CommitStats *sppb.CommitResponse_CommitStats
}
```
CommitResponse provides a response of a transaction commit in a database.
type DecodeOptions ¶ added in v1.56.0
```go
type DecodeOptions interface {
	Apply(s *decodeSetting)
}
```
DecodeOptions is the interface to change decode struct settings.
func WithLenient ¶ added in v1.56.0
func WithLenient() DecodeOptions
WithLenient returns a DecodeOptions that allows decoding into a struct with missing fields in the database.
type Decoder ¶ added in v1.9.0
```go
type Decoder interface {
	DecodeSpanner(input interface{}) error
}
```
Decoder is the interface implemented by a custom type that can be decoded from a supported type by Spanner. A code example:
```go
type customField struct {
	Prefix string
	Suffix string
}

// Convert a string to a customField value
func (cf *customField) DecodeSpanner(val interface{}) (err error) {
	strVal, ok := val.(string)
	if !ok {
		return fmt.Errorf("failed to decode customField: %v", val)
	}
	s := strings.Split(strVal, "-")
	if len(s) > 1 {
		cf.Prefix = s[0]
		cf.Suffix = s[1]
	}
	return nil
}
```
type Encoder ¶ added in v1.9.0
```go
type Encoder interface {
	EncodeSpanner() (interface{}, error)
}
```
Encoder is the interface implemented by a custom type that can be encoded to a supported type by Spanner. A code example:
```go
type customField struct {
	Prefix string
	Suffix string
}

// Convert a customField value to a string
func (cf customField) EncodeSpanner() (interface{}, error) {
	var b bytes.Buffer
	b.WriteString(cf.Prefix)
	b.WriteString("-")
	b.WriteString(cf.Suffix)
	return b.String(), nil
}
```
type Error deprecated
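Before moving on: the Encoder and Decoder examples above compose into a round trip, and because neither interface method touches the Spanner service, the pair can be exercised in plain Go. The sketch below adapts both examples into one runnable program (it uses strings.SplitN instead of strings.Split so suffixes may themselves contain dashes; that variation is ours, not the library's):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// customField is stored in Spanner as a single "prefix-suffix" STRING column.
type customField struct {
	Prefix string
	Suffix string
}

// EncodeSpanner converts a customField value to a string.
func (cf customField) EncodeSpanner() (interface{}, error) {
	var b bytes.Buffer
	b.WriteString(cf.Prefix)
	b.WriteString("-")
	b.WriteString(cf.Suffix)
	return b.String(), nil
}

// DecodeSpanner converts a string back to a customField value.
func (cf *customField) DecodeSpanner(val interface{}) error {
	strVal, ok := val.(string)
	if !ok {
		return fmt.Errorf("failed to decode customField: %v", val)
	}
	if s := strings.SplitN(strVal, "-", 2); s != nil && len(s) == 2 {
		cf.Prefix, cf.Suffix = s[0], s[1]
	}
	return nil
}

func main() {
	in := customField{Prefix: "foo", Suffix: "bar"}
	encoded, err := in.EncodeSpanner()
	if err != nil {
		panic(err)
	}
	var out customField
	if err := out.DecodeSpanner(encoded); err != nil {
		panic(err)
	}
	fmt.Printf("%v -> %+v\n", encoded, out)
	// Prints: foo-bar -> {Prefix:foo Suffix:bar}
}
```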
```go
type Error struct {
	// Code is the canonical error code for describing the nature of a
	// particular error.
	//
	// Deprecated: The error code should be extracted from the wrapped error by
	// calling ErrCode(err error). This field will be removed in a future
	// release.
	Code codes.Code
	// Desc explains more details of the error.
	Desc string
	// RequestID is the associated ID that was sent to Google Cloud Spanner's
	// backend, as the value in the "x-goog-spanner-request-id" gRPC header.
	RequestID string
	// contains filtered or unexported fields
}
```
Error is the structured error returned by Cloud Spanner client.
Deprecated: Unwrap any error that is returned by the Spanner client as an APIError to access the error details. Do not try to convert the error to the spanner.Error struct, as that struct may be removed in a future release.
Example:
```go
var apiErr *apierror.APIError
_, err := spanner.NewClient(context.Background())
errors.As(err, &apiErr)
```
func (*Error) GRPCStatus ¶
GRPCStatus returns the corresponding gRPC Status of this Spanner error. This allows the error to be converted to a gRPC status using `status.Convert(error)`.
type GenericColumnValue ¶
GenericColumnValue represents the generic encoded value and type of the column. See the google.spanner.v1.ResultSet proto for details. This can be useful for proxying query results when the result types are not known in advance.
If you populate a GenericColumnValue from a row using Row.Column or related methods, do not modify the contents of Type and Value.
func (GenericColumnValue) Decode ¶
func (v GenericColumnValue) Decode(ptr interface{}) error
Decode decodes a GenericColumnValue. The ptr argument should be a pointer to a Go value that can accept v.
Example¶
```go
package main

import (
	"fmt"

	"cloud.google.com/go/spanner"
	sppb "cloud.google.com/go/spanner/apiv1/spannerpb"
)

func main() {
	// In real applications, rows can be retrieved by methods like client.Single().ReadRow().
	row, err := spanner.NewRow([]string{"intCol", "strCol"}, []interface{}{42, "my-text"})
	if err != nil {
		// TODO: Handle error.
	}
	for i := 0; i < row.Size(); i++ {
		var col spanner.GenericColumnValue
		if err := row.Column(i, &col); err != nil {
			// TODO: Handle error.
		}
		switch col.Type.Code {
		case sppb.TypeCode_INT64:
			var v int64
			if err := col.Decode(&v); err != nil {
				// TODO: Handle error.
			}
			fmt.Println("int", v)
		case sppb.TypeCode_STRING:
			var v string
			if err := col.Decode(&v); err != nil {
				// TODO: Handle error.
			}
			fmt.Println("string", v)
		}
	}
}
```
Output:
```
int 42
string my-text
```
type InactiveTransactionRemovalOptions deprecated added in v1.52.0
```go
type InactiveTransactionRemovalOptions struct {
	// ActionOnInactiveTransaction is the action to take on inactive transactions.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	ActionOnInactiveTransaction ActionOnInactiveTransactionKind
}
```
InactiveTransactionRemovalOptions has configurations for action on long-running transactions.
Deprecated: This type is no longer used as the session pool has been removed. Multiplexed sessions are now used for all operations. Kept for backward compatibility.
type Interval ¶ added in v1.80.0
```go
type Interval struct {
	Months int32    // Months component of the interval
	Days   int32    // Days component of the interval
	Nanos  *big.Int // Nanoseconds component of the interval
}
```
Interval represents a Spanner INTERVAL type that may be NULL. An interval is a combination of months, days and nanoseconds. Internally, Spanner supports Interval values with the following ranges for the individual fields:

months: [-120000, 120000]
days: [-3660000, 3660000]
nanoseconds: [-316224000000000000000, 316224000000000000000]
func ParseInterval ¶ added in v1.80.0
ParseInterval parses an ISO8601 duration format string into an Interval.
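A brief sketch of how parsing might look, under the assumption that ParseInterval accepts a full ISO8601 duration such as "P1Y2M3DT4H5M6S" (the input string is an arbitrary illustration; years fold into Months and the time part folds into Nanos per the field ranges above):

```go
package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/spanner"
)

func main() {
	// "P1Y2M3DT4H5M6S" = 1 year, 2 months, 3 days, 4 hours, 5 minutes, 6 seconds.
	iv, err := spanner.ParseInterval("P1Y2M3DT4H5M6S")
	if err != nil {
		log.Fatal(err)
	}
	// Expect 14 months (1 year + 2 months), 3 days, and the time-of-day
	// portion expressed in nanoseconds.
	fmt.Println(iv.Months, iv.Days, iv.Nanos)
}
```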
type Key ¶
```go
type Key []interface{}
```
A Key can be either a Cloud Spanner row's primary key or a secondary index key. It is essentially an interface{} array, which represents a set of Cloud Spanner columns. A Key can be used as:
- A primary key which uniquely identifies a Cloud Spanner row.
- A secondary index key which maps to a set of Cloud Spanner rows indexed under it.
- An endpoint of primary key/secondary index ranges; see the KeyRange type.
Rows that are identified by the Key type are outputs of read operations or targets of delete operations in a mutation. Note that for Insert/Update/InsertOrUpdate/Replace mutation types, although they don't require a primary key explicitly, the column list provided must contain enough columns that can comprise a primary key.
Keys are easy to construct. For example, suppose you have a table with a primary key of username and product ID. To make a key for this table:
```go
key := spanner.Key{"john", 16}
```
See the description of Row and Mutation types for how Go types are mapped to Cloud Spanner types. For convenience, the Key type supports a wide range of Go types:
- int, int8, int16, int32, int64, and NullInt64 are mapped to Cloud Spanner's INT64 type.
- uint8, uint16 and uint32 are also mapped to Cloud Spanner's INT64 type.
- float32, float64, NullFloat64 are mapped to Cloud Spanner's FLOAT64 type.
- bool and NullBool are mapped to Cloud Spanner's BOOL type.
- []byte is mapped to Cloud Spanner's BYTES type.
- string and NullString are mapped to Cloud Spanner's STRING type.
- time.Time and NullTime are mapped to Cloud Spanner's TIMESTAMP type.
- civil.Date and NullDate are mapped to Cloud Spanner's DATE type.
- protoreflect.Enum and NullProtoEnum are mapped to Cloud Spanner's ENUM type.
type KeyRange ¶
```go
type KeyRange struct {
	// Start specifies the left boundary of the key range; End specifies
	// the right boundary of the key range.
	Start, End Key

	// Kind describes whether the boundaries of the key range include
	// their keys.
	Kind KeyRangeKind
}
```
A KeyRange represents a range of rows in a table or index.
A range has a Start key and an End key. IncludeStart and IncludeEnd indicate whether the Start and End keys are included in the range.
For example, consider the following table definition:
```sql
CREATE TABLE UserEvents (
	UserName STRING(MAX),
	EventDate STRING(10),
) PRIMARY KEY(UserName, EventDate);
```
The following keys name rows in this table:
```go
spanner.Key{"Bob", "2014-09-23"}
spanner.Key{"Alfred", "2015-06-12"}
```
Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.
Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:
```go
spanner.KeyRange{
	Start: spanner.Key{"Bob", "2015-01-01"},
	End:   spanner.Key{"Bob", "2015-12-31"},
	Kind:  spanner.ClosedClosed,
}
```
Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if IncludeStart is true, then rows that exactly match the provided components of the Start key are included; if IncludeStart is false then rows that exactly match are not included. IncludeEnd and End key behave in the same fashion.
For example, the following range includes all events for "Bob" that occurred during and after the year 2000:
```go
spanner.KeyRange{
	Start: spanner.Key{"Bob", "2000-01-01"},
	End:   spanner.Key{"Bob"},
	Kind:  spanner.ClosedClosed,
}
```
The next example retrieves all events for "Bob":
```go
spanner.Key{"Bob"}.AsPrefix()
```
To retrieve events before the year 2000:
```go
spanner.KeyRange{
	Start: spanner.Key{"Bob"},
	End:   spanner.Key{"Bob", "2000-01-01"},
	Kind:  spanner.ClosedOpen,
}
```
Although we specified a Kind for this KeyRange, we didn't need to, because the default is ClosedOpen. In later examples we'll omit Kind if it is ClosedOpen.
The following range includes all rows in a table or under an index:
```go
spanner.AllKeys()
```
This range returns all users whose UserName begins with any character from A to C:
```go
spanner.KeyRange{
	Start: spanner.Key{"A"},
	End:   spanner.Key{"D"},
}
```
This range returns all users whose UserName begins with B:
```go
spanner.KeyRange{
	Start: spanner.Key{"B"},
	End:   spanner.Key{"C"},
}
```
Key ranges honor column sort order. For example, suppose a table is defined as follows:
```sql
CREATE TABLE DescendingSortedTable (
	Key INT64,
	...
) PRIMARY KEY(Key DESC);
```
The following range retrieves all rows with key values between 1 and 100 inclusive:
```go
spanner.KeyRange{
	Start: spanner.Key{100},
	End:   spanner.Key{1},
	Kind:  spanner.ClosedClosed,
}
```
Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
type KeyRangeKind ¶
type KeyRangeKind int
KeyRangeKind describes the kind of interval represented by a KeyRange: whether it is open or closed on the left and right.
```go
const (
	// ClosedOpen is closed on the left and open on the right: the Start
	// key is included, the End key is excluded.
	ClosedOpen KeyRangeKind = iota
	// ClosedClosed is closed on the left and the right: both keys are included.
	ClosedClosed
	// OpenClosed is open on the left and closed on the right: the Start
	// key is excluded, the End key is included.
	OpenClosed
	// OpenOpen is open on the left and the right: neither key is included.
	OpenOpen
)
```
type KeySet ¶
```go
type KeySet interface {
	// contains filtered or unexported methods
}
```
A KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.
An individual Key can act as a KeySet, as can a KeyRange. Use the KeySets function to create a KeySet consisting of multiple Keys and KeyRanges. To obtain an empty KeySet, call KeySets with no arguments.
If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), the Cloud Spanner backend behaves as if the key were only specified once.
func AllKeys ¶
func AllKeys() KeySet
AllKeys returns a KeySet that represents all Keys of a table or an index.
func KeySetFromKeys ¶ added in v1.11.0
KeySetFromKeys returns a KeySet containing the given slice of keys.
func KeySets ¶
KeySets returns the union of the KeySets. If any of the KeySets is AllKeys, then the resulting KeySet will be equivalent to AllKeys.
Example¶
```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	// Get some rows from the Accounts table using a secondary index. In this case we get all users who are in Georgia.
	iter := client.Single().ReadUsingIndex(context.Background(), "Accounts", "idx_state", spanner.Key{"GA"}, []string{"state"})

	// Create an empty KeySet by calling the KeySets function with no parameters.
	ks := spanner.KeySets()

	// Loop over the results of a previous query iterator.
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			break
		} else if err != nil {
			// TODO: Handle error.
		}
		var id string
		err = row.ColumnByName("User", &id)
		if err != nil {
			// TODO: Handle error.
		}
		ks = spanner.KeySets(spanner.KeySets(spanner.Key{id}, ks))
	}
	_ = ks // TODO: Go use the KeySet in another query.
}
```
type LossOfPrecisionHandlingOption ¶ added in v1.25.0
type LossOfPrecisionHandlingOption int
LossOfPrecisionHandlingOption describes the option to deal with loss of precision on numeric values.
```go
const (
	// NumericRound automatically rounds a numeric value that has a higher
	// precision than what is supported by Spanner, e.g., 0.1234567895 rounds
	// to 0.123456790.
	NumericRound LossOfPrecisionHandlingOption = iota
	// NumericError returns an error for numeric values that have a higher
	// precision than what is supported by Spanner. E.g. the client returns an
	// error if the application tries to insert the value 0.1234567895.
	NumericError
)
```
```go
var LossOfPrecisionHandling LossOfPrecisionHandlingOption
```
LossOfPrecisionHandling configures how to deal with loss of precision on numeric values. The value of this configuration is global and will be used for all Spanner clients.
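Because the variable is process-global, it is typically set once at startup before any clients are created; a minimal sketch:

```go
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	// Fail fast instead of silently rounding NUMERIC values that exceed
	// Spanner's supported precision. Affects all clients in this process.
	spanner.LossOfPrecisionHandling = spanner.NumericError
}
```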
type Mutation ¶
```go
type Mutation struct {
	// contains filtered or unexported fields
}
```
A Mutation describes a modification to one or more Cloud Spanner rows. The mutation represents an insert, update, delete, etc on a table, or send, ack on a queue.
Many mutations can be applied in a single atomic commit. For purposes of constraint checking (such as foreign key constraints), the operations can be viewed as applying in the same order as the mutations are provided (so that, e.g., a row and its logical "child" can be inserted in the same commit).
The Apply function applies a series of mutations. For example,
```go
m := spanner.Insert("User",
	[]string{"user_id", "profile"},
	[]interface{}{UserID, profile})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
```
inserts a new row into the User table. The primary key for the new row is UserID (presuming that "user_id" has been declared as the primary key of the "User" table).
To apply a series of mutations as part of an atomic read-modify-write operation, use ReadWriteTransaction.
Updating a row¶
Changing the values of columns in an existing row is very similar to inserting a new row:
```go
m := spanner.Update("User",
	[]string{"user_id", "profile"},
	[]interface{}{UserID, profile})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
```
Deleting a row ¶
To delete a row, use spanner.Delete:
```go
m := spanner.Delete("User", spanner.Key{UserId})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
```
spanner.Delete accepts a KeySet, so you can also pass in a KeyRange, or use the spanner.KeySets function to build any combination of Keys and KeyRanges.
Note that deleting a row in a table may also delete rows from other tables if cascading deletes are specified in those tables' schemas. Delete does nothing if the named row does not exist (does not yield an error).
Deleting a field¶
To delete/clear a field within a row, use spanner.Update with the value nil:
```go
m := spanner.Update("User",
	[]string{"user_id", "profile"},
	[]interface{}{UserID, nil})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
```
The valid Go types and their corresponding Cloud Spanner types that can be used in the Insert/Update/InsertOrUpdate functions are:
```
string, *string, NullString - STRING
[]string, []*string, []NullString - STRING ARRAY
[]byte - BYTES
[][]byte - BYTES ARRAY
int, int64, *int64, NullInt64 - INT64
[]int, []int64, []*int64, []NullInt64 - INT64 ARRAY
bool, *bool, NullBool - BOOL
[]bool, []*bool, []NullBool - BOOL ARRAY
float64, *float64, NullFloat64 - FLOAT64
[]float64, []*float64, []NullFloat64 - FLOAT64 ARRAY
time.Time, *time.Time, NullTime - TIMESTAMP
[]time.Time, []*time.Time, []NullTime - TIMESTAMP ARRAY
Date, *Date, NullDate - DATE
[]Date, []*Date, []NullDate - DATE ARRAY
big.Rat, *big.Rat, NullNumeric - NUMERIC
[]big.Rat, []*big.Rat, []NullNumeric - NUMERIC ARRAY
```
To compare two Mutations for testing purposes, use reflect.DeepEqual.
func Ack ¶ added in v1.88.0
Ack returns a Mutation to acknowledge (and thus delete) a message from a queue.
func Delete¶
Delete removes the rows described by the KeySet from the table. It succeeds whether or not the keys were present.
Example¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	m := spanner.Delete("Users", spanner.Key{"alice"})
	_ = m // TODO: use with Client.Apply or in a ReadWriteTransaction.
}

Example (KeyRange)¶

package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	m := spanner.Delete("Users", spanner.KeyRange{
		Start: spanner.Key{"alice"},
		End:   spanner.Key{"bob"},
		Kind:  spanner.ClosedClosed,
	})
	_ = m // TODO: use with Client.Apply or in a ReadWriteTransaction.
}

func Insert¶
Insert returns a Mutation to insert a row into a table. If the row already exists, the write or transaction fails with codes.AlreadyExists.
Example¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	m := spanner.Insert("Users", []string{"name", "email"}, []interface{}{"alice", "a@example.com"})
	_ = m // TODO: use with Client.Apply or in a ReadWriteTransaction.
}

func InsertMap¶
InsertMap returns a Mutation to insert a row into a table, specified by a map of column name to value. If the row already exists, the write or transaction fails with codes.AlreadyExists.
Example¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	m := spanner.InsertMap("Users", map[string]interface{}{
		"name":  "alice",
		"email": "a@example.com",
	})
	_ = m // TODO: use with Client.Apply or in a ReadWriteTransaction.
}

func InsertOrUpdate¶
InsertOrUpdate returns a Mutation to insert a row into a table. If the row already exists, it updates it instead. Any column values not explicitly written are preserved.
For a similar example, see Update.
func InsertOrUpdateMap¶
InsertOrUpdateMap returns a Mutation to insert a row into a table, specified by a map of column to value. If the row already exists, it updates it instead. Any column values not explicitly written are preserved.
For a similar example, see UpdateMap.
func InsertOrUpdateStruct¶
InsertOrUpdateStruct returns a Mutation to insert a row into a table, specified by a Go struct. If the row already exists, it updates it instead. Any column values not explicitly written are preserved.
The in argument must be a struct or a pointer to a struct. Its exported fields specify the column names and values. Use a field tag like `spanner:"name"` to provide an alternative column name, or use `spanner:"-"` to ignore the field.
For a similar example, see UpdateStruct.
func InsertStruct¶
InsertStruct returns a Mutation to insert a row into a table, specified by a Go struct. If the row already exists, the write or transaction fails with codes.AlreadyExists.
The in argument must be a struct or a pointer to a struct. Its exported fields specify the column names and values. Use a field tag like `spanner:"name"` to provide an alternative column name, or use `spanner:"-"` to ignore the field.
Example¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	type User struct {
		Name, Email string
	}
	u := User{Name: "alice", Email: "a@example.com"}
	m, err := spanner.InsertStruct("Users", u)
	if err != nil {
		// TODO: Handle error.
	}
	_ = m // TODO: use with Client.Apply or in a ReadWriteTransaction.
}

func Replace¶
Replace returns a Mutation to insert a row into a table, deleting any existing row. Unlike InsertOrUpdate, this means any values not explicitly written become NULL.
For a similar example, see Update.
func ReplaceMap¶
ReplaceMap returns a Mutation to insert a row into a table, deleting any existing row. Unlike InsertOrUpdateMap, this means any values not explicitly written become NULL. The row is specified by a map of column to value.
For a similar example, see UpdateMap.
func ReplaceStruct¶
ReplaceStruct returns a Mutation to insert a row into a table, deleting any existing row. Unlike InsertOrUpdateStruct, this means any values not explicitly written become NULL. The row is specified by a Go struct.
The in argument must be a struct or a pointer to a struct. Its exported fields specify the column names and values. Use a field tag like `spanner:"name"` to provide an alternative column name, or use `spanner:"-"` to ignore the field.
For a similar example, see UpdateStruct.
func Send¶ added in v1.88.0
func Send(queue string, key Key, payload interface{}, opts ...SendOption) *Mutation
Send returns a Mutation to send a message to a queue.
func Update¶
Update returns a Mutation to update a row in a table. If the row does not already exist, the write or transaction fails.
Example¶
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			return err
		}
		var balance int64
		if err := row.Column(0, &balance); err != nil {
			return err
		}
		return txn.BufferWrite([]*spanner.Mutation{
			spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", balance + 10}),
		})
	})
	if err != nil {
		// TODO: Handle error.
	}
}

func UpdateMap¶
UpdateMap returns a Mutation to update a row in a table, specified by a map of column to value. If the row does not already exist, the write or transaction fails.
Example¶
This example is the same as the one for Update, except for the use of UpdateMap.
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			return err
		}
		var balance int64
		if err := row.Column(0, &balance); err != nil {
			return err
		}
		return txn.BufferWrite([]*spanner.Mutation{
			spanner.UpdateMap("Accounts", map[string]interface{}{
				"user":    "alice",
				"balance": balance + 10,
			}),
		})
	})
	if err != nil {
		// TODO: Handle error.
	}
}

func UpdateStruct¶
UpdateStruct returns a Mutation to update a row in a table, specified by a Go struct. If the row does not already exist, the write or transaction fails.
Example¶
This example is the same as the one for Update, except for the use of UpdateStruct.
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	type account struct {
		User    string `spanner:"user"`
		Balance int64  `spanner:"balance"`
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			return err
		}
		var balance int64
		if err := row.Column(0, &balance); err != nil {
			return err
		}
		m, err := spanner.UpdateStruct("Accounts", account{
			User:    "alice",
			Balance: balance + 10,
		})
		if err != nil {
			return err
		}
		return txn.BufferWrite([]*spanner.Mutation{m})
	})
	if err != nil {
		// TODO: Handle error.
	}
}

type MutationGroup¶ added in v1.52.0
type MutationGroup struct {
	// The Mutations in this group
	Mutations []*Mutation
}

A MutationGroup is a list of Mutations to be committed atomically.
type NullBool¶

type NullBool struct {
	Bool  bool // Bool contains the value when it is non-NULL, and false when NULL.
	Valid bool // Valid is true if Bool is not NULL.
}

NullBool represents a Cloud Spanner BOOL that may be NULL.

func (NullBool) GormDataType¶ added in v1.27.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullBool) MarshalJSON¶ added in v1.3.0
MarshalJSON implements json.Marshaler.MarshalJSON for NullBool.
func (*NullBool) UnmarshalJSON¶ added in v1.3.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullBool.
type NullDate¶

type NullDate struct {
	Date  civil.Date // Date contains the value when it is non-NULL, and a zero civil.Date when NULL.
	Valid bool       // Valid is true if Date is not NULL.
}

NullDate represents a Cloud Spanner DATE that may be NULL.

func (NullDate) GormDataType¶ added in v1.27.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullDate) MarshalJSON¶ added in v1.3.0
MarshalJSON implements json.Marshaler.MarshalJSON for NullDate.
func (*NullDate) UnmarshalJSON¶ added in v1.3.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullDate.
type NullFloat32¶ added in v1.59.0

type NullFloat32 struct {
	Float32 float32 // Float32 contains the value when it is non-NULL, and zero when NULL.
	Valid   bool    // Valid is true if Float32 is not NULL.
}

NullFloat32 represents a Cloud Spanner FLOAT32 that may be NULL.

func (NullFloat32) GormDataType¶ added in v1.59.0
func (n NullFloat32) GormDataType() string
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullFloat32) IsNull¶ added in v1.59.0
func (n NullFloat32) IsNull() bool
IsNull implements NullableValue.IsNull for NullFloat32.
func (NullFloat32) MarshalJSON¶ added in v1.59.0
func (n NullFloat32) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullFloat32.
func (*NullFloat32) Scan¶ added in v1.59.0
func (n *NullFloat32) Scan(value interface{}) error
Scan implements the sql.Scanner interface.
func (NullFloat32) String¶ added in v1.59.0
func (n NullFloat32) String() string
String implements Stringer.String for NullFloat32.
func (*NullFloat32) UnmarshalJSON¶ added in v1.59.0
func (n *NullFloat32) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullFloat32.
type NullFloat64¶

type NullFloat64 struct {
	Float64 float64 // Float64 contains the value when it is non-NULL, and zero when NULL.
	Valid   bool    // Valid is true if Float64 is not NULL.
}

NullFloat64 represents a Cloud Spanner FLOAT64 that may be NULL.

func (NullFloat64) GormDataType¶ added in v1.27.0
func (n NullFloat64) GormDataType() string
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullFloat64) IsNull¶ added in v1.1.0
func (n NullFloat64) IsNull() bool
IsNull implements NullableValue.IsNull for NullFloat64.
func (NullFloat64) MarshalJSON¶ added in v1.3.0
func (n NullFloat64) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullFloat64.
func (*NullFloat64) Scan¶ added in v1.27.0
func (n *NullFloat64) Scan(value interface{}) error
Scan implements the sql.Scanner interface.
func (NullFloat64) String¶
func (n NullFloat64) String() string
String implements Stringer.String for NullFloat64.
func (*NullFloat64) UnmarshalJSON¶ added in v1.3.0
func (n *NullFloat64) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullFloat64.
type NullInt64¶

type NullInt64 struct {
	Int64 int64 // Int64 contains the value when it is non-NULL, and zero when NULL.
	Valid bool  // Valid is true if Int64 is not NULL.
}

NullInt64 represents a Cloud Spanner INT64 that may be NULL.

func (NullInt64) GormDataType¶ added in v1.27.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullInt64) MarshalJSON¶ added in v1.3.0
MarshalJSON implements json.Marshaler.MarshalJSON for NullInt64.
func (*NullInt64) UnmarshalJSON¶ added in v1.3.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullInt64.
type NullInterval¶ added in v1.80.0

type NullInterval struct {
	Interval Interval // Interval contains the value when it is non-NULL.
	Valid    bool     // Valid is true if Interval is not NULL.
}

NullInterval represents a Cloud Spanner INTERVAL that may be NULL.

func (NullInterval) GormDataType¶ added in v1.80.0
func (n NullInterval) GormDataType() string
GormDataType implements the gorm.GormDataTypeInterface interface for NullInterval.
func (NullInterval) IsNull¶ added in v1.80.0
func (n NullInterval) IsNull() bool
IsNull implements NullableValue.IsNull for NullInterval.
func (NullInterval) MarshalJSON¶ added in v1.80.0
func (n NullInterval) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullInterval.
func (*NullInterval) Scan¶ added in v1.80.0
func (n *NullInterval) Scan(value interface{}) error
Scan implements the sql.Scanner interface for NullInterval.
func (NullInterval) String¶ added in v1.80.0
func (n NullInterval) String() string
String implements Stringer.String for NullInterval.
func (*NullInterval) UnmarshalJSON¶ added in v1.80.0
func (n *NullInterval) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullInterval.
type NullJSON¶ added in v1.25.0

type NullJSON struct {
	Value interface{} // Value contains the value when it is non-NULL, and nil when NULL.
	Valid bool        // Valid is true if Value is not NULL.
}

NullJSON represents a Cloud Spanner JSON that may be NULL.
This type must always be used when encoding values to a JSON column in Cloud Spanner.
NullJSON does not implement the driver.Valuer and sql.Scanner interfaces, as the underlying value can be anything. This means that the type NullJSON must also be used when calling sql.Row#Scan(dest ...interface{}) for a JSON column.
func (NullJSON) GormDataType¶ added in v1.27.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullJSON) MarshalJSON¶ added in v1.25.0
MarshalJSON implements json.Marshaler.MarshalJSON for NullJSON.
func (*NullJSON) UnmarshalJSON¶ added in v1.25.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullJSON.
type NullNumeric¶ added in v1.10.0

type NullNumeric struct {
	Numeric big.Rat // Numeric contains the value when it is non-NULL, and a zero big.Rat when NULL.
	Valid   bool    // Valid is true if Numeric is not NULL.
}

NullNumeric represents a Cloud Spanner NUMERIC that may be NULL.

func (NullNumeric) GormDataType¶ added in v1.27.0
func (n NullNumeric) GormDataType() string
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullNumeric) IsNull¶ added in v1.10.0
func (n NullNumeric) IsNull() bool
IsNull implements NullableValue.IsNull for NullNumeric.
func (NullNumeric) MarshalJSON¶ added in v1.10.0
func (n NullNumeric) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullNumeric.
func (*NullNumeric) Scan¶ added in v1.27.0
func (n *NullNumeric) Scan(value interface{}) error
Scan implements the sql.Scanner interface.
func (NullNumeric) String¶ added in v1.10.0
func (n NullNumeric) String() string
String implements Stringer.String for NullNumeric.
func (*NullNumeric) UnmarshalJSON¶ added in v1.10.0
func (n *NullNumeric) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullNumeric.
type NullProtoEnum¶ added in v1.62.0

type NullProtoEnum struct {
	ProtoEnumVal protoreflect.Enum // ProtoEnumVal contains the value when Valid is true, and nil when NULL.
	Valid        bool              // Valid is true if ProtoEnumVal is not NULL.
}

NullProtoEnum represents a Cloud Spanner ENUM that may be NULL. To write a NULL value using NullProtoEnum, set ProtoEnumVal to a typed nil and set Valid to true.

func (NullProtoEnum) IsNull¶ added in v1.62.0
func (n NullProtoEnum) IsNull() bool
IsNull implements NullableValue.IsNull for NullProtoEnum.
func (NullProtoEnum) MarshalJSON¶ added in v1.62.0
func (n NullProtoEnum) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullProtoEnum.
func (NullProtoEnum) String¶ added in v1.62.0
func (n NullProtoEnum) String() string
String implements Stringer.String for NullProtoEnum.
func (*NullProtoEnum) UnmarshalJSON¶ added in v1.62.0
func (n *NullProtoEnum) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullProtoEnum.
type NullProtoMessage¶ added in v1.62.0

type NullProtoMessage struct {
	ProtoMessageVal proto.Message // ProtoMessageVal contains the value when Valid is true, and nil when NULL.
	Valid           bool          // Valid is true if ProtoMessageVal is not NULL.
}

NullProtoMessage represents a Cloud Spanner PROTO that may be NULL. To write a NULL value using NullProtoMessage, set ProtoMessageVal to a typed nil and set Valid to true.

func (NullProtoMessage) IsNull¶ added in v1.62.0
func (n NullProtoMessage) IsNull() bool
IsNull implements NullableValue.IsNull for NullProtoMessage.
func (NullProtoMessage) MarshalJSON¶ added in v1.62.0
func (n NullProtoMessage) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullProtoMessage.
func (NullProtoMessage) String¶ added in v1.62.0
func (n NullProtoMessage) String() string
String implements Stringer.String for NullProtoMessage.
func (*NullProtoMessage) UnmarshalJSON¶ added in v1.62.0
func (n *NullProtoMessage) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullProtoMessage.
type NullRow¶

type NullRow struct {
	Row   Row  // Row contains the value when it is non-NULL, and a zero Row when NULL.
	Valid bool // Valid is true if Row is not NULL.
}

NullRow represents a Cloud Spanner STRUCT that may be NULL. See also the documentation for Row. Note that NullRow is not a valid Cloud Spanner column type.
type NullString¶

type NullString struct {
	StringVal string // StringVal contains the value when it is non-NULL, and an empty string when NULL.
	Valid     bool   // Valid is true if StringVal is not NULL.
}

NullString represents a Cloud Spanner STRING that may be NULL.

func (NullString) GormDataType¶ added in v1.27.0
func (n NullString) GormDataType() string
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullString) IsNull¶ added in v1.1.0
func (n NullString) IsNull() bool
IsNull implements NullableValue.IsNull for NullString.
func (NullString) MarshalJSON¶ added in v1.3.0
func (n NullString) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler.MarshalJSON for NullString.
func (*NullString) Scan¶ added in v1.27.0
func (n *NullString) Scan(value interface{}) error
Scan implements the sql.Scanner interface.
func (NullString) String¶
func (n NullString) String() string
String implements Stringer.String for NullString.
func (*NullString) UnmarshalJSON¶ added in v1.3.0
func (n *NullString) UnmarshalJSON(payload []byte) error
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullString.
type NullTime¶

type NullTime struct {
	Time  time.Time // Time contains the value when it is non-NULL, and a zero time.Time when NULL.
	Valid bool      // Valid is true if Time is not NULL.
}

NullTime represents a Cloud Spanner TIMESTAMP that may be NULL.

func (NullTime) GormDataType¶ added in v1.27.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullTime) MarshalJSON¶ added in v1.3.0
MarshalJSON implements json.Marshaler.MarshalJSON for NullTime.
func (*NullTime) UnmarshalJSON¶ added in v1.3.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullTime.
type NullUUID¶ added in v1.81.0

NullUUID represents a Cloud Spanner UUID that may be NULL.

func (NullUUID) GormDataType¶ added in v1.81.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (NullUUID) MarshalJSON¶ added in v1.81.0
MarshalJSON implements json.Marshaler.MarshalJSON for NullUUID.
func (*NullUUID) UnmarshalJSON¶ added in v1.81.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for NullUUID.
type NullableValue¶ added in v1.1.0

type NullableValue interface {
	// IsNull returns true if the underlying database value is null.
	IsNull() bool
}

NullableValue is the interface implemented by all null value wrapper types.
type PGJsonB¶ added in v1.40.0

type PGJsonB struct {
	Value interface{} // Value contains the value when it is non-NULL, and nil when NULL.
	Valid bool        // Valid is true if Value is not NULL.
	// contains filtered or unexported fields
}

PGJsonB represents a Cloud Spanner PG JSONB that may be NULL.

func (PGJsonB) GormDataType¶ added in v1.88.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (PGJsonB) MarshalJSON¶ added in v1.40.0
MarshalJSON implements json.Marshaler.MarshalJSON for PGJsonB.
func (*PGJsonB) UnmarshalJSON¶ added in v1.40.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for PGJsonB.
type PGNumeric¶ added in v1.30.0

type PGNumeric struct {
	Numeric string // Numeric contains the value when it is non-NULL, and an empty string when NULL.
	Valid   bool   // Valid is true if Numeric is not NULL.
}

PGNumeric represents a Cloud Spanner PG NUMERIC that may be NULL.

func (PGNumeric) GormDataType¶ added in v1.88.0
GormDataType is used by gorm to determine the default data type for fields with this type.
func (PGNumeric) MarshalJSON¶ added in v1.30.0
MarshalJSON implements json.Marshaler.MarshalJSON for PGNumeric.
func (*PGNumeric) UnmarshalJSON¶ added in v1.30.0
UnmarshalJSON implements json.Unmarshaler.UnmarshalJSON for PGNumeric.
type Partition¶

type Partition struct {
	// contains filtered or unexported fields
}

Partition defines a segment of data to be read in a batch read or query. A partition can be serialized and processed across several different machines or processes.

func (*Partition) GetPartitionToken¶ added in v1.52.0
GetPartitionToken returns the partition token.
func (Partition) MarshalBinary¶
MarshalBinary implements BinaryMarshaler.
func (*Partition) UnmarshalBinary¶
UnmarshalBinary implements BinaryUnmarshaler.
type PartitionOptions¶

type PartitionOptions struct {
	// The desired data size for each partition generated.
	PartitionBytes int64
	// The desired maximum number of partitions to return.
	MaxPartitions int64
}

PartitionOptions specifies options for a PartitionQueryRequest and PartitionReadRequest. See https://godoc.org/google.golang.org/genproto/googleapis/spanner/v1#PartitionOptions for more details.
type QueryOptions¶ added in v1.3.0

type QueryOptions struct {
	Mode    *sppb.ExecuteSqlRequest_QueryMode
	Options *sppb.ExecuteSqlRequest_QueryOptions

	// Priority is the RPC priority to use for the query/update.
	Priority sppb.RequestOptions_Priority

	// The request tag to use for this request.
	RequestTag string

	// If this is for a partitioned query and DataBoostEnabled is set to true, the request will be
	// executed via Spanner independent compute resources. Setting this option for regular query
	// operations has no effect.
	DataBoostEnabled bool

	// DirectedReadOptions is used to set the DirectedReadOptions for all ExecuteSqlRequests, which
	// indicate which replicas or regions should be used for executing queries.
	DirectedReadOptions *sppb.DirectedReadOptions

	// Controls whether to exclude recording modifications in the current partitioned update
	// operation from the allowed tracking change streams (with DDL option allow_txn_exclusion=true).
	// Setting this value for any SQL/DML requests other than partitioned updates will result in an
	// error.
	ExcludeTxnFromChangeStreams bool

	// LastStatement indicates whether this statement is the last statement in this transaction.
	// If set to true, this option marks the end of the transaction. The transaction should be
	// committed or rolled back after this statement executes, and attempts to execute any other
	// requests against this transaction (including reads and queries) will be rejected. Mixing
	// mutations with statements that are marked as the last statement is not allowed.
	//
	// For DML statements, setting this option may cause some error reporting to be deferred until
	// commit time (e.g. validation of unique constraints). Given this, successful execution of a
	// DML statement should not be assumed until the transaction commits.
	LastStatement bool

	// ClientContext contains client-owned context information to be passed with the query.
	ClientContext *sppb.RequestOptions_ClientContext
}

QueryOptions provides options for executing a SQL query or update statement.
type ReadOnlyTransaction¶

type ReadOnlyTransaction struct {
	// contains filtered or unexported fields
}

ReadOnlyTransaction provides a snapshot transaction with guaranteed consistency across reads, but does not allow writes. Read-only transactions can be configured to read at timestamps in the past.

Read-only transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.

Unlike locking read-write transactions, read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. See the documentation of TimestampBound for more details.

A ReadOnlyTransaction consumes resources on the server until Close is called.
func (*ReadOnlyTransaction) AnalyzeQuery¶
func (t *ReadOnlyTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
AnalyzeQuery returns the query plan for statement.
func (*ReadOnlyTransaction) Close¶
func (t *ReadOnlyTransaction) Close()
Close closes a ReadOnlyTransaction; the transaction cannot perform any reads after being closed.
func (*ReadOnlyTransaction) Query¶
func (t *ReadOnlyTransaction) Query(ctx context.Context, statement Statement) *RowIterator
Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.
Query returns only row data, without a query plan or execution statistics. Use QueryWithStats to get rows along with the plan and statistics. Use AnalyzeQuery to get just the plan.
Example¶
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().Query(ctx, spanner.NewStatement("SELECT FirstName FROM Singers"))
	_ = iter // TODO: iterate using Next or Do.
}

func (*ReadOnlyTransaction) QueryWithOptions¶ added in v1.3.0
func (t *ReadOnlyTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator
QueryWithOptions executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The SQL query execution will be optimized based on the given query options.
func (*ReadOnlyTransaction) QueryWithStats¶
func (t *ReadOnlyTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator
QueryWithStats executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The RowIterator will also be populated with a query plan and execution statistics.
func (*ReadOnlyTransaction) Read¶
func (t *ReadOnlyTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
Read returns a RowIterator for reading multiple rows from the database.
Example¶
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().Read(ctx, "Users",
		spanner.KeySets(spanner.Key{"alice"}, spanner.Key{"bob"}),
		[]string{"name", "email"})
	_ = iter // TODO: iterate using Next or Do.
}

func (*ReadOnlyTransaction) ReadRow¶
func (t *ReadOnlyTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
ReadRow reads a single row from the database.
If no row is present with the given key, then ReadRow returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
if errors.Is(err, spanner.ErrRowNotFound) {
	...
}

Example¶

package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Users", spanner.Key{"alice"},
		[]string{"name", "email"})
	if err != nil {
		// TODO: Handle error.
	}
	_ = row // TODO: use row
}

func (*ReadOnlyTransaction) ReadRowUsingIndex¶ added in v1.2.0
func (t *ReadOnlyTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)
ReadRowUsingIndex reads a single row from the database using an index.
If no row is present with the given index, then ReadRowUsingIndex returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:

if errors.Is(err, spanner.ErrRowNotFound) {
	...
}

If more than one row is received with the given index, then ReadRowUsingIndex returns an error where spanner.ErrCode(err) is codes.FailedPrecondition.
func (*ReadOnlyTransaction) ReadRowWithOptions¶ added in v1.29.0
func (t *ReadOnlyTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, opts *ReadOptions) (*Row, error)
ReadRowWithOptions reads a single row from the database. Pass a ReadOptions to modify the read operation.
If no row is present with the given key, then ReadRowWithOptions returns an error where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:

if errors.Is(err, spanner.ErrRowNotFound) {
	...
}

func (*ReadOnlyTransaction) ReadUsingIndex¶
func (t *ReadOnlyTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)
ReadUsingIndex calls ReadWithOptions with ReadOptions{Index: index}.
Example¶
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().ReadUsingIndex(ctx, "Users",
		"UsersByEmail",
		spanner.KeySets(spanner.Key{"a@example.com"}, spanner.Key{"b@example.com"}),
		[]string{"name", "email"})
	_ = iter // TODO: iterate using Next or Do.
}

func (*ReadOnlyTransaction) ReadWithOptions¶
func (t *ReadOnlyTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opts *ReadOptions) (ri *RowIterator)
ReadWithOptions returns a RowIterator for reading multiple rows from the database. Pass a ReadOptions to modify the read operation.
Example¶
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	// Use an index, and limit to 100 rows at most.
	iter := client.Single().ReadWithOptions(ctx, "Users",
		spanner.KeySets(spanner.Key{"a@example.com"}, spanner.Key{"b@example.com"}),
		[]string{"name", "email"}, &spanner.ReadOptions{
			Index: "UsersByEmail",
			Limit: 100,
		})
	_ = iter // TODO: iterate using Next or Do.
}

func (*ReadOnlyTransaction) Timestamp¶
func (t *ReadOnlyTransaction) Timestamp() (time.Time, error)
Timestamp returns the timestamp chosen to perform reads and queries in this transaction. The value can only be read after some read or query has either returned some data or completed without returning any data.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	txn := client.Single()
	row, err := txn.ReadRow(ctx, "Users", spanner.Key{"alice"},
		[]string{"name", "email"})
	if err != nil {
		// TODO: Handle error.
	}
	readTimestamp, err := txn.Timestamp()
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println("read happened at", readTimestamp)
	_ = row // TODO: use row
}

func (*ReadOnlyTransaction) WithBeginTransactionOption¶ added in v1.83.0
func (t *ReadOnlyTransaction) WithBeginTransactionOption(option BeginTransactionOption) *ReadOnlyTransaction
WithBeginTransactionOption specifies how the read-only transaction should be started. The default is to execute a BeginTransaction RPC before any statements are executed. Set this to InlinedBeginTransaction to include the BeginTransaction option with the first statement in the transaction. This saves one round-trip to Spanner. This is more efficient if you are not executing multiple queries in parallel at the start of the transaction.
func (*ReadOnlyTransaction) WithTimestampBound ¶

func (t *ReadOnlyTransaction) WithTimestampBound(tb TimestampBound) *ReadOnlyTransaction

WithTimestampBound specifies the TimestampBound to use for read or query. This can only be used before the first read or query is invoked. Note: bounded staleness is not available with general ReadOnlyTransactions; use a single-use ReadOnlyTransaction instead.

The returned value is the ReadOnlyTransaction so calls can be chained.
Example¶
package main

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	txn := client.Single().WithTimestampBound(spanner.MaxStaleness(30 * time.Second))
	row, err := txn.ReadRow(ctx, "Users", spanner.Key{"alice"}, []string{"name", "email"})
	if err != nil {
		// TODO: Handle error.
	}
	_ = row // TODO: use row
	readTimestamp, err := txn.Timestamp()
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println("read happened at", readTimestamp)
}

type ReadOptions ¶
type ReadOptions struct {
	// The index to use for reading. If non-empty, you can only read columns
	// that are part of the index key, part of the primary key, or stored in
	// the index due to a STORING clause in the index definition.
	Index string

	// The maximum number of rows to read. A limit value less than 1 means no
	// limit.
	Limit int

	// Priority is the RPC priority to use for the operation.
	Priority sppb.RequestOptions_Priority

	// The request tag to use for this request.
	RequestTag string

	// If this is for a partitioned read and DataBoostEnabled is set to true,
	// the request will be executed via Spanner independent compute resources.
	// Setting this option for regular read operations has no effect.
	DataBoostEnabled bool

	// DirectedReadOptions indicates which replicas or regions should be used
	// for running read operations.
	DirectedReadOptions *sppb.DirectedReadOptions

	// An option to control the order in which rows are returned from a read.
	OrderBy sppb.ReadRequest_OrderBy

	// A lock hint mechanism to use for this request. This setting is only
	// applicable for read-write transactions, as read-only transactions do
	// not take locks.
	LockHint sppb.ReadRequest_LockHint

	// ClientContext contains client-owned context information to be passed
	// with the read request.
	ClientContext *sppb.RequestOptions_ClientContext
}

ReadOptions provides options for reading rows from a database.
type ReadWriteStmtBasedTransaction ¶ added in v1.8.0

type ReadWriteStmtBasedTransaction struct {
	// ReadWriteTransaction contains methods for performing transactional reads.
	ReadWriteTransaction
	// contains filtered or unexported fields
}

ReadWriteStmtBasedTransaction provides a wrapper of ReadWriteTransaction in order to run a read-write transaction in a statement-based way.

This struct is returned by NewReadWriteStmtBasedTransaction and contains Commit() and Rollback() methods to end a transaction.
func NewReadWriteStmtBasedTransaction ¶ added in v1.8.0

func NewReadWriteStmtBasedTransaction(ctx context.Context, c *Client) (*ReadWriteStmtBasedTransaction, error)

NewReadWriteStmtBasedTransaction starts a read-write transaction. Commit() or Rollback() must be called to end a transaction.

This method should only be used when manual error handling and retry management is needed. Cloud Spanner may abort a read/write transaction at any moment, and each statement that is executed on the transaction should be checked for an Aborted error, including queries and read operations.

For most use cases, client.ReadWriteTransaction should be used, as it will handle all Aborted errors automatically.
Example¶
package main

import (
	"context"
	"errors"
	"time"

	"cloud.google.com/go/spanner"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	defer client.Close()

	f := func(tx *spanner.ReadWriteStmtBasedTransaction) error {
		var balance int64
		row, err := tx.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			return err
		}
		if err := row.Column(0, &balance); err != nil {
			return err
		}
		if balance <= 10 {
			return errors.New("insufficient funds in account")
		}
		balance -= 10
		m := spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", balance})
		return tx.BufferWrite([]*spanner.Mutation{m})
	}

	for {
		tx, err := spanner.NewReadWriteStmtBasedTransaction(ctx, client)
		if err != nil {
			// TODO: Handle error.
			break
		}
		err = f(tx)
		if err != nil && status.Code(err) != codes.Aborted {
			tx.Rollback(ctx)
			// TODO: Handle error.
			break
		} else if err == nil {
			_, err = tx.Commit(ctx)
			if err == nil {
				break
			} else if status.Code(err) != codes.Aborted {
				// TODO: Handle error.
				break
			}
		}
		// Set a default sleep time if the server delay is absent.
		delay := 10 * time.Millisecond
		if serverDelay, hasServerDelay := spanner.ExtractRetryDelay(err); hasServerDelay {
			delay = serverDelay
		}
		time.Sleep(delay)
	}
}

func NewReadWriteStmtBasedTransactionWithCallbackForOptions ¶ added in v1.85.0
func NewReadWriteStmtBasedTransactionWithCallbackForOptions(ctx context.Context, c *Client, opts TransactionOptions, callback func() TransactionOptions) (*ReadWriteStmtBasedTransaction, error)

NewReadWriteStmtBasedTransactionWithCallbackForOptions starts a read-write transaction with a callback that gives the actual transaction options. Commit() or Rollback() must be called to end a transaction.

ResetForRetry resets the transaction before a retry attempt. This function returns a new transaction that should be used for the retry attempt. The transaction that is returned by this function is assigned a higher priority than the previous transaction, making it less probable to be aborted by Spanner again during the retry.

NewReadWriteStmtBasedTransactionWithCallbackForOptions is the same as NewReadWriteStmtBasedTransactionWithOptions, but allows the caller to wait with setting the actual transaction options until a later moment.
func NewReadWriteStmtBasedTransactionWithOptions ¶ added in v1.12.0

func NewReadWriteStmtBasedTransactionWithOptions(ctx context.Context, c *Client, options TransactionOptions) (*ReadWriteStmtBasedTransaction, error)

NewReadWriteStmtBasedTransactionWithOptions starts a read-write transaction with configurable options. Commit() or Rollback() must be called to end a transaction.

ResetForRetry resets the transaction before a retry attempt. This function returns a new transaction that should be used for the retry attempt. The transaction that is returned by this function is assigned a higher priority than the previous transaction, making it less probable to be aborted by Spanner again during the retry.

NewReadWriteStmtBasedTransactionWithOptions is a configurable version of NewReadWriteStmtBasedTransaction.
func (*ReadWriteStmtBasedTransaction) AnalyzeQuery ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
AnalyzeQuery returns the query plan for statement.
func (*ReadWriteStmtBasedTransaction) Commit ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) Commit(ctx context.Context) (time.Time, error)

Commit tries to commit a read-write transaction to Cloud Spanner. It also returns the commit timestamp for the transaction.
func (*ReadWriteStmtBasedTransaction) CommitWithReturnResp ¶ added in v1.14.0

func (t *ReadWriteStmtBasedTransaction) CommitWithReturnResp(ctx context.Context) (CommitResponse, error)

CommitWithReturnResp tries to commit a read-write transaction. It also returns the commit timestamp and stats for the transaction.
func (*ReadWriteStmtBasedTransaction) Query ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) Query(ctx context.Context, statement Statement) *RowIterator

Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.

Query returns only row data, without a query plan or execution statistics. Use QueryWithStats to get rows along with the plan and statistics. Use AnalyzeQuery to get just the plan.
func (*ReadWriteStmtBasedTransaction) QueryWithOptions ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator

QueryWithOptions executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The SQL query execution will be optimized based on the given query options.
func (*ReadWriteStmtBasedTransaction) QueryWithStats ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator

QueryWithStats executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The RowIterator will also be populated with a query plan and execution statistics.
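As a sketch of how the plan and statistics are retrieved: they become available on the RowIterator only after Next has returned iterator.Done. This example uses a single-use read-only transaction for brevity; the exact keys present in QueryStats are server-defined, so "elapsed_time" is only illustrative:

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().QueryWithStats(ctx, spanner.NewStatement("SELECT FirstName FROM Singers"))
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			// QueryPlan and QueryStats are populated only once the
			// iterator has been fully consumed.
			fmt.Println("elapsed:", iter.QueryStats["elapsed_time"])
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		_ = row // TODO: use row
	}
}
```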
func (*ReadWriteStmtBasedTransaction) Read ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
Read returns a RowIterator for reading multiple rows from the database.
func (*ReadWriteStmtBasedTransaction) ReadRow ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
ReadRow reads a single row from the database.
If no row is present with the given key, then ReadRow returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
	if errors.Is(err, spanner.ErrRowNotFound) {...}

func (*ReadWriteStmtBasedTransaction) ReadRowUsingIndex ¶ added in v1.8.0
func (t *ReadWriteStmtBasedTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)
ReadRowUsingIndex reads a single row from the database using an index.
If no row is present with the given index, then ReadRowUsingIndex returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
	if errors.Is(err, spanner.ErrRowNotFound) {...}

If more than one row is received with the given index, then ReadRowUsingIndex returns an error where spanner.ErrCode(err) is codes.FailedPrecondition.
func (*ReadWriteStmtBasedTransaction) ReadRowWithOptions ¶ added in v1.29.0

func (t *ReadWriteStmtBasedTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, opts *ReadOptions) (*Row, error)
ReadRowWithOptions reads a single row from the database. Pass a ReadOptions to modify the read operation.
If no row is present with the given key, then ReadRowWithOptions returns an error where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
	if errors.Is(err, spanner.ErrRowNotFound) {...}

func (*ReadWriteStmtBasedTransaction) ReadUsingIndex ¶ added in v1.8.0
func (t *ReadWriteStmtBasedTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)
ReadUsingIndex calls ReadWithOptions with ReadOptions{Index: index}.
func (*ReadWriteStmtBasedTransaction) ReadWithOptions ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opts *ReadOptions) (ri *RowIterator)

ReadWithOptions returns a RowIterator for reading multiple rows from the database. Pass a ReadOptions to modify the read operation.
func (*ReadWriteStmtBasedTransaction) ResetForRetry ¶ added in v1.73.0

func (t *ReadWriteStmtBasedTransaction) ResetForRetry(ctx context.Context) (*ReadWriteStmtBasedTransaction, error)

ResetForRetry resets the transaction before a retry. This should be called if the transaction was aborted by Spanner and the application wants to retry the transaction. It is recommended to use this method instead of creating a new transaction, as this method will give the transaction a higher priority and thus a smaller probability of being aborted again by Spanner.
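A minimal retry-loop sketch using ResetForRetry instead of constructing a fresh transaction on each abort; the table, column, and key values are hypothetical:

```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	defer client.Close()
	tx, err := spanner.NewReadWriteStmtBasedTransaction(ctx, client)
	if err != nil {
		// TODO: Handle error.
	}
	for {
		// Hypothetical table and columns, for illustration only.
		_, err = tx.Update(ctx, spanner.NewStatement(
			"UPDATE Accounts SET Balance = Balance - 10 WHERE AccountId = 1"))
		if err == nil {
			_, err = tx.Commit(ctx)
		}
		if err == nil {
			break
		}
		if status.Code(err) != codes.Aborted {
			tx.Rollback(ctx)
			// TODO: Handle error.
			break
		}
		// The transaction was aborted: reset it and retry. The reset
		// transaction gets a higher priority than the previous attempt.
		tx, err = tx.ResetForRetry(ctx)
		if err != nil {
			// TODO: Handle error.
			break
		}
	}
}
```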
func (*ReadWriteStmtBasedTransaction) Rollback ¶ added in v1.8.0

func (t *ReadWriteStmtBasedTransaction) Rollback(ctx context.Context)

Rollback is called to cancel the ongoing transaction that has not been committed yet.
type ReadWriteTransaction ¶

type ReadWriteTransaction struct {
	// contains filtered or unexported fields
}

ReadWriteTransaction provides a locking read-write transaction.

This type of transaction is the only way to write data into Cloud Spanner; (*Client).Apply, (*Client).ApplyAtLeastOnce, and (*Client).PartitionedUpdate use transactions internally. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. However, the interface exposed by (*Client).ReadWriteTransaction eliminates the need for applications to write retry loops explicitly.
Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.

Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.

Reads performed within a transaction acquire locks on the data being read. Writes can only be done at commit time, after all reads have been completed. Conceptually, a read-write transaction consists of zero or more reads or SQL queries followed by a commit.
See (*Client).ReadWriteTransaction for an example.
Semantics¶
Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.

Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.
Aborted transactions¶
Application code does not need to retry explicitly; (*Client).ReadWriteTransaction will automatically retry a transaction if an attempt results in an abort. The lock priority of a transaction increases after each prior aborted transaction, meaning that the next attempt has a slightly better chance of success than before.
Under some circumstances (e.g., many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of wall time spent retrying.
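One way to bound total retry time is with a context deadline, which also bounds the automatic retries performed by (*Client).ReadWriteTransaction. A sketch, with a hypothetical table and mutation:

```go
package main

import (
	"context"
	"time"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	// Cap the total wall time spent on attempts (including automatic
	// retries after ABORTED) with a context deadline, rather than
	// counting retries.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	defer client.Close()
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		// Hypothetical table and columns, for illustration only.
		return txn.BufferWrite([]*spanner.Mutation{
			spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", int64(10)}),
		})
	})
	if err != nil {
		// TODO: Handle error (including context.DeadlineExceeded).
	}
}
```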
Idle transactions¶
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.

If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.
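The keep-alive query above can be wrapped in a small helper; this is a sketch (the helper name is our own), to be called from within a read-write transaction callback when long pauses between statements are expected:

```go
package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

// keepAlive issues a trivial query on txn so that Cloud Spanner does not
// consider the transaction idle. Call it if more than about 10 seconds may
// pass between reads or queries in the transaction.
func keepAlive(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
	// Do consumes the iterator and always calls Stop on it.
	return txn.Query(ctx, spanner.NewStatement("SELECT 1")).
		Do(func(*spanner.Row) error { return nil })
}

func main() {
	// TODO: call keepAlive from within a (*Client).ReadWriteTransaction
	// callback during long pauses between statements.
}
```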
func (*ReadWriteTransaction) AnalyzeQuery ¶

func (t *ReadWriteTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)
AnalyzeQuery returns the query plan for statement.
func (*ReadWriteTransaction) BatchUpdate ¶

func (t *ReadWriteTransaction) BatchUpdate(ctx context.Context, stmts []Statement) (_ []int64, err error)

BatchUpdate groups one or more DML statements and sends them to Spanner in a single RPC. This is an efficient way to execute multiple DML statements.

A slice of counts is returned, where each count represents the number of affected rows for the given query at the same index. If an error occurs, counts will be returned up to the query that encountered the error.
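A minimal sketch of a batched DML call inside a read-write transaction; the table and statements are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		// Both statements are sent to Spanner in a single RPC.
		// Hypothetical table and columns, for illustration only.
		counts, err := txn.BatchUpdate(ctx, []spanner.Statement{
			spanner.NewStatement("UPDATE Accounts SET Balance = Balance - 10 WHERE AccountId = 1"),
			spanner.NewStatement("UPDATE Accounts SET Balance = Balance + 10 WHERE AccountId = 2"),
		})
		if err != nil {
			return err
		}
		fmt.Println(counts) // one affected-row count per statement
		return nil
	})
	if err != nil {
		// TODO: Handle error.
	}
}
```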
func (*ReadWriteTransaction) BatchUpdateWithOptions ¶ added in v1.17.0

func (t *ReadWriteTransaction) BatchUpdateWithOptions(ctx context.Context, stmts []Statement, opts QueryOptions) (_ []int64, err error)

BatchUpdateWithOptions groups one or more DML statements and sends them to Spanner in a single RPC. This is an efficient way to execute multiple DML statements.

A slice of counts is returned, where each count represents the number of affected rows for the given query at the same index. If an error occurs, counts will be returned up to the query that encountered the error.

The request tag and priority given in the QueryOptions are included with the RPC. Any other options that are set in the QueryOptions struct are ignored.
func (*ReadWriteTransaction) BufferWrite ¶

func (t *ReadWriteTransaction) BufferWrite(ms []*Mutation) error

BufferWrite adds a list of mutations to the set of updates that will be applied when the transaction is committed. It does not actually apply the write until the transaction is committed, so the operation does not block. The effects of the write won't be visible to any reads (including reads done in the same transaction) until the transaction commits.
See the example for Client.ReadWriteTransaction.
func (*ReadWriteTransaction) Query ¶

func (t *ReadWriteTransaction) Query(ctx context.Context, statement Statement) *RowIterator

Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.

Query returns only row data, without a query plan or execution statistics. Use QueryWithStats to get rows along with the plan and statistics. Use AnalyzeQuery to get just the plan.
func (*ReadWriteTransaction) QueryWithOptions ¶ added in v1.3.0

func (t *ReadWriteTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator

QueryWithOptions executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The SQL query execution will be optimized based on the given query options.
func (*ReadWriteTransaction) QueryWithStats ¶

func (t *ReadWriteTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator

QueryWithStats executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The RowIterator will also be populated with a query plan and execution statistics.
func (*ReadWriteTransaction) Read ¶

func (t *ReadWriteTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
Read returns a RowIterator for reading multiple rows from the database.
func (*ReadWriteTransaction) ReadRow ¶

func (t *ReadWriteTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)

ReadRow reads a single row from the database.

If no row is present with the given key, then ReadRow returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
	if errors.Is(err, spanner.ErrRowNotFound) {...}

func (*ReadWriteTransaction) ReadRowUsingIndex ¶ added in v1.2.0

func (t *ReadWriteTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)

ReadRowUsingIndex reads a single row from the database using an index.

If no row is present with the given index, then ReadRowUsingIndex returns an error (spanner.ErrRowNotFound) where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
	if errors.Is(err, spanner.ErrRowNotFound) {...}

If more than one row is received with the given index, then ReadRowUsingIndex returns an error where spanner.ErrCode(err) is codes.FailedPrecondition.
func (*ReadWriteTransaction) ReadRowWithOptions ¶ added in v1.29.0

func (t *ReadWriteTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, opts *ReadOptions) (*Row, error)

ReadRowWithOptions reads a single row from the database. Pass a ReadOptions to modify the read operation.

If no row is present with the given key, then ReadRowWithOptions returns an error where spanner.ErrCode(err) is codes.NotFound.
To check if the error is spanner.ErrRowNotFound:
	if errors.Is(err, spanner.ErrRowNotFound) {...}

func (*ReadWriteTransaction) ReadUsingIndex ¶

func (t *ReadWriteTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)

ReadUsingIndex calls ReadWithOptions with ReadOptions{Index: index}.
func (*ReadWriteTransaction) ReadWithOptions ¶

func (t *ReadWriteTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opts *ReadOptions) (ri *RowIterator)

ReadWithOptions returns a RowIterator for reading multiple rows from the database. Pass a ReadOptions to modify the read operation.
func (*ReadWriteTransaction) Update ¶

func (t *ReadWriteTransaction) Update(ctx context.Context, stmt Statement) (rowCount int64, err error)

Update executes a DML statement against the database. It returns the number of affected rows. Update returns an error if the statement is a query. However, the query is executed, and any data read will be validated upon commit.
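A minimal sketch of Update inside a read-write transaction, using a parameterized statement; the table, columns, and parameter values are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		// Hypothetical table and columns, for illustration only.
		rowCount, err := txn.Update(ctx, spanner.Statement{
			SQL:    "UPDATE Accounts SET Balance = Balance - @amount WHERE AccountId = @id",
			Params: map[string]interface{}{"amount": int64(10), "id": int64(1)},
		})
		if err != nil {
			return err
		}
		fmt.Println(rowCount, "row(s) updated")
		return nil
	})
	if err != nil {
		// TODO: Handle error.
	}
}
```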
func (*ReadWriteTransaction) UpdateWithOptions ¶ added in v1.3.0

func (t *ReadWriteTransaction) UpdateWithOptions(ctx context.Context, stmt Statement, opts QueryOptions) (rowCount int64, err error)

UpdateWithOptions executes a DML statement against the database. It returns the number of affected rows. The given QueryOptions will be used for the execution of this statement.
type Row ¶

type Row struct {
	// contains filtered or unexported fields
}

A Row is a view of a row of data returned by a Cloud Spanner read. It consists of a number of columns; the number depends on the columns used to construct the read.

The column values can be accessed by index. For instance, if the read specified []string{"photo_id", "caption"}, then each row will contain two columns: "photo_id" with index 0, and "caption" with index 1.

Column values are decoded by using one of the Column, ColumnByName, or Columns methods. The valid values passed to these methods depend on the column type. For example:

	var photoID int64
	err := row.Column(0, &photoID) // Decode column 0 as an integer.

	var caption string
	err := row.Column(1, &caption) // Decode column 1 as a string.

	// Decode all the columns.
	err := row.Columns(&photoID, &caption)
Supported types and their corresponding Cloud Spanner column type(s) are:
	*string (not NULL), *NullString - STRING
	*[]string, *[]NullString - STRING ARRAY
	*[]byte - BYTES
	*[][]byte - BYTES ARRAY
	*int64 (not NULL), *NullInt64 - INT64
	*[]int64, *[]NullInt64 - INT64 ARRAY
	*bool (not NULL), *NullBool - BOOL
	*[]bool, *[]NullBool - BOOL ARRAY
	*float32 (not NULL), *NullFloat32 - FLOAT32
	*[]float32, *[]NullFloat32 - FLOAT32 ARRAY
	*float64 (not NULL), *NullFloat64 - FLOAT64
	*[]float64, *[]NullFloat64 - FLOAT64 ARRAY
	*big.Rat (not NULL), *NullNumeric - NUMERIC
	*[]big.Rat, *[]NullNumeric - NUMERIC ARRAY
	*time.Time (not NULL), *NullTime - TIMESTAMP
	*[]time.Time, *[]NullTime - TIMESTAMP ARRAY
	*Date (not NULL), *NullDate - DATE
	*[]civil.Date, *[]NullDate - DATE ARRAY
	*uuid.UUID (not NULL), *NullUuid - UUID
	*[]uuid.UUID, *[]NullUuid - UUID ARRAY
	*[]*some_go_struct, *[]NullRow - STRUCT ARRAY
	*NullJSON - JSON
	*[]NullJSON - JSON ARRAY
	*GenericColumnValue - any Cloud Spanner type
For TIMESTAMP columns, the returned time.Time object will be in UTC.
To fetch an array of BYTES, pass a *[][]byte. To fetch an array of (sub)rows, pass a *[]spanner.NullRow or a *[]*some_go_struct where some_go_struct holds all information of the subrow; see spanner.Row.ToStruct for the mapping between a Cloud Spanner row and a Go struct. To fetch an array of other types, pass a *[]spanner.NullXXX type of the appropriate type. Use GenericColumnValue when you don't know in advance what column type to expect.
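As a sketch of array decoding, a Row can be built locally with NewRow (normally rows come from a read or query); the column names and values here are illustrative:

```go
package main

import (
	"fmt"

	"cloud.google.com/go/spanner"
)

func main() {
	// Build a Row locally with NewRow; in real code the row would come
	// from a read or query. "tags" and "thumbnails" are hypothetical
	// STRING ARRAY and BYTES ARRAY columns.
	row, err := spanner.NewRow(
		[]string{"tags", "thumbnails"},
		[]interface{}{[]string{"go", "spanner"}, [][]byte{{0x01}, {0x02}}},
	)
	if err != nil {
		// TODO: Handle error.
	}
	var tags []string
	var thumbs [][]byte
	// A STRING ARRAY decodes into *[]string, a BYTES ARRAY into *[][]byte.
	if err := row.Columns(&tags, &thumbs); err != nil {
		// TODO: Handle error.
	}
	fmt.Println(tags, len(thumbs))
}
```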
Row decodes the row contents lazily; as a result, each call to a getter has a chance of returning an error.

A column value may be NULL if the corresponding value is not present in Cloud Spanner. The spanner.NullXXX types (spanner.NullInt64 et al.) allow fetching values that may be null. A NULL BYTES can be fetched into a *[]byte as nil. It is an error to fetch a NULL value into any other type.
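A sketch of NULL handling with a NullXXX type, again using a locally constructed Row; the column name is hypothetical:

```go
package main

import (
	"fmt"

	"cloud.google.com/go/spanner"
)

func main() {
	// A locally constructed Row with a NULL INT64 column (the zero value
	// of NullInt64 has Valid == false); in real code the NULL would come
	// from the database.
	row, err := spanner.NewRow([]string{"balance"}, []interface{}{spanner.NullInt64{}})
	if err != nil {
		// TODO: Handle error.
	}
	var balance spanner.NullInt64
	if err := row.Column(0, &balance); err != nil {
		// TODO: Handle error.
	}
	if balance.Valid {
		fmt.Println("balance:", balance.Int64)
	} else {
		fmt.Println("balance is NULL")
	}
	// Decoding the same NULL column into a plain *int64 would return an error.
}
```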
func NewRow ¶

func NewRow(columnNames []string, columnValues []interface{}) (*Row, error)

NewRow returns a Row containing the supplied data. This can be useful for mocking Cloud Spanner Read and Query responses for unit testing.

func (*Row) Column ¶

func (r *Row) Column(i int, ptr interface{}) error

Column fetches the value from the ith column, decoding it into ptr. See the Row documentation for the list of acceptable argument types. See Client.ReadWriteTransaction for an example.
func (*Row) ColumnByName ¶

func (r *Row) ColumnByName(name string, ptr interface{}) error

ColumnByName fetches the value from the named column, decoding it into ptr. See the Row documentation for the list of acceptable argument types.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	var balance int64
	if err := row.ColumnByName("balance", &balance); err != nil {
		// TODO: Handle error.
	}
	fmt.Println(balance)
}

func (*Row) ColumnIndex ¶

func (r *Row) ColumnIndex(name string) (int, error)

ColumnIndex returns the index of the column with the given name. The comparison is case-sensitive.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	index, err := row.ColumnIndex("balance")
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(index)
}

func (*Row) ColumnName ¶

func (r *Row) ColumnName(i int) string
ColumnName returns the name of column i, or empty string for invalid column.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(row.ColumnName(1)) // "balance"
}

func (*Row) ColumnNames ¶

func (r *Row) ColumnNames() []string
ColumnNames returns all column names of the row.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(row.ColumnNames())
}

func (*Row) ColumnType ¶ added in v1.52.0
ColumnType returns the Cloud Spanner Type of column i, or nil for invalid column.
func (*Row) ColumnValue ¶ added in v1.52.0
ColumnValue returns the Cloud Spanner Value of column i, or nil for invalid column.
func (*Row) Columns ¶

func (r *Row) Columns(ptrs ...interface{}) error
Columns fetches all the columns in the row at once.
The value of the kth column will be decoded into the kth argument to Columns. See Row for the list of acceptable argument types. The number of arguments must be equal to the number of columns. Pass nil to specify that a column should be ignored.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	var name string
	var balance int64
	if err := row.Columns(&name, &balance); err != nil {
		// TODO: Handle error.
	}
	fmt.Println(name, balance)
}

func (*Row) Size ¶

func (r *Row) Size() int
Size is the number of columns in the row.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(row.Size()) // 2
}

func (*Row) ToStruct ¶

func (r *Row) ToStruct(p interface{}) error
ToStruct fetches the columns in a row into the fields of a struct. The rules for mapping a row's columns into a struct's exported fields are:

If a field has a `spanner: "column_name"` tag, then decode column 'column_name' into the field. A special case is the `spanner: "-"` tag, which instructs ToStruct to ignore the field during decoding.

Otherwise, if the name of a field matches the name of a column (ignoring case), decode the column into the field.

The number of columns in the row must match the number of exported fields in the struct. There must be exactly one match for each column in the row. The method will return an error if a column in the row cannot be assigned to a field in the struct.

The fields of the destination struct can be of any type that is acceptable to spanner.Row.Column.

Slice and pointer fields will be set to nil if the source column is NULL, and a non-nil value if the column is not NULL. To decode NULL values of other types, use one of the spanner.NullXXX types as the type of the destination field.

If ToStruct returns an error, the contents of p are undefined. Some fields may have been successfully populated, while others were not; you should not use any of the fields.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	type Account struct {
		Name    string
		Balance int64
	}
	var acct Account
	if err := row.ToStruct(&acct); err != nil {
		// TODO: Handle error.
	}
	fmt.Println(acct)
}

func (*Row) ToStructLenient ¶ added in v1.28.0

func (r *Row) ToStructLenient(p interface{}) error
ToStructLenient fetches the columns in a row into the fields of a struct. The rules for mapping a row's columns into a struct's exported fields are:

If a field has a `spanner: "column_name"` tag, then decode column 'column_name' into the field. A special case is the `spanner: "-"` tag, which instructs ToStructLenient to ignore the field during decoding.

Otherwise, if the name of a field matches the name of a column (ignoring case), decode the column into the field.

The number of columns in the row and exported fields in the struct do not need to match. Any field in the struct that cannot be assigned a value from the row is assigned its default value. Any column in the row that does not have a corresponding field in the struct is ignored.

The fields of the destination struct can be of any type that is acceptable to spanner.Row.Column.

Slice and pointer fields will be set to nil if the source column is NULL, and a non-nil value if the column is not NULL. To decode NULL values of other types, use one of the spanner.NullXXX types as the type of the destination field.

If ToStructLenient returns an error, the contents of p are undefined. Some fields may have been successfully populated, while others were not; you should not use any of the fields.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/database/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"accountID", "name", "balance"})
	if err != nil {
		// TODO: Handle error.
	}
	type Account struct {
		Name     string
		Balance  int64
		NickName string
	}
	var acct Account
	if err := row.ToStructLenient(&acct); err != nil {
		// TODO: Handle error.
	}
	fmt.Println(acct)
}

type RowIterator ¶
type RowIterator struct {
	// The plan for the query. Available after RowIterator.Next returns
	// iterator.Done if QueryWithStats was called.
	QueryPlan *sppb.QueryPlan

	// Execution statistics for the query. Available after RowIterator.Next
	// returns iterator.Done if QueryWithStats was called.
	QueryStats map[string]interface{}

	// For a DML statement, the number of rows affected. For PDML, this is a
	// lower bound. Available for DML statements after RowIterator.Next returns
	// iterator.Done.
	RowCount int64

	// The metadata of the results of the query. The metadata are available
	// after the first call to RowIterator.Next(), unless the first call to
	// RowIterator.Next() returned an error that is not equal to iterator.Done.
	Metadata *sppb.ResultSetMetadata
	// contains filtered or unexported fields
}

RowIterator is an iterator over Rows.
func (*RowIterator) Do ¶

func (r *RowIterator) Do(f func(r *Row) error) error
Do calls the provided function once in sequence for each row in the iteration. If the function returns a non-nil error, Do immediately returns that error.

If there are no rows in the iterator, Do will return nil without calling the provided function.
Do always calls Stop on the iterator.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().Query(ctx, spanner.NewStatement("SELECT FirstName FROM Singers"))
	err = iter.Do(func(r *spanner.Row) error {
		var firstName string
		if err := r.Column(0, &firstName); err != nil {
			return err
		}
		fmt.Println(firstName)
		return nil
	})
	if err != nil {
		// TODO: Handle error.
	}
}

func (*RowIterator) Next ¶
func (r *RowIterator) Next() (*Row, error)
Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.
Example¶
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().Query(ctx, spanner.NewStatement("SELECT FirstName FROM Singers"))
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		var firstName string
		if err := row.Column(0, &firstName); err != nil {
			// TODO: Handle error.
		}
		fmt.Println(firstName)
	}
}

func (*RowIterator) Stop ¶
func (r *RowIterator) Stop()
Stop terminates the iteration. It should be called after you finish using the iterator.
type SendOption ¶ added in v1.88.0
type SendOption func(*Mutation)
SendOption specifies optional fields for a Send mutation.
func WithDeliveryTime ¶ added in v1.88.0
func WithDeliveryTime(t time.Time) SendOption
WithDeliveryTime returns a SendOption that sets the `deliverTime` field.
type SessionPoolConfig deprecated
type SessionPoolConfig struct {
	// MaxOpened is the maximum number of opened sessions allowed by the session pool.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	MaxOpened uint64

	// MinOpened is the minimum number of opened sessions that the session pool tries to maintain.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	MinOpened uint64

	// MaxIdle is the maximum number of idle sessions.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	MaxIdle uint64

	// MaxBurst is the maximum number of concurrent session creation requests.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	MaxBurst uint64

	// WriteSessions is the fraction of sessions we try to keep prepared for write.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	WriteSessions float64

	// HealthCheckWorkers is the number of workers used by the health checker.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	HealthCheckWorkers int

	// HealthCheckInterval is how often the health checker pings a session.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	HealthCheckInterval time.Duration

	// MultiplexSessionCheckInterval is the interval at which the multiplexed session is checked.
	//
	// Defaults to 10 mins.
	MultiplexSessionCheckInterval time.Duration

	// TrackSessionHandles determines whether the session pool will keep track of session handles.
	//
	// Deprecated: This option is no longer used as the session pool has been removed.
	TrackSessionHandles bool

	// Deprecated: This option is no longer used as the session pool has been removed.
	InactiveTransactionRemovalOptions
}

SessionPoolConfig stores configurations of a session pool.
Deprecated: This configuration is no longer used as the session pool has been removed. Multiplexed sessions are now used for all operations. These options are kept for backward compatibility but are ignored.
type Statement ¶
A Statement is a SQL query with named parameters.
A parameter placeholder consists of '@' followed by the parameter name. The parameter name is an identifier which must conform to the naming requirements in https://cloud.google.com/spanner/docs/lexical#identifiers. Parameters may appear anywhere that a literal value is expected. The same parameter name may be used more than once. It is an error to execute a statement with unbound parameters. On the other hand, it is allowable to bind parameter names that are not used.
See the documentation of the Row type for how Go types are mapped to CloudSpanner types.
Example (ArrayOfStructParam)¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	stmt := spanner.Statement{
		SQL: "SELECT * FROM SINGERS WHERE (FirstName, LastName) IN UNNEST(@singerinfo)",
		Params: map[string]interface{}{
			"singerinfo": []struct {
				FirstName string
				LastName  string
			}{
				{"Ringo", "Starr"},
				{"John", "Lennon"},
			},
		},
	}
	_ = stmt // TODO: Use stmt in Query.
}

Example (RegexpContains) ¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	// Search for accounts with valid emails using regexp as per:
	// https://cloud.google.com/spanner/docs/functions-and-operators#regexp_contains
	stmt := spanner.Statement{
		SQL: `SELECT * FROM users WHERE REGEXP_CONTAINS(email, @valid_email)`,
		Params: map[string]interface{}{
			"valid_email": `\Q@\E`,
		},
	}
	_ = stmt // TODO: Use stmt in a query.
}

Example (StructParam) ¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	stmt := spanner.Statement{
		SQL: "SELECT * FROM SINGERS WHERE (FirstName, LastName) = @singerinfo",
		Params: map[string]interface{}{
			"singerinfo": struct {
				FirstName string
				LastName  string
			}{"Bob", "Dylan"},
		},
	}
	_ = stmt // TODO: Use stmt in Query.
}

func NewStatement ¶
NewStatement returns a Statement with the given SQL and an empty Params map.
Example¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	stmt := spanner.NewStatement("SELECT FirstName, LastName FROM SINGERS WHERE LastName >= @start")
	stmt.Params["start"] = "Dylan"
	// TODO: Use stmt in Query.
}

Example (StructLiteral) ¶
package main

import (
	"cloud.google.com/go/spanner"
)

func main() {
	stmt := spanner.Statement{
		SQL: `SELECT FirstName, LastName FROM SINGERS WHERE LastName = ("Lea", "Martin")`,
	}
	_ = stmt // TODO: Use stmt in Query.
}

type TimestampBound ¶
type TimestampBound struct {
	// contains filtered or unexported fields
}

TimestampBound defines how Cloud Spanner will choose a timestamp for a single read/query or read-only transaction.
There are three types of timestamp bound: strong, bounded staleness and exact staleness. Strong is the default.

If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.

Each type of timestamp bound is discussed in detail below. A TimestampBound can be specified when creating transactions, see the documentation of spanner.Client for an example.
Strong reads¶
Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other: if any part of the read observes a transaction, all parts of the read see the transaction.

Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.
Use StrongRead to create a bound of this type.
Exact staleness¶
An exact staleness timestamp bound executes reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps less than or equal to the read timestamp have finished.

The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.

These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.
Use ReadTimestamp and ExactStaleness to create a bound of this type.
Bounded staleness¶
Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.

All rows yielded are consistent with each other: if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.

Boundedly stale reads execute in two phases. The first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.

As a result of this two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.

Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use reads and single-use read-only transactions.
Use MinReadTimestamp and MaxStaleness to create a bound of this type.
Old read timestamps and garbage collection¶
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error ErrorCode.FAILED_PRECONDITION.
func ExactStaleness ¶

func ExactStaleness(d time.Duration) TimestampBound

ExactStaleness returns a TimestampBound that will perform reads and queries at an exact staleness.

func MaxStaleness ¶

func MaxStaleness(d time.Duration) TimestampBound

MaxStaleness returns a TimestampBound that will perform reads and queries at a time chosen to be at most "d" stale.
func MinReadTimestamp ¶

func MinReadTimestamp(t time.Time) TimestampBound

MinReadTimestamp returns a TimestampBound that will perform reads and queries at a time chosen to be at least "t".
func ReadTimestamp ¶

func ReadTimestamp(t time.Time) TimestampBound

ReadTimestamp returns a TimestampBound that will perform reads and queries at the given time.
func StrongRead ¶

func StrongRead() TimestampBound

StrongRead returns a TimestampBound that will perform reads and queries at a timestamp where all previously committed transactions are visible.

func (TimestampBound) String ¶

func (tb TimestampBound) String() string
type TransactionOptions ¶ added in v1.12.0
type TransactionOptions struct {
	CommitOptions CommitOptions

	// The transaction tag to use for a read/write transaction.
	// This tag is automatically included with each statement and the commit
	// request of a read/write transaction.
	TransactionTag string

	// CommitPriority is the priority to use for the Commit RPC for the
	// transaction.
	CommitPriority sppb.RequestOptions_Priority

	// ReadLockMode specifies a concurrency mode for the read/query operations.
	// It works for a read/write transaction only.
	ReadLockMode sppb.TransactionOptions_ReadWrite_ReadLockMode

	// Controls whether to exclude recording modifications in the current
	// transaction from the allowed tracking change streams (with DDL option
	// allow_txn_exclusion=true).
	ExcludeTxnFromChangeStreams bool

	// IsolationLevel sets the isolation level for a read/write transaction.
	IsolationLevel sppb.TransactionOptions_IsolationLevel

	// BeginTransactionOption controls whether a separate BeginTransaction RPC should be used,
	// or whether the BeginTransaction operation should be inlined with the first statement
	// in the transaction.
	BeginTransactionOption BeginTransactionOption

	// ClientContext contains client-owned context information to be passed with the transaction.
	ClientContext *sppb.RequestOptions_ClientContext
}

TransactionOptions provides options for a transaction.
type TransactionOutcomeUnknownError ¶ added in v1.3.0
type TransactionOutcomeUnknownError struct {
	// contains filtered or unexported fields
}

TransactionOutcomeUnknownError is wrapped in a Spanner error when the error occurred during a transaction, and the outcome of the transaction is unknown as a result of the error. This could be the case if a timeout or canceled error occurs after a Commit request has been sent, but before the client has received a response from the server.
func (*TransactionOutcomeUnknownError) Error ¶ added in v1.3.0

func (*TransactionOutcomeUnknownError) Error() string

Error implements error.Error.

func (*TransactionOutcomeUnknownError) Unwrap ¶ added in v1.3.0

func (e *TransactionOutcomeUnknownError) Unwrap() error
Unwrap returns the wrapped error (if any).
Source Files¶
Directories¶
| Path | Synopsis |
|---|---|
| adapter | |
| apiv1 | Package adapter is an auto-generated package for the Cloud Spanner API. |
| admin | |
| instance/apiv1 | Package instance is an auto-generated package for the Cloud Spanner API. |
| | Package spanner is an auto-generated package for the Cloud Spanner API. |
| benchmarks (module) | |
| executor | |
| apiv1 | Package executor is an auto-generated package for the Cloud Spanner Executor test API. |
| benchwrapper (command) | Package main wraps the client library in a gRPC interface that a benchmarker can communicate through. |
| | Package spannertest contains test helpers for working with Cloud Spanner. |
| | Package spansql contains types and a parser for the Cloud Spanner SQL dialect. |
| test | |
| cloudexecutor (command) | |