BigQuery Storage Node.js client

Node.js idiomatic client for BigQuery Storage.

The BigQuery Storage product is divided into two major APIs: the Write API and the Read API. The BigQuery Storage API does not provide functionality related to managing BigQuery resources such as datasets, jobs, or tables.

The BigQuery Storage Write API is a unified data-ingestion API for BigQuery. It combines streaming ingestion and batch loading into a single high-performance API. You can use the Storage Write API to stream records into BigQuery in real time or to batch process an arbitrarily large number of records and commit them in a single atomic operation.

Read more in our introduction guide.

Using a system-provided default stream, this code sample demonstrates how to use the schema of a destination stream/table to construct a writer and send several batches of row data to the table.

const {adapt, managedwriter} = require('@google-cloud/bigquery-storage');
const {WriterClient, JSONWriter} = managedwriter;

async function appendJSONRowsDefaultStream() {
  const projectId = 'my_project';
  const datasetId = 'my_dataset';
  const tableId = 'my_table';
  const destinationTable = `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;
  const writeClient = new WriterClient({projectId});

  try {
    const writeStream = await writeClient.getWriteStream({
      streamId: `${destinationTable}/streams/_default`,
      view: 'FULL',
    });
    const protoDescriptor = adapt.convertStorageSchemaToProto2Descriptor(
      writeStream.tableSchema,
      'root'
    );
    const connection = await writeClient.createStreamConnection({
      streamId: managedwriter.DefaultStream,
      destinationTable,
    });
    const streamId = connection.getStreamId();
    const writer = new JSONWriter({
      streamId,
      connection,
      protoDescriptor,
    });

    let rows = [];
    const pendingWrites = [];

    // Row 1
    let row = {
      row_num: 1,
      customer_name: 'Octavia',
    };
    rows.push(row);

    // Row 2
    row = {
      row_num: 2,
      customer_name: 'Turing',
    };
    rows.push(row);

    // Send batch.
    let pw = writer.appendRows(rows);
    pendingWrites.push(pw);

    rows = [];

    // Row 3
    row = {
      row_num: 3,
      customer_name: 'Bell',
    };
    rows.push(row);

    // Send batch.
    pw = writer.appendRows(rows);
    pendingWrites.push(pw);

    const results = await Promise.all(pendingWrites.map(pw => pw.getResult()));
    console.log('Write results:', results);
  } catch (err) {
    console.log(err);
  } finally {
    writeClient.close();
  }
}
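For the batch-load path mentioned above, where many records are committed in a single atomic operation, the managed writer can also target an application-created pending stream instead of the default stream. The following is a minimal sketch modeled on this repository's Append_rows_pending sample, reusing the same placeholder project, dataset, and table IDs; treat the exact call names as illustrative and check that sample for the canonical version.

const {adapt, managedwriter} = require('@google-cloud/bigquery-storage');
const {WriterClient, JSONWriter} = managedwriter;

async function appendRowsPendingAtomicCommit() {
  const projectId = 'my_project';
  const datasetId = 'my_dataset';
  const tableId = 'my_table';
  const destinationTable = `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;
  const writeClient = new WriterClient({projectId});

  try {
    // Create an application-owned PENDING stream. Rows appended to it are
    // buffered and only become visible once the stream is committed.
    const writeStream = await writeClient.createWriteStreamFullResponse({
      streamType: managedwriter.PendingStream,
      destinationTable,
    });
    const streamId = writeStream.name;

    const protoDescriptor = adapt.convertStorageSchemaToProto2Descriptor(
      writeStream.tableSchema,
      'root'
    );
    const connection = await writeClient.createStreamConnection({streamId});
    const writer = new JSONWriter({streamId, connection, protoDescriptor});

    // Append one or more batches of rows.
    const pw = writer.appendRows(
      [
        {row_num: 1, customer_name: 'Octavia'},
        {row_num: 2, customer_name: 'Turing'},
      ],
      0
    );
    await pw.getResult();

    // Finalize the stream (no more appends), then commit it so every
    // buffered row becomes visible in the table in one atomic operation.
    await connection.finalize();
    const commitResponse = await writeClient.batchCommitWriteStream({
      parent: destinationTable,
      writeStreams: [streamId],
    });
    console.log('Commit response:', commitResponse);
  } catch (err) {
    console.log(err);
  } finally {
    writeClient.close();
  }
}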

The BigQuery Storage Read API provides fast access to BigQuery-managed storage by using a gRPC-based protocol. When you use the Storage Read API, structured data is sent over the wire in a binary serialization format. This allows for additional parallelism among multiple consumers for a set of results.

Read more about how to use the BigQuery Storage Read API.

See the sample code in the Quickstart section.

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

  • Quickstart
  • Samples
  • Versioning
  • Contributing
  • License

Quickstart

Before you begin

  1. Select or create a Cloud Platform project.
  2. Enable billing for your project.
  3. Enable the Google BigQuery Storage API.
  4. Set up authentication so you can access the API from your local workstation (a short sketch for supplying explicit credentials follows this list).
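Once the library is installed (next section), the clients pick up Application Default Credentials automatically. If you prefer to point at a specific service-account key, the client constructors also accept the standard Google auth options; a minimal sketch, with a purely illustrative project ID and key path:

// Minimal sketch: supply credentials explicitly instead of relying on
// Application Default Credentials. The projectId and keyFilename values
// below are placeholders for illustration only.
const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');

const readClient = new BigQueryReadClient({
  projectId: 'my_project',
  keyFilename: '/path/to/service-account-key.json',
});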

Installing the client library

npm install @google-cloud/bigquery-storage

Using the client library

// The read stream contains blocks of Avro-encoded bytes. We use the
// 'avsc' library to decode these blocks. Install avsc with the following
// command: npm install avsc
const avro = require('avsc');

// See reference documentation at
// https://cloud.google.com/bigquery/docs/reference/storage
const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');

const client = new BigQueryReadClient();

async function bigqueryStorageQuickstart() {
  // Get current project ID. The read session is created in this project.
  // This project can be different from that which contains the table.
  const myProjectId = await client.getProjectId();

  // This example reads baby name data from the public datasets.
  const projectId = 'bigquery-public-data';
  const datasetId = 'usa_names';
  const tableId = 'usa_1910_current';

  const tableReference = `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;
  const parent = `projects/${myProjectId}`;

  /* We limit the output columns to a subset of those allowed in the table,
   * and set a simple filter to only report names from the state of
   * Washington (WA).
   */
  const readOptions = {
    selectedFields: ['name', 'number', 'state'],
    rowRestriction: 'state = "WA"',
  };

  let tableModifiers = null;
  const snapshotSeconds = 0;

  // Set a snapshot time if it's been specified.
  if (snapshotSeconds > 0) {
    tableModifiers = {snapshotTime: {seconds: snapshotSeconds}};
  }

  // API request.
  const request = {
    parent,
    readSession: {
      table: tableReference,
      // This API can also deliver data serialized in Apache Arrow format.
      // This example leverages Apache Avro.
      dataFormat: 'AVRO',
      readOptions,
      tableModifiers,
    },
  };

  const [session] = await client.createReadSession(request);

  const schema = JSON.parse(session.avroSchema.schema);
  const avroType = avro.Type.forSchema(schema);

  /* The offset requested must be less than the last
   * row read from ReadRows. Requesting a larger offset is
   * undefined.
   */
  let offset = 0;

  const readRowsRequest = {
    // Required stream name and optional offset. Offset requested must be less than
    // the last row read from readRows(). Requesting a larger offset is undefined.
    readStream: session.streams[0].name,
    offset,
  };

  const names = new Set();
  const states = [];

  /* We'll use only a single stream for reading data from the table. Because
   * of dynamic sharding, this will yield all the rows in the table. However,
   * if you wanted to fan out multiple readers you could do so by having a
   * reader process each individual stream.
   */
  client
    .readRows(readRowsRequest)
    .on('error', console.error)
    .on('data', data => {
      offset = data.avroRows.serializedBinaryRows.offset;
      try {
        // Decode all rows in buffer
        let pos;
        do {
          const decodedData = avroType.decode(
            data.avroRows.serializedBinaryRows,
            pos
          );
          if (decodedData.value) {
            names.add(decodedData.value.name);
            if (!states.includes(decodedData.value.state)) {
              states.push(decodedData.value.state);
            }
          }
          pos = decodedData.offset;
        } while (pos > 0);
      } catch (error) {
        console.log(error);
      }
    })
    .on('end', () => {
      console.log(`Got ${names.size} unique names in states: ${states}`);
      console.log(`Last offset: ${offset}`);
    });
}
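The quickstart reads from a single stream. To take advantage of the parallelism the Read API is designed for, you can ask createReadSession for several streams and consume each one independently. The following is a minimal sketch against the same public table; maxStreamCount is only an upper bound (the service decides how many streams to return), and each consumer here simply counts serialized row batches rather than decoding them as the quickstart does.

const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');

const readClient = new BigQueryReadClient();

async function readTableWithMultipleStreams() {
  const myProjectId = await readClient.getProjectId();

  // Same public table as the quickstart above.
  const table =
    'projects/bigquery-public-data/datasets/usa_names/tables/usa_1910_current';

  // Ask for up to four streams; the service may return fewer.
  const [session] = await readClient.createReadSession({
    parent: `projects/${myProjectId}`,
    readSession: {
      table,
      dataFormat: 'AVRO',
      readOptions: {selectedFields: ['name', 'state']},
    },
    maxStreamCount: 4,
  });

  // Consume every stream in parallel. Each consumer just counts the Avro
  // row batches it receives; a real reader would decode them as above.
  const batchCounts = await Promise.all(
    session.streams.map(
      stream =>
        new Promise((resolve, reject) => {
          let batches = 0;
          readClient
            .readRows({readStream: stream.name, offset: 0})
            .on('error', reject)
            .on('data', () => batches++)
            .on('end', () => resolve(batches));
        })
    )
  );
  console.log('Row batches per stream:', batchCounts);
}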

Samples

Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.

Sample | Source Code | Try it
Append_rows_buffered | source code | Open in Cloud Shell
Append_rows_json_writer_committed | source code | Open in Cloud Shell
Append_rows_json_writer_default | source code | Open in Cloud Shell
Append_rows_pending | source code | Open in Cloud Shell
Append_rows_proto2 | source code | Open in Cloud Shell
Append_rows_table_to_proto2 | source code | Open in Cloud Shell
Customer_record_pb | source code | Open in Cloud Shell
BigQuery Storage Quickstart | source code | Open in Cloud Shell
Sample_data_pb | source code | Open in Cloud Shell

The Google BigQuery Storage Node.js Client API Reference documentation also contains samples.

Supported Node.js Versions

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.

Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:

  • Legacy versions are not tested in continuous integration.
  • Some security patches and features cannot be backported.
  • Dependencies cannot be kept up-to-date.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-cloud/bigquery-storage@legacy-8 installs client libraries for versions compatible with Node.js 8.

Versioning

This library follows Semantic Versioning.

This library is considered to be stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against stable libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its templates in directory.

License

Apache Version 2.0

See LICENSE
