Google BigQuery Storage: Node.js Client
Client for the BigQuery Storage API
A comprehensive list of changes in each version may be found in the CHANGELOG.
- Google BigQuery Storage Node.js Client API Reference
- Google BigQuery Storage Documentation
- github.com/googleapis/nodejs-bigquery-storage
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
Table of contents:
Quickstart
Before you begin
- Select or create a Cloud Platform project.
- Enable billing for your project.
- Enable the Google BigQuery Storage API.
- Set up authentication with a service account so you can access the API from your local workstation.
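Once the service account key exists, one common way to expose it to the client library is the `GOOGLE_APPLICATION_CREDENTIALS` environment variable, which Application Default Credentials reads automatically. A minimal sketch — the key path below is a placeholder, not a real file:

```javascript
// Application Default Credentials (ADC) looks for a service account key
// file at the path named by GOOGLE_APPLICATION_CREDENTIALS. Point it at
// the JSON key you downloaded (the path below is a placeholder).
process.env.GOOGLE_APPLICATION_CREDENTIALS =
  process.env.GOOGLE_APPLICATION_CREDENTIALS ||
  '/path/to/service-account-key.json';

// Any client constructed after this point picks up the credentials, e.g.:
//   const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');
//   const client = new BigQueryReadClient();
console.log(`Using credentials from: ${process.env.GOOGLE_APPLICATION_CREDENTIALS}`);
```

Setting the variable in your shell before launching Node.js works just as well and avoids hard-coding paths.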
Installing the client library
```shell
npm install @google-cloud/bigquery-storage
```

Using the client library
```javascript
// The read stream contains blocks of Avro-encoded bytes. We use the
// 'avsc' library to decode these blocks. Install avsc with the following
// command: npm install avsc
const avro = require('avsc');

// See reference documentation at
// https://cloud.google.com/bigquery/docs/reference/storage
const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');

const client = new BigQueryReadClient();

async function bigqueryStorageQuickstart() {
  // Get current project ID. The read session is created in this project.
  // This project can be different from that which contains the table.
  const myProjectId = await client.getProjectId();

  // This example reads baby name data from the public datasets.
  const projectId = 'bigquery-public-data';
  const datasetId = 'usa_names';
  const tableId = 'usa_1910_current';

  const tableReference = `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;

  const parent = `projects/${myProjectId}`;

  /* We limit the output columns to a subset of those allowed in the table,
   * and set a simple filter to only report names from the state of
   * Washington (WA).
   */
  const readOptions = {
    selectedFields: ['name', 'number', 'state'],
    rowRestriction: 'state = "WA"',
  };

  let tableModifiers = null;
  const snapshotSeconds = 0;

  // Set a snapshot time if it's been specified.
  if (snapshotSeconds > 0) {
    tableModifiers = {snapshotTime: {seconds: snapshotSeconds}};
  }

  // API request.
  const request = {
    parent,
    readSession: {
      table: tableReference,
      // This API can also deliver data serialized in Apache Arrow format.
      // This example leverages Apache Avro.
      dataFormat: 'AVRO',
      readOptions,
      tableModifiers,
    },
  };

  const [session] = await client.createReadSession(request);

  const schema = JSON.parse(session.avroSchema.schema);

  const avroType = avro.Type.forSchema(schema);

  /* The offset requested must be less than the last
   * row read from ReadRows. Requesting a larger offset is
   * undefined.
   */
  let offset = 0;

  const readRowsRequest = {
    // Required stream name and optional offset. Offset requested must be less
    // than the last row read from readRows(). Requesting a larger offset is
    // undefined.
    readStream: session.streams[0].name,
    offset,
  };

  const names = new Set();
  const states = [];

  /* We'll use only a single stream for reading data from the table. Because
   * of dynamic sharding, this will yield all the rows in the table. However,
   * if you wanted to fan out multiple readers you could do so by having a
   * reader process each individual stream.
   */
  client
    .readRows(readRowsRequest)
    .on('error', console.error)
    .on('data', data => {
      offset = data.avroRows.serializedBinaryRows.offset;

      try {
        // Decode all rows in buffer.
        let pos;
        do {
          const decodedData = avroType.decode(
            data.avroRows.serializedBinaryRows,
            pos
          );

          if (decodedData.value) {
            names.add(decodedData.value.name);
            if (!states.includes(decodedData.value.state)) {
              states.push(decodedData.value.state);
            }
          }

          pos = decodedData.offset;
        } while (pos > 0);
      } catch (error) {
        console.log(error);
      }
    })
    .on('end', () => {
      console.log(`Got ${names.size} unique names in states: ${states}`);
      console.log(`Last offset: ${offset}`);
    });
}

bigqueryStorageQuickstart();
```

Samples
Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.
| Sample | Source Code |
|---|---|
| Append_rows_buffered | source code |
| Append_rows_json_writer_commited | source code |
| Append_rows_json_writer_default | source code |
| Append_rows_pending | source code |
| Append_rows_proto2 | source code |
| Append_rows_table_to_proto2 | source code |
| Customer_record_pb | source code |
| BigQuery Storage Quickstart | source code |
| Sample_data_pb | source code |
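The quickstart's do/while decode loop relies on avsc's `Type#decode` contract: it returns `{value, offset}`, where `offset` is the next read position, or a negative number when the buffer ends mid-record. A minimal sketch of that loop with a stand-in decoder — the fixed-width stub below is an illustration of the contract, not real Avro decoding:

```javascript
// Stub decoder mimicking avsc's Type#decode contract: it returns
// {value, offset}, where offset is the next read position, or a
// negative number when the buffer ends mid-record. (This stub reads
// fixed 4-byte big-endian integers; real Avro records vary in size.)
const stubType = {
  decode(buf, pos = 0) {
    if (pos + 4 > buf.length) {
      return {value: undefined, offset: -1};
    }
    return {value: buf.readInt32BE(pos), offset: pos + 4};
  },
};

// Decode every complete record in the buffer, using the same
// do/while pattern as the quickstart's handler for
// avroRows.serializedBinaryRows.
function decodeAll(type, buf) {
  const values = [];
  let pos = 0;
  do {
    const {value, offset} = type.decode(buf, pos);
    if (value !== undefined) values.push(value);
    pos = offset;
  } while (pos > 0);
  return values;
}

const buf = Buffer.alloc(12);
buf.writeInt32BE(1, 0);
buf.writeInt32BE(2, 4);
buf.writeInt32BE(3, 8);
console.log(decodeAll(stubType, buf)); // [ 1, 2, 3 ]
```

The negative offset doubles as the loop's termination signal, which is why the quickstart keeps reading `while (pos > 0)` rather than tracking a record count.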
The Google BigQuery Storage Node.js Client API Reference documentation also contains samples.
Supported Node.js Versions
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.
Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:
- Legacy versions are not tested in continuous integration.
- Some security patches and features cannot be backported.
- Dependencies cannot be kept up-to-date.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-cloud/bigquery-storage@legacy-8 installs client libraries for versions compatible with Node.js 8.
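That naming convention can be sketched as a small helper. The cutoff major version below is illustrative only, and the actual set of published dist-tags should be checked on npm:

```javascript
// Build the npm dist-tag for a given Node.js major version, following
// the legacy-(version) naming convention described above. 'latest' is
// assumed for currently supported majors; the cutoff used here is
// illustrative, not authoritative.
function distTagFor(nodeMajor, oldestSupportedMajor = 14) {
  return nodeMajor >= oldestSupportedMajor
    ? 'latest'
    : `legacy-${nodeMajor}`;
}

console.log(distTagFor(8)); // legacy-8
console.log(`npm install @google-cloud/bigquery-storage@${distTagFor(8)}`);
```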
Versioning
This library follows Semantic Versioning.
This library is considered to be stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against stable libraries are addressed with the highest priority.
More Information: Google Cloud Platform Launch Stages
Contributing
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its templates in this directory.
License
Apache Version 2.0
See LICENSE
Last updated 2025-10-30 UTC.
