Bigtable to JSON template

The Bigtable to JSON template is a pipeline that reads data from a Bigtable table and writes it to a Cloud Storage bucket in the JSON format.

Pipeline requirements

  • The Bigtable table must exist.
  • The output Cloud Storage bucket must exist before you run the pipeline.
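Both prerequisites can be checked or created from the command line before you run the pipeline. A minimal sketch, assuming the gcloud and cbt CLIs are installed; the project, instance, table, and bucket names are placeholders:

# Create the output bucket if it doesn't exist yet.
gcloud storage buckets create gs://your-bucket --location=us-central1

# Confirm that the Bigtable table exists by listing its column families.
cbt -project=your-project -instance=your-instance ls your-table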

Template parameters

Required parameters

  • bigtableProjectId: The ID for the Google Cloud project that contains the Bigtable instance that you want to read data from.
  • bigtableInstanceId: The ID of the Bigtable instance that contains the table.
  • bigtableTableId: The ID of the Bigtable table to read from.
  • outputDirectory: The Cloud Storage path where the output JSON files are stored. For example, gs://your-bucket/your-path/.

Optional parameters

  • filenamePrefix: The prefix of the JSON file name. For example, table1-. If no value is provided, defaults to part.
  • userOption: Possible values are FLATTEN or NONE. FLATTEN flattens the row to a single level. NONE stores the whole row as a JSON string. Defaults to NONE. See the example output after this list.
  • columnsAliases: A comma-separated list of columns that are required for the Vertex AI Vector Search index. The columns id and embedding are required for Vertex AI Vector Search. You can use the notation fromfamily:fromcolumn;to. For example, if the columns are rowkey and cf:my_embedding, where rowkey has a different name than the embedding column, specify cf:my_embedding;embedding and rowkey;id. Only use this option when the value for userOption is FLATTEN.
  • bigtableAppProfileId: The ID of the Bigtable application profile to use for the export. If you don't specify an app profile, Bigtable uses the instance's default app profile: https://cloud.google.com/bigtable/docs/app-profiles#default-app-profile.
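To illustrate the userOption and columnsAliases settings, consider a hypothetical row with key r1 and a single cell cf:my_embedding containing [0.4,0.9]. With userOption=NONE (the default), the row is written as one nested JSON object:

{"r1":{"cf":{"my_embedding":"[0.4,0.9]"}}}

With userOption=FLATTEN, the row key and each cell become top-level fields keyed by rowkey and family:qualifier:

{"rowkey":"r1","cf:my_embedding":"[0.4,0.9]"}

With userOption=FLATTEN and columnsAliases=cf:my_embedding;embedding,rowkey;id, only the listed columns are kept, under their aliases:

{"id":"r1","embedding":"[0.4,0.9]"}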

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the Bigtable to JSON template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud CLI

Note: To use the Google Cloud CLI to run classic templates, you must have Google Cloud CLI version 138.0.0 or later.
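To check which version is installed, you can run:

gcloud version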

In your shell or terminal, run the template:

gcloud dataflow jobs run JOB_NAME \
    --gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/Cloud_Bigtable_to_GCS_Json \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --parameters \
bigtableProjectId=BIGTABLE_PROJECT_ID,\
bigtableInstanceId=BIGTABLE_INSTANCE_ID,\
bigtableTableId=BIGTABLE_TABLE_ID,\
filenamePrefix=FILENAME_PREFIX,\
outputDirectory=OUTPUT_DIRECTORY

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use, for example latest
  • REGION_NAME: the region where you want to deploy your Dataflow job, for example us-central1
  • PROJECT_ID: the ID of the Google Cloud project where you want to run the Dataflow job
  • BIGTABLE_PROJECT_ID: the ID of the Google Cloud project that contains the Bigtable instance
  • BIGTABLE_INSTANCE_ID: the ID of the Bigtable instance that contains the table
  • BIGTABLE_TABLE_ID: the ID of the Bigtable table to read from
  • FILENAME_PREFIX: the prefix of the JSON file name, for example table1-
  • OUTPUT_DIRECTORY: the Cloud Storage path where the output JSON files are stored, for example gs://your-bucket/your-path/
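As a concrete sketch, a fully substituted invocation might look like the following; every name here is a hypothetical placeholder:

gcloud dataflow jobs run bigtable-to-json-example \
    --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Json \
    --project=my-project \
    --region=us-central1 \
    --parameters \
bigtableProjectId=my-project,\
bigtableInstanceId=my-instance,\
bigtableTableId=my-table,\
filenamePrefix=table1-,\
outputDirectory=gs://my-bucket/exports/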

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Cloud_Bigtable_to_GCS_Json
{
  "jobName": "JOB_NAME",
  "parameters": {
    "bigtableProjectId": "BIGTABLE_PROJECT_ID",
    "bigtableInstanceId": "BIGTABLE_INSTANCE_ID",
    "bigtableTableId": "BIGTABLE_TABLE_ID",
    "filenamePrefix": "FILENAME_PREFIX",
    "outputDirectory": "OUTPUT_DIRECTORY"
  },
  "environment": { "maxWorkers": "10" }
}

Replace the following:

  • PROJECT_ID: the ID of the Google Cloud project where you want to run the Dataflow job
  • LOCATION: the region where you want to deploy your Dataflow job, for example us-central1
  • VERSION: the version of the template that you want to use, for example latest
  • JOB_NAME: a unique job name of your choice
  • BIGTABLE_PROJECT_ID: the ID of the Google Cloud project that contains the Bigtable instance
  • BIGTABLE_INSTANCE_ID: the ID of the Bigtable instance that contains the table
  • BIGTABLE_TABLE_ID: the ID of the Bigtable table to read from
  • FILENAME_PREFIX: the prefix of the JSON file name, for example table1-
  • OUTPUT_DIRECTORY: the Cloud Storage path where the output JSON files are stored, for example gs://your-bucket/your-path/
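One way to send the request is with curl, using gcloud to mint an access token; the project, instance, table, and bucket names below are hypothetical:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "jobName": "bigtable-to-json-example",
        "parameters": {
          "bigtableProjectId": "my-project",
          "bigtableInstanceId": "my-instance",
          "bigtableTableId": "my-table",
          "filenamePrefix": "table1-",
          "outputDirectory": "gs://my-bucket/exports/"
        },
        "environment": { "maxWorkers": "10" }
      }' \
  "https://dataflow.googleapis.com/v1b3/projects/my-project/locations/us-central1/templates:launch?gcsPath=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Json"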

Template source code

Java

/*
 * Copyright (C) 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
package com.google.cloud.teleport.bigtable;

import com.google.bigtable.v2.Cell;
import com.google.bigtable.v2.Column;
import com.google.bigtable.v2.Family;
import com.google.bigtable.v2.Row;
import com.google.cloud.teleport.bigtable.BigtableToJson.Options;
import com.google.cloud.teleport.metadata.Template;
import com.google.cloud.teleport.metadata.TemplateCategory;
import com.google.cloud.teleport.metadata.TemplateParameter;
import com.google.cloud.teleport.metadata.TemplateParameter.TemplateEnumOption;
import com.google.cloud.teleport.util.DualInputNestedValueProvider;
import com.google.cloud.teleport.util.DualInputNestedValueProvider.TranslatorInput;
import com.google.cloud.teleport.util.PipelineUtils;
import com.google.gson.stream.JsonWriter;
import java.io.IOException;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.fs.ResolveOptions.StandardResolveOptions;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.Validation.Required;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Dataflow pipeline that exports data from a Cloud Bigtable table to JSON files in GCS. Currently,
 * filtering on Cloud Bigtable table is not supported.
 *
 * <p>Check out <a href=
 * "https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/main/v1/README_Cloud_Bigtable_to_GCS_JSON.md">README</a>
 * for instructions on how to use or modify this template.
 */
@Template(
    name = "Cloud_Bigtable_to_GCS_Json",
    category = TemplateCategory.BATCH,
    displayName = "Cloud Bigtable to JSON",
    description =
        "The Bigtable to JSON template is a pipeline that reads data from a Bigtable table and writes it to a Cloud Storage bucket in JSON format",
    optionsClass = Options.class,
    documentation =
        "https://cloud.google.com/dataflow/docs/guides/templates/provided/bigtable-to-json",
    contactInformation = "https://cloud.google.com/support",
    requirements = {
      "The Bigtable table must exist.",
      "The output Cloud Storage bucket must exist before running the pipeline."
    })
public class BigtableToJson {

  private static final Logger LOG = LoggerFactory.getLogger(BigtableToJson.class);

  /** Options for the export pipeline. */
  public interface Options extends PipelineOptions {
    @TemplateParameter.ProjectId(
        order = 1,
        groupName = "Source",
        description = "Project ID",
        helpText =
            "The ID for the Google Cloud project that contains the Bigtable instance that you want to read data from.")
    ValueProvider<String> getBigtableProjectId();

    @SuppressWarnings("unused")
    void setBigtableProjectId(ValueProvider<String> projectId);

    @TemplateParameter.Text(
        order = 2,
        groupName = "Source",
        regexes = {"[a-z][a-z0-9\\-]+[a-z0-9]"},
        description = "Instance ID",
        helpText = "The ID of the Bigtable instance that contains the table.")
    ValueProvider<String> getBigtableInstanceId();

    @SuppressWarnings("unused")
    void setBigtableInstanceId(ValueProvider<String> instanceId);

    @TemplateParameter.Text(
        order = 3,
        groupName = "Source",
        regexes = {"[_a-zA-Z0-9][-_.a-zA-Z0-9]*"},
        description = "Table ID",
        helpText = "The ID of the Bigtable table to read from.")
    ValueProvider<String> getBigtableTableId();

    @SuppressWarnings("unused")
    void setBigtableTableId(ValueProvider<String> tableId);

    @TemplateParameter.GcsWriteFolder(
        order = 4,
        groupName = "Target",
        description = "Cloud Storage directory for storing JSON files",
        helpText = "The Cloud Storage path where the output JSON files are stored.",
        example = "gs://your-bucket/your-path/")
    @Required
    ValueProvider<String> getOutputDirectory();

    @SuppressWarnings("unused")
    void setOutputDirectory(ValueProvider<String> outputDirectory);

    @TemplateParameter.Text(
        order = 5,
        groupName = "Target",
        optional = true,
        description = "JSON file prefix",
        helpText =
            "The prefix of the JSON file name. For example, `table1-`. If no value is provided, defaults to `part`.")
    @Default.String("part")
    ValueProvider<String> getFilenamePrefix();

    @SuppressWarnings("unused")
    void setFilenamePrefix(ValueProvider<String> filenamePrefix);

    @TemplateParameter.Enum(
        order = 6,
        groupName = "Target",
        optional = true,
        enumOptions = {@TemplateEnumOption("FLATTEN"), @TemplateEnumOption("NONE")},
        description = "User option",
        helpText =
            "Possible values are `FLATTEN` or `NONE`. `FLATTEN` flattens the row to the single level. `NONE` stores the whole row as a JSON string. Defaults to `NONE`.")
    @Default.String("NONE")
    String getUserOption();

    @SuppressWarnings("unused")
    void setUserOption(String userOption);

    @TemplateParameter.Text(
        order = 7,
        groupName = "Target",
        optional = true,
        parentName = "userOption",
        parentTriggerValues = {"FLATTEN"},
        description = "Columns aliases",
        helpText =
            "A comma-separated list of columns that are required for the Vertex AI Vector Search index. The"
                + " columns `id` and `embedding` are required for Vertex AI Vector Search. You can use the notation"
                + " `fromfamily:fromcolumn;to`. For example, if the columns are `rowkey` and `cf:my_embedding`, where"
                + " `rowkey` has a different name than the embedding column, specify `cf:my_embedding;embedding` and,"
                + " `rowkey;id`. Only use this option when the value for `userOption` is `FLATTEN`.")
    ValueProvider<String> getColumnsAliases();

    @SuppressWarnings("unused")
    void setColumnsAliases(ValueProvider<String> value);

    @TemplateParameter.Text(
        order = 8,
        groupName = "Source",
        optional = true,
        regexes = {"[_a-zA-Z0-9][-_.a-zA-Z0-9]*"},
        description = "Application profile ID",
        helpText =
            "The ID of the Bigtable application profile to use for the export. If you don't specify an app profile, Bigtable uses the instance's default app profile: https://cloud.google.com/bigtable/docs/app-profiles#default-app-profile.")
    @Default.String("default")
    ValueProvider<String> getBigtableAppProfileId();

    @SuppressWarnings("unused")
    void setBigtableAppProfileId(ValueProvider<String> appProfileId);
  }

  /**
   * Runs a pipeline to export data from a Cloud Bigtable table to JSON files in GCS in JSON format.
   *
   * @param args arguments to the pipeline
   */
  public static void main(String[] args) {
    Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);
    PipelineResult result = run(options);

    // Wait for pipeline to finish only if it is not constructing a template.
    if (options.as(DataflowPipelineOptions.class).getTemplateLocation() == null) {
      result.waitUntilFinish();
    }
    LOG.info("Completed pipeline setup");
  }

  public static PipelineResult run(Options options) {
    Pipeline pipeline = Pipeline.create(PipelineUtils.tweakPipelineOptions(options));

    BigtableIO.Read read =
        BigtableIO.read()
            .withProjectId(options.getBigtableProjectId())
            .withInstanceId(options.getBigtableInstanceId())
            .withAppProfileId(options.getBigtableAppProfileId())
            .withTableId(options.getBigtableTableId());

    // Do not validate input fields if it is running as a template.
    if (options.as(DataflowPipelineOptions.class).getTemplateLocation() != null) {
      read = read.withoutValidation();
    }

    ValueProvider<String> filePathPrefix =
        DualInputNestedValueProvider.of(
            options.getOutputDirectory(),
            options.getFilenamePrefix(),
            new SerializableFunction<TranslatorInput<String, String>, String>() {
              @Override
              public String apply(TranslatorInput<String, String> input) {
                return FileSystems.matchNewResource(input.getX(), true)
                    .resolve(input.getY(), StandardResolveOptions.RESOLVE_FILE)
                    .toString();
              }
            });

    String userOption = options.getUserOption();
    pipeline
        .apply("Read from Bigtable", read)
        .apply(
            "Transform to JSON",
            MapElements.via(
                new BigtableToJsonFn(
                    userOption.equals("FLATTEN"), options.getColumnsAliases())))
        .apply("Write to storage", TextIO.write().to(filePathPrefix).withSuffix(".json"));

    return pipeline.run();
  }

  /** Translates Bigtable {@link Row} to JSON. */
  static class BigtableToJsonFn extends SimpleFunction<Row, String> {
    private boolean flatten;
    private ValueProvider<String> columnsAliases;

    public BigtableToJsonFn(boolean flatten, ValueProvider<String> columnsAliases) {
      this.flatten = flatten;
      this.columnsAliases = columnsAliases;
    }

    @Override
    public String apply(Row row) {
      StringWriter stringWriter = new StringWriter();
      JsonWriter jsonWriter = new JsonWriter(stringWriter);
      try {
        if (flatten) {
          serializeFlattented(row, jsonWriter);
        } else {
          serializeUnFlattented(row, jsonWriter);
        }
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
      return stringWriter.toString();
    }

    private void serializeUnFlattented(Row row, JsonWriter jsonWriter) throws IOException {
      jsonWriter.beginObject();
      jsonWriter.name(row.getKey().toStringUtf8());
      jsonWriter.beginObject();
      for (Family family : row.getFamiliesList()) {
        String familyName = family.getName();
        jsonWriter.name(familyName);
        jsonWriter.beginObject();
        for (Column column : family.getColumnsList()) {
          for (Cell cell : column.getCellsList()) {
            jsonWriter
                .name(column.getQualifier().toStringUtf8())
                .value(cell.getValue().toStringUtf8());
          }
        }
        jsonWriter.endObject();
      }
      jsonWriter.endObject();
      jsonWriter.endObject();
    }

    private void serializeFlattented(Row row, JsonWriter jsonWriter) throws IOException {
      jsonWriter.beginObject();
      Map<String, String> columnsWithAliases = extractColumnsAliases();
      maybeAddToJson(jsonWriter, columnsWithAliases, "rowkey", row.getKey().toStringUtf8());
      for (Family family : row.getFamiliesList()) {
        String familyName = family.getName();
        for (Column column : family.getColumnsList()) {
          for (Cell cell : column.getCellsList()) {
            maybeAddToJson(
                jsonWriter,
                columnsWithAliases,
                familyName + ":" + column.getQualifier().toStringUtf8(),
                cell.getValue().toStringUtf8());
          }
        }
      }
      jsonWriter.endObject();
    }

    private void maybeAddToJson(
        JsonWriter jsonWriter, Map<String, String> columnsWithAliases, String key, String value)
        throws IOException {
      if (!columnsWithAliases.isEmpty() && !columnsWithAliases.containsKey(key)) {
        return;
      }
      jsonWriter.name(columnsWithAliases.getOrDefault(key, key)).value(value);
    }

    private Map<String, String> extractColumnsAliases() {
      Map<String, String> columnsWithAliases = new HashMap<>();
      if (StringUtils.isBlank(columnsAliases.get())) {
        return columnsWithAliases;
      }
      String[] columnsList = columnsAliases.get().split(",");
      for (String columnsWithAlias : columnsList) {
        String[] columnWithAlias = columnsWithAlias.split(";");
        if (columnWithAlias.length == 2) {
          columnsWithAliases.put(columnWithAlias[0], columnWithAlias[1]);
        }
      }
      return columnsWithAliases;
    }
  }
}
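To see the two serialization modes without launching a pipeline, BigtableToJsonFn can be exercised directly. The following sketch is not part of the template; the class name and row contents are made up, and it must live in the com.google.cloud.teleport.bigtable package because BigtableToJsonFn is package-private:

package com.google.cloud.teleport.bigtable;

import com.google.bigtable.v2.Cell;
import com.google.bigtable.v2.Column;
import com.google.bigtable.v2.Family;
import com.google.bigtable.v2.Row;
import com.google.cloud.teleport.bigtable.BigtableToJson.BigtableToJsonFn;
import com.google.protobuf.ByteString;
import org.apache.beam.sdk.options.ValueProvider.StaticValueProvider;

public class BigtableToJsonFnDemo {
  public static void main(String[] args) {
    // Build a Bigtable row with key "r1" and one cell cf:my_embedding.
    Row row =
        Row.newBuilder()
            .setKey(ByteString.copyFromUtf8("r1"))
            .addFamilies(
                Family.newBuilder()
                    .setName("cf")
                    .addColumns(
                        Column.newBuilder()
                            .setQualifier(ByteString.copyFromUtf8("my_embedding"))
                            .addCells(
                                Cell.newBuilder()
                                    .setValue(ByteString.copyFromUtf8("[0.4,0.9]")))))
            .build();

    // NONE: serialize the whole row as one nested JSON object.
    BigtableToJsonFn unflattened = new BigtableToJsonFn(false, StaticValueProvider.of(""));
    System.out.println(unflattened.apply(row)); // {"r1":{"cf":{"my_embedding":"[0.4,0.9]"}}}

    // FLATTEN with aliases: keep only the aliased columns, renamed.
    BigtableToJsonFn flattened =
        new BigtableToJsonFn(
            true, StaticValueProvider.of("cf:my_embedding;embedding,rowkey;id"));
    System.out.println(flattened.apply(row)); // {"id":"r1","embedding":"[0.4,0.9]"}
  }
}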
