Transcribe audio files with the ML.TRANSCRIBE function
This document describes how to use the ML.TRANSCRIBE function with a remote model to transcribe audio files from an object table.
Supported locations
You must create the remote model used in this procedure in one of the following locations:

- asia-northeast1
- asia-south1
- asia-southeast1
- australia-southeast1
- eu
- europe-west1
- europe-west2
- europe-west3
- europe-west4
- northamerica-northeast1
- us
- us-central1
- us-east1
- us-east4
- us-west1

You must run the ML.TRANSCRIBE function in the same region as the remote model.
Required roles
To create a remote model and transcribe audio files, you need the following Identity and Access Management (IAM) roles at the project level:

- Create a speech recognizer: Cloud Speech Editor (roles/speech.editor)
- Create and use BigQuery datasets, tables, and models: BigQuery Data Editor (roles/bigquery.dataEditor)
- Create, delegate, and use BigQuery connections: BigQuery Connections Admin (roles/bigquery.connectionsAdmin)

  If you don't have a default connection configured, you can create and set one as part of running the CREATE MODEL statement. To do so, you must have BigQuery Admin (roles/bigquery.admin) on your project. For more information, see Configure the default connection.
- Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin)
- Create BigQuery jobs: BigQuery Job User (roles/bigquery.jobUser)
These predefined roles contain the permissions required to perform the tasks in this document. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
- Create a dataset: bigquery.datasets.create
- Create, delegate, and use a connection: bigquery.connections.*
- Set service account permissions: resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
- Create a model and run inference: bigquery.jobs.create, bigquery.models.create, bigquery.models.getData, bigquery.models.updateData, bigquery.models.updateMetadata
- Create an object table: bigquery.tables.create and bigquery.tables.update
- Create a speech recognizer: speech.recognizers.create, speech.recognizers.get, speech.recognizers.recognize, speech.recognizers.update
You might also be able to get these permissions with custom roles or other predefined roles.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the BigQuery, BigQuery Connection API, and Speech-to-Text APIs.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Create a recognizer
Speech-to-Text supports resources called recognizers. Recognizers represent stored and reusable recognition configurations. You can create a recognizer to logically group together transcriptions or traffic for your application.
Creating a speech recognizer is optional. If you choose to create a speech recognizer, note the project ID, location, and recognizer ID of the recognizer for use in the CREATE MODEL statement, as described in SPEECH_RECOGNIZER. If you choose not to create a speech recognizer, you must specify a value for the recognition_config argument of the ML.TRANSCRIBE function.

You can only use the chirp transcription model in the speech recognizer or recognition_config value that you provide.
Create a dataset
Create a BigQuery dataset to contain your resources:
Console
In the Google Cloud console, go to the BigQuery page.

In the left pane, click Explorer.

If you don't see the left pane, click Expand left pane to open the pane.

In the Explorer pane, click your project name.

Click View actions > Create dataset.

On the Create dataset page, do the following:

For Dataset ID, type a name for the dataset.

For Location type, select Region or Multi-region.

- If you selected Region, then select a location from the Region list.
- If you selected Multi-region, then select US or Europe from the Multi-region list.

Click Create dataset.
bq

In a command-line environment, create the dataset with the bq mk command:

bq --location=LOCATION mk --dataset PROJECT_ID:DATASET_ID

Replace the following:

- LOCATION: the location for the dataset
- PROJECT_ID: your Google Cloud project ID
- DATASET_ID: an ID for the dataset
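As an alternative sketch, you can also create the dataset with a SQL DDL statement; the project and dataset names below are hypothetical placeholders:

-- Creates a dataset named mydataset in the US multi-region.
CREATE SCHEMA `myproject.mydataset`
OPTIONS (
  location = 'US'
);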
Create a connection
Create a Cloud resource connection and get the connection's service account. Create the connection in the same location as the dataset you created in the previous step.

You can skip this step if you either have a default connection configured, or you have the BigQuery Admin role.

Select one of the following options:

Console
Go to the BigQuery page.

In the left pane, click Explorer.

If you don't see the left pane, click Expand left pane to open the pane.

In the Explorer pane, expand your project name, and then click Connections.

On the Connections page, click Create connection.

For Connection type, choose Vertex AI remote models, remote functions, BigLake and Spanner (Cloud Resource).

In the Connection ID field, enter a name for your connection.

For Location type, select a location for your connection. The connection should be colocated with your other resources such as datasets.

Click Create connection.

Click Go to connection.

In the Connection info pane, copy the service account ID for use in a later step.
bq
In a command-line environment, create a connection:
bq mk --connection --location=REGION --project_id=PROJECT_ID \
    --connection_type=CLOUD_RESOURCE CONNECTION_ID

The --project_id parameter overrides the default project.

Replace the following:

- REGION: your connection region
- PROJECT_ID: your Google Cloud project ID
- CONNECTION_ID: an ID for your connection
When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.

Troubleshooting: If you get the following connection error, update the Google Cloud SDK:
Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
Retrieve and copy the service account ID for use in a later step:

bq show --connection PROJECT_ID.REGION.CONNECTION_ID

The output is similar to the following:

name                          properties
1234.REGION.CONNECTION_ID     {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
import google.api_core.exceptions
from google.cloud import bigquery_connection_v1

client = bigquery_connection_v1.ConnectionServiceClient()


def create_connection(
    project_id: str,
    location: str,
    connection_id: str,
):
    """Creates a BigQuery connection to a Cloud Resource.

    Cloud Resource connection creates a service account which can then be
    granted access to other Google Cloud resources for federated queries.

    Args:
        project_id: The Google Cloud project ID.
        location: The location of the connection (for example, "us-central1").
        connection_id: The ID of the connection to create.
    """
    parent = client.common_location_path(project_id, location)
    connection = bigquery_connection_v1.Connection(
        friendly_name="Example Connection",
        description="A sample connection for a Cloud Resource.",
        cloud_resource=bigquery_connection_v1.CloudResourceProperties(),
    )
    try:
        created_connection = client.create_connection(
            parent=parent, connection_id=connection_id, connection=connection
        )
        print(f"Successfully created connection: {created_connection.name}")
        print(f"Friendly name: {created_connection.friendly_name}")
        print(f"Service Account: {created_connection.cloud_resource.service_account_id}")
    except google.api_core.exceptions.AlreadyExists:
        print(f"Connection with ID '{connection_id}' already exists.")
        print("Please use a different connection ID.")
    except Exception as e:
        print(f"An unexpected error occurred while creating the connection: {e}")

Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
const {ConnectionServiceClient} = require('@google-cloud/bigquery-connection').v1;
const {status} = require('@grpc/grpc-js');

const client = new ConnectionServiceClient();

/**
 * Creates a new BigQuery connection to a Cloud Resource.
 *
 * A Cloud Resource connection creates a service account that can be granted access
 * to other Google Cloud resources.
 *
 * @param {string} projectId The Google Cloud project ID. For example, 'example-project-id'.
 * @param {string} location The location of the project to create the connection in. For example, 'us-central1'.
 * @param {string} connectionId The ID of the connection to create. For example, 'example-connection-id'.
 */
async function createConnection(projectId, location, connectionId) {
  const parent = client.locationPath(projectId, location);

  const connection = {
    friendlyName: 'Example Connection',
    description: 'A sample connection for a Cloud Resource',
    // The service account for this cloudResource will be created by the API.
    // Its ID will be available in the response.
    cloudResource: {},
  };

  const request = {
    parent,
    connectionId,
    connection,
  };

  try {
    const [response] = await client.createConnection(request);
    console.log(`Successfully created connection: ${response.name}`);
    console.log(`Friendly name: ${response.friendlyName}`);
    console.log(`Service Account: ${response.cloudResource.serviceAccountId}`);
  } catch (err) {
    if (err.code === status.ALREADY_EXISTS) {
      console.log(`Connection '${connectionId}' already exists.`);
    } else {
      console.error(`Error creating connection: ${err.message}`);
    }
  }
}

Terraform
Use the google_bigquery_connection resource.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following example creates a Cloud resource connection named my_cloud_resource_connection in the US region:
# This queries the provider for project information.
data "google_project" "default" {}

# This creates a cloud resource connection in the US region named my_cloud_resource_connection.
# Note: The cloud resource nested object has only one output field - serviceAccountId.
resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  project       = data.google_project.default.project_id
  location      = "US"
  cloud_resource {}
}

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension—for example main.tf. In this tutorial, the file is referred to as main.tf.

mkdir DIRECTORY && cd DIRECTORY && touch main.tf
If you are following a tutorial, you can copy the sample code in each section or step.
Copy the sample code into the newly created main.tf.

Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:

terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
terraform plan
Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:

terraform apply
Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Grant access to the service account
Select one of the following options:
Console
Go to the IAM & Admin page.

Click Grant Access.

The Add principals dialog opens.

In the New principals field, enter the service account ID that you copied earlier.

Click the Select a role field and then type Cloud Speech Client in Filter.

Click Add another role.

In the Select a role field, select Cloud Storage, and then select Storage Object Viewer.

Click Save.
gcloud
Use the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/speech.client' --condition=None

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/storage.objectViewer' --condition=None

Replace the following:

- PROJECT_NUMBER: your project number.
- MEMBER: the service account ID that you copied earlier.
Failure to grant the permission results in a Permission denied error.

Note: If you create the recognizer in a different project than the Cloud Storage bucket used by the object table, grant the service account Identity and Access Management (IAM) roles as follows:

- Grant the service account the Cloud Speech Client role in the project that contains the recognizer.
- Grant the service account the Storage Object Viewer role in the project that contains the Cloud Storage bucket.
- Grant the Speech-to-Text service agent (service-my_project_number@gcp-sa-speech.iam.gserviceaccount.com) the Storage Object Viewer role in the project that contains the Cloud Storage bucket.
Create an object table
Create an object table over a set of audio files in Cloud Storage. The audio files in the object table must be of a supported type.

The Cloud Storage bucket used by the object table should be in the same project where you plan to create the model and call the ML.TRANSCRIBE function. If you want to call the ML.TRANSCRIBE function in a different project than the one that contains the Cloud Storage bucket used by the object table, you must grant the Storage Admin role at the bucket level to the service-A@gcp-sa-aiplatform.iam.gserviceaccount.com service account.
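The following sketch shows what such an object table definition can look like; the table name, connection, and gs://mybucket/audio/ path are hypothetical placeholders to adapt to your environment:

-- Creates an object table over the audio files in a Cloud Storage bucket.
-- The connection's service account needs read access to the bucket.
CREATE EXTERNAL TABLE `myproject.mydataset.audio`
WITH CONNECTION `myproject.us.myconnection`
OPTIONS (
  object_metadata = 'SIMPLE',
  uris = ['gs://mybucket/audio/*']
);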
Create a model
Create a remote model with a REMOTE_SERVICE_TYPE of CLOUD_AI_SPEECH_TO_TEXT_V2:

CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
OPTIONS (
  REMOTE_SERVICE_TYPE = 'CLOUD_AI_SPEECH_TO_TEXT_V2',
  SPEECH_RECOGNIZER = 'projects/PROJECT_NUMBER/locations/LOCATION/recognizers/RECOGNIZER_ID'
);
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the connection ID—for example, myconnection. When you view the connection details in the Google Cloud console, the connection ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID—for example, projects/myproject/locations/connection_location/connections/myconnection.
- PROJECT_NUMBER: the project number of the project that contains the speech recognizer. You can find this value on the Project info card in the Dashboard page of the Google Cloud console.
- LOCATION: the location used by the speech recognizer. You can find this value in the Location field on the List recognizers page of the Google Cloud console.
- RECOGNIZER_ID: the speech recognizer ID. You can find this value in the ID field on the List recognizers page of the Google Cloud console.

The SPEECH_RECOGNIZER option isn't required. If you don't specify a value for it, a default recognizer is used. In that case, you must specify a value for the recognition_config parameter of the ML.TRANSCRIBE function in order to provide a configuration for the default recognizer. You can only use the chirp transcription model in the recognition_config value that you provide.
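For example, with hypothetical values substituted for the placeholders (a connection in the us region and a recognizer in project number 1234567890 in us-central1), the statement might look like the following sketch:

CREATE OR REPLACE MODEL `myproject.mydataset.transcribe_model`
REMOTE WITH CONNECTION `myproject.us.myconnection`
OPTIONS (
  REMOTE_SERVICE_TYPE = 'CLOUD_AI_SPEECH_TO_TEXT_V2',
  SPEECH_RECOGNIZER = 'projects/1234567890/locations/us-central1/recognizers/my-recognizer'
);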
Transcribe audio files
Transcribe audio files with the ML.TRANSCRIBE function:

SELECT *
FROM ML.TRANSCRIBE(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  TABLE `PROJECT_ID.DATASET_ID.OBJECT_TABLE_NAME`,
  RECOGNITION_CONFIG => (JSON 'recognition_config')
);

Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- OBJECT_TABLE_NAME: the name of the object table that contains the URIs of the audio files to process.
- recognition_config: a RecognitionConfig resource in JSON format.

If a recognizer has been specified for the remote model by using the SPEECH_RECOGNIZER option, you can't specify a recognition_config value.

If no recognizer has been specified for the remote model by using the SPEECH_RECOGNIZER option, you must specify a recognition_config value. This value is used to provide a configuration for the default recognizer. You can only use the chirp transcription model in the recognition_config value that you provide.
Examples
Example 1
The following example transcribes the audio files represented by the audio table without overriding the recognizer's default configuration:

SELECT *
FROM ML.TRANSCRIBE(
  MODEL `myproject.mydataset.transcribe_model`,
  TABLE `myproject.mydataset.audio`
);
Example 2

The following example transcribes the audio files represented by the audio table and provides a configuration for the default recognizer:

SELECT *
FROM ML.TRANSCRIBE(
  MODEL `myproject.mydataset.transcribe_model`,
  TABLE `myproject.mydataset.audio`,
  recognition_config => (JSON '{"language_codes": ["en-US"], "model": "chirp", "auto_decoding_config": {}}')
);
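If you only want the recognized text for files that transcribed successfully, you can filter the function's output. The following sketch assumes the transcripts and ml_transcribe_status output columns described in the ML.TRANSCRIBE reference, where an empty status string indicates success:

-- Keep only the object URI and recognized text for rows that succeeded.
SELECT uri, transcripts
FROM ML.TRANSCRIBE(
  MODEL `myproject.mydataset.transcribe_model`,
  TABLE `myproject.mydataset.audio`
)
WHERE ml_transcribe_status = '';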
What's next
- For more information about model inference in BigQuery ML, see Model inference overview.
- For more information about using Cloud AI APIs to perform AI tasks, see AI application overview.
- For more information about supported SQL statements and functions for generative AI models, see End-to-end user journeys for generative AI models.