Transcribe long audio files into text
This page demonstrates how to transcribe long audio files (longer than one minute) to text using the Speech-to-Text API and asynchronous speech recognition.
About asynchronous speech recognition
Batch speech recognition starts a long-running audio processing operation. Use asynchronous speech recognition to transcribe audio that is longer than 60 seconds. For shorter audio, synchronous speech recognition is faster and simpler. The upper limit for asynchronous speech recognition is 480 minutes (8 hours).
Batch speech recognition is only able to transcribe audio stored in Cloud Storage. The transcription output can be either provided inline in the response (for single-file batch recognition requests) or written to Cloud Storage.
The batch recognition request returns an Operation that contains information about the ongoing recognition processing of your request. You can poll the operation to know when the operation is complete and transcripts are available.
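For illustration, here is a minimal sketch of polling such an operation with the Python client library. It assumes operation is the object returned by a batch_recognize call like the ones shown later on this page; the helper name and poll interval are arbitrary choices for this sketch, not part of the API.

import time

def wait_for_batch_results(operation, poll_interval_seconds=30):
    """Polls a batch recognition long-running operation until it completes."""
    # done() asks the service whether the operation has finished yet.
    while not operation.done():
        time.sleep(poll_interval_seconds)
    # result() returns the BatchRecognizeResponse containing the transcripts.
    return operation.result()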
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the Speech-to-Text APIs.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Make sure that you have the following role or roles on the project: Cloud Speech Administrator
Check for the roles
In the Google Cloud console, go to the IAM page.
Go to IAM
- Select the project.
- In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.
- For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.
Grant the roles
In the Google Cloud console, go to the IAM page.
Go to IAM
- Select the project.
- Click Grant access.
- In the New principals field, enter your user identifier. This is typically the email address for a Google Account.
- Click Select a role, then search for the role.
- To grant additional roles, click Add another role and add each additional role.
- Click Save.
Install the Google Cloud CLI.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
To initialize the gcloud CLI, run the following command:
gcloud init
If you're using a local shell, then create local authentication credentials for your user account:
gcloud auth application-default login
You don't need to do this if you're using Cloud Shell.
If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.
Client libraries can use Application Default Credentials to easily authenticate with Google APIs and send requests to those APIs. With Application Default Credentials, you can test your application locally and deploy it without changing the underlying code. For more information, see Authenticate for using client libraries.
Also ensure you have installed the client library.
Enable access to Cloud Storage
Speech-to-Text uses a service account to access your files in Cloud Storage. By default, the service account has access to Cloud Storage files in the same project.
The service account email address is the following:
service-PROJECT_NUMBER@gcp-sa-speech.iam.gserviceaccount.com
To transcribe Cloud Storage files in another project, you can give this service account the Speech-to-Text Service Agent role in the other project:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-speech.iam.gserviceaccount.com \
    --role=roles/speech.serviceAgent
More information about project IAM policy is available at Manage access to projects, folders, and organizations.
You can also give the service account more granular access by giving it permission to a specific Cloud Storage bucket:
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-speech.iam.gserviceaccount.com \
    --role=roles/storage.admin
More information about managing access to Cloud Storage is available at Create and Manage access control lists in the Cloud Storage documentation.
Perform batch recognition with inline results
Here is an example of performing batch speech recognition on an audio file in Cloud Storage and reading the transcription results inline from the response:
Python
import os

from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")


def transcribe_batch_gcs_input_inline_output_v2(
    audio_uri: str,
) -> cloud_speech.BatchRecognizeResults:
    """Transcribes audio from a Google Cloud Storage URI using the Google Cloud Speech-to-Text API.
    The transcription results are returned inline in the response.
    Args:
        audio_uri (str): The Google Cloud Storage URI of the input audio file.
            Such as gs://[BUCKET]/[FILE]
    Returns:
        cloud_speech.BatchRecognizeResults: The response containing the transcription results.
    """
    # Instantiates a client
    client = SpeechClient()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp_3",
    )

    file_metadata = cloud_speech.BatchRecognizeFileMetadata(uri=audio_uri)

    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
        config=config,
        files=[file_metadata],
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            inline_response_config=cloud_speech.InlineOutputConfig(),
        ),
    )

    # Transcribes the audio into text
    operation = client.batch_recognize(request=request)

    print("Waiting for operation to complete...")
    response = operation.result(timeout=120)

    for result in response.results[audio_uri].transcript.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response.results[audio_uri].transcript

Perform batch recognition and write results to Cloud Storage
Here is an example of performing batch speech recognition on an audio file in Cloud Storage and reading the transcription results from the output file in Cloud Storage. Note that the file written to Cloud Storage is a BatchRecognizeResults message in JSON format:
Python
import os
import re

from google.cloud import storage
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")


def transcribe_batch_gcs_input_gcs_output_v2(
    audio_uri: str,
    gcs_output_path: str,
) -> cloud_speech.BatchRecognizeResults:
    """Transcribes audio from a Google Cloud Storage URI using the Google Cloud Speech-to-Text API.
    The transcription results are stored in another Google Cloud Storage bucket.
    Args:
        audio_uri (str): The Google Cloud Storage URI of the input audio file.
            E.g., gs://[BUCKET]/[FILE]
        gcs_output_path (str): The Google Cloud Storage bucket URI where the output transcript will be stored.
            E.g., gs://[BUCKET]
    Returns:
        cloud_speech.BatchRecognizeResults: The response containing the URI of the transcription results.
    """
    # Instantiates a client
    client = SpeechClient()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp_3",
    )

    file_metadata = cloud_speech.BatchRecognizeFileMetadata(uri=audio_uri)

    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
        config=config,
        files=[file_metadata],
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            gcs_output_config=cloud_speech.GcsOutputConfig(
                uri=gcs_output_path,
            ),
        ),
    )

    # Transcribes the audio into text
    operation = client.batch_recognize(request=request)

    print("Waiting for operation to complete...")
    response = operation.result(timeout=120)

    file_results = response.results[audio_uri]

    print(f"Operation finished. Fetching results from {file_results.uri}...")
    output_bucket, output_object = re.match(
        r"gs://([^/]+)/(.*)", file_results.uri
    ).group(1, 2)

    # Instantiates a Cloud Storage client
    storage_client = storage.Client()

    # Fetch results from Cloud Storage
    bucket = storage_client.bucket(output_bucket)
    blob = bucket.blob(output_object)
    results_bytes = blob.download_as_bytes()
    batch_recognize_results = cloud_speech.BatchRecognizeResults.from_json(
        results_bytes, ignore_unknown_fields=True
    )

    for result in batch_recognize_results.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return batch_recognize_results

Perform batch recognition on multiple files
Here is an example of performing batch speech recognition on multiple audio files in Cloud Storage and reading the transcription results from the output files in Cloud Storage:
Python
import os
import re
from typing import List

from google.cloud import storage
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")


def transcribe_batch_multiple_files_v2(
    audio_uris: List[str],
    gcs_output_path: str,
) -> cloud_speech.BatchRecognizeResponse:
    """Transcribes audio from multiple Google Cloud Storage URIs using the Google Cloud Speech-to-Text API.
    The transcription results are stored in another Google Cloud Storage bucket.
    Args:
        audio_uris (List[str]): The list of Google Cloud Storage URIs of the input audio files.
            Such as ["gs://[BUCKET]/[FILE]", "gs://[BUCKET]/[FILE]"]
        gcs_output_path (str): The Google Cloud Storage bucket URI where the output transcript is stored.
            Such as gs://[BUCKET]
    Returns:
        cloud_speech.BatchRecognizeResponse: The response containing the URIs of the transcription results.
    """
    # Instantiates a client
    client = SpeechClient()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp_3",
    )

    files = [cloud_speech.BatchRecognizeFileMetadata(uri=uri) for uri in audio_uris]

    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
        config=config,
        files=files,
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            gcs_output_config=cloud_speech.GcsOutputConfig(
                uri=gcs_output_path,
            ),
        ),
    )

    # Transcribes the audio into text
    operation = client.batch_recognize(request=request)

    print("Waiting for operation to complete...")
    response = operation.result(timeout=120)

    print("Operation finished. Fetching results from:")
    for uri in audio_uris:
        file_results = response.results[uri]
        print(f"{file_results.uri}...")
        output_bucket, output_object = re.match(
            r"gs://([^/]+)/(.*)", file_results.uri
        ).group(1, 2)

        # Instantiates a Cloud Storage client
        storage_client = storage.Client()

        # Fetch results from Cloud Storage
        bucket = storage_client.bucket(output_bucket)
        blob = bucket.blob(output_object)
        results_bytes = blob.download_as_bytes()
        batch_recognize_results = cloud_speech.BatchRecognizeResults.from_json(
            results_bytes, ignore_unknown_fields=True
        )

        for result in batch_recognize_results.results:
            print(f"  Transcript: {result.alternatives[0].transcript}")

    return response

Enable dynamic batching on batch recognition
Dynamic batching lowers the cost of transcription in exchange for higher latency. This feature is only available for batch recognition.
Here is an example of performing batch recognition on an audio file in Cloud Storage with dynamic batching enabled:
Python
import os

from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech

PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")


def transcribe_batch_dynamic_batching_v2(
    audio_uri: str,
) -> cloud_speech.BatchRecognizeResults:
    """Transcribes audio from a Google Cloud Storage URI using dynamic batching.
    Args:
        audio_uri (str): The Cloud Storage URI of the input audio.
            E.g., gs://[BUCKET]/[FILE]
    Returns:
        cloud_speech.BatchRecognizeResults: The response containing the transcription results.
    """
    # Instantiates a client
    client = SpeechClient()

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp_3",
    )

    file_metadata = cloud_speech.BatchRecognizeFileMetadata(uri=audio_uri)

    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
        config=config,
        files=[file_metadata],
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            inline_response_config=cloud_speech.InlineOutputConfig(),
        ),
        processing_strategy=cloud_speech.BatchRecognizeRequest.ProcessingStrategy.DYNAMIC_BATCHING,
    )

    # Transcribes the audio into text
    operation = client.batch_recognize(request=request)

    print("Waiting for operation to complete...")
    response = operation.result(timeout=120)

    for result in response.results[audio_uri].transcript.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response.results[audio_uri].transcript

Override recognition features per file
By default, batch recognition uses the same recognition configuration for each file in the batch recognition request. If different files require different configurations or features, you can override the configuration per file using the config field in the BatchRecognizeFileMetadata message. See the recognizers documentation for an example of overriding recognition features.
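For illustration, here is a minimal sketch of a per-file override, not the example from the recognizers documentation. The bucket and object names (gs://BUCKET/english.wav, gs://BUCKET/spanish.wav) and PROJECT_ID are placeholders; see the recognizers documentation for the exact override and merge semantics.

from google.cloud.speech_v2.types import cloud_speech

# Request-level configuration used for files that don't override it.
request_config = cloud_speech.RecognitionConfig(
    auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
    language_codes=["en-US"],
    model="chirp_3",
)

# This file relies on the request-level configuration.
english_file = cloud_speech.BatchRecognizeFileMetadata(uri="gs://BUCKET/english.wav")

# This file carries its own config in BatchRecognizeFileMetadata to override
# recognition settings, here switching the language code to Spanish.
spanish_file = cloud_speech.BatchRecognizeFileMetadata(
    uri="gs://BUCKET/spanish.wav",
    config=cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["es-ES"],
        model="chirp_3",
    ),
)

request = cloud_speech.BatchRecognizeRequest(
    recognizer="projects/PROJECT_ID/locations/global/recognizers/_",
    config=request_config,
    files=[english_file, spanish_file],
    recognition_output_config=cloud_speech.RecognitionOutputConfig(
        inline_response_config=cloud_speech.InlineOutputConfig(),
    ),
)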
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
Optional: Revoke the authentication credentials that you created, and delete the local credential file.
gcloud auth application-default revoke
Optional: Revoke credentials from the gcloud CLI.
gcloud auth revoke