Audio understanding (speech only)

You can add audio to Gemini requests to perform tasks that involve understanding the contents of the included audio. This page shows you how to add audio to your requests to Gemini in Vertex AI by using the Google Cloud console and the Vertex AI API.

Supported models

The following table lists the models that support audio understanding:

Model | Media details | MIME types
Gemini 3 Flash (Preview)
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 3 Pro (Preview)
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Pro
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Flash (Preview)
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Flash-Lite (Preview)
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Flash
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Flash-Lite
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Flash with Gemini Live API native audio
  • Maximum conversation length: 10 minutes by default, which can be extended.
  • Required audio input format: Raw 16-bit PCM audio at 16kHz, little-endian
  • Required audio output format: Raw 16-bit PCM audio at 24kHz, little-endian
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.5 Flash with Live API native audio (Preview)
  • Maximum conversation length: 10 minutes by default, which can be extended.
  • Required audio input format: Raw 16-bit PCM audio at 16kHz, little-endian
  • Required audio output format: Raw 16-bit PCM audio at 24kHz, little-endian
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.0 Flash with Live API (Preview)
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • Maximum tokens per minute (TPM):
    • US/Asia: 1.7 M
    • EU: 0.4 M
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.0 Flash
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • Maximum tokens per minute (TPM):
    • US/Asia: 3.5 M
    • EU: 3.5 M
  • audio/x-aac
  • audio/flac
  • audio/mp3
  • audio/m4a
  • audio/mpeg
  • audio/mpga
  • audio/mp4
  • audio/ogg
  • audio/pcm
  • audio/wav
  • audio/webm
Gemini 2.0 Flash-Lite
  • Maximum audio length per prompt: Approximately 8.4 hours, or up to 1 million tokens
  • Maximum number of audio files per prompt: 1
  • Speech understanding for: Audio summarization, transcription, and translation
  • Maximum tokens per minute (TPM):
    • US/Asia: 3.5 M
    • EU: 3.5 M

    For a list of languages supported by Gemini models, see the model information in Google models. To learn more about how to design multimodal prompts, see Design multimodal prompts. If you're looking for a way to use Gemini directly from your mobile and web apps, see the Firebase AI Logic client SDKs for Swift, Android, Web, Flutter, and Unity apps.

    Add audio to a request

    You can add audio files in your requests to Gemini.

    Single audio

    The following shows you how to use an audio file to summarize a podcast:

    Console

    To send a multimodal prompt by using the Google Cloud console, do the following:

    1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

      Go to Vertex AI Studio

    2. Click Create prompt.

    3. Optional: Configure the model and parameters:

      • Model: Select a model.
    4. Optional: To configure advanced parameters, click Advanced and configure as follows:


      • Top-K: Use the slider or textbox to enter a value for top-K.

        Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

        For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling.

        Specify a lower value for less random responses and a higher value for more random responses.

      • Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from the most probable to the least until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to 0.
      • Max responses: Use the slider or textbox to enter a value for the number of responses to generate.
      • Streaming responses: Enable to print responses as they're generated.
      • Safety filter threshold: Select the threshold of how likely you are to see responses that could be harmful.
      • Enable Grounding: Grounding isn't supported for multimodal prompts.
      • Region: Select the region that you want to use.
      • Temperature: Use the slider or textbox to enter a value for temperature.

        The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

        If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature. If the model enters infinite generation, increasing the temperature to at least 0.1 may lead to improved results.

        1.0 is the recommended starting value for temperature.

      • Output token limit: Use the slider or textbox to enter a value for the max output limit, which is the maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.

        Specify a lower value for shorter responses and a higher value for potentially longer responses.

      • Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that includes spaces. If the model encounters a stop sequence, the response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.

    5. Click Insert Media, and select a source for your file.

      Upload

      Select the file that you want to upload and click Open.

      By URL

      Enter the URL of the file that you want to use and click Insert.

      Cloud Storage

      Select the bucket and then the file from the bucket that you want to import and click Select.

      Google Drive

      1. Choose an account and give consent to Vertex AI Studio to access your account the first time you select this option. You can upload multiple files that have a total size of up to 10 MB. A single file can't exceed 7 MB.
      2. Click the file that you want to add.
      3. Click Select.

        The file thumbnail displays in the Prompt pane. The total number of tokens also displays. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.

    6. Enter your text prompt in the Prompt pane.

    7. Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.

      Note: Media tokens aren't supported.
    8. Click Submit.

    9. Optional: To save your prompt to My prompts, click Save.

    10. Optional: To get the Python code or a curl command for your prompt, click Build with code > Get code.

    Python

    Install

    pip install --upgrade google-genai

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    from google import genai
    from google.genai.types import HttpOptions, Part

    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    prompt = """Provide a concise summary of the main points in the audio file."""
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            prompt,
            Part.from_uri(
                file_uri="gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                mime_type="audio/mpeg",
            ),
        ],
    )
    print(response.text)
    # Example response:
    # Here's a summary of the main points from the audio file:
    # The Made by Google podcast discusses the Pixel feature drops with product managers Aisha Sheriff and De Carlos Love. The key idea is that devices should improve over time, with a connected experience across phones, watches, earbuds, and tablets.

    Go

    Learn how to install or update the Go SDK.

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    import (
        "context"
        "fmt"
        "io"

        genai "google.golang.org/genai"
    )

    // generateWithAudio shows how to generate text using an audio input.
    func generateWithAudio(w io.Writer) error {
        ctx := context.Background()

        client, err := genai.NewClient(ctx, &genai.ClientConfig{
            HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
        })
        if err != nil {
            return fmt.Errorf("failed to create genai client: %w", err)
        }

        modelName := "gemini-2.5-flash"
        contents := []*genai.Content{
            {
                Parts: []*genai.Part{
                    {Text: `Provide the summary of the audio file.
    Summarize the main points of the audio concisely.
    Create a chapter breakdown with timestamps for key sections or topics discussed.`},
                    {FileData: &genai.FileData{
                        FileURI:  "gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                        MIMEType: "audio/mpeg",
                    }},
                },
                Role: "user",
            },
        }

        resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
        if err != nil {
            return fmt.Errorf("failed to generate content: %w", err)
        }

        respText := resp.Text()
        fmt.Fprintln(w, respText)

        // Example response:
        // Here is a summary and chapter breakdown of the audio file:
        //
        // **Summary:**
        //
        // The audio file is a "Made by Google" podcast episode discussing the Pixel Feature Drops, ...
        //
        // **Chapter Breakdown:**
        //
        // * **0:00 - 0:54:** Introduction to the podcast and guests, Aisha Sharif and DeCarlos Love.
        // ...

        return nil
    }

    Node.js

    Install

    npm install @google/genai

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    const {GoogleGenAI} = require('@google/genai');

    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

    async function generateText(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });

      const prompt =
        'Provide a concise summary of the main points in the audio file.';

      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents: [
          {
            fileData: {
              fileUri: 'gs://cloud-samples-data/generative-ai/audio/pixel.mp3',
              mimeType: 'audio/mpeg',
            },
          },
          {text: prompt},
        ],
      });

      console.log(response.text);
      // Example response:
      // Here's a summary of the main points from the audio file:
      // The Made by Google podcast discusses the Pixel feature drops with product managers
      // Aisha Sheriff and De Carlos Love. The key idea is that devices should improve over time,
      // with a connected experience across phones, watches, earbuds, and tablets.

      return response.text;
    }

    Java

    Learn how to install or update the Java SDK.

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    import com.google.genai.Client;
    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    import com.google.genai.types.Part;

    public class TextGenerationWithGcsAudio {

      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }

      // Generates text with audio input
      public static String generateContent(String modelId) {
        // Client Initialization. Once created, it can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {

          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  Content.fromParts(
                      Part.fromUri(
                          "gs://cloud-samples-data/generative-ai/audio/pixel.mp3", "audio/mpeg"),
                      Part.fromText(
                          "Provide a concise summary of the main points in the audio file.")),
                  null);

          System.out.print(response.text());
          // Example response:
          // The audio features Google product managers Aisha Sharif and D. Carlos Love discussing
          // Pixel Feature Drops, emphasizing their role in continually enhancing devices across
          // the entire Pixel ecosystem...
          return response.text();
        }
      }
    }

    REST

    After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.

    Before using any of the request data, make the following replacements:

    • PROJECT_ID: Your project ID.
    • FILE_URI: The URI or URL of the file to include in the prompt. Acceptable values include the following:
      • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-2.0-flash and gemini-2.0-flash-lite, the size limit is 2 GB.
      • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
      • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

      When specifying a fileUri, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileUri isn't supported.

      If you don't have an audio file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/generative-ai/audio/pixel.mp3 with a MIME type of audio/mp3. To listen to this audio, open the sample MP3 file.

    • MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:


      • application/pdf
      • audio/mpeg
      • audio/mp3
      • audio/wav
      • image/png
      • image/jpeg
      • image/webp
      • text/plain
      • video/mov
      • video/mpeg
      • video/mp4
      • video/mpg
      • video/avi
      • video/wmv
      • video/mpegps
      • video/flv
    • TEXT
      The text instructions to include in the prompt. For example, Please provide a summary for the audio. Provide chapter titles, be concise and short, no need to provide chapter summaries. Do not make up any information that is not part of the audio and do not be verbose.

    To send your request, choose one of these options:

    curl

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

    cat > request.json << 'EOF'
    {
      "contents": {
        "role": "USER",
        "parts": [
          {
            "fileData": {
              "fileUri": "FILE_URI",
              "mimeType": "MIME_TYPE"
            }
          },
          {
            "text": "TEXT"
          }
        ]
      }
    }
    EOF

    Then execute the following command to send your REST request:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent"

    PowerShell

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

    @'
    {
      "contents": {
        "role": "USER",
        "parts": [
          {
            "fileData": {
              "fileUri": "FILE_URI",
              "mimeType": "MIME_TYPE"
            }
          },
          {
            "text": "TEXT"
          }
        ]
      }
    }
    '@ | Out-File -FilePath request.json -Encoding utf8

    Then execute the following command to send your REST request:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent" | Select-Object -Expand Content

    You should receive a JSON response similar to the following.

    Response

    {  "candidates": [    {      "content": {        "role": "model",        "parts": [          {            "text": "## Made By Google Podcast - Pixel Feature Drops \n\n**Chapter 1: Transformative Pixel Features**\n\n**Chapter 2: Importance of Feature Drops**\n\n**Chapter 3: January's Feature Drop Highlights**\n\n**Chapter 4: March's Feature Drop Highlights for Pixel Watch**\n\n**Chapter 5: March's Feature Drop Highlights for Pixel Phones**\n\n**Chapter 6: Feature Drop Expansion to Other Devices**\n\n**Chapter 7: Deciding Which Features to Include in Feature Drops**\n\n**Chapter 8: Importance of User Feedback**\n\n**Chapter 9: When to Expect March's Feature Drop**\n\n**Chapter 10: Stand-Out Features from Past Feature Drops** \n"          }        ]      },      "finishReason": "STOP",      "safetyRatings": [        {          "category": "HARM_CATEGORY_HATE_SPEECH",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.05470151,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.07864238        },        {          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.027742893,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.050051305        },        {          "category": "HARM_CATEGORY_HARASSMENT",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.08678674,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.06108711        },        {          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.11899801,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.14706452        }      ]    }  ],  "usageMetadata": {    "promptTokenCount": 18883,    "candidatesTokenCount": 150,    "totalTokenCount": 19033  }}
    Note the following in the URL for this sample:
    • Use the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method (see the sketch after this list).
    • The multimodal model ID is located at the end of the URL before the method (for example, gemini-2.0-flash). This sample might support other models as well.
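
    For illustration, the following minimal sketch uses the Python Gen AI SDK from the samples above to stream a summary of the same sample audio file as it's generated. It assumes the same environment variables are set; the model name is only an example.

    from google import genai
    from google.genai.types import HttpOptions, Part

    client = genai.Client(http_options=HttpOptions(api_version="v1"))

    # Iterate over chunks as they arrive instead of waiting for the full response.
    for chunk in client.models.generate_content_stream(
        model="gemini-2.5-flash",
        contents=[
            "Provide a concise summary of the main points in the audio file.",
            Part.from_uri(
                file_uri="gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                mime_type="audio/mpeg",
            ),
        ],
    ):
        print(chunk.text, end="")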

    Audio transcription

    The following shows you how to use an audio file to transcribe an interview. To enable timestamp understanding for audio-only files, enable the audioTimestamp parameter in GenerationConfig:

    Console

    To send a multimodal prompt by using the Google Cloud console, do the following:

    1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

      Go to Vertex AI Studio

    2. Click Create prompt.

    3. Optional: Configure the model and parameters:

      • Model: Select a model.
    4. Optional: To configure advanced parameters, click Advanced and configure as follows:


      • Top-K: Use the slider or textbox to enter a value for top-K.

        Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

        For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling.

        Specify a lower value for less random responses and a higher value for more random responses.

      • Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from the most probable to the least until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to 0.
      • Max responses: Use the slider or textbox to enter a value for the number of responses to generate.
      • Streaming responses: Enable to print responses as they're generated.
      • Safety filter threshold: Select the threshold of how likely you are to see responses that could be harmful.
      • Enable Grounding: Grounding isn't supported for multimodal prompts.
      • Region: Select the region that you want to use.
      • Temperature: Use the slider or textbox to enter a value for temperature.

        The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

        If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature. If the model enters infinite generation, increasing the temperature to at least 0.1 may lead to improved results.

        1.0 is the recommended starting value for temperature.

      • Output token limit: Use the slider or textbox to enter a value for the max output limit, which is the maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.

        Specify a lower value for shorter responses and a higher value for potentially longer responses.

      • Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that includes spaces. If the model encounters a stop sequence, the response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.

    5. Click Insert Media, and select a source for your file.

      Upload

      Select the file that you want to upload and click Open.

      By URL

      Enter the URL of the file that you want to use and click Insert.

      Cloud Storage

      Select the bucket and then the file from the bucket that you want to import and click Select.

      Google Drive

      1. Choose an account and give consent to Vertex AI Studio to access your account the first time you select this option. You can upload multiple files that have a total size of up to 10 MB. A single file can't exceed 7 MB.
      2. Click the file that you want to add.
      3. Click Select.

        The file thumbnail displays in the Prompt pane. The total number of tokens also displays. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.

    6. Enter your text prompt in the Prompt pane.

    7. Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.

      Note: Media tokens aren't supported.
    8. Click Submit.

    9. Optional: To save your prompt to My prompts, click Save.

    10. Optional: To get the Python code or a curl command for your prompt, click Build with code > Get code.

    Python

    Install

    pip install --upgrade google-genai

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    from google import genai
    from google.genai.types import GenerateContentConfig, HttpOptions, Part

    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    prompt = """Transcribe the interview, in the format of timecode, speaker, caption.
    Use speaker A, speaker B, etc. to identify speakers."""
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            prompt,
            Part.from_uri(
                file_uri="gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                mime_type="audio/mpeg",
            ),
        ],
        # Required to enable timestamp understanding for audio-only files
        config=GenerateContentConfig(audio_timestamp=True),
    )
    print(response.text)
    # Example response:
    # [00:00:00] **Speaker A:** your devices are getting better over time. And so ...
    # [00:00:14] **Speaker B:** Welcome to the Made by Google podcast where we meet ...
    # [00:00:20] **Speaker B:** Here's your host, Rasheed Finch.
    # [00:00:23] **Speaker C:** Today we're talking to Aisha Sharif and DeCarlos Love. ...
    # ...

    Go

    Learn how to install or update the Go SDK.

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    import (
        "context"
        "fmt"
        "io"

        genai "google.golang.org/genai"
    )

    // generateAudioTranscript shows how to generate an audio transcript.
    func generateAudioTranscript(w io.Writer) error {
        ctx := context.Background()

        client, err := genai.NewClient(ctx, &genai.ClientConfig{
            HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
        })
        if err != nil {
            return fmt.Errorf("failed to create genai client: %w", err)
        }

        modelName := "gemini-2.5-flash"
        contents := []*genai.Content{
            {
                Parts: []*genai.Part{
                    {Text: `Transcribe the interview, in the format of timecode, speaker, caption.
    Use speaker A, speaker B, etc. to identify speakers.`},
                    {FileData: &genai.FileData{
                        FileURI:  "gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                        MIMEType: "audio/mpeg",
                    }},
                },
                Role: "user",
            },
        }

        resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
        if err != nil {
            return fmt.Errorf("failed to generate content: %w", err)
        }

        respText := resp.Text()
        fmt.Fprintln(w, respText)

        // Example response:
        // 00:00:00, A: your devices are getting better over time.
        // 00:01:13, A: And so we think about it across the entire portfolio from phones to watch, ...
        // ...

        return nil
    }

    Node.js

    Install

    npm install @google/genai

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    const {GoogleGenAI} = require('@google/genai');

    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

    async function generateText(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });

      const prompt = `Transcribe the interview, in the format of timecode, speaker, caption.
    Use speaker A, speaker B, etc. to identify speakers.`;

      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents: [
          {text: prompt},
          {
            fileData: {
              fileUri: 'gs://cloud-samples-data/generative-ai/audio/pixel.mp3',
              mimeType: 'audio/mpeg',
            },
          },
        ],
        // Required to enable timestamp understanding for audio-only files
        config: {
          audioTimestamp: true,
        },
      });

      console.log(response.text);
      // Example response:
      // [00:00:00] **Speaker A:** your devices are getting better over time. And so ...
      // [00:00:14] **Speaker B:** Welcome to the Made by Google podcast where we meet ...
      // [00:00:20] **Speaker B:** Here's your host, Rasheed Finch.
      // [00:00:23] **Speaker C:** Today we're talking to Aisha Sharif and DeCarlos Love. ...
      // ...

      return response.text;
    }

    Java

    Learn how to install or update the Java SDK.

    To learn more, see the SDK reference documentation.

    Set environment variables to use the Gen AI SDK with Vertex AI:

    # Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    import com.google.genai.Client;
    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    import com.google.genai.types.Part;

    public class TextGenerationTranscriptWithGcsAudio {

      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }

      // Generates transcript with audio input
      public static String generateContent(String modelId) {
        // Client Initialization. Once created, it can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {

          String prompt =
              "Transcribe the interview, in the format of timecode, speaker, caption.\n"
                  + "Use speaker A, speaker B, etc. to identify speakers.";

          // Enable audioTimestamp to generate timestamps for audio-only files.
          GenerateContentConfig contentConfig =
              GenerateContentConfig.builder().audioTimestamp(true).build();

          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  Content.fromParts(
                      Part.fromUri(
                          "gs://cloud-samples-data/generative-ai/audio/pixel.mp3", "audio/mpeg"),
                      Part.fromText(prompt)),
                  contentConfig);

          System.out.print(response.text());
          // Example response:
          // 00:00 - Speaker A: your devices are getting better over time. And so we think about it...
          // 00:14 - Speaker B: Welcome to the Made by Google Podcast, where we meet the people who...
          // 00:41 - Speaker A: So many features. I am a singer, so I actually think recorder...
          return response.text();
        }
      }
    }

    REST

    After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.

    Before using any of the request data, make the following replacements:

    • PROJECT_ID: Your project ID.
    • FILE_URI: The URI or URL of the file to include in the prompt. Acceptable values include the following:
      • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-2.0-flash and gemini-2.0-flash-lite, the size limit is 2 GB.
      • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
      • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

      When specifying a fileUri, you must also specify the media type (mimeType) of the file. If VPC Service Controls is enabled, specifying a media file URL for fileUri isn't supported.

      If you don't have an audio file in Cloud Storage, then you can use the following publicly available file: gs://cloud-samples-data/generative-ai/audio/pixel.mp3 with a MIME type of audio/mp3. To listen to this audio, open the sample MP3 file.

    • MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:


      • application/pdf
      • audio/mpeg
      • audio/mp3
      • audio/wav
      • image/png
      • image/jpeg
      • image/webp
      • text/plain
      • video/mov
      • video/mpeg
      • video/mp4
      • video/mpg
      • video/avi
      • video/wmv
      • video/mpegps
      • video/flv
    • TEXT
      The text instructions to include in the prompt. For example, Can you transcribe this interview, in the format of timecode, speaker, caption. Use speaker A, speaker B, etc. to identify speakers.

    To send your request, choose one of these options:

    curl

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

    cat > request.json << 'EOF'
    {
      "contents": {
        "role": "USER",
        "parts": [
          {
            "fileData": {
              "fileUri": "FILE_URI",
              "mimeType": "MIME_TYPE"
            }
          },
          {
            "text": "TEXT"
          }
        ]
      },
      "generationConfig": {
        "audioTimestamp": true
      }
    }
    EOF

    Then execute the following command to send your REST request:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent"

    PowerShell

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json. Run the following command in the terminal to create or overwrite this file in the current directory:

    @'
    {
      "contents": {
        "role": "USER",
        "parts": [
          {
            "fileData": {
              "fileUri": "FILE_URI",
              "mimeType": "MIME_TYPE"
            }
          },
          {
            "text": "TEXT"
          }
        ]
      },
      "generationConfig": {
        "audioTimestamp": true
      }
    }
    '@ | Out-File -FilePath request.json -Encoding utf8

    Then execute the following command to send your REST request:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/global/publishers/google/models/gemini-2.0-flash:generateContent" | Select-Object -Expand Content

    You should receive a JSON response similar to the following.

    Response

    {  "candidates": [    {      "content": {        "role": "model",        "parts": [          {            "text": "0:00 Speaker A: Your devices are getting better over time, and so we think              about it across the entire portfolio from phones to watch to buds to tablet. We get              really excited about how we can tell a joint narrative across everything.              0:18 Speaker B: Welcome to the Made By Google Podcast, where we meet the people who              work on the Google products you love. Here's your host, Rasheed.              0:33 Speaker B: Today we're talking to Aisha and DeCarlos. They're both              Product Managers for various Pixel devices and work on something that all the Pixel              owners love. The Pixel feature drops. This is the Made By Google Podcast. Aisha, which              feature on your Pixel phone has been most transformative in your own life?              0:56 Speaker A: So many features. I am a singer, so I actually think recorder              transcription has been incredible because before I would record songs I'd just like,              freestyle them, record them, type them up. But now with transcription it works so well              even deciphering lyrics that are jumbled. I think that's huge.              ...              Subscribe now wherever you get your podcasts to be the first to listen."          }        ]      },      "finishReason": "STOP",      "safetyRatings": [        {          "category": "HARM_CATEGORY_HATE_SPEECH",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.043609526,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.06255973        },        {          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.022328783,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.04426588        },        {          "category": "HARM_CATEGORY_HARASSMENT",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.07107367,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.049405243        },        {          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",          "probability": "NEGLIGIBLE",          "probabilityScore": 0.10484337,          "severity": "HARM_SEVERITY_NEGLIGIBLE",          "severityScore": 0.13128456        }      ]    }  ],  "usageMetadata": {    "promptTokenCount": 18871,    "candidatesTokenCount": 2921,    "totalTokenCount": 21792  }}
    Note the following in the URL for this sample:
    • Use the generateContent method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the streamGenerateContent method.
    • The multimodal model ID is located at the end of the URL before the method (for example, gemini-2.0-flash). This sample might support other models as well.

    Set optional model parameters

    Each model has a set of optional parameters that you can set. For more information, see Content generation parameters.
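
    As an illustration, the following minimal sketch passes a few of these parameters through GenerateContentConfig in the Python Gen AI SDK used in the samples above; the parameter values are placeholders, not recommendations.

    from google import genai
    from google.genai.types import GenerateContentConfig, HttpOptions, Part

    client = genai.Client(http_options=HttpOptions(api_version="v1"))

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            "Provide a concise summary of the main points in the audio file.",
            Part.from_uri(
                file_uri="gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                mime_type="audio/mpeg",
            ),
        ],
        # Placeholder values; see Content generation parameters for the full list.
        config=GenerateContentConfig(
            temperature=1.0,
            top_p=0.95,
            max_output_tokens=1024,
        ),
    )
    print(response.text)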

    Limitations

    While Gemini multimodal models are powerful in many multimodal use cases, it's important to understand the limitations of the models:

    • Non-speech sound recognition: The models that support audio might make mistakes recognizing sound that's not speech.
    • Audio-only timestamps: To accurately generate timestamps for audio-only files, you must configure the audio_timestamp parameter in generation_config, as shown in the short sketch after this list.
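
    With the Python Gen AI SDK used earlier on this page, that configuration looks like the following (mirroring the transcription sample above):

    from google.genai.types import GenerateContentConfig

    # Required for reliable timestamps when the input is audio-only.
    config = GenerateContentConfig(audio_timestamp=True)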

    What's next
