Gemini API in Vertex AI quickstart
This quickstart shows you how to install the Google Gen AI SDK for your language of choice and then make your first API request. The samples vary slightly based on whether you authenticate to Vertex AI using an API key or application default credentials (ADC).
Choose your authentication method:
Before you begin
If you haven't configured ADC yet, follow these instructions:
Configure your project
Select a project, enable billing, enable the Vertex AI API, and install gcloud CLI:
- Sign in to your Google Account.
If you don't already have one, sign up for a new account.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Roles required to select or create a project:

- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the Vertex AI API.
Roles required to enable APIs

To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

Install the Google Cloud CLI.
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
To initialize the gcloud CLI, run the following command:

```shell
gcloud init
```
Create local authentication credentials
Create local authentication credentials for your user account:
```shell
gcloud auth application-default login
```
If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.
Required roles
To get the permissions that you need to use the Gemini API in Vertex AI, ask your administrator to grant you the Vertex AI User (roles/aiplatform.user) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.
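If you administer the project yourself, you can grant this role with the gcloud CLI. The following is a minimal sketch, assuming the Google Cloud CLI is installed and you have permission to modify IAM policy; PROJECT_ID and USER_EMAIL are placeholders you must replace with your own values:

```shell
# Grant the Vertex AI User role to a user account.
# PROJECT_ID and USER_EMAIL are placeholders — substitute your own.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/aiplatform.user"
```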
Install the SDK and set up your environment
On your local machine, click one of the following tabs to install the SDK for your programming language.
Python Gen AI SDK
Install and update the Gen AI SDK for Python by running this command.
```shell
pip install --upgrade google-genai
```
Set environment variables:
```shell
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
Go Gen AI SDK
Install and update the Gen AI SDK for Go by running this command.
```shell
go get google.golang.org/genai
```
Set environment variables:
```shell
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
Node.js Gen AI SDK
Install and update the Gen AI SDK for Node.js by running this command.
```shell
npm install @google/genai
```
Set environment variables:
```shell
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
Java Gen AI SDK
Install and update the Gen AI SDK for Java by running this command.
Maven
Add the following to your pom.xml:

```xml
<dependencies>
  <dependency>
    <groupId>com.google.genai</groupId>
    <artifactId>google-genai</artifactId>
    <version>0.7.0</version>
  </dependency>
</dependencies>
```

Set environment variables:

```shell
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
REST
Set environment variables:
```shell
GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
GOOGLE_CLOUD_LOCATION=global
API_ENDPOINT=YOUR_API_ENDPOINT
MODEL_ID="gemini-2.5-flash"
GENERATE_CONTENT_API="generateContent"
```
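These variables are combined into the request URL for the REST call. A small Python sketch of how the pieces fit together; the endpoint value `aiplatform.googleapis.com` is an assumed example here, since YOUR_API_ENDPOINT depends on your setup:

```python
# Assemble the generateContent URL from its components. These values mirror
# the environment variables above; the endpoint is an assumed example.
api_endpoint = "aiplatform.googleapis.com"  # stands in for YOUR_API_ENDPOINT
project = "GOOGLE_CLOUD_PROJECT_ID"         # stands in for your project ID
location = "global"
model_id = "gemini-2.5-flash"
api = "generateContent"

url = (
    f"https://{api_endpoint}/v1/projects/{project}"
    f"/locations/{location}/publishers/google/models/{model_id}:{api}"
)
print(url)
```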
Make your first request
Use the generateContent method to send a request to the Gemini API in Vertex AI:
Python
```python
from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How does AI work?",
)
print(response.text)
# Example response:
# Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
#
# Here's a simplified overview:
# ...
```

Go
```go
import (
	"context"
	"fmt"
	"io"

	"google.golang.org/genai"
)

// generateWithText shows how to generate text using a text prompt.
func generateWithText(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	resp, err := client.Models.GenerateContent(ctx,
		"gemini-2.5-flash",
		genai.Text("How does AI work?"),
		nil,
	)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()
	fmt.Fprintln(w, respText)
	// Example response:
	// That's a great question! Understanding how AI works can feel like ...
	// ...
	// **1. The Foundation: Data and Algorithms**
	// ...
	return nil
}
```

Node.js
```javascript
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: 'How does AI work?',
  });

  console.log(response.text);
  return response.text;
}
```

Java
```java
import com.google.genai.Client;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;

public class TextGenerationWithText {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {

      GenerateContentResponse response =
          client.models.generateContent(modelId, "How does AI work?", null);

      System.out.print(response.text());
      // Example response:
      // Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
      //
      // Here's a simplified overview:
      // ...
      return response.text();
    }
  }
}
```

REST
To send this prompt request, run the curl command from the command line or include the REST call in your application.
```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${API_ENDPOINT}/v1/projects/${GOOGLE_CLOUD_PROJECT}/locations/${GOOGLE_CLOUD_LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_CONTENT_API}" \
  -d $'{
    "contents": {
      "role": "user",
      "parts": {
        "text": "Explain how AI works in a few words"
      }
    }
  }'
```
The model returns a response. Note that the response is generated in sections, with each section evaluated separately for safety.
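The REST response body follows the candidates → content → parts structure used throughout this page. As a sketch of how you might extract the generated text from a raw response using only the standard library (the sample JSON here is hypothetical and trimmed; real responses carry additional metadata such as safety ratings):

```python
import json

# A hypothetical, trimmed generateContent response body for illustration.
raw = json.dumps({
    "candidates": [
        {
            "content": {
                "role": "model",
                "parts": [
                    {"text": "AI learns patterns "},
                    {"text": "from data to make predictions."},
                ],
            },
            "finishReason": "STOP",
        }
    ]
})

response = json.loads(raw)
# Concatenate the text of every part in the first candidate.
text = "".join(
    part.get("text", "")
    for part in response["candidates"][0]["content"]["parts"]
)
print(text)
```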
Generate images
Note: Image generation with Gemini is in preview. If you require production-ready image generation features, use Imagen. For more information, see the Imagen on Vertex AI quickstart.

Gemini can generate and process images conversationally. You can prompt Gemini with text, images, or a combination of both to achieve various image-related tasks, such as image generation and editing. The following code demonstrates how to generate an image based on a descriptive prompt:
You must include `responseModalities: ["TEXT", "IMAGE"]` in your configuration. Image-only output is not supported with these models.
Python
```python
from io import BytesIO

from google import genai
from google.genai.types import GenerateContentConfig, Modality
from PIL import Image

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=(
        "Generate an image of the Eiffel tower with fireworks in the background."
    ),
    config=GenerateContentConfig(
        response_modalities=[Modality.TEXT, Modality.IMAGE],
    ),
)
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    elif part.inline_data:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save("output_folder/example-image-eiffel-tower.png")
```

Node.js
```javascript
const fs = require('fs');
const {GoogleGenAI, Modality} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'us-central1';

async function generateImage(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContentStream({
    model: 'gemini-2.5-flash-image',
    contents:
      'Generate an image of the Eiffel tower with fireworks in the background.',
    config: {
      responseModalities: [Modality.TEXT, Modality.IMAGE],
    },
  });

  const generatedFileNames = [];
  let imageIndex = 0;
  for await (const chunk of response) {
    const text = chunk.text;
    const data = chunk.data;
    if (text) {
      console.debug(text);
    } else if (data) {
      const outputDir = 'output-folder';
      if (!fs.existsSync(outputDir)) {
        fs.mkdirSync(outputDir, {recursive: true});
      }
      const fileName = `${outputDir}/generate_content_streaming_image_${imageIndex++}.png`;
      console.debug(`Writing response image to file: ${fileName}.`);
      try {
        fs.writeFileSync(fileName, data);
        generatedFileNames.push(fileName);
      } catch (error) {
        console.error(`Failed to write image file ${fileName}:`, error);
      }
    }
  }

  // Example response:
  // I will generate an image of the Eiffel Tower at night, with a vibrant display of
  // colorful fireworks exploding in the dark sky behind it. The tower will be
  // illuminated, standing tall as the focal point of the scene, with the bursts of
  // light from the fireworks creating a festive atmosphere.
  return generatedFileNames;
}
```

Java
```java
import com.google.genai.Client;
import com.google.genai.types.Blob;
import com.google.genai.types.Candidate;
import com.google.genai.types.Content;
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.Part;
import com.google.genai.types.SafetySetting;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;

public class ImageGenMmFlashWithText {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash-image";
    String outputFile = "resources/output/example-image-eiffel-tower.png";
    generateContent(modelId, outputFile);
  }

  // Generates an image with text input
  public static void generateContent(String modelId, String outputFile) throws IOException {
    // Client initialization. Once created, it can be reused for multiple requests.
    try (Client client = Client.builder().location("global").vertexAI(true).build()) {

      GenerateContentConfig contentConfig =
          GenerateContentConfig.builder()
              .responseModalities("TEXT", "IMAGE")
              .candidateCount(1)
              .safetySettings(
                  SafetySetting.builder()
                      .method("PROBABILITY")
                      .category("HARM_CATEGORY_DANGEROUS_CONTENT")
                      .threshold("BLOCK_MEDIUM_AND_ABOVE")
                      .build())
              .build();

      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              "Generate an image of the Eiffel tower with fireworks in the background.",
              contentConfig);

      // Get parts of the response
      List<Part> parts =
          response
              .candidates()
              .flatMap(candidates -> candidates.stream().findFirst())
              .flatMap(Candidate::content)
              .flatMap(Content::parts)
              .orElse(new ArrayList<>());

      // For each part, print text if present; otherwise read image data if present
      // and write it to the output file
      for (Part part : parts) {
        if (part.text().isPresent()) {
          System.out.println(part.text().get());
        } else if (part.inlineData().flatMap(Blob::data).isPresent()) {
          BufferedImage image =
              ImageIO.read(new ByteArrayInputStream(part.inlineData().flatMap(Blob::data).get()));
          ImageIO.write(image, "png", new File(outputFile));
        }
      }
      System.out.println("Content written to: " + outputFile);
      // Example response:
      // Here is the Eiffel Tower with fireworks in the background...
      //
      // Content written to: resources/output/example-image-eiffel-tower.png
    }
  }
}
```

Image understanding
Gemini can understand images as well. The following code uses the image generated in the previous section and uses a different model to infer information about the image:
Python
```python
from google import genai
from google.genai.types import HttpOptions, Part

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        "What is shown in this image?",
        Part.from_uri(
            file_uri="gs://cloud-samples-data/generative-ai/image/scones.jpg",
            mime_type="image/jpeg",
        ),
    ],
)
print(response.text)
# Example response:
# The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
```

Go
```go
import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithTextImage shows how to generate text using both text and image input
func generateWithTextImage(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "What is shown in this image?"},
				{FileData: &genai.FileData{
					// Image source: https://storage.googleapis.com/cloud-samples-data/generative-ai/image/scones.jpg
					FileURI:  "gs://cloud-samples-data/generative-ai/image/scones.jpg",
					MIMEType: "image/jpeg",
				}},
			},
			Role: "user",
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()
	fmt.Fprintln(w, respText)
	// Example response:
	// The image shows an overhead shot of a rustic, artistic arrangement on a surface that ...
	return nil
}
```

Node.js
```javascript
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const image = {
    fileData: {
      fileUri: 'gs://cloud-samples-data/generative-ai/image/scones.jpg',
      mimeType: 'image/jpeg',
    },
  };

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: [image, 'What is shown in this image?'],
  });

  console.log(response.text);
  return response.text;
}
```

Java
```java
import com.google.genai.Client;
import com.google.genai.types.Content;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Part;

public class TextGenerationWithTextAndImage {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text and image input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {

      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              Content.fromParts(
                  Part.fromText("What is shown in this image?"),
                  Part.fromUri(
                      "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg")),
              null);

      System.out.print(response.text());
      // Example response:
      // The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
      return response.text();
    }
  }
}
```

Code execution
The Gemini API in Vertex AI code execution feature enables the model to generate and run Python code and learn iteratively from the results until it arrives at a final output. Vertex AI provides code execution as a tool, similar to function calling. You can use this code execution capability to build applications that benefit from code-based reasoning and that produce text output. For example:
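With code execution, this kind of computation runs in the model's sandbox rather than in your application. As a plain-Python cross-check of the prompt used in the samples below (a sketch for verifying results, not part of the SDK), computing the answer directly gives:

```python
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number, with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


def nearest_palindrome(n: int) -> int:
    """Return the integer closest to n whose digits read the same backwards."""
    offset = 0
    while True:
        for candidate in (n - offset, n + offset):
            if str(candidate) == str(candidate)[::-1]:
                return candidate
        offset += 1


fib_20 = fibonacci(20)
print(fib_20)                      # 6765
print(nearest_palindrome(fib_20))  # 6776
```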
Python
```python
from google import genai
from google.genai.types import (
    GenerateContentConfig,
    HttpOptions,
    Tool,
    ToolCodeExecution,
)

client = genai.Client(http_options=HttpOptions(api_version="v1"))
model_id = "gemini-2.5-flash"

code_execution_tool = Tool(code_execution=ToolCodeExecution())
response = client.models.generate_content(
    model=model_id,
    contents="Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
    config=GenerateContentConfig(
        tools=[code_execution_tool],
        temperature=0,
    ),
)
print("# Code:")
print(response.executable_code)
print("# Outcome:")
print(response.code_execution_result)
# Example response:
# # Code:
# def fibonacci(n):
#     if n <= 0:
#         return 0
#     elif n == 1:
#         return 1
#     else:
#         a, b = 0, 1
#         for _ in range(2, n + 1):
#             a, b = b, a + b
#         return b
#
# fib_20 = fibonacci(20)
# print(f'{fib_20=}')
#
# # Outcome:
# fib_20=6765
```

Go
```go
import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithCodeExec shows how to generate text using the code execution tool.
func generateWithCodeExec(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	prompt := "Calculate 20th fibonacci number. Then find the nearest palindrome to it."
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: prompt},
			},
			Role: "user",
		},
	}
	config := &genai.GenerateContentConfig{
		Tools: []*genai.Tool{
			{CodeExecution: &genai.ToolCodeExecution{}},
		},
		Temperature: genai.Ptr(float32(0.0)),
	}
	modelName := "gemini-2.5-flash"

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	for _, p := range resp.Candidates[0].Content.Parts {
		if p.Text != "" {
			fmt.Fprintf(w, "Gemini: %s", p.Text)
		}
		if p.ExecutableCode != nil {
			fmt.Fprintf(w, "Language: %s\n%s\n", p.ExecutableCode.Language, p.ExecutableCode.Code)
		}
		if p.CodeExecutionResult != nil {
			fmt.Fprintf(w, "Outcome: %s\n%s\n", p.CodeExecutionResult.Outcome, p.CodeExecutionResult.Output)
		}
	}
	// Example response:
	// Gemini: Okay, I can do that. First, I'll calculate the 20th Fibonacci number. Then, I need ...
	//
	// Language: PYTHON
	//
	// def fibonacci(n):
	//     ...
	//
	// fib_20 = fibonacci(20)
	// print(f'{fib_20=}')
	//
	// Outcome: OUTCOME_OK
	// fib_20=6765
	//
	// Now that I have the 20th Fibonacci number (6765), I need to find the nearest palindrome. ...
	// ...
	return nil
}
```

Node.js
```javascript
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateAndExecuteCode(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents:
      'Calculate 20th fibonacci number. Then find the nearest palindrome to it.',
    config: {
      tools: [{codeExecution: {}}],
      temperature: 0,
    },
  });

  console.debug(response.executableCode);
  // Example response:
  // Code:
  // function fibonacci(n) {
  //   if (n <= 0) {
  //     return 0;
  //   } else if (n === 1) {
  //     return 1;
  //   } else {
  //     let a = 0, b = 1;
  //     for (let i = 2; i <= n; i++) {
  //       [a, b] = [b, a + b];
  //     }
  //     return b;
  //   }
  // }
  //
  // const fib20 = fibonacci(20);
  // console.log(`fib20=${fib20}`);

  console.debug(response.codeExecutionResult);
  // Outcome:
  // fib20=6765
  return response.codeExecutionResult;
}
```

Java
```java
import com.google.genai.Client;
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Tool;
import com.google.genai.types.ToolCodeExecution;

public class ToolsCodeExecWithText {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text using the Code Execution tool
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {

      // Create a GenerateContentConfig and set the codeExecution tool
      GenerateContentConfig contentConfig =
          GenerateContentConfig.builder()
              .tools(Tool.builder().codeExecution(ToolCodeExecution.builder().build()).build())
              .temperature(0.0F)
              .build();

      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              "Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
              contentConfig);

      System.out.println("Code:\n" + response.executableCode());
      System.out.println("Outcome:\n" + response.codeExecutionResult());
      // Example response
      // Code:
      // def fibonacci(n):
      //     if n <= 0:
      //         return 0
      //     elif n == 1:
      //         return 1
      //     else:
      //         a, b = 1, 1
      //         for _ in range(2, n):
      //             a, b = b, a + b
      //         return b
      //
      // fib_20 = fibonacci(20)
      // print(f'{fib_20=}')
      //
      // Outcome:
      // fib_20=6765
      return response.executableCode();
    }
  }
}
```

For more examples of code execution, check out the code execution documentation.
What's next
Now that you've made your first API request, you might want to explore the following guides that show how to set up more advanced Vertex AI features for production code:
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.