ai package
The Firebase AI Web SDK.
Functions
| Function | Description |
|---|---|
| function(app, ...) | |
| getAI(app, options) | Returns the default AI instance that is associated with the provided FirebaseApp. If no instance exists, initializes a new instance with the default settings. |
| function(ai, ...) | |
| getGenerativeModel(ai, modelParams, requestOptions) | Returns a GenerativeModel class with methods for inference and other functionality. |
| getImagenModel(ai, modelParams, requestOptions) | Returns an ImagenModel class with methods for using Imagen. Only Imagen 3 models (named imagen-3.0-*) are supported. |
| getLiveGenerativeModel(ai, modelParams) | (Public Preview) Returns a LiveGenerativeModel class for real-time, bidirectional communication. The Live API is only supported in modern browser windows and Node >= 22. |
| getTemplateGenerativeModel(ai, requestOptions) | (Public Preview) Returns a TemplateGenerativeModel class for executing server-side templates. |
| getTemplateImagenModel(ai, requestOptions) | (Public Preview) Returns a TemplateImagenModel class for executing server-side Imagen templates. |
| function(liveSession, ...) | |
| startAudioConversation(liveSession, options) | (Public Preview) Starts a real-time, bidirectional audio conversation with the model. This helper function manages the complexities of microphone access, audio recording, playback, and interruptions. |
Classes
| Class | Description |
|---|---|
| AIError | Error class for the Firebase AI SDK. |
| AIModel | Base class for Firebase AI model APIs. Instances of this class are associated with a specific Firebase AI Backend and provide methods for interacting with the configured generative model. |
| AnyOfSchema | Schema class representing a value that can conform to any of the provided sub-schemas. This is useful when a field can accept multiple distinct types or structures. |
| ArraySchema | Schema class for "array" types. The items param should refer to the type of item that can be a member of the array. |
| Backend | Abstract base class representing the configuration for an AI service backend. This class should not be instantiated directly. Use its subclasses: GoogleAIBackend for the Gemini Developer API (via Google AI), and VertexAIBackend for the Vertex AI Gemini API. |
| BooleanSchema | Schema class for "boolean" types. |
| ChatSession | ChatSession class that enables sending chat messages and stores history of sent and received messages so far. |
| GenerativeModel | Class for generative model APIs. |
| GoogleAIBackend | Configuration class for the Gemini Developer API. Use this with AIOptions when initializing the AI service via getAI() to specify the Gemini Developer API as the backend. |
| ImagenImageFormat | Defines the image format for images generated by Imagen. Use this class to specify the desired format (JPEG or PNG) and compression quality for images generated by Imagen. This is typically included as part of ImagenModelParams. |
| ImagenModel | Class for Imagen model APIs. This class provides methods for generating images using the Imagen model. |
| IntegerSchema | Schema class for "integer" types. |
| LiveGenerativeModel | (Public Preview) Class for Live generative model APIs. The Live API enables low-latency, two-way multimodal interactions with Gemini. This class should only be instantiated with getLiveGenerativeModel(). |
| LiveSession | (Public Preview) Represents an active, real-time, bidirectional conversation with the model. This class should only be instantiated by calling LiveGenerativeModel.connect(). |
| NumberSchema | Schema class for "number" types. |
| ObjectSchema | Schema class for "object" types. The properties param must be a map of Schema objects. |
| Schema | Parent class encompassing all Schema types, with static methods that allow building specific Schema types. This class can be converted with JSON.stringify() into a JSON string accepted by Vertex AI REST endpoints. (This string conversion is automatically done when calling SDK methods.) |
| StringSchema | Schema class for "string" types. Can be used with or without enum values. |
| TemplateGenerativeModel | (Public Preview) GenerativeModel APIs that execute on a server-side template. This class should only be instantiated with getTemplateGenerativeModel(). |
| TemplateImagenModel | (Public Preview) Class for Imagen model APIs that execute on a server-side template. This class should only be instantiated with getTemplateImagenModel(). |
| VertexAIBackend | Configuration class for the Vertex AI Gemini API. Use this with AIOptions when initializing the AI service via getAI() to specify the Vertex AI Gemini API as the backend. |
Interfaces
| Interface | Description |
|---|---|
| AI | An instance of the Firebase AI SDK. Do not create this instance directly. Instead, use getAI(). |
| AIOptions | Options for initializing the AI service using getAI(). This allows specifying which backend to use (Vertex AI Gemini API or Gemini Developer API) and configuring its specific options (like location for Vertex AI). |
| AudioConversationController | (Public Preview) A controller for managing an active audio conversation. |
| AudioTranscriptionConfig | The audio transcription configuration. |
| BaseParams | Base parameters for a number of methods. |
| ChromeAdapter | (Public Preview) Defines an inference "backend" that uses Chrome's on-device model, and encapsulates logic for detecting when on-device inference is possible. These methods should not be called directly by the user. |
| Citation | A single citation. |
| CitationMetadata | Citation metadata that may be found on a GenerateContentCandidate. |
| CodeExecutionResult | (Public Preview) The results of code execution run by the model. |
| CodeExecutionResultPart | (Public Preview) Represents the code execution result from the model. |
| CodeExecutionTool | (Public Preview) A tool that enables the model to use code execution. |
| Content | Content type for both prompts and response candidates. |
| CountTokensRequest | Params for calling GenerativeModel.countTokens(). |
| CountTokensResponse | Response from calling GenerativeModel.countTokens(). |
| CustomErrorData | Details object that contains data originating from a bad HTTP response. |
| Date_2 | Protobuf google.type.Date |
| EnhancedGenerateContentResponse | Response object wrapped with helper methods. |
| ErrorDetails | Details object that may be included in an error response. |
| ExecutableCode | (Public Preview) An interface for executable code returned by the model. |
| ExecutableCodePart | (Public Preview) Represents the code that is executed by the model. |
| FileData | Data pointing to a file uploaded on Google Cloud Storage. |
| FileDataPart | Content part interface if the part represents FileData. |
| FunctionCall | A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name and a structured JSON object containing the parameters and their values. |
| FunctionCallingConfig | |
| FunctionCallPart | Content part interface if the part represents a FunctionCall. |
| FunctionDeclaration | Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client. |
| FunctionDeclarationsTool | A FunctionDeclarationsTool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. |
| FunctionResponse | The result output from a FunctionCall, containing a string representing the FunctionDeclaration.name and a structured JSON object with any output from the function; this output is used as context to the model. It should contain the result of a FunctionCall made based on model prediction. |
| FunctionResponsePart | Content part interface if the part represents a FunctionResponse. |
| GenerateContentCandidate | A candidate returned as part of a GenerateContentResponse. |
| GenerateContentRequest | Request sent through GenerativeModel.generateContent(). |
| GenerateContentResponse | Individual response from GenerativeModel.generateContent() and GenerativeModel.generateContentStream(). generateContentStream() will return one in each chunk until the stream is done. |
| GenerateContentResult | Result object returned from a GenerativeModel.generateContent() call. |
| GenerateContentStreamResult | Result object returned from a GenerativeModel.generateContentStream() call. Iterate over stream to get chunks as they come in and/or use the response promise to get the aggregated response when the stream is done. |
| GenerationConfig | Config options for content-related requests. |
| GenerativeContentBlob | Interface for sending an image. |
| GoogleSearch | Specifies the Google Search configuration. |
| GoogleSearchTool | A tool that allows a Gemini model to connect to Google Search to access and incorporate up-to-date information from the web into its responses. Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: Gemini Developer API or Vertex AI Gemini API (see the Service Terms section within the Service Specific Terms). |
| GroundingChunk | Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled. |
| GroundingMetadata | Metadata returned when grounding is enabled. Currently, only Grounding with Google Search is supported (see GoogleSearchTool). Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: Gemini Developer API or Vertex AI Gemini API (see the Service Terms section within the Service Specific Terms). |
| GroundingSupport | Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks. |
| HybridParams | (Public Preview) Configures hybrid inference. |
| ImagenGCSImage | An image generated by Imagen, stored in a Cloud Storage for Firebase bucket. This feature is not available yet. |
| ImagenGenerationConfig | Configuration options for generating images with Imagen. See the documentation for more details. |
| ImagenGenerationResponse | The response from a request to generate images with Imagen. |
| ImagenInlineImage | An image generated by Imagen, represented as inline data. |
| ImagenModelParams | Parameters for configuring an ImagenModel. |
| ImagenSafetySettings | Settings for controlling the aggressiveness of filtering out sensitive content. See the documentation for more details. |
| InlineDataPart | Content part interface if the part represents an image. |
| LanguageModelCreateCoreOptions | (Public Preview) Configures the creation of an on-device language model session. |
| LanguageModelCreateOptions | (Public Preview) Configures the creation of an on-device language model session. |
| LanguageModelExpected | (Public Preview) Options for the expected inputs for an on-device language model. |
| LanguageModelMessage | (Public Preview) An on-device language model message. |
| LanguageModelMessageContent | (Public Preview) An on-device language model content object. |
| LanguageModelPromptOptions | (Public Preview) Options for an on-device language model prompt. |
| LiveGenerationConfig | (Public Preview) Configuration parameters used by LiveGenerativeModel to control live content generation. |
| LiveModelParams | (Public Preview) Params passed to getLiveGenerativeModel(). |
| LiveServerContent | (Public Preview) An incremental content update from the model. |
| LiveServerToolCall | (Public Preview) A request from the model for the client to execute one or more functions. |
| LiveServerToolCallCancellation | (Public Preview) Notification to cancel a previous function call triggered by LiveServerToolCall. |
| ModalityTokenCount | Represents token counting info for a single modality. |
| ModelParams | Params passed to getGenerativeModel(). |
| ObjectSchemaRequest | Interface for JSON parameters in a schema of SchemaType "object" when not using the Schema.object() helper. |
| OnDeviceParams | (Public Preview) Encapsulates configuration for on-device inference. |
| PrebuiltVoiceConfig | (Public Preview) Configuration for a pre-built voice. |
| PromptFeedback | If the prompt was blocked, this will be populated with blockReason and the relevant safetyRatings. |
| RequestOptions | Params passed to getGenerativeModel(). |
| RetrievedContextAttribution | |
| SafetyRating | A safety rating associated with a GenerateContentCandidate. |
| SafetySetting | Safety setting that can be sent as part of request parameters. |
| SchemaInterface | Interface for the Schema class. |
| SchemaParams | Params passed to Schema static methods to create specific Schema classes. |
| SchemaRequest | Final format for Schema params passed to backend requests. |
| SchemaShared | Basic Schema properties shared across several Schema-related types. |
| SearchEntrypoint | Google search entry point. |
| Segment | Represents a specific segment within a Content object, often used to pinpoint the exact location of text or data that grounding information refers to. |
| SpeechConfig | (Public Preview) Configures speech synthesis. |
| StartAudioConversationOptions | (Public Preview) Options for startAudioConversation(). |
| StartChatParams | Params for GenerativeModel.startChat(). |
| TextPart | Content part interface if the part represents a text string. |
| ThinkingConfig | Configuration for "thinking" behavior of compatible Gemini models. Certain models utilize a thinking process before generating a response. This allows them to reason through complex problems and plan a more coherent and accurate answer. |
| ToolConfig | Tool config. This config is shared for all tools provided in the request. |
| Transcription | (Public Preview) Transcription of audio. This can be returned from a LiveGenerativeModel if transcription is enabled with the inputAudioTranscription or outputAudioTranscription properties on the LiveGenerationConfig. |
| URLContext | (Public Preview) Specifies the URL Context configuration. |
| URLContextMetadata | (Public Preview) Metadata related to URLContextTool. |
| URLContextTool | (Public Preview) A tool that allows you to provide additional context to the models in the form of public web URLs. By including URLs in your request, the Gemini model will access the content from those pages to inform and enhance its response. |
| URLMetadata | (Public Preview) Metadata for a single URL retrieved by the URLContextTool. |
| UsageMetadata | Usage metadata about a GenerateContentResponse. |
| VideoMetadata | Describes the input video content. |
| VoiceConfig | (Public Preview) Configuration for the voice to be used in speech synthesis. |
| WebAttribution | |
| WebGroundingChunk | A grounding chunk from the web. Important: If using Grounding with Google Search, you are required to comply with the Service Specific Terms for "Grounding with Google Search". |
Variables
| Variable | Description |
|---|---|
| AIErrorCode | Standardized error codes that AIError can have. |
| BackendType | An enum-like object containing constants that represent the supported backends for the Firebase AI SDK. This determines which backend service (Vertex AI Gemini API or Gemini Developer API) the SDK will communicate with. These values are assigned to the backendType property within the specific backend configuration objects (GoogleAIBackend or VertexAIBackend) to identify which service to target. |
| BlockReason | Reason that a prompt was blocked. |
| FinishReason | Reason that a candidate finished. |
| FunctionCallingMode | |
| HarmBlockMethod | This property is not supported in the Gemini Developer API (GoogleAIBackend). |
| HarmBlockThreshold | Threshold above which a prompt or candidate will be blocked. |
| HarmCategory | Harm categories that would cause prompts or candidates to be blocked. |
| HarmProbability | Probability that a prompt or candidate matches a harm category. |
| HarmSeverity | Harm severity levels. |
| ImagenAspectRatio | Aspect ratios for Imagen images. To specify an aspect ratio for generated images, set the aspectRatio property in your ImagenGenerationConfig. See the documentation for more details and examples of the supported aspect ratios. |
| ImagenPersonFilterLevel | A filter level controlling whether generation of images containing people or faces is allowed. See the personGeneration documentation for more details. |
| ImagenSafetyFilterLevel | A filter level controlling how aggressively to filter sensitive content. Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details. |
| InferenceMode | (Public Preview) Determines whether inference happens on-device or in-cloud. |
| InferenceSource | (Public Preview) Indicates whether inference happened on-device or in-cloud. |
| Language | (Public Preview) The programming language of the code. |
| LiveResponseType | (Public Preview) The types of responses that can be returned by LiveSession.receive(). |
| Modality | Content part modality. |
| Outcome | (Public Preview) Represents the result of the code execution. |
| POSSIBLE_ROLES | Possible roles. |
| ResponseModality | (Public Preview) Generation modalities to be returned in generation responses. |
| SchemaType | Contains the list of OpenAPI data types as defined by the OpenAPI specification. |
| URLRetrievalStatus | (Public Preview) The status of a URL retrieval. |
Type Aliases
| Type Alias | Description |
|---|---|
| AIErrorCode | Standardized error codes that AIError can have. |
| BackendType | Type alias representing valid backend types. It can be either 'VERTEX_AI' or 'GOOGLE_AI'. |
| BlockReason | Reason that a prompt was blocked. |
| FinishReason | Reason that a candidate finished. |
| FunctionCallingMode | |
| HarmBlockMethod | This property is not supported in the Gemini Developer API (GoogleAIBackend). |
| HarmBlockThreshold | Threshold above which a prompt or candidate will be blocked. |
| HarmCategory | Harm categories that would cause prompts or candidates to be blocked. |
| HarmProbability | Probability that a prompt or candidate matches a harm category. |
| HarmSeverity | Harm severity levels. |
| ImagenAspectRatio | Aspect ratios for Imagen images. To specify an aspect ratio for generated images, set the aspectRatio property in your ImagenGenerationConfig. See the documentation for more details and examples of the supported aspect ratios. |
| ImagenPersonFilterLevel | A filter level controlling whether generation of images containing people or faces is allowed. See the personGeneration documentation for more details. |
| ImagenSafetyFilterLevel | A filter level controlling how aggressively to filter sensitive content. Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details. |
| InferenceMode | (Public Preview) Determines whether inference happens on-device or in-cloud. |
| InferenceSource | (Public Preview) Indicates whether inference happened on-device or in-cloud. |
| Language | (Public Preview) The programming language of the code. |
| LanguageModelMessageContentValue | (Public Preview) Content formats that can be provided as on-device message content. |
| LanguageModelMessageRole | (Public Preview) Allowable roles for on-device language model usage. |
| LanguageModelMessageType | (Public Preview) Allowable types for on-device language model messages. |
| LiveResponseType | (Public Preview) The types of responses that can be returned by LiveSession.receive(). This is a property on all messages that can be used for type narrowing. This property is not returned by the server; it is assigned to a server message object once it's parsed. |
| Modality | Content part modality. |
| Outcome | (Public Preview) Represents the result of the code execution. |
| Part | Content part - includes text, image/video, or function call/response part types. |
| ResponseModality | (Public Preview) Generation modalities to be returned in generation responses. |
| Role | Role is the producer of the content. |
| SchemaType | Contains the list of OpenAPI data types as defined by the OpenAPI specification. |
| Tool | Defines a tool that the model can call to access external knowledge. |
| TypedSchema | A type that includes all specific Schema types. |
| URLRetrievalStatus | (Public Preview) The status of a URL retrieval. |
function(app, ...)
getAI(app, options)
Returns the default AI instance that is associated with the provided FirebaseApp. If no instance exists, initializes a new instance with the default settings.
Signature:
export declare function getAI(app?: FirebaseApp, options?: AIOptions): AI;

Parameters
| Parameter | Type | Description |
|---|---|---|
| app | FirebaseApp | The FirebaseApp to use. |
| options | AIOptions | AIOptions that configure the AI instance. |
Returns:
The default AI instance for the given FirebaseApp.
Example 1
const ai = getAI(app);

Example 2
// Get an AI instance configured to use the Gemini Developer API (via Google AI).
const ai = getAI(app, { backend: new GoogleAIBackend() });

Example 3
// Get an AI instance configured to use the Vertex AI Gemini API.
const ai = getAI(app, { backend: new VertexAIBackend() });

function(ai, ...)
getGenerativeModel(ai, modelParams, requestOptions)
Returns a GenerativeModel class with methods for inference and other functionality.
Signature:
export declare function getGenerativeModel(ai: AI, modelParams: ModelParams | HybridParams, requestOptions?: RequestOptions): GenerativeModel;

Parameters
| Parameter | Type | Description |
|---|---|---|
| ai | AI | An AI instance. |
| modelParams | ModelParams or HybridParams | Parameters to use when making requests. |
| requestOptions | RequestOptions | Additional options to use when making requests. |
Returns:
GenerativeModel
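As a minimal sketch of typical use (the model name here is illustrative; pick any Gemini model your project supports):

```ts
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel } from "firebase/ai";

// Placeholder config; use your project's actual Firebase config.
const app = initializeApp({ /* your Firebase config */ });
const ai = getAI(app);

// The model name is illustrative.
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

const result = await model.generateContent("Write a haiku about TypeScript.");
console.log(result.response.text());
```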
getImagenModel(ai, modelParams, requestOptions)
Returns an ImagenModel class with methods for using Imagen.
Only Imagen 3 models (named imagen-3.0-*) are supported.
Signature:
export declare function getImagenModel(ai: AI, modelParams: ImagenModelParams, requestOptions?: RequestOptions): ImagenModel;

Parameters
| Parameter | Type | Description |
|---|---|---|
| ai | AI | An AI instance. |
| modelParams | ImagenModelParams | Parameters to use when making Imagen requests. |
| requestOptions | RequestOptions | Additional options to use when making requests. |
Returns:
ImagenModel
Exceptions
If the apiKey or projectId fields are missing in your Firebase config.
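A minimal sketch, assuming a Firebase app has already been initialized (the model name is illustrative):

```ts
import { getAI, getImagenModel } from "firebase/ai";

const ai = getAI(app); // app: the FirebaseApp from initializeApp()

// Only imagen-3.0-* models are supported; this name is illustrative.
const imagenModel = getImagenModel(ai, { model: "imagen-3.0-generate-002" });

const response = await imagenModel.generateImages("A watercolor fox in a misty forest");
// Inline images are returned as base64 data plus a MIME type.
const image = response.images[0];
console.log(image.mimeType, image.bytesBase64Encoded.length);
```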
getLiveGenerativeModel(ai, modelParams)
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Returns a LiveGenerativeModel class for real-time, bidirectional communication.
The Live API is only supported in modern browser windows and Node >= 22.
Signature:
export declare function getLiveGenerativeModel(ai: AI, modelParams: LiveModelParams): LiveGenerativeModel;

Parameters
| Parameter | Type | Description |
|---|---|---|
| ai | AI | An AI instance. |
| modelParams | LiveModelParams | Parameters to use when setting up a LiveSession. |
Returns:
LiveGenerativeModel
Exceptions
If the apiKey or projectId fields are missing in your Firebase config.
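A sketch of opening and closing a session. The model name is illustrative, and a close() method on LiveSession is assumed here; receive() and the per-message type property are described under LiveResponseType later on this page.

```ts
import { getAI, getLiveGenerativeModel } from "firebase/ai";

const ai = getAI(app); // app: the FirebaseApp from initializeApp()

// The model name is illustrative; use a model that supports the Live API.
const liveModel = getLiveGenerativeModel(ai, { model: "gemini-2.0-flash-live-preview-04-09" });

const session = await liveModel.connect();
for await (const message of session.receive()) {
  // Each parsed server message carries a type usable for narrowing
  // (see LiveResponseType below).
  console.log(message.type);
  break; // stop after the first message in this sketch
}
await session.close(); // assumed session teardown method
```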
getTemplateGenerativeModel(ai, requestOptions)
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Returns a TemplateGenerativeModel class for executing server-side templates.
Signature:
export declare function getTemplateGenerativeModel(ai: AI, requestOptions?: RequestOptions): TemplateGenerativeModel;

Parameters
| Parameter | Type | Description |
|---|---|---|
| ai | AI | An AI instance. |
| requestOptions | RequestOptions | Additional options to use when making requests. |
Returns:
TemplateGenerativeModel
getTemplateImagenModel(ai, requestOptions)
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Returns a TemplateImagenModel class for executing server-side Imagen templates.
Signature:
export declare function getTemplateImagenModel(ai: AI, requestOptions?: RequestOptions): TemplateImagenModel;

Parameters
| Parameter | Type | Description |
|---|---|---|
| ai | AI | An AI instance. |
| requestOptions | RequestOptions | Additional options to use when making requests. |
Returns:
TemplateImagenModel
function(liveSession, ...)
startAudioConversation(liveSession, options)
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Starts a real-time, bidirectional audio conversation with the model. This helper function manages the complexities of microphone access, audio recording, playback, and interruptions.
Important: This function must be called in response to a user gesture (for example, a button click) to comply with browser autoplay policies.

Signature:
export declare function startAudioConversation(liveSession: LiveSession, options?: StartAudioConversationOptions): Promise<AudioConversationController>;

Parameters
| Parameter | Type | Description |
|---|---|---|
| liveSession | LiveSession | An active LiveSession instance. |
| options | StartAudioConversationOptions | Configuration options for the audio conversation. |
Returns:
Promise<AudioConversationController>
A Promise that resolves with an AudioConversationController.
Exceptions
AIError if the environment does not support the required Web APIs (UNSUPPORTED), if a conversation is already active (REQUEST_ERROR), if the session is closed (SESSION_CLOSED), or if an unexpected initialization error occurs (ERROR).
DOMException Thrown by navigator.mediaDevices.getUserMedia() if issues occur with microphone access, such as permissions being denied (NotAllowedError) or no compatible hardware being found (NotFoundError). See the MDN documentation for a full list of exceptions.
Example
const liveSession = await model.connect();
let conversationController;

// This function must be called from within a click handler.
async function startConversation() {
  try {
    conversationController = await startAudioConversation(liveSession);
  } catch (e) {
    // Handle AI-specific errors
    if (e instanceof AIError) {
      console.error("AI Error:", e.message);
    }
    // Handle microphone permission and hardware errors
    else if (e instanceof DOMException) {
      console.error("Microphone Error:", e.message);
    }
    // Handle other unexpected errors
    else {
      console.error("An unexpected error occurred:", e);
    }
  }
}

// Later, to stop the conversation:
// if (conversationController) {
//   await conversationController.stop();
// }

AIErrorCode
Standardized error codes that AIError can have.
Signature:
AIErrorCode: {
  readonly ERROR: "error";
  readonly REQUEST_ERROR: "request-error";
  readonly RESPONSE_ERROR: "response-error";
  readonly FETCH_ERROR: "fetch-error";
  readonly SESSION_CLOSED: "session-closed";
  readonly INVALID_CONTENT: "invalid-content";
  readonly API_NOT_ENABLED: "api-not-enabled";
  readonly INVALID_SCHEMA: "invalid-schema";
  readonly NO_API_KEY: "no-api-key";
  readonly NO_APP_ID: "no-app-id";
  readonly NO_MODEL: "no-model";
  readonly NO_PROJECT_ID: "no-project-id";
  readonly PARSE_FAILED: "parse-failed";
  readonly UNSUPPORTED: "unsupported";
}

BackendType
An enum-like object containing constants that represent the supported backends for the Firebase AI SDK. This determines which backend service (Vertex AI Gemini API or Gemini Developer API) the SDK will communicate with.
These values are assigned to the backendType property within the specific backend configuration objects (GoogleAIBackend or VertexAIBackend) to identify which service to target.
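As a sketch, the discriminator can be read back off the AI instance's backend configuration:

```ts
import { getAI, BackendType, VertexAIBackend } from "firebase/ai";

const ai = getAI(app, { backend: new VertexAIBackend() }); // app from initializeApp()

// Backend configuration objects expose backendType as a discriminator.
if (ai.backend.backendType === BackendType.VERTEX_AI) {
  console.log("Targeting the Vertex AI Gemini API");
}
```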
Signature:
BackendType: { readonly VERTEX_AI: "VERTEX_AI"; readonly GOOGLE_AI: "GOOGLE_AI"; }

BlockReason
Reason that a prompt was blocked.
Signature:
BlockReason: { readonly SAFETY: "SAFETY"; readonly OTHER: "OTHER"; readonly BLOCKLIST: "BLOCKLIST"; readonly PROHIBITED_CONTENT: "PROHIBITED_CONTENT"; }

FinishReason
Reason that a candidate finished.
Signature:
FinishReason: {
  readonly STOP: "STOP";
  readonly MAX_TOKENS: "MAX_TOKENS";
  readonly SAFETY: "SAFETY";
  readonly RECITATION: "RECITATION";
  readonly OTHER: "OTHER";
  readonly BLOCKLIST: "BLOCKLIST";
  readonly PROHIBITED_CONTENT: "PROHIBITED_CONTENT";
  readonly SPII: "SPII";
  readonly MALFORMED_FUNCTION_CALL: "MALFORMED_FUNCTION_CALL";
}

FunctionCallingMode
Signature:
FunctionCallingMode: { readonly AUTO: "AUTO"; readonly ANY: "ANY"; readonly NONE: "NONE"; }

HarmBlockMethod
This property is not supported in the Gemini Developer API (GoogleAIBackend).
Signature:
HarmBlockMethod: { readonly SEVERITY: "SEVERITY"; readonly PROBABILITY: "PROBABILITY"; }

HarmBlockThreshold
Threshold above which a prompt or candidate will be blocked.
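For example, a SafetySetting pairs a HarmCategory with one of these thresholds when configuring a model (a sketch; the model name is illustrative):

```ts
import { getAI, getGenerativeModel, HarmBlockThreshold, HarmCategory } from "firebase/ai";

const model = getGenerativeModel(getAI(app), {
  model: "gemini-2.5-flash", // illustrative model name
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
  ],
});
```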
Signature:
HarmBlockThreshold: {
  readonly BLOCK_LOW_AND_ABOVE: "BLOCK_LOW_AND_ABOVE";
  readonly BLOCK_MEDIUM_AND_ABOVE: "BLOCK_MEDIUM_AND_ABOVE";
  readonly BLOCK_ONLY_HIGH: "BLOCK_ONLY_HIGH";
  readonly BLOCK_NONE: "BLOCK_NONE";
  readonly OFF: "OFF";
}

HarmCategory
Harm categories that would cause prompts or candidates to be blocked.
Signature:
HarmCategory: {
  readonly HARM_CATEGORY_HATE_SPEECH: "HARM_CATEGORY_HATE_SPEECH";
  readonly HARM_CATEGORY_SEXUALLY_EXPLICIT: "HARM_CATEGORY_SEXUALLY_EXPLICIT";
  readonly HARM_CATEGORY_HARASSMENT: "HARM_CATEGORY_HARASSMENT";
  readonly HARM_CATEGORY_DANGEROUS_CONTENT: "HARM_CATEGORY_DANGEROUS_CONTENT";
}

HarmProbability
Probability that a prompt or candidate matches a harm category.
Signature:
HarmProbability: { readonly NEGLIGIBLE: "NEGLIGIBLE"; readonly LOW: "LOW"; readonly MEDIUM: "MEDIUM"; readonly HIGH: "HIGH"; }

HarmSeverity
Harm severity levels.
Signature:
HarmSeverity: {
  readonly HARM_SEVERITY_NEGLIGIBLE: "HARM_SEVERITY_NEGLIGIBLE";
  readonly HARM_SEVERITY_LOW: "HARM_SEVERITY_LOW";
  readonly HARM_SEVERITY_MEDIUM: "HARM_SEVERITY_MEDIUM";
  readonly HARM_SEVERITY_HIGH: "HARM_SEVERITY_HIGH";
  readonly HARM_SEVERITY_UNSUPPORTED: "HARM_SEVERITY_UNSUPPORTED";
}

ImagenAspectRatio
Aspect ratios for Imagen images.
To specify an aspect ratio for generated images, set the aspectRatio property in your ImagenGenerationConfig.
See the documentation for more details and examples of the supported aspect ratios.
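A sketch of setting an aspect ratio (the model name is illustrative):

```ts
import { getAI, getImagenModel, ImagenAspectRatio } from "firebase/ai";

const imagenModel = getImagenModel(getAI(app), {
  model: "imagen-3.0-generate-002", // illustrative model name
  generationConfig: {
    aspectRatio: ImagenAspectRatio.LANDSCAPE_16x9,
  },
});
```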
Signature:
ImagenAspectRatio: {
  readonly SQUARE: "1:1";
  readonly LANDSCAPE_3x4: "3:4";
  readonly PORTRAIT_4x3: "4:3";
  readonly LANDSCAPE_16x9: "16:9";
  readonly PORTRAIT_9x16: "9:16";
}

ImagenPersonFilterLevel
A filter level controlling whether generation of images containing people or faces is allowed.
See the personGeneration documentation for more details.
Signature:
ImagenPersonFilterLevel: { readonly BLOCK_ALL: "dont_allow"; readonly ALLOW_ADULT: "allow_adult"; readonly ALLOW_ALL: "allow_all"; }

ImagenSafetyFilterLevel
A filter level controlling how aggressively to filter sensitive content.
Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details.
Signature:
ImagenSafetyFilterLevel: {
  readonly BLOCK_LOW_AND_ABOVE: "block_low_and_above";
  readonly BLOCK_MEDIUM_AND_ABOVE: "block_medium_and_above";
  readonly BLOCK_ONLY_HIGH: "block_only_high";
  readonly BLOCK_NONE: "block_none";
}

InferenceMode
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Determines whether inference happens on-device or in-cloud.
PREFER_ON_DEVICE: Attempt to make inference calls using an on-device model. If on-device inference is not available, the SDK will fall back to using a cloud-hosted model.
ONLY_ON_DEVICE: Only attempt to make inference calls using an on-device model. The SDK will not fall back to a cloud-hosted model. If on-device inference is not available, inference methods will throw.
ONLY_IN_CLOUD: Only attempt to make inference calls using a cloud-hosted model. The SDK will not fall back to an on-device model.
PREFER_IN_CLOUD: Attempt to make inference calls to a cloud-hosted model. If not available, the SDK will fall back to an on-device model.
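A sketch of requesting hybrid inference through HybridParams, assuming HybridParams takes a mode plus optional in-cloud model parameters (the fallback model name is illustrative):

```ts
import { getAI, getGenerativeModel, InferenceMode } from "firebase/ai";

// Prefer Chrome's on-device model, falling back to a cloud-hosted model.
const model = getGenerativeModel(getAI(app), {
  mode: InferenceMode.PREFER_ON_DEVICE,
  inCloudParams: { model: "gemini-2.5-flash" }, // illustrative fallback model
});
```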
Signature:
InferenceMode: {
  readonly PREFER_ON_DEVICE: "prefer_on_device";
  readonly ONLY_ON_DEVICE: "only_on_device";
  readonly ONLY_IN_CLOUD: "only_in_cloud";
  readonly PREFER_IN_CLOUD: "prefer_in_cloud";
}

InferenceSource
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Indicates whether inference happened on-device or in-cloud.
Signature:
InferenceSource: { readonly ON_DEVICE: "on_device"; readonly IN_CLOUD: "in_cloud"; }

Language
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
The programming language of the code.
Signature:
Language: { UNSPECIFIED: string; PYTHON: string; }

LiveResponseType
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
The types of responses that can be returned by LiveSession.receive().
Signature:
LiveResponseType: { SERVER_CONTENT: string; TOOL_CALL: string; TOOL_CALL_CANCELLATION: string; }

Modality
Content part modality.
Signature:
Modality: {
  readonly MODALITY_UNSPECIFIED: "MODALITY_UNSPECIFIED";
  readonly TEXT: "TEXT";
  readonly IMAGE: "IMAGE";
  readonly VIDEO: "VIDEO";
  readonly AUDIO: "AUDIO";
  readonly DOCUMENT: "DOCUMENT";
}

Outcome
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Represents the result of the code execution.
Signature:
Outcome: { UNSPECIFIED: string; OK: string; FAILED: string; DEADLINE_EXCEEDED: string; }

POSSIBLE_ROLES
Possible roles.
Signature:
POSSIBLE_ROLES: readonly ["user", "model", "function", "system"]

ResponseModality
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Generation modalities to be returned in generation responses.
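A sketch of requesting multiple modalities via the responseModalities field of GenerationConfig (the model name is illustrative):

```ts
import { getAI, getGenerativeModel, ResponseModality } from "firebase/ai";

const model = getGenerativeModel(getAI(app), {
  model: "gemini-2.0-flash-preview-image-generation", // illustrative model name
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});
```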
Signature:
ResponseModality: { readonly TEXT: "TEXT"; readonly IMAGE: "IMAGE"; readonly AUDIO: "AUDIO"; }

SchemaType
Contains the list of OpenAPI data types as defined by the OpenAPI specification.
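The Schema static helpers (Schema.object(), Schema.string(), and so on) build schemas over these types; a sketch of constraining a model to JSON output (the model name is illustrative):

```ts
import { getAI, getGenerativeModel, Schema } from "firebase/ai";

// Each helper corresponds to one of the OpenAPI types listed below.
const recipeSchema = Schema.object({
  properties: {
    name: Schema.string(),
    servings: Schema.integer(),
    ingredients: Schema.array({ items: Schema.string() }),
  },
});

const model = getGenerativeModel(getAI(app), {
  model: "gemini-2.5-flash", // illustrative model name
  generationConfig: {
    responseMimeType: "application/json",
    responseSchema: recipeSchema,
  },
});
```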
Signature:
SchemaType: {
  readonly STRING: "string";
  readonly NUMBER: "number";
  readonly INTEGER: "integer";
  readonly BOOLEAN: "boolean";
  readonly ARRAY: "array";
  readonly OBJECT: "object";
}

URLRetrievalStatus
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
The status of a URL retrieval.
URL_RETRIEVAL_STATUS_UNSPECIFIED: Unspecified retrieval status.
URL_RETRIEVAL_STATUS_SUCCESS: The URL retrieval was successful.
URL_RETRIEVAL_STATUS_ERROR: The URL retrieval failed.
URL_RETRIEVAL_STATUS_PAYWALL: The URL retrieval failed because the content is behind a paywall.
URL_RETRIEVAL_STATUS_UNSAFE: The URL retrieval failed because the content is unsafe.
Signature:
URLRetrievalStatus: {
  URL_RETRIEVAL_STATUS_UNSPECIFIED: string;
  URL_RETRIEVAL_STATUS_SUCCESS: string;
  URL_RETRIEVAL_STATUS_ERROR: string;
  URL_RETRIEVAL_STATUS_PAYWALL: string;
  URL_RETRIEVAL_STATUS_UNSAFE: string;
}

AIErrorCode
Standardized error codes that AIError can have.
Signature:
export type AIErrorCode = (typeof AIErrorCode)[keyof typeof AIErrorCode];

BackendType
Type alias representing valid backend types. It can be either 'VERTEX_AI' or 'GOOGLE_AI'.
Signature:
export type BackendType = (typeof BackendType)[keyof typeof BackendType];

BlockReason
Reason that a prompt was blocked.
Signature:
export type BlockReason = (typeof BlockReason)[keyof typeof BlockReason];

FinishReason
Reason that a candidate finished.
Signature:
export type FinishReason = (typeof FinishReason)[keyof typeof FinishReason];

FunctionCallingMode
Signature:
export type FunctionCallingMode = (typeof FunctionCallingMode)[keyof typeof FunctionCallingMode];

HarmBlockMethod
This property is not supported in the Gemini Developer API (GoogleAIBackend).
Signature:
export type HarmBlockMethod = (typeof HarmBlockMethod)[keyof typeof HarmBlockMethod];

HarmBlockThreshold
Threshold above which a prompt or candidate will be blocked.
Signature:
export type HarmBlockThreshold = (typeof HarmBlockThreshold)[keyof typeof HarmBlockThreshold];

HarmCategory
Harm categories that would cause prompts or candidates to be blocked.
Signature:
export type HarmCategory = (typeof HarmCategory)[keyof typeof HarmCategory];

HarmProbability
Probability that a prompt or candidate matches a harm category.
Signature:
export type HarmProbability = (typeof HarmProbability)[keyof typeof HarmProbability];

HarmSeverity
Harm severity levels.
Signature:
export type HarmSeverity = (typeof HarmSeverity)[keyof typeof HarmSeverity];

ImagenAspectRatio
Aspect ratios for Imagen images.
To specify an aspect ratio for generated images, set the aspectRatio property in your ImagenGenerationConfig.
See the documentation for more details and examples of the supported aspect ratios.
Signature:
export type ImagenAspectRatio = (typeof ImagenAspectRatio)[keyof typeof ImagenAspectRatio];

ImagenPersonFilterLevel
A filter level controlling whether generation of images containing people or faces is allowed.
See the personGeneration documentation for more details.
Signature:
export type ImagenPersonFilterLevel = (typeof ImagenPersonFilterLevel)[keyof typeof ImagenPersonFilterLevel];

ImagenSafetyFilterLevel
A filter level controlling how aggressively to filter sensitive content.
Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details.
Signature:
export type ImagenSafetyFilterLevel = (typeof ImagenSafetyFilterLevel)[keyof typeof ImagenSafetyFilterLevel];

InferenceMode
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Determines whether inference happens on-device or in-cloud.
Signature:
export type InferenceMode = (typeof InferenceMode)[keyof typeof InferenceMode];

InferenceSource
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Indicates whether inference happened on-device or in-cloud.
Signature:
export type InferenceSource = (typeof InferenceSource)[keyof typeof InferenceSource];

Language
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
The programming language of the code.
Signature:
export type Language = (typeof Language)[keyof typeof Language];

LanguageModelMessageContentValue
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Content formats that can be provided as on-device message content.
Signature:
export type LanguageModelMessageContentValue = ImageBitmapSource | AudioBuffer | BufferSource | string;

LanguageModelMessageRole
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Allowable roles for on-device language model usage.
Signature:
export type LanguageModelMessageRole = 'system' | 'user' | 'assistant';

LanguageModelMessageType
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Allowable types for on-device language model messages.
Signature:
export type LanguageModelMessageType = 'text' | 'image' | 'audio';

LiveResponseType
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
The types of responses that can be returned by LiveSession.receive(). This is a property on all messages that can be used for type narrowing. This property is not returned by the server; it is assigned to a server message object once it's parsed.
Signature:
export type LiveResponseType = (typeof LiveResponseType)[keyof typeof LiveResponseType];

Modality
Content part modality.
Signature:
export type Modality = (typeof Modality)[keyof typeof Modality];

Outcome
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Represents the result of the code execution.
Signature:
export type Outcome = (typeof Outcome)[keyof typeof Outcome];

Part
Content part - includes text, image/video, or function call/response part types.
Signature:
export type Part = TextPart | InlineDataPart | FunctionCallPart | FunctionResponsePart | FileDataPart | ExecutableCodePart | CodeExecutionResultPart;

ResponseModality
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
Generation modalities to be returned in generation responses.
Signature:
export type ResponseModality = (typeof ResponseModality)[keyof typeof ResponseModality];

Role
Role is the producer of the content.
Signature:
export type Role = (typeof POSSIBLE_ROLES)[number];

SchemaType
Contains the list of OpenAPI data types as defined by the OpenAPI specification.
Signature:
export type SchemaType = (typeof SchemaType)[keyof typeof SchemaType];

Tool
Defines a tool that the model can call to access external knowledge.
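A sketch of the most common member of this union, a FunctionDeclarationsTool (the weather function and model name are purely illustrative):

```ts
import { getAI, getGenerativeModel, Schema } from "firebase/ai";

// One Tool variant: a FunctionDeclarationsTool describing a callable function.
const weatherTool = {
  functionDeclarations: [
    {
      name: "getWeather",
      description: "Returns the current weather for a city.",
      parameters: Schema.object({
        properties: { city: Schema.string() },
      }),
    },
  ],
};

const model = getGenerativeModel(getAI(app), {
  model: "gemini-2.5-flash", // illustrative model name
  tools: [weatherTool],
});
```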
Signature:
export type Tool = FunctionDeclarationsTool | GoogleSearchTool | CodeExecutionTool | URLContextTool;

TypedSchema
A type that includes all specific Schema types.
Signature:
export type TypedSchema = IntegerSchema | NumberSchema | StringSchema | BooleanSchema | ObjectSchema | ArraySchema | AnyOfSchema;

URLRetrievalStatus
This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
The status of a URL retrieval.
URL_RETRIEVAL_STATUS_UNSPECIFIED: Unspecified retrieval status.
URL_RETRIEVAL_STATUS_SUCCESS: The URL retrieval was successful.
URL_RETRIEVAL_STATUS_ERROR: The URL retrieval failed.
URL_RETRIEVAL_STATUS_PAYWALL: The URL retrieval failed because the content is behind a paywall.
URL_RETRIEVAL_STATUS_UNSAFE: The URL retrieval failed because the content is unsafe.
Signature:
export type URLRetrievalStatus = (typeof URLRetrievalStatus)[keyof typeof URLRetrievalStatus];