Use model configuration to control responses

In each call to a model, you can send along a model configuration to control how the model generates a response. Each model offers different configuration options.

You can also experiment with prompts and model configurations using Google AI Studio.

Jump to Gemini config options | Jump to Imagen config options



Configure Gemini models


This section shows you how to set up a configuration for use with Gemini models and provides a description of each parameter.

Set up a model configuration (Gemini)

Note: For the majority of use cases when accessing a Gemini model, you configure the model using GenerationConfig. However, if you're configuring a model for the Gemini Live API, you use a LiveGenerationConfig.

Config for general use cases

The configuration is maintained for the lifetime of the instance. If you want to use a different config, create a new GenerativeModel instance with that config.

Swift

Set the values of the parameters in a GenerationConfig as part of creating a GenerativeModel instance.

import FirebaseAI

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
let config = GenerationConfig(
  candidateCount: 1,
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
  maxOutputTokens: 200,
  stopSequences: ["red"]
)

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  generationConfig: config
)

// ...

Kotlin

Set the values of the parameters in a GenerationConfig as part of creating a GenerativeModel instance.

// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
val config = generationConfig {
    candidateCount = 1
    maxOutputTokens = 200
    stopSequences = listOf("red")
    temperature = 0.9f
    topK = 16
    topP = 0.1f
}

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "GEMINI_MODEL_NAME",
        generationConfig = config
    )

// ...

Java

Set the values of the parameters in a GenerationConfig as part of creating a GenerativeModel instance.

// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.candidateCount = 1;
configBuilder.maxOutputTokens = 200;
configBuilder.stopSequences = List.of("red");
configBuilder.temperature = 0.9f;
configBuilder.topK = 16;
configBuilder.topP = 0.1f;

GenerationConfig config = configBuilder.build();

// Specify the config as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
    FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel("GEMINI_MODEL_NAME", config)
);

// ...

Web

Set the values of the parameters in a GenerationConfig as part of creating a GenerativeModel instance.

// ...

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
const generationConfig = {
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red"],
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
};

// Specify the config as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", generationConfig });

// ...

Dart

Set the values of the parameters in a GenerationConfig as part of creating a GenerativeModel instance.

// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
final generationConfig = GenerationConfig(
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red"],
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  config: generationConfig,
);

// ...

Unity

Set the values of the parameters in a GenerationConfig as part of creating a GenerativeModel instance.

// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
var generationConfig = new GenerationConfig(
    candidateCount: 1,
    maxOutputTokens: 200,
    stopSequences: new string[] { "red" },
    temperature: 0.9f,
    topK: 16,
    topP: 0.1f
);

// Specify the config as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
    modelName: "GEMINI_MODEL_NAME",
    generationConfig: generationConfig
);

You can find a description of each parameter in the next section of this page.

Config for the Gemini Live API

The configuration is maintained for the lifetime of the instance. If you want to use a different config, create a new LiveModel instance with that config.

Swift

The Live API is not yet supported for Apple platform apps, but check back soon!

Kotlin

Set the values of parameters in a LiveGenerationConfig as part of creating a LiveModel instance.

// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
val config = liveGenerationConfig {
    maxOutputTokens = 200
    responseModality = ResponseModality.AUDIO
    speechConfig = SpeechConfig(voice = Voices.FENRIR)
    temperature = 0.9f
    topK = 16
    topP = 0.1f
}

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .liveModel(
        modelName = "GEMINI_MODEL_NAME",
        generationConfig = config
    )

// ...

Java

Set the values of parameters in a LiveGenerationConfig as part of creating a LiveModel instance.

// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
LiveGenerationConfig.Builder configBuilder = new LiveGenerationConfig.Builder();
configBuilder.setMaxOutputTokens(200);
configBuilder.setResponseModalities(ResponseModality.AUDIO);
configBuilder.setSpeechConfig(new SpeechConfig(Voices.FENRIR));
configBuilder.setTemperature(0.9f);
configBuilder.setTopK(16);
configBuilder.setTopP(0.1f);

LiveGenerationConfig config = configBuilder.build();

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
LiveModelFutures model = LiveModelFutures.from(
    FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .liveModel("GEMINI_MODEL_NAME", config)
);

// ...

Web

Set the values of parameters in the LiveGenerationConfig during initialization of the LiveGenerativeModel instance:

// ...

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
const generationConfig = {
  maxOutputTokens: 200,
  responseModalities: [ResponseModality.AUDIO],
  speechConfig: {
    voiceConfig: {
      prebuiltVoiceConfig: { voiceName: "Fenrir" },
    },
  },
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
};

// Specify the config as part of creating the `LiveGenerativeModel` instance
const model = getLiveGenerativeModel(ai, {
  model: "GEMINI_MODEL_NAME",
  generationConfig,
});

// ...

Dart

Set the values of parameters in a LiveGenerationConfig as part of creating a LiveModel instance.

// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
final generationConfig = LiveGenerationConfig(
  maxOutputTokens: 200,
  responseModalities: [ResponseModalities.audio],
  speechConfig: SpeechConfig(voiceName: 'Fenrir'),
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
final model = FirebaseAI.googleAI().liveGenerativeModel(
  model: 'GEMINI_MODEL_NAME',
  config: generationConfig,
);

// ...

Unity

Set the values of parameters in a LiveGenerationConfig as part of creating a LiveModel instance.

// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
var liveGenerationConfig = new LiveGenerationConfig(
    maxOutputTokens: 200,
    responseModalities: new[] { ResponseModality.Audio },
    speechConfig: SpeechConfig.UsePrebuiltVoice("Fenrir"),
    temperature: 0.9f,
    topK: 16,
    topP: 0.1f
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetLiveModel(
    modelName: "GEMINI_MODEL_NAME",
    liveGenerationConfig: liveGenerationConfig
);

You can find a description of each parameter in the next section of this page.

Description of parameters (Gemini)

Here is a high-level overview of the available parameters, as applicable. You can find a comprehensive list of parameters and their values in the Gemini Developer API documentation.

Parameter | Description | Default value
Audio timestamp
audioTimestamp

A boolean that enables timestamp understanding for audio-only input files.

Only applicable when using generateContent or generateContentStream calls and the input type is an audio-only file.

false
Candidate count
candidateCount

Specifies the number of response variations to return. For each request, you're charged for the output tokens of all candidates, but you're only charged once for the input tokens.

Supported values: 1-8 (inclusive)

Only applicable when using generateContent and the latest Gemini models. The Live API models and generateContentStream are not supported.

1
Frequency penalty
frequencyPenalty
Controls the probability of including tokens that repeatedly appear in the generated response.
Positive values penalize tokens that repeatedly appear in the generated content, decreasing the probability of repeating content.
---
Max output tokens
maxOutputTokens
Specifies the maximum number of tokens that can be generated in the response.
---
Presence penalty
presencePenalty
Controls the probability of including tokens that already appear in the generated response.
Positive values penalize tokens that already appear in the generated content, increasing the probability of generating more diverse content.
---
Stop sequences
stopSequences

Specifies a list of strings that tells the model to stop generating content if one of the strings is encountered in the response.

Only applicable when using a GenerativeModel configuration.

---
Temperature
temperature
Controls the degree of randomness in the response.
Lower temperatures result in more deterministic responses, and higher temperatures result in more diverse or creative responses.
Depends on the model
Top-K
topK
Limits the number of highest probability words used in the generated content.
A top-K value of 1 means the next selected token should be the most probable among all tokens in the model's vocabulary, while a top-K value of n means that the next token should be selected from among the n most probable tokens (all based on the temperature that's set).
Depends on the model
Top-P
topP
Controls diversity of generated content.
Tokens are selected from the most probable (see top-K above) to least probable until the sum of their probabilities equals the top-P value.
Depends on the model
Response modality
responseModality

Specifies the type of streamed output when using the Live API or native multimodal output by a Gemini model, for example text, audio, or images.

Only applicable when using the Live API and a LiveModel configuration, or when using a Gemini model capable of multimodal output.

---
Speech (voice)
speechConfig

Specifies the voice used for the streamed audio output when using the Live API.

Only applicable when using the Live API and a LiveModel configuration.

Puck
Note: The two preceding configurations, responseModality and speechConfig, are also supported in the GenerationConfig.
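The temperature, top-K, and top-P parameters above all act on the model's next-token probability distribution. The following plain-JavaScript sketch illustrates the order in which those filters narrow the candidate pool. It is a conceptual illustration only: the token probabilities are invented, and this is not how Gemini itself is implemented.

```javascript
// Toy next-token distribution (invented values, for illustration only).
const logits = { the: 2.0, a: 1.5, cat: 1.0, dog: 0.5, xylophone: -1.0 };

// Temperature rescales logits before sampling: low values sharpen the
// distribution, high values flatten it.
function applyTemperature(logits, temperature) {
  return Object.fromEntries(
    Object.entries(logits).map(([token, logit]) => [token, logit / temperature])
  );
}

// Softmax converts logits into probabilities that sum to 1.
function softmax(logits) {
  const exps = Object.entries(logits).map(([token, logit]) => [token, Math.exp(logit)]);
  const sum = exps.reduce((acc, [, e]) => acc + e, 0);
  return Object.fromEntries(exps.map(([token, e]) => [token, e / sum]));
}

function renormalize(entries) {
  const sum = entries.reduce((acc, [, p]) => acc + p, 0);
  return Object.fromEntries(entries.map(([token, p]) => [token, p / sum]));
}

// Top-K keeps only the K most probable tokens.
function topK(probs, k) {
  const kept = Object.entries(probs).sort(([, a], [, b]) => b - a).slice(0, k);
  return renormalize(kept);
}

// Top-P keeps tokens from most to least probable until their cumulative
// probability reaches p.
function topP(probs, p) {
  const sorted = Object.entries(probs).sort(([, a], [, b]) => b - a);
  const kept = [];
  let cumulative = 0;
  for (const [token, prob] of sorted) {
    kept.push([token, prob]);
    cumulative += prob;
    if (cumulative >= p) break;
  }
  return renormalize(kept);
}

// Apply the filters in sequence; the next token is then drawn from `pool`.
const pool = topP(topK(softmax(applyTemperature(logits, 0.9)), 3), 0.8);
console.log(Object.keys(pool)); // with these toy values: [ 'the', 'a' ]
```

With a lower temperature or smaller top-K/top-P, the surviving pool shrinks further and the response becomes more deterministic, which matches the descriptions in the table above.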



Configure Imagen models


This section shows you how to set up a configuration for use with Imagen models and provides a description of each parameter.

Set up a model configuration (Imagen)

The configuration is maintained for the lifetime of the instance. If you want to use a different config, create a new ImagenModel instance with that config.

Swift

Set the values of the parameters in an ImagenGenerationConfig as part of creating an ImagenModel instance.

import FirebaseAI

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
let config = ImagenGenerationConfig(
  negativePrompt: "frogs",
  numberOfImages: 2,
  aspectRatio: .landscape16x9,
  imageFormat: .jpeg(compressionQuality: 100),
  addWatermark: false
)

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).imagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  generationConfig: config
)

// ...

Kotlin

Set the values of the parameters in an ImagenGenerationConfig as part of creating an ImagenModel instance.

// ...

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
val config = ImagenGenerationConfig {
    negativePrompt = "frogs"
    numberOfImages = 2
    aspectRatio = ImagenAspectRatio.LANDSCAPE_16x9
    imageFormat = ImagenImageFormat.jpeg(compressionQuality = 100)
    addWatermark = false
}

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .imagenModel(
        modelName = "IMAGEN_MODEL_NAME",
        generationConfig = config
    )

// ...

Java

Set the values of the parameters in an ImagenGenerationConfig as part of creating an ImagenModel instance.

// ...

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
ImagenGenerationConfig config = new ImagenGenerationConfig.Builder()
    .setNegativePrompt("frogs")
    .setNumberOfImages(2)
    .setAspectRatio(ImagenAspectRatio.LANDSCAPE_16x9)
    .setImageFormat(ImagenImageFormat.jpeg(100))
    .setAddWatermark(false)
    .build();

// Specify the config as part of creating the `ImagenModel` instance
ImagenModelFutures model = ImagenModelFutures.from(
    FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .imagenModel("IMAGEN_MODEL_NAME", config)
);

// ...

Web

Set the values of the parameters in an ImagenGenerationConfig as part of creating an ImagenModel instance.

// ...

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
const generationConfig = {
  negativePrompt: "frogs",
  numberOfImages: 2,
  aspectRatio: ImagenAspectRatio.LANDSCAPE_16x9,
  imageFormat: ImagenImageFormat.jpeg(100),
  addWatermark: false,
};

// Specify the config as part of creating the `ImagenModel` instance
const model = getImagenModel(ai, { model: "IMAGEN_MODEL_NAME", generationConfig });

// ...

Dart

Set the values of the parameters in an ImagenGenerationConfig as part of creating an ImagenModel instance.

// ...

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
final generationConfig = ImagenGenerationConfig(
  negativePrompt: 'frogs',
  numberOfImages: 2,
  aspectRatio: ImagenAspectRatio.landscape16x9,
  imageFormat: ImagenImageFormat.jpeg(compressionQuality: 100),
  addWatermark: false,
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
final model = FirebaseAI.googleAI().imagenModel(
  model: 'IMAGEN_MODEL_NAME',
  config: generationConfig,
);

// ...

Unity

Set the values of the parameters in an ImagenGenerationConfig as part of creating an ImagenModel instance.

using Firebase.AI;

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
var config = new ImagenGenerationConfig(
    numberOfImages: 2,
    aspectRatio: ImagenAspectRatio.Landscape16x9,
    imageFormat: ImagenImageFormat.Jpeg(100)
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI())
    .GetImagenModel(
        modelName: "imagen-4.0-generate-001",
        generationConfig: config
    );

// ...

You can find a description of each parameter in the next section of this page.

Description of parameters (Imagen)

Here is a high-level overview of the available parameters, as applicable. You can find a comprehensive list of parameters and their values in the Google Cloud documentation.

Parameter | Description | Default value
Negative prompt
negativePrompt
A description of what you want to omit in generated images.

This parameter is not yet supported by imagen-3.0-generate-002.

---
Number of results
numberOfImages
The number of generated images returned for each request
Default is one image
Aspect ratio
aspectRatio
The ratio of width to height of generated images
Default is square (1:1)
Image format
imageFormat
The output options, like the image format (MIME type) and level of compression of generated images
Default MIME type is PNG; default compression is 75 (if MIME type is set to JPEG)
Watermark
addWatermark
Whether to add a non-visible digital watermark (called a SynthID) to generated images
Default is true
Person generation
personGeneration
Whether to allow generation of people by the model
Default depends on the model
Note: Firebase AI Logic does not yet support some Imagen parameters; see the Google Cloud documentation for the full list.



Other options to control content generation

  • Learn more about prompt design so that you can influence the model to generate output specific to your needs.
  • Use safety settings to adjust the likelihood of getting responses that may be considered harmful, including hate speech and sexually explicit content.
  • Set system instructions to steer the behavior of the model. This feature is like a preamble that you add before the model gets exposed to any further instructions from the end user.
  • Pass a response schema along with the prompt to specify a specific output schema. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks (like when you want the model to use specific labels or tags).
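To make the response-schema idea concrete, here is a generic, SDK-independent sketch: a JSON-schema-style object describing the desired output, plus a minimal structural check applied to a hypothetical parsed response. The Firebase AI Logic SDKs provide their own schema helpers and enforce the schema at generation time; the `recipeSchema` object and `matchesSchema` function below are illustrative assumptions, not part of any SDK.

```javascript
// A JSON-schema-style description of the output we want the model to produce
// (a generic illustration; real SDKs provide dedicated schema helpers).
const recipeSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    servings: { type: "number" },
    tags: { type: "array", items: { type: "string" } },
  },
  required: ["name", "servings"],
};

// Minimal structural check of a parsed value against the schema.
// Unknown keys are rejected; real schema enforcement happens at generation
// time, so this only validates the shape after the fact.
function matchesSchema(value, schema) {
  switch (schema.type) {
    case "object":
      if (typeof value !== "object" || value === null || Array.isArray(value)) return false;
      for (const key of schema.required ?? []) {
        if (!(key in value)) return false;
      }
      return Object.entries(value).every(([key, v]) =>
        schema.properties[key] ? matchesSchema(v, schema.properties[key]) : false
      );
    case "array":
      return Array.isArray(value) && value.every((v) => matchesSchema(v, schema.items));
    default:
      return typeof value === schema.type;
  }
}

// A hypothetical JSON payload as the model might return it.
const response = JSON.parse('{"name": "Lemon tart", "servings": 8, "tags": ["dessert"]}');
console.log(matchesSchema(response, recipeSchema)); // true
console.log(matchesSchema({ name: "x" }, recipeSchema)); // false: missing "servings"
```

Constraining the output shape this way is what makes JSON generation and label-based classification reliable: the model fills in values, while the schema fixes the field names and types.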

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-10-03 UTC.