Firebase.AI.GenerationConfig

A struct defining model parameters to be used when sending generative AI requests to the backend model.

Summary

Constructors and Destructors

GenerationConfig(float? temperature, float? topP, float? topK, int? candidateCount, int? maxOutputTokens, float? presencePenalty, float? frequencyPenalty, string[] stopSequences, string responseMimeType, Schema responseSchema, IEnumerable<ResponseModality> responseModalities, ThinkingConfig? thinkingConfig)
Creates a new GenerationConfig value.

Public functions

GenerationConfig

Firebase::AI::GenerationConfig::GenerationConfig(float? temperature, float? topP, float? topK, int? candidateCount, int? maxOutputTokens, float? presencePenalty, float? frequencyPenalty, string[] stopSequences, string responseMimeType, Schema responseSchema, IEnumerable<ResponseModality> responseModalities, ThinkingConfig? thinkingConfig)

Creates a new GenerationConfig value.

See the Configure model parameters guide and the Cloud documentation for more details.

Note: A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.
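As an illustrative sketch of constructing a config with named arguments (the parameter names come from the constructor signature above; the values chosen here are examples, not defaults):

```csharp
using Firebase.AI;

// Hedged sketch: a GenerationConfig favoring focused, reproducible output.
// All parameters are optional; unset parameters use the backend defaults.
var config = new GenerationConfig(
    temperature: 0.4f,                        // lower => more deterministic
    topP: 0.95f,
    topK: 30f,
    candidateCount: 1,                        // number of response variations
    maxOutputTokens: 1024,
    stopSequences: new[] { "END_OF_ANSWER" }  // generation stops at first match
);
```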

Details
Parameters
temperature
Controls the randomness of the language model's output. Higher values (for example, 1.0) make the text more random and creative, while lower values (for example, 0.1) make it more focused and deterministic.

Important: The range of supported temperature values depends on the model; see the Cloud documentation for more details.

Details
Parameters
topP
Controls diversity of generated text. Higher values (for example, 0.9) produce more diverse text, while lower values (for example, 0.5) make the output more focused.

The supported range is 0.0 to 1.0.

Important: The default topP value depends on the model; see the Cloud documentation for more details.

Details
Parameters
topK
Limits the number of highest probability words the model considers when generating text. For example, a topK of 40 means only the 40 most likely words are considered for the next token. A higher value increases diversity, while a lower value makes the output more deterministic.

The supported range is 1 to 40.

Important: Support for topK and the default value depend on the model; see the Cloud documentation for more details.

Details
Parameters
candidateCount
The number of response variations to return; defaults to 1 if not set. Support for multiple candidates depends on the model; see the Cloud documentation for more details.

Details
Parameters
maxOutputTokens
Maximum number of tokens that can be generated in the response. See the Configure model parameters documentation for more details.

Details
Parameters
presencePenalty
Controls the likelihood of repeating the same words or phrases already generated in the text. Higher values increase the penalty of repetition, resulting in more diverse output.

Important: The range of supported presencePenalty values depends on the model; see the Cloud documentation for more details.

Note: While both presencePenalty and frequencyPenalty discourage repetition, presencePenalty applies the same penalty regardless of how many times the word/phrase has already appeared, whereas frequencyPenalty increases the penalty for each repetition of a word/phrase.

Details
Parameters
frequencyPenalty
Controls the likelihood of repeating words or phrases, with the penalty increasing for each repetition. Higher values increase the penalty of repetition, resulting in more diverse output.

Important: The range of supported frequencyPenalty values depends on the model; see the Cloud documentation for more details.

Note: While both frequencyPenalty and presencePenalty discourage repetition, frequencyPenalty increases the penalty for each repetition of a word/phrase, whereas presencePenalty applies the same penalty regardless of how many times the word/phrase has already appeared.

Details
Parameters
stopSequences
A set of up to 5 Strings that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response. See the Cloud documentation for more details.

Details
Parameters
responseMimeType
Output response MIME type of the generated candidate text.

Supported MIME types:

  • text/plain: Text output; the default behavior if unspecified.
  • application/json: JSON response in the candidates.
  • text/x.enum: For classification tasks, output an enum value as defined in the responseSchema.

Details
Parameters
responseSchema
Output schema of the generated candidate text. If set, a compatible responseMimeType must also be set.

Compatible MIME types:

  • application/json: Schema for JSON response.

Refer to the Control generated output guide for more details.
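As a hedged sketch of requesting structured JSON output: the `Schema.Object`, `Schema.String`, and `Schema.Int` factory helpers shown here are assumptions modeled on other Firebase AI SDKs; check the Schema reference page for the exact names in this SDK.

```csharp
using System.Collections.Generic;
using Firebase.AI;

// Assumed Schema factory helpers (Schema.Object / Schema.String / Schema.Int);
// verify against the Schema reference before use.
var recipeSchema = Schema.Object(
    properties: new Dictionary<string, Schema> {
        { "name", Schema.String() },
        { "servings", Schema.Int() },
    }
);

// When responseSchema is set, a compatible responseMimeType must also be set.
var config = new GenerationConfig(
    responseMimeType: "application/json",
    responseSchema: recipeSchema
);
```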

Details
Parameters
responseModalities
The data types (modalities) that may be returned in model responses. See the multimodal responses documentation for more details.

Details
Parameters
thinkingConfig
Configuration for controlling the "thinking" behavior of compatible Gemini models; see ThinkingConfig for more details.

An error will be returned if this field is set for models that don't support thinking.
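As a hedged sketch of the last two parameters: the `ResponseModality.Text` / `ResponseModality.Image` members and the `ThinkingConfig` constructor argument shown here are assumptions inferred from the parameter types above; consult the ResponseModality and ThinkingConfig reference pages for the exact members.

```csharp
using System.Collections.Generic;
using Firebase.AI;

// Assumed enum members and ThinkingConfig shape; verify against their
// reference pages. Setting thinkingConfig on a model without thinking
// support returns an error, per the note above.
var config = new GenerationConfig(
    responseModalities: new List<ResponseModality> {
        ResponseModality.Text,
        ResponseModality.Image
    },
    thinkingConfig: new ThinkingConfig(thinkingBudget: 1024)
);
```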

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-07-24 UTC.