Gemini for Google Cloud and responsible AI

This document describes how Gemini for Google Cloud is designed in view of the capabilities, limitations, and risks that are associated with generative AI.

Capabilities and risks of large language models

Large language models (LLMs) can perform many useful tasks such as the following:

  • Translate language.
  • Summarize text.
  • Generate code and creative writing.
  • Power chatbots and virtual assistants.
  • Complement search engines and recommendation systems.

At the same time, the evolving technical capabilities of LLMs create the potential for misapplication, misuse, and unintended or unforeseen consequences.

LLMs can generate output that you don't expect, including text that's offensive, insensitive, or factually incorrect. Because LLMs are incredibly versatile, it can be difficult to predict exactly what kinds of unintended or unforeseen outputs they might produce.

Given these risks and complexities, Gemini for Google Cloud is designed with Google's AI principles in mind. However, it's important that you understand some of the limitations of Gemini for Google Cloud so that you can work with it safely and responsibly.

Gemini for Google Cloud limitations

Limitations that you might encounter when using Gemini for Google Cloud include (but aren't limited to) the following:

  • Edge cases. Edge cases refer to unusual, rare, or exceptional situations that aren't well represented in the training data. These cases can lead to limitations in the output of Gemini models, such as model overconfidence, misinterpretation of context, or inappropriate outputs.

  • Model hallucinations, grounding, and factuality. Gemini models might lack grounding and factuality in real-world knowledge, physical properties, or accurate understanding. This limitation can lead to model hallucinations, where Gemini for Google Cloud might generate outputs that are plausible-sounding but factually incorrect, irrelevant, inappropriate, or nonsensical. Hallucinations can also include fabricating links to web pages that don't exist and have never existed. For more information, see Write better prompts for Gemini for Google Cloud.

  • Data quality and tuning. The quality, accuracy, and bias of the prompt data that's entered into Gemini for Google Cloud products can have a significant impact on performance. If users enter inaccurate or incorrect prompts, Gemini for Google Cloud might return suboptimal or false responses.

  • Bias amplification. Language models can inadvertently amplify existing biases in their training data, leading to outputs that might further reinforce societal prejudices and unequal treatment of certain groups.

  • Language quality. While Gemini for Google Cloud demonstrates impressive multilingual capabilities on the benchmarks that we evaluated against, the majority of our benchmarks (including all of the fairness evaluations) are in American English.

    Language models might provide inconsistent service quality to different users. For example, text generation might not be as effective for some dialects or language varieties because they are underrepresented in the training data. Performance might be worse for non-English languages or English language varieties with less representation.

  • Fairness benchmarks and subgroups. Google Research's fairness analyses of Gemini models don't provide an exhaustive account of the various potential risks. For example, we focus on biases along gender, race, ethnicity, and religion axes, but perform the analysis only on the American English language data and model outputs.

  • Limited domain expertise. Gemini models have been trained on Google Cloud technology, but they might lack the depth of knowledge that's required to provide accurate and detailed responses on highly specialized or technical topics, leading to superficial or incorrect information.

    When you use the Gemini pane in the Google Cloud console, Gemini isn't aware of the context of your specific environment, so it can't answer questions such as "When was the last time I created a VM?"

    In some cases, Gemini for Google Cloud sends a specific segment of your context to the model to receive a context-specific response. For example, this occurs when you click the Troubleshooting suggestions button on the Error Reporting service page.

Gemini safety and toxicity filtering

Gemini for Google Cloud prompts and responses are checked against a comprehensive list of safety attributes as applicable for each use case. These safety attributes aim to filter out content that violates our Acceptable Use Policy. If an output is considered harmful, the response is blocked.
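Gemini for Google Cloud applies this filtering automatically and doesn't expose it for configuration. However, the same notion of per-category safety attributes and block thresholds is user-configurable in the Vertex AI Gemini API, which can help illustrate how the mechanism works. The following minimal sketch assumes the Vertex AI Python SDK (the google-cloud-aiplatform package); the project ID and prompt are placeholders:

```python
# Illustrative sketch only: Gemini for Google Cloud's own filtering isn't
# configurable. This uses the Vertex AI Gemini API, where similar safety
# attributes are exposed. "my-project" is a placeholder project ID.
import vertexai
from vertexai.generative_models import (
    FinishReason,
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

# Block any candidate output that scores medium or higher on these attributes.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
}

response = model.generate_content(
    "Summarize common causes of VM boot failures.",  # placeholder prompt
    safety_settings=safety_settings,
)

candidate = response.candidates[0]
if candidate.finish_reason == FinishReason.SAFETY:
    # Generation stopped because the output crossed a safety threshold;
    # the per-attribute scores explain which category triggered the block.
    print("Response blocked by safety filters:", candidate.safety_ratings)
else:
    print(response.text)
```

Checking finish_reason before reading response.text matters here: a candidate that was blocked for safety contains no content parts, so accessing its text raises an exception.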

