Responsible AI and usage guidelines for Imagen

Imagen on Vertex AI brings Google's state-of-the-art generative AI capabilities to application developers. As an early-stage technology, Imagen on Vertex AI's evolving capabilities and uses create potential for misapplication, misuse, and unintended or unforeseen consequences. For example, Imagen on Vertex AI could generate output that you don't expect, such as images that are offensive, insensitive, or contextually incorrect.

Given these risks and complexities, Imagen on Vertex AI is designed with Google's AI Principles in mind. However, it is important for developers to understand and test their models to deploy them safely and responsibly. To aid developers, Imagen on Vertex AI has built-in safety filters to help customers block potentially harmful outputs within their use case. See the safety filters section for more information.

When Imagen on Vertex AI is integrated into a customer's unique use case and context, additional responsible AI considerations and model limitations may need to be considered. We encourage customers to follow recommended practices for fairness, interpretability, privacy, and security.

View Imagen for Generation model card

View Imagen for Editing and Customization model card

Imagen usage guidelines

Read the following general product attributes and legal considerations before you use Imagen on Vertex AI.

  • Image and text filters and outputs: Images (generated or uploaded) through Imagen on Vertex AI are assessed against safety filters. Imagen aims to filter out content (generated or uploaded) that violates our acceptable use policy (AUP) or additional Generative AI product restrictions. In addition, our generative imagery models are intended to generate original content and not replicate existing content. We've designed our systems to limit the chances of this occurring, and we will continue to improve how these systems function. Like all cloud service providers, Google maintains an Acceptable Use Policy that prohibits customers from using our services in ways that infringe third-party IP rights.
  • Configurable safety filter thresholds: Google blocks model responses that exceed the designated confidence scores for certain safety attributes. To request the ability to modify a safety threshold, contact your Google Cloud account team.
  • Text addition supported on certain model versions: Imagen does not support adding text to images (uploaded or generated) using a text prompt when using the imagegeneration@004 or lower model versions.
  • Report suspected abuse: You can report suspected abuse of Imagen on Vertex AI or any generated output that contains inappropriate material or inaccurate information using the Report suspected abuse on Google Cloud form.
  • Trusted Tester Program opt-out: If you previously opted in to permit Google to use your data to improve pre-GA AI/ML services as part of the Trusted Tester Program terms, you can opt out using the Trusted Tester Program - Opt Out Request form.

Safety filters

Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). These safety filters aim to filter out (generated or uploaded) content that violates our Acceptable Use Policy (AUP), Generative AI Prohibited Use Policy, or our AI Principles.

If the model responds to a request with an error message such as "The prompt couldn't be submitted" or "it might violate our policies", then the input is triggering a safety filter. If fewer images than requested are returned, then some generated output is blocked for not meeting safety requirements.

You can choose how aggressively to filter sensitive content by adjusting the safetySetting parameter.
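As a hedged sketch, a predict request body that sets safetySetting could be built as follows. The value names shown (block_medium_and_above, block_low_and_above) are assumptions; confirm the values your model version supports against the image generation model API reference.

```python
# Sketch of a :predict request body with an explicit safetySetting.
# The safetySetting value names below are assumptions; consult the
# image generation model API reference for the supported values.

def build_request(prompt: str, sample_count: int = 4,
                  safety_setting: str = "block_medium_and_above") -> dict:
    """Builds a request body with an explicit content-filtering level."""
    return {
        "instances": [{"prompt": prompt}],
        "parameters": {
            "sampleCount": sample_count,
            "safetySetting": safety_setting,
        },
    }

# A stricter (assumed) value filters more aggressively.
body = build_request("a watercolor of a lighthouse at dawn",
                     safety_setting="block_low_and_above")
print(body["parameters"]["safetySetting"])  # block_low_and_above
```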

Safety attributes

Safety attributes and safety filters don't have a one-to-one mapping relationship. Safety attributes are the set of attributes that we return to the user when includeSafetyAttributes is set. Safety filters are the set of filters we use to filter content. We don't filter on all safety attribute categories. For example, for the safety attribute category "Health", we don't filter content based on the health confidence score. Also, we don't expose the confidence scores for some of our internal sensitive safety filters.

Configure safety filters

There are several safety filtering parameters you can use with the image generation models. For example, you can let the model report safety filter codes for blocked content, disable people or face generation, adjust the sensitivity of content filtering, or return rounded safety scores for a list of safety attributes for input and output. For more technical information about individual fields, see the image generation model API reference.

The response varies depending on which parameters you set; some parameters affect the content produced, while others affect content filtering and how filtering is reported to you. Additionally, the output format depends on whether the input data is filtered, or whether the generated image output is filtered.

Parameters that filter content

The following optional parameters affect content filtering or how filtering is reported to you:

  • safetySetting - Lets you set how aggressively to filter for potentially sensitive output content.
  • includeRaiReason - Provides more verbose information on filtered output.
  • personGeneration - A setting that allows you more control over the generation of people, faces, and children.
  • disablePersonFace - Deprecated. A choice to allow person and face generation or not. Users should set personGeneration instead.
  • includeSafetyAttributes - Gives you full safety attribute information for input text, input image (for editing), and all generated images. This information includes the safety category (for example, "Firearms & Weapons", "Illicit Drugs", or "Violence") and the confidence scores.
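As an illustrative sketch, these parameters can be combined in a single request body. The personGeneration value shown is an assumption; see the image generation model API reference for the values your model version accepts.

```python
# Illustrative "parameters" object combining the filtering-related fields
# described above. Field names follow the image generation API; the
# personGeneration value is an assumed example.
parameters = {
    "sampleCount": 4,
    "safetySetting": "block_medium_and_above",  # filtering sensitivity
    "includeRaiReason": True,         # report filter codes for blocked output
    "personGeneration": "allow_adult",  # control person/face generation
    "includeSafetyAttributes": True,    # return rounded safety scores
}

request_body = {
    "instances": [{"prompt": "a studio photo of a ceramic teapot"}],
    "parameters": parameters,
}
print(sorted(parameters))
```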

Filtered input

If your text input or input image (for editing) is filtered, you get a response with a 400 error code. A request with RAI-filtered input returns this output format if you set either includeRaiReason or includeSafetyAttributes.

Output depends on the model version you use. The following shows output when the input is filtered, for different model versions:

One model version:

{
  "error": {
    "code": 400,
    "message": "Image generation failed with the following error: The prompt could not be submitted. This prompt contains sensitive words that violate Google's Responsible AI practices. Try rephrasing the prompt. If you think this was an error, send feedback.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.DebugInfo",
        "detail": "[ORIGINAL ERROR] generic::invalid_argument: Image generation failed with the following error: The prompt could not be submitted. This prompt contains sensitive words that violate Google's Responsible AI practices. Try rephrasing the prompt. If you think this was an error, send feedback. [google.rpc.error_details_ext] { message: \"Image editing failed with the following error: The prompt could not be submitted. This prompt contains sensitive words that violate Google's Responsible AI practices. Try rephrasing the prompt. If you think this was an error, send feedback. Support codes: 42876398\" }"
      }
    ]
  }
}

Other model versions:

{
  "error": {
    "code": 400,
    "message": "Image generation failed with the following error: The prompt could not be submitted. This prompt contains sensitive words that violate Google's Responsible AI practices. Try rephrasing the prompt. If you think this was an error, send feedback.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.DebugInfo",
        "detail": "[ORIGINAL ERROR] generic::invalid_argument: Image generation failed with the following error: The prompt could not be submitted. This prompt contains sensitive words that violate Google's Responsible AI practices. Try rephrasing the prompt. If you think this was an error, send feedback. [google.rpc.error_details_ext] { message: \"Image generation failed with the following error: The prompt could not be submitted. This prompt contains sensitive words that violate Google\\'s Responsible AI practices. Try rephrasing the prompt. If you think this was an error, send feedback.\" }"
      }
    ]
  }
}
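A client that wants to act on these errors can pull the support code out of the error payload. This is a minimal sketch, assuming the error shape shown above; extract_support_codes is a hypothetical helper name:

```python
import json
import re

def extract_support_codes(error_payload: str) -> list[int]:
    """Finds 'Support codes: N' values anywhere in a 400 error response."""
    matches = re.findall(r"Support codes?:\s*([\d,\s]+)", error_payload)
    codes = []
    for m in matches:
        codes.extend(int(c) for c in re.findall(r"\d+", m))
    return codes

# Minimal, hypothetical error body modeled on the examples above.
sample = json.dumps({
    "error": {
        "code": 400,
        "message": "The prompt could not be submitted. Support codes: 42876398",
        "status": "INVALID_ARGUMENT",
    }
})
print(extract_support_codes(sample))  # [42876398]
```

The extracted codes can then be matched against the safety filter code categories listed later on this page.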

Filtered output

The contents of filtered output vary depending on the RAI parameter you set. The following output examples show the result of using the includeRaiReason and includeSafetyAttributes parameters.

Filtered output using includeRaiReason

If you don't add includeRaiReason or you set includeRaiReason: false, your response only includes generated image objects that aren't filtered. Any filtered image objects are omitted from the "predictions": [] array. For example, the following is a response to a request with "sampleCount": 4, but two of the images are filtered and consequently omitted:

{
  "predictions": [
    {
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]bdsdgD2PLbZQfW96HEFE/9k=",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]Ct+F+1SLLH/2+SJ4ZLdOvg//Z"
    }
  ],
  "deployedModelId": "MODEL_ID"
}

If you set includeRaiReason: true and several output images are filtered, your response includes generated image objects and raiFilteredReason objects for any filtered output images. For example, the following is a response to a request with "sampleCount": 4 and includeRaiReason: true, but two of the images are filtered. Consequently, two objects include generated image information and the other object includes an error message.

One model version:

{
  "predictions": [
    {
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]bdsdgD2PLbZQfW96HEFE/9k=",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]Ct+F+1SLLH/2+SJ4ZLdOvg//Z"
    },
    {
      "raiFilteredReason": "Your current safety filter threshold filtered out 2 generated images. You will not be charged for blocked images. Try rephrasing the prompt. If you think this was an error, send feedback."
    }
  ],
  "deployedModelId": "MODEL_ID"
}

Other model versions:

{
  "predictions": [
    {
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]bdsdgD2PLbZQfW96HEFE/9k=",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]Ct+F+1SLLH/2+SJ4ZLdOvg//Z"
    },
    {
      "raiFilteredReason": "56562880"
    },
    {
      "raiFilteredReason": "56562880"
    }
  ],
  "deployedModelId": "MODEL_ID"
}
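When includeRaiReason is set, a client typically needs to separate image objects from raiFilteredReason entries. A minimal sketch, assuming the response shape shown above (split_predictions is a hypothetical helper):

```python
def split_predictions(response: dict) -> tuple[list[dict], list[str]]:
    """Separates generated images from raiFilteredReason entries in a
    predict response shaped like the examples above."""
    images, reasons = [], []
    for pred in response.get("predictions", []):
        if "raiFilteredReason" in pred:
            reasons.append(pred["raiFilteredReason"])
        else:
            images.append(pred)
    return images, reasons

# Response modeled on the includeRaiReason example above (base64 truncated).
resp = {
    "predictions": [
        {"bytesBase64Encoded": "/9j/4AAQ...", "mimeType": "image/png"},
        {"raiFilteredReason": "56562880"},
        {"raiFilteredReason": "56562880"},
    ],
    "deployedModelId": "MODEL_ID",
}
images, reasons = split_predictions(resp)
print(len(images), reasons)  # 1 ['56562880', '56562880']
```
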
Filtered output using includeSafetyAttributes

If you set "includeSafetyAttributes": true, the response "predictions": [] array includes the RAI scores (rounded to one decimal place) of the text safety attributes of the positive prompt. The image safety attributes are also added to each unfiltered output. If an output image is filtered, its safety attributes aren't returned. For example, the following is a response to an unfiltered request, and one image is returned:

{
  "predictions": [
    {
      "bytesBase64Encoded": "/9j/4AAQSkZJRgABA[...]bdsdgD2PLbZQfW96HEFE/9k=",
      "mimeType": "image/png",
      "safetyAttributes": {
        "categories": ["Porn", "Violence"],
        "scores": [0.1, 0.2]
      }
    },
    {
      "contentType": "Positive Prompt",
      "safetyAttributes": {
        "categories": ["Death, Harm & Tragedy", "Firearms & Weapons", "Hate", "Health", "Illicit Drugs", "Politics", "Porn", "Religion & Belief", "Toxic", "Violence", "Vulgarity", "War & Conflict"],
        "scores": [0, 0, 0, 0, 0, 0, 0.2, 0, 0.1, 0, 0.1, 0]
      }
    }
  ],
  "deployedModelId": "MODEL_ID"
}
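The parallel categories and scores arrays are easier to consume when zipped into a mapping. A minimal sketch, assuming the prediction shape shown above (attribute_scores is a hypothetical helper):

```python
def attribute_scores(prediction: dict) -> dict[str, float]:
    """Pairs safetyAttributes categories with their rounded scores."""
    attrs = prediction.get("safetyAttributes", {})
    return dict(zip(attrs.get("categories", []), attrs.get("scores", [])))

# Image prediction modeled on the includeSafetyAttributes example above.
pred = {
    "bytesBase64Encoded": "/9j/4AAQ...",
    "mimeType": "image/png",
    "safetyAttributes": {
        "categories": ["Porn", "Violence"],
        "scores": [0.1, 0.2],
    },
}
print(attribute_scores(pred))  # {'Porn': 0.1, 'Violence': 0.2}
```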

Safety filter code categories

Depending on the safety filters you configure, your output may contain a safety reason code similar to the following:

    {
      "raiFilteredReason": "ERROR_MESSAGE. Support codes: 56562880"
    }

The code listed corresponds to a specific harmful category. The code-to-category mappings are as follows:

  • Child: Detects child content where it isn't allowed due to the API request settings or allowlisting. Filtered: input (prompt): 58061214; output (image): 17301594.
  • Celebrity: Detects a photorealistic representation of a celebrity in the request. Filtered: input (prompt): 29310472; output (image): 15236754.
  • Dangerous content: Detects content that's potentially dangerous in nature. Filtered: input (prompt): 62263041.
  • Hate: Detects hate-related topics or content. Filtered: input (prompt): 57734940; output (image): 22137204.
  • Other: Detects other miscellaneous safety issues with the request. Filtered: input (prompt): 42876398; output (image): 29578790, 74803281.
  • People/Face: Detects a person or face when it isn't allowed due to the request safety settings. Filtered: output (image): 39322892.
  • Personal information: Detects Personally Identifiable Information (PII) in the text, such as the mention of a credit card number, home addresses, or other such information. Filtered: input (prompt): 92201652.
  • Prohibited content: Detects a request for prohibited content in the request. Filtered: input (prompt): 89371032; output (image): 49114662, 72817394.
  • Sexual: Detects content that's sexual in nature. Filtered: input (prompt): 90789179; output (image): 63429089, 43188360.
  • Third-party content: Guardrails related to third-party content. Filtered: input (prompt) and output (image): 35561574, 35561575.
  • Toxic: Detects toxic topics or content in the text. Filtered: input (prompt): 78610348.
  • Violence: Detects violence-related content from the image or text. Filtered: input (prompt): 61493863; output (image): 56562880.
  • Vulgar: Detects vulgar topics or content from the text. Filtered: input (prompt): 32635315.
  • Celebrity or child: Detects a photorealistic representation of a celebrity or of a child that violates Google's safety policies. Filtered: input (prompt) and output (image): 64151117.
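The code-to-category mapping lends itself to a simple lookup table. The following partial sketch transcribes a few entries from the mapping above (the dictionary and function names are hypothetical, and the mapping shown is not exhaustive):

```python
# Partial lookup from support code to safety category, transcribed from
# the mapping above. Not exhaustive; see the full list for all codes.
SAFETY_CODE_CATEGORIES = {
    58061214: "Child (input)",
    17301594: "Child (output)",
    29310472: "Celebrity (input)",
    15236754: "Celebrity (output)",
    62263041: "Dangerous content (input)",
    42876398: "Other (input)",
    61493863: "Violence (input)",
    56562880: "Violence (output)",
}

def describe_code(code: int) -> str:
    """Returns a human-readable category for a support code."""
    return SAFETY_CODE_CATEGORIES.get(code, f"Unknown support code: {code}")

print(describe_code(56562880))  # Violence (output)
```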

Limitations

The following limits apply to different tasks:

Image generation and editing limitations

  • Bias amplification: While Imagen on Vertex AI can generate high-quality images, there may be potential biases in the generated content. Generated images rely on the product's training data, which can unintentionally include biases that may perpetuate stereotypes or discriminate against certain groups. Careful monitoring and evaluation are necessary to ensure the outputs align with Google's Acceptable Use Policy and your use case.
  • Transparency and disclosure: It can be difficult for users to differentiate between AI-generated and non-AI-generated imagery. When using AI-generated images within your use case, it is important to clearly disclose to users that the images have been generated by an AI system to ensure transparency and maintain trust in the process. We've applied metadata labeling to AI-generated images to help combat the risk of misinformation and as part of our responsible approach to AI.
  • Insufficient context: Imagen on Vertex AI may lack the contextual understanding required to generate images that are appropriate for all situations or audiences within your use case. Be sure to check that your generated images align with your chosen context, purpose, and intended audience.
  • Misrepresentation and authenticity: Be cautious when editing images, particularly images of adults or children, as editing images using Imagen on Vertex AI can result in misrepresentation or manipulation of images, potentially leading to the creation of deceptive or misleading content. It's important to ensure that the editing process is used responsibly, without compromising the authenticity and truthfulness of the edited images. We've applied metadata labeling to AI-edited images to help combat the risk of misinformation and as part of our responsible approach to AI.

Visual captioning limitations

  • Accuracy and context sensitivity: Visual captioning may encounter challenges in accurately describing complex or ambiguous images. The generated descriptions may not always capture the complete context or nuances of the visual content. It is important to acknowledge that automated captioning systems have limitations in understanding images with varying levels of complexity, and their descriptions should be used with caution, particularly in critical or sensitive contexts.
  • Ambiguity and subjective interpretations: Images can often be open to multiple interpretations, and the generated captions may not always align with human understanding or expectations. Different individuals may perceive and describe images differently based on their subjective experiences and cultural backgrounds. It is crucial to consider the potential for ambiguity and subjectivity in image descriptions and provide additional context or alternative interpretations where necessary.
  • Accessibility considerations: While automated image captions can support accessibility by providing descriptions for visually impaired individuals, it is important to recognize that they may not fully replace human-generated alt text or descriptions tailored to specific accessibility needs. Automated captions may lack the level of detail or contextual understanding necessary for certain accessibility use cases.

Visual Question Answering (VQA) limitations

  • Overconfidence and uncertainty: VQA models may sometimes provide answers with unwarranted confidence, even when the correct answer is uncertain or ambiguous. It is essential to communicate the model's uncertainty and provide appropriate confidence scores or alternative answers when there is ambiguity, rather than conveying a false sense of certainty.

Recommended practices

To use this technology safely and responsibly, it is also important to consider other risks specific to your use case, users, and business context, in addition to the built-in technical safeguards.

We recommend taking the following steps:

  1. Assess your application's security risks.
  2. Consider adjustments to mitigate safety risks.
  3. Perform safety testing appropriate to your use case.
  4. Solicit user feedback and monitor content.

Additional Responsible AI resources

Give feedback on Imagen on Vertex AI

If you receive an output or response that is inaccurate or that you feel is unsafe, you can let us know by submitting feedback. Your feedback can help improve Imagen on Vertex AI and broader Google efforts in AI.

Because feedback may be human readable, don't submit data that containspersonal, confidential, or sensitive information.


Last updated 2026-02-19 UTC.