Gemini image generation and responsible AI

To ensure a safe and responsible experience, Vertex AI's image generation capabilities are equipped with a multi-layered safety approach. This is designed to prevent the creation of inappropriate content, including sexually explicit, dangerous, violent, hateful, or toxic material.

All users must adhere to the Generative AI Prohibited Use Policy. This policy strictly forbids the generation of content that:

  • Relates to child sexual abuse or exploitation.
  • Facilitates violent extremism or terrorism.
  • Facilitates non-consensual intimate imagery.
  • Facilitates self-harm.
  • Is sexually explicit.
  • Constitutes hate speech.
  • Promotes harassment or bullying.

When provided with an unsafe prompt, the model might refuse to generate an image, or the prompt or generated response might be blocked by our safety filters. A sketch of how to check for these outcomes in code follows this list.

  • Model refusal: If a prompt is potentially unsafe, the model might refuse to process the request. If this happens, the model usually gives a text response saying it can't generate unsafe images. The FinishReason is STOP.
  • Safety filter blocking:
    • If the prompt is identified as potentially harmful by a safety filter, the API returns BlockedReason in PromptFeedback.
    • If the response is identified as potentially harmful by a safety filter, the API response includes a FinishReason of IMAGE_SAFETY, IMAGE_PROHIBITED_CONTENT, or similar.
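
The following snippet is a minimal sketch of how you might distinguish these outcomes, assuming the google-genai Python SDK with Vertex AI enabled. The project, location, model ID, and prompt are placeholders; substitute values for your environment.

    from google import genai
    from google.genai import types

    # Assumed setup: the google-genai SDK with Vertex AI enabled. The project,
    # location, model ID, and prompt below are placeholders.
    client = genai.Client(vertexai=True, project="your-project", location="us-central1")

    response = client.models.generate_content(
        model="gemini-2.0-flash-preview-image-generation",  # placeholder model ID
        contents="Your image prompt here",
        config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
    )

    # Case 1: the prompt was blocked by a safety filter (BlockedReason in PromptFeedback).
    if response.prompt_feedback and response.prompt_feedback.block_reason:
        print(f"Prompt blocked: {response.prompt_feedback.block_reason}")
    elif response.candidates:
        candidate = response.candidates[0]
        finish = str(candidate.finish_reason)
        # Case 2: the generated response was blocked by a safety filter
        # (FinishReason such as IMAGE_SAFETY or IMAGE_PROHIBITED_CONTENT).
        if "IMAGE_SAFETY" in finish or "PROHIBITED_CONTENT" in finish:
            print(f"Response blocked by safety filter: {finish}")
        else:
            # Case 3: model refusal. FinishReason is STOP and the response contains
            # text explaining that the model won't generate the image; otherwise
            # the parts contain the generated image data.
            for part in candidate.content.parts:
                if part.text:
                    print(part.text)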

Safety filter code categories

Depending on the safety filters you configure, your output may contain a safety reason code similar to the following:

    {      "raiFilteredReason": "ERROR_MESSAGE. Support codes: 56562880"    }

The code listed corresponds to a specific harmful category. These code-to-category mappings are as follows:

| Error code | Safety category | Description | Content filtered: prompt input or image output |
|---|---|---|---|
| 58061214, 17301594 | Child | Detects child content where it isn't allowed due to the API request settings or allowlisting. | input (prompt): 58061214; output (image): 17301594 |
| 29310472, 15236754 | Celebrity | Detects a photorealistic representation of a celebrity in the request. | input (prompt): 29310472; output (image): 15236754 |
| 62263041 | Dangerous content | Detects content that's potentially dangerous in nature. | input (prompt) |
| 57734940, 22137204 | Hate | Detects hate-related topics or content. | input (prompt): 57734940; output (image): 22137204 |
| 74803281, 29578790, 42876398 | Other | Detects other miscellaneous safety issues with the request. | input (prompt): 42876398; output (image): 29578790, 74803281 |
| 39322892 | People/Face | Detects a person or face when it isn't allowed due to the request safety settings. | output (image) |
| 92201652 | Personal information | Detects personally identifiable information (PII) in the text, such as the mention of a credit card number, home address, or other such information. | input (prompt) |
| 89371032, 49114662, 72817394 | Prohibited content | Detects a request for prohibited content in the request. | input (prompt): 89371032; output (image): 49114662, 72817394 |
| 90789179, 63429089, 43188360 | Sexual | Detects content that's sexual in nature. | input (prompt): 90789179; output (image): 63429089, 43188360 |
| 78610348 | Toxic | Detects toxic topics or content in the text. | input (prompt) |
| 61493863, 56562880 | Violence | Detects violence-related content from the image or text. | input (prompt): 61493863; output (image): 56562880 |
| 32635315 | Vulgar | Detects vulgar topics or content from the text. | input (prompt) |
| 64151117 | Celebrity or child | Detects a photorealistic representation of a celebrity or of a child that violates Google's safety policies. | input (prompt); output (image) |
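
If you need to handle these codes programmatically, one option is a small lookup table built from the mapping above. This is a hypothetical helper, not part of any Google SDK: the function name, the assumption that multiple comma-separated codes may appear in one message, and the category labels in parentheses are illustrative choices; the codes themselves come from the table above.

    import re

    # Hypothetical lookup table: support codes from the table above mapped to
    # their safety categories. Not part of any SDK.
    SUPPORT_CODE_CATEGORIES = {
        "58061214": "Child (prompt)",
        "17301594": "Child (image output)",
        "29310472": "Celebrity (prompt)",
        "15236754": "Celebrity (image output)",
        "62263041": "Dangerous content (prompt)",
        "57734940": "Hate (prompt)",
        "22137204": "Hate (image output)",
        "42876398": "Other (prompt)",
        "29578790": "Other (image output)",
        "74803281": "Other (image output)",
        "39322892": "People/Face (image output)",
        "92201652": "Personal information (prompt)",
        "89371032": "Prohibited content (prompt)",
        "49114662": "Prohibited content (image output)",
        "72817394": "Prohibited content (image output)",
        "90789179": "Sexual (prompt)",
        "63429089": "Sexual (image output)",
        "43188360": "Sexual (image output)",
        "78610348": "Toxic (prompt)",
        "61493863": "Violence (prompt)",
        "56562880": "Violence (image output)",
        "32635315": "Vulgar (prompt)",
        "64151117": "Celebrity or child (prompt or image output)",
    }

    def categorize_rai_filtered_reason(reason: str) -> list[str]:
        """Extract the 'Support codes' from a raiFilteredReason string and
        return the matching safety categories."""
        match = re.search(r"Support codes:\s*([\d,\s]+)", reason)
        if not match:
            return []
        codes = [c.strip() for c in match.group(1).split(",") if c.strip()]
        return [SUPPORT_CODE_CATEGORIES.get(c, f"Unknown code {c}") for c in codes]

    # Example using the sample response shown earlier on this page.
    print(categorize_rai_filtered_reason(
        "ERROR_MESSAGE. Support codes: 56562880"
    ))  # ['Violence (image output)']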

What's next?

See the following links for more information about Gemini image generation:
