SafetyRating

public final class SafetyRating


This class is deprecated.
The Vertex AI in Firebase SDK (firebase-vertexai) has been replaced with the FirebaseAI SDK (firebase-ai) to accommodate the evolving set of supported features and services. For migration details, see the migration guide: https://firebase.google.com/docs/vertex-ai/migrate-to-latest-sdk

An assessment of the potential harm of some generated content.

The rating will be restricted to a particular category.
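For illustration, the sketch below reads each of the public fields documented on this page. How the list of ratings is obtained from a response is an assumption and not part of this class; the import package path is likewise assumed from the firebase-vertexai SDK layout.

import java.util.List;
import com.google.firebase.vertexai.type.SafetyRating; // package path assumed

class SafetyRatingLogger {
    // Print each rating's documented public fields. Obtaining the list of
    // ratings from a generation response is assumed, not shown here.
    static void logRatings(List<SafetyRating> ratings) {
        for (SafetyRating rating : ratings) {
            // `blocked` is a nullable Boolean, so compare defensively.
            if (Boolean.TRUE.equals(rating.blocked)) {
                System.out.println("Content blocked for category: " + rating.category);
            }
            // `severity` and `severityScore` are nullable; printing null is harmless here.
            System.out.println(rating.category
                    + ": probability=" + rating.probability
                    + " (score " + rating.probabilityScore + ")"
                    + ", severity=" + rating.severity
                    + " (score " + rating.severityScore + ")");
        }
    }
}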

Summary

Public fields

final Boolean blocked

Indicates whether the content was blocked due to safety concerns.

final @NonNull HarmCategory category

The category of harm being assessed (e.g., Hate speech).

final @NonNull HarmProbability probability

The likelihood of the content causing harm.

final float probabilityScore

A numerical score representing the probability of harm, between 0 and 1.

final HarmSeverity severity

The severity of the potential harm.

final Float severityScore

A numerical score representing the severity of harm.

Public fields

blocked

public final Boolean blocked

Indicates whether the content was blocked due to safety concerns.

category

public final @NonNull HarmCategory category

The category of harm being assessed (e.g., Hate speech).

probability

public final @NonNull HarmProbability probability

The likelihood of the content causing harm.

probabilityScore

public final float probabilityScore

A numerical score representing the probability of harm, between 0 and 1.

severity

public final HarmSeverity severity

The severity of the potential harm.

severityScore

public final Float severityScore

A numerical score representing the severity of harm.
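As an illustrative pattern, the raw scores allow finer-grained filtering than the enum buckets. The helper below is not part of the SDK, and the choice of cutoff (e.g., 0.5f) is an arbitrary assumption:

import com.google.firebase.vertexai.type.SafetyRating; // package path assumed

class SafetyThresholds {
    // Illustrative helper, not part of the SDK: flag a rating whose raw
    // probability score meets a caller-chosen cutoff in [0, 1].
    static boolean exceedsThreshold(SafetyRating rating, float threshold) {
        return rating.probabilityScore >= threshold;
    }
}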
