SafetyRating

class SafetyRating


This class is deprecated.
The Vertex AI in Firebase SDK (firebase-vertexai) has been replaced with the FirebaseAI SDK (firebase-ai) to accommodate the evolving set of supported features and services. For migration details, see the migration guide: https://firebase.google.com/docs/vertex-ai/migrate-to-latest-sdk

An assessment of the potential harm of some generated content.

The rating will be restricted to a particular category.

Summary

Public properties

blocked: Boolean?
Indicates whether the content was blocked due to safety concerns.

category: HarmCategory
The category of harm being assessed (e.g., Hate speech).

probability: HarmProbability
The likelihood of the content causing harm.

probabilityScore: Float
A numerical score representing the probability of harm, between 0 and 1.

severity: HarmSeverity?
The severity of the potential harm.

severityScore: Float?
A numerical score representing the severity of harm.

Public properties

blocked

val blocked: Boolean?

Indicates whether the content was blocked due to safety concerns.

category

val category: HarmCategory

The category of harm being assessed (e.g., Hate speech).

probability

val probability: HarmProbability

The likelihood of the content causing harm.

probabilityScore

val probabilityScore: Float

A numerical score representing the probability of harm, between 0 and 1.

severity

val severity: HarmSeverity?

The severity of the potential harm.

severityScore

val severityScore: Float?

A numerical score representing the severity of harm.
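As a rough illustration of how these properties fit together, the sketch below models the same shape with plain Kotlin types. The enum values and the `blockedCategories` helper are assumptions for illustration only; the real `HarmCategory`, `HarmProbability`, and `HarmSeverity` types live in the SDK package and may differ.

```kotlin
// Hypothetical stand-ins for the SDK's HarmCategory, HarmProbability,
// and HarmSeverity enums (the real enums belong to the firebase-ai SDK).
enum class HarmCategory { HARASSMENT, HATE_SPEECH, SEXUALLY_EXPLICIT, DANGEROUS_CONTENT }
enum class HarmProbability { NEGLIGIBLE, LOW, MEDIUM, HIGH }
enum class HarmSeverity { NEGLIGIBLE, LOW, MEDIUM, HIGH }

// Local mirror of SafetyRating's public properties.
data class SafetyRating(
    val blocked: Boolean?,
    val category: HarmCategory,
    val probability: HarmProbability,
    val probabilityScore: Float,
    val severity: HarmSeverity?,
    val severityScore: Float?,
)

// Hypothetical helper: collects the categories whose ratings caused a block.
// `blocked == true` also safely handles a null `blocked` value.
fun blockedCategories(ratings: List<SafetyRating>): List<HarmCategory> =
    ratings.filter { it.blocked == true }.map { it.category }

fun main() {
    val ratings = listOf(
        SafetyRating(false, HarmCategory.HARASSMENT, HarmProbability.NEGLIGIBLE, 0.02f, HarmSeverity.NEGLIGIBLE, 0.01f),
        SafetyRating(true, HarmCategory.HATE_SPEECH, HarmProbability.HIGH, 0.91f, HarmSeverity.MEDIUM, 0.64f),
    )
    println(blockedCategories(ratings)) // prints [HATE_SPEECH]
}
```

In typical use you would read ratings like these from a generated response rather than construct them yourself; the per-category scores let you apply stricter thresholds than the block decision alone.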

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-07-21 UTC.