Firebase AI Logic Framework Reference

SafetyRating

@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct SafetyRating: Equatable, Hashable, Sendable

extension SafetyRating: Decodable

A type defining potentially harmful media categories and their model-assigned ratings. A value of this type may be assigned to a category for every model-generated response, not just responses that exceed a certain threshold.
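
As a brief, hedged sketch of where these ratings show up in practice, the example below reads the safety ratings attached to a response candidate. The model name is illustrative, and the exact entry point and property names reflect typical Firebase AI Logic usage rather than anything stated on this page; adapt them to your setup.

Swift

import FirebaseAI

// A minimal sketch, assuming FirebaseApp.configure() has already run.
// The model name "gemini-2.0-flash" is illustrative.
func inspectSafetyRatings() async throws {
  let model = FirebaseAI.firebaseAI().generativeModel(modelName: "gemini-2.0-flash")
  let response = try await model.generateContent("Tell me a story about a brave dog.")

  // Each candidate may carry one SafetyRating per evaluated harm category.
  for rating in response.candidates.first?.safetyRatings ?? [] {
    print("\(rating.category): \(rating.probability), score \(rating.probabilityScore)")
    if rating.blocked {
      print("This response was blocked for the category above.")
    }
  }
}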

  • The category describing the potential harm a piece of content may pose.

    See HarmCategory for a list of possible values.

    Declaration

    Swift

    public let category: HarmCategory
  • The model-generated probability that the content falls under the specified harm category.

    See HarmProbability for a list of possible values. This is a discretized representation of the probabilityScore.

    Important

    This does not indicate the severity of harm for a piece of content.

    Declaration

    Swift

    public let probability: HarmProbability
  • The confidence score that the response is associated with the corresponding harm category.

    The probability safety score is a confidence score between 0.0 and 1.0, rounded to one decimal place; it is discretized into a HarmProbability in probability. See probability scores in the Google Cloud documentation for more details, and the interpretation sketch after this list.

    Declaration

    Swift

    public let probabilityScore: Float
  • The severity reflects the magnitude of how harmful a model response might be.

    See HarmSeverity for a list of possible values. This is a discretized representation of the severityScore.

    Declaration

    Swift

    public let severity: HarmSeverity
  • The severity score is the magnitude of how harmful a model response might be.

    The severity score ranges from 0.0 to 1.0, rounded to one decimal place; it is discretized into a HarmSeverity in severity. See severity scores in the Google Cloud documentation for more details.

    Declaration

    Swift

    public let severityScore: Float
  • If true, the response was blocked.

    Declaration

    Swift

    public let blocked: Bool
  • Initializes a new SafetyRating instance with the given category, probability, severity, and scores. Use this initializer for SwiftUI previews or tests; a sample construction appears after this list.

    Declaration

    Swift

    public init(category: HarmCategory, probability: HarmProbability, probabilityScore: Float, severity: HarmSeverity, severityScore: Float, blocked: Bool)
  • The probability that a given model output falls under a harmful content category.

    Note

    This does not indicate the severity of harm for a piece of content.

    Declaration

    Swift

    @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
    public struct HarmProbability: DecodableProtoEnum, Hashable, Sendable
  • The magnitude of how harmful a model response might be for the respectiveHarmCategory.

    Declaration

    Swift

    @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
    public struct HarmSeverity: DecodableProtoEnum, Hashable, Sendable
  • Declaration

    Swift

    public init(from decoder: any Decoder) throws
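
Because probabilityScore and severityScore both range from 0.0 to 1.0, a caller can layer its own thresholds on top of the discretized enums. The sketch below is one illustrative policy, not an SDK recommendation; the 0.5 cutoff is an arbitrary assumption.

Swift

import FirebaseAI

// An illustrative policy: flag a rating for review when it was blocked, or
// when either score (each in the 0.0...1.0 range) crosses an arbitrary 0.5
// threshold. The cutoff is an assumption, not an SDK recommendation.
func needsReview(_ rating: SafetyRating) -> Bool {
  rating.blocked || rating.probabilityScore >= 0.5 || rating.severityScore >= 0.5
}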
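
For SwiftUI previews and unit tests, the public initializer lets you build fixture values without calling a model. The specific enum values below (.dangerousContent, .medium, .low) are assumed members of HarmCategory, HarmProbability, and HarmSeverity; substitute whichever values your scenario needs.

Swift

import FirebaseAI

// A hand-built fixture for a preview or test. The enum values shown are
// assumptions; the score values are arbitrary sample data.
let sampleRating = SafetyRating(
  category: .dangerousContent,
  probability: .medium,
  probabilityScore: 0.6,
  severity: .low,
  severityScore: 0.2,
  blocked: false
)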
