FirebaseAILogic Framework Reference
SafetyRating
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct SafetyRating: Equatable, Hashable, Sendable
extension SafetyRating: Decodable

A type defining potentially harmful media categories and their model-assigned ratings. A value of this type may be assigned to a category for every model-generated response, not just responses that exceed a certain threshold.
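For context on where these ratings appear, the sketch below reads the SafetyRating values attached to a generated response. It assumes the FirebaseAILogic generateContent flow; the entry points used here (FirebaseAI.firebaseAI(), generativeModel(modelName:), and the candidates/safetyRatings access path) may differ across SDK versions, so treat them as assumptions rather than a fixed contract.

Swift
import FirebaseAI

func logSafetyRatings() async throws {
  // Assumed entry points; adjust if your SDK version exposes them differently.
  let model = FirebaseAI.firebaseAI().generativeModel(modelName: "gemini-2.0-flash")
  let response = try await model.generateContent("Tell me a story about a brave dog.")

  // A rating may be present for every category, not just those over a threshold.
  for rating in response.candidates.first?.safetyRatings ?? [] {
    print("\(rating.category): probability=\(rating.probability), blocked=\(rating.blocked)")
  }
}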
category

The category describing the potential harm a piece of content may pose. See HarmCategory for a list of possible values.

Declaration
Swift
public let category: HarmCategory

probability

The model-generated probability that the content falls under the specified harm category.
See HarmProbability for a list of possible values. This is a discretized representation of the probabilityScore.

Important: This does not indicate the severity of harm for a piece of content.

Declaration
Swift
public let probability: HarmProbability

probabilityScore

The confidence score that the response is associated with the corresponding harm category.
The probability safety score is a confidence score between 0.0 and 1.0, rounded to one decimal place; it is discretized into a HarmProbability in probability. See probability scores in the Google Cloud documentation for more details.

Declaration
Swift
public let probabilityScore: Float
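To make the relationship between the raw score and its discretized form concrete, here is a minimal sketch that surfaces any rating whose probabilityScore crosses a caller-chosen cutoff. The 0.8 threshold is purely illustrative, not an SDK default.

Swift
// Flag ratings whose raw confidence meets or exceeds a caller-chosen cutoff.
// The 0.8 default is illustrative only; choose a threshold for your use case.
func highConfidenceRatings(in ratings: [SafetyRating], cutoff: Float = 0.8) -> [SafetyRating] {
  ratings.filter { $0.probabilityScore >= cutoff }
}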
severity

The severity reflects the magnitude of how harmful a model response might be. See HarmSeverity for a list of possible values. This is a discretized representation of the severityScore.

Declaration
Swift
public let severity: HarmSeverity
severityScore

The severity score is the magnitude of how harmful a model response might be. It ranges from 0.0 to 1.0, rounded to one decimal place, and is discretized into a HarmSeverity in severity. See severity scores in the Google Cloud documentation for more details.

Declaration
Swift
public let severityScore: Float
blocked

If true, the response was blocked.

Declaration
Swift
public let blocked: Bool
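As a sketch of how blocked might be consumed, assuming you already hold the safetyRatings array from a response candidate:

Swift
// Returns the harm categories whose ratings caused the response to be blocked.
func blockedCategories(in ratings: [SafetyRating]) -> [HarmCategory] {
  ratings.filter(\.blocked).map(\.category)
}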
init(category:probability:probabilityScore:severity:severityScore:blocked:)

Initializes a new SafetyRating instance with the given category and probability. Use this initializer for SwiftUI previews or tests.

Declaration
Swift
public init(category: HarmCategory, probability: HarmProbability, probabilityScore: Float, severity: HarmSeverity, severityScore: Float, blocked: Bool)
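Since this initializer is intended for SwiftUI previews and tests, a fixture sketch follows. The member values used (.harassment, .negligible) are assumed names for HarmCategory, HarmProbability, and HarmSeverity constants; substitute whichever values your SDK version exposes.

Swift
// A hand-built rating for previews/tests; no live model response required.
// .harassment and .negligible are assumed constant names.
let sampleRating = SafetyRating(
  category: .harassment,
  probability: .negligible,
  probabilityScore: 0.1,
  severity: .negligible,
  severityScore: 0.0,
  blocked: false
)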
HarmProbability

The probability that a given model output falls under a harmful content category.

Note: This does not indicate the severity of harm for a piece of content.

Declaration
Swift
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct HarmProbability: DecodableProtoEnum, Hashable, Sendable
HarmSeverity

The magnitude of how harmful a model response might be for the respective HarmCategory.

Declaration
Swift
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct HarmSeverity: DecodableProtoEnum, Hashable, Sendable
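Because HarmProbability and HarmSeverity are struct-backed proto enums rather than Swift enums, you compare them with == instead of switching exhaustively. A sketch, assuming .medium and .high exist as constants on both types:

Swift
// Treat a rating as actionable when either discretized dimension is elevated.
// .medium and .high are assumed constant names on HarmProbability and HarmSeverity.
func isElevated(_ rating: SafetyRating) -> Bool {
  let probabilityElevated = rating.probability == .medium || rating.probability == .high
  let severityElevated = rating.severity == .medium || rating.severity == .high
  return probabilityElevated || severityElevated
}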
init(from:)

Declaration
Swift
public init(from decoder: any Decoder) throws
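Since SafetyRating conforms to Decodable, a rating can be decoded straight from JSON. The payload below is hypothetical: the field names mirror the Swift property names and the enum strings follow Vertex AI REST conventions, both assumptions rather than a documented wire contract.

Swift
import Foundation

// Hypothetical payload; field names and enum strings are assumed, not documented here.
let json = Data("""
{
  "category": "HARM_CATEGORY_HARASSMENT",
  "probability": "NEGLIGIBLE",
  "probabilityScore": 0.1,
  "severity": "HARM_SEVERITY_NEGLIGIBLE",
  "severityScore": 0.0,
  "blocked": false
}
""".utf8)

let rating = try JSONDecoder().decode(SafetyRating.self, from: json)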