SafetyRating
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct SafetyRating : Equatable, Hashable, Sendable

extension SafetyRating : Decodable

A type defining potentially harmful media categories and their model-assigned ratings. A value of this type may be assigned to a category for every model-generated response, not just responses that exceed a certain threshold.
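As a sketch of how these ratings might be inspected, assuming the FirebaseVertexAI module and that a GenerateContentResponse exposes candidates, each carrying a safetyRatings array (names other than SafetyRating itself are assumptions about the surrounding SDK):

```swift
import FirebaseVertexAI

// Hypothetical helper: prints every model-assigned rating on the first
// candidate of a response. `GenerateContentResponse`, `candidates`, and
// `safetyRatings` are assumed names from the surrounding SDK.
func logSafetyRatings(of response: GenerateContentResponse) {
    guard let candidate = response.candidates.first else { return }
    for rating in candidate.safetyRatings {
        print("\(rating.category): probability \(rating.probability), blocked: \(rating.blocked)")
    }
}
```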
-
The category describing the potential harm a piece of content may pose.
See HarmCategory for a list of possible values.
Declaration
Swift
public let category: HarmCategory
-
The model-generated probability that the content falls under the specified harm
category.
See HarmProbability for a list of possible values. This is a discretized representation of the probabilityScore.
Important
This does not indicate the severity of harm for a piece of content.
Declaration
Swift
public let probability: HarmProbability
-
The confidence score that the response is associated with the corresponding harm
category.
The probability safety score is a confidence score between 0.0 and 1.0, rounded to one decimal place; it is discretized into a HarmProbability in probability. See probability scores in the Google Cloud documentation for more details.
Declaration
Swift
public let probabilityScore: Float
-
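Because probabilityScore is continuous, it can support finer-grained filtering than the discretized probability. A minimal sketch, where the 0.5 default is an arbitrary illustrative cutoff, not an SDK-defined threshold:

```swift
// Returns the ratings whose confidence score meets or exceeds a cutoff.
// The 0.5 default is purely illustrative.
func ratings(_ ratings: [SafetyRating], atOrAbove cutoff: Float = 0.5) -> [SafetyRating] {
    ratings.filter { $0.probabilityScore >= cutoff }
}
```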
The severity reflects the magnitude of how harmful a model response might be.
See HarmSeverity for a list of possible values. This is a discretized representation of the severityScore.
Declaration
Swift
public let severity: HarmSeverity
-
The severity score is the magnitude of how harmful a model response might be.
The severity score ranges from 0.0 to 1.0, rounded to one decimal place; it is discretized into a HarmSeverity in severity. See severity scores in the Google Cloud documentation for more details.
Declaration
Swift
public let severityScore: Float
-
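The continuous severityScore can likewise be compared directly, for example to find the most severe rating among those returned. A sketch:

```swift
// Returns the rating with the highest severity score, if any.
func mostSevereRating(in ratings: [SafetyRating]) -> SafetyRating? {
    ratings.max { $0.severityScore < $1.severityScore }
}
```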
If true, the response was blocked.
Declaration
Swift
public let blocked: Bool
-
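One common use of blocked is collecting the categories that caused a response to be blocked. A sketch:

```swift
// Returns the harm categories of all ratings that blocked the response.
func blockedCategories(in ratings: [SafetyRating]) -> [HarmCategory] {
    ratings.filter(\.blocked).map(\.category)
}
```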
Initializes a new SafetyRating instance with the given category and probability. Use this initializer for SwiftUI previews or tests.
Declaration
Swift
public init(category: HarmCategory, probability: HarmProbability, probabilityScore: Float, severity: HarmSeverity, severityScore: Float, blocked: Bool)
-
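For example, a fixture for a SwiftUI preview or unit test might be built like this. The initializer signature is as declared above; the .harassment and .negligible case names are assumed examples, so consult HarmCategory, HarmProbability, and HarmSeverity for the actual values:

```swift
// A hand-built rating for previews/tests. The enum case names below are
// assumed examples; see HarmCategory, HarmProbability, and HarmSeverity
// for the real values.
let previewRating = SafetyRating(
    category: .harassment,
    probability: .negligible,
    probabilityScore: 0.1,
    severity: .negligible,
    severityScore: 0.0,
    blocked: false
)
```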
The probability that a given model output falls under a harmful content category.
Note
This does not indicate the severity of harm for a piece of content.
Declaration
Swift
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct HarmProbability : DecodableProtoEnum, Hashable, Sendable
-
The magnitude of how harmful a model response might be for the respective HarmCategory.
Declaration
Swift
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct HarmSeverity : DecodableProtoEnum, Hashable, Sendable
-
Creates a new instance by decoding from the given decoder.
Declaration
Swift
public init(from decoder: any Decoder) throws