# FirebaseAI Framework Reference

SafetyRating
============

    @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
    public struct SafetyRating : Equatable, Hashable, Sendable

    extension SafetyRating: Decodable

A type defining potentially harmful media categories and their model-assigned ratings. A value
of this type may be assigned to a category for every model-generated response, not just
responses that exceed a certain threshold.

### [category](#/s:10FirebaseAI12SafetyRatingV8categoryAA12HarmCategoryVvp)

The category describing the potential harm a piece of content may pose.

See [HarmCategory](../Structs/HarmCategory.html) for a list of possible values.

#### Declaration

Swift

    public let category: HarmCategory

### [probability](#/s:10FirebaseAI12SafetyRatingV11probabilityAC15HarmProbabilityVvp)

The model-generated probability that the content falls under the specified harm [category](../Structs/SafetyRating.html#/s:10FirebaseAI12SafetyRatingV8categoryAA12HarmCategoryVvp).

See [HarmProbability](../Structs/SafetyRating/HarmProbability.html) for a list of possible values. This is a discretized representation
of the [probabilityScore](../Structs/SafetyRating.html#/s:10FirebaseAI12SafetyRatingV16probabilityScoreSfvp).

Important: This does not indicate the severity of harm for a piece of content.

#### Declaration

Swift

    public let probability: HarmProbability

### [probabilityScore](#/s:10FirebaseAI12SafetyRatingV16probabilityScoreSfvp)

The confidence score that the response is associated with the corresponding harm [category](../Structs/SafetyRating.html#/s:10FirebaseAI12SafetyRatingV8categoryAA12HarmCategoryVvp).

The probability score is a confidence value between 0.0 and 1.0, rounded to one decimal
place; it is discretized into a [HarmProbability](../Structs/SafetyRating/HarmProbability.html) in [probability](../Structs/SafetyRating.html#/s:10FirebaseAI12SafetyRatingV11probabilityAC15HarmProbabilityVvp). See [probability
scores](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters#comparison_of_probability_scores_and_severity_scores)
in the Google Cloud documentation for more details.

#### Declaration

Swift

    public let probabilityScore: Float

### [severity](#/s:10FirebaseAI12SafetyRatingV8severityAC12HarmSeverityVvp)

The severity reflects the magnitude of how harmful a model response might be.

See [HarmSeverity](../Structs/SafetyRating/HarmSeverity.html) for a list of possible values. This is a discretized representation of
the [severityScore](../Structs/SafetyRating.html#/s:10FirebaseAI12SafetyRatingV13severityScoreSfvp).

#### Declaration

Swift

    public let severity: HarmSeverity

### [severityScore](#/s:10FirebaseAI12SafetyRatingV13severityScoreSfvp)

The severity score is the magnitude of how harmful a model response might be.

The severity score ranges from 0.0 to 1.0, rounded to one decimal place; it is discretized
into a [HarmSeverity](../Structs/SafetyRating/HarmSeverity.html) in [severity](../Structs/SafetyRating.html#/s:10FirebaseAI12SafetyRatingV8severityAC12HarmSeverityVvp). See [severity scores](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters#comparison_of_probability_scores_and_severity_scores)
in the Google Cloud documentation for more details.

#### Declaration

Swift

    public let severityScore: Float

### [blocked](#/s:10FirebaseAI12SafetyRatingV7blockedSbvp)

If `true`, the response was blocked.

#### Declaration

Swift

    public let blocked: Bool

### [init(category:probability:probabilityScore:severity:severityScore:blocked:)](#/s:10FirebaseAI12SafetyRatingV8category11probability0F5Score8severity0hG07blockedAcA12HarmCategoryV_AC0J11ProbabilityVSfAC0J8SeverityVSfSbtcfc)

Initializes a new `SafetyRating` instance with the given category, probability, severity, scores, and blocked status.
Use this initializer for SwiftUI previews or tests.

#### Declaration

Swift

    public init(category: HarmCategory,
                probability: HarmProbability,
                probabilityScore: Float,
                severity: HarmSeverity,
                severityScore: Float,
                blocked: Bool)

### [HarmProbability](../Structs/SafetyRating/HarmProbability.html)

The probability that a given model output falls under a harmful content category.

Note: This does not indicate the severity of harm for a piece of content.

#### Declaration

Swift

    @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
    public struct HarmProbability : DecodableProtoEnum, Hashable, Sendable

### [HarmSeverity](../Structs/SafetyRating/HarmSeverity.html)

The magnitude of how harmful a model response might be for the respective [HarmCategory](../Structs/HarmCategory.html).

#### Declaration

Swift

    @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
    public struct HarmSeverity : DecodableProtoEnum, Hashable, Sendable

Codable Conformances
--------------------

### [init(from:)](#/s:Se4fromxs7Decoder_p_tKcfc)

#### Declaration

Swift

    public init(from decoder: any Decoder) throws

Last updated 2025-05-20 UTC.
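As a sketch of how these pieces fit together, the initializer above (intended for SwiftUI previews and tests) can be used to build a rating by hand and inspect it. The specific enum cases shown (`.harassment`, `.medium`, `.low`) are assumptions for illustration; check `HarmCategory`, `HarmProbability`, and `HarmSeverity` for the values available in your SDK version.

```swift
import FirebaseAI

// Hand-rolled rating for a preview or test.
// Case names (.harassment, .medium, .low) are assumed for illustration.
let rating = SafetyRating(
    category: .harassment,
    probability: .medium,   // discretized from probabilityScore
    probabilityScore: 0.4,  // confidence in [0.0, 1.0], one decimal place
    severity: .low,         // discretized from severityScore
    severityScore: 0.2,     // magnitude in [0.0, 1.0], one decimal place
    blocked: false
)

// A response is withheld only when `blocked` is true; a nonzero
// probability score alone does not mean the content was blocked.
if rating.blocked {
    print("Blocked for category: \(rating.category)")
}
```

Note how the two scores are independent: `probabilityScore` is the model's confidence that the content belongs to the category, while `severityScore` estimates how harmful it would be if it does.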