
detect_moderation_labels

Rekognition.Client.detect_moderation_labels(**kwargs)

Detects unsafe content in a specified JPEG or PNG format image. Use DetectModerationLabels to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.

To filter images, use the labels returned by DetectModerationLabels to determine which types of content are appropriate.

For information about moderation labels, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.

You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

You can specify an adapter to use when retrieving label predictions by providing a ProjectVersionArn to the ProjectVersion argument.
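
The following is a minimal usage sketch of calling this operation with boto3. The bucket name, object key, local file name, and MinConfidence value are placeholders chosen only for illustration.

import boto3

client = boto3.client('rekognition')

# Option 1: reference an image stored in an Amazon S3 bucket (placeholder names).
response = client.detect_moderation_labels(
    Image={'S3Object': {'Bucket': 'amzn-s3-demo-bucket', 'Name': 'photos/input.jpg'}},
    MinConfidence=60
)

# Option 2: pass the image bytes directly; with boto3 you do not base64-encode them yourself.
with open('input.jpg', 'rb') as image_file:
    response = client.detect_moderation_labels(Image={'Bytes': image_file.read()})

for label in response['ModerationLabels']:
    print(label['Name'], label['ParentName'], label['Confidence'])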

See also: AWS API Documentation

Request Syntax

response = client.detect_moderation_labels(
    Image={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    MinConfidence=...,
    HumanLoopConfig={
        'HumanLoopName': 'string',
        'FlowDefinitionArn': 'string',
        'DataAttributes': {
            'ContentClassifiers': [
                'FreeOfPersonallyIdentifiableInformation'|'FreeOfAdultContent',
            ]
        }
    },
    ProjectVersion='string'
)
Parameters:
  • Image (dict) –

    [REQUIRED]

    The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

    If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

    • Bytes (bytes) –

      Blob of image bytes up to 5 MB. Note that the maximum image size you can pass to DetectCustomLabels is 4 MB.

    • S3Object (dict) –

      Identifies an S3 object as the image source.

      • Bucket (string) –

        Name of the S3 bucket.

      • Name (string) –

        S3 object key name.

      • Version (string) –

        If the bucket has versioning enabled, you can specify the object version.

  • MinConfidence (float) –

    Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with a confidence level lower than this specified value.

    If you don’t specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent. A usage sketch combining this parameter with HumanLoopConfig and ProjectVersion follows this parameter list.

  • HumanLoopConfig (dict) –

    Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.

    • HumanLoopName (string) – [REQUIRED]

      The name of the human review loop used for this image. This name should be kept unique within a Region.

    • FlowDefinitionArn (string) – [REQUIRED]

      The Amazon Resource Name (ARN) of the flow definition. You can create a flow definition by using the Amazon SageMaker CreateFlowDefinition operation.

    • DataAttributes (dict) –

      Sets attributes of the input data.

      • ContentClassifiers (list) –

        Sets whether the input image is free of personally identifiable information or adult content.

        • (string) –

  • ProjectVersion (string) – Identifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
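
As referenced above, the following sketch combines MinConfidence, HumanLoopConfig, and ProjectVersion in one request. It reuses the client from the earlier sketch; the human loop name and all ARNs are placeholder values, not real resources.

response = client.detect_moderation_labels(
    Image={'S3Object': {'Bucket': 'amzn-s3-demo-bucket', 'Name': 'photos/input.jpg'}},
    # Return only labels detected with at least 75 percent confidence.
    MinConfidence=75,
    # Optionally route results to an Amazon A2I human review workflow (placeholder name and ARN).
    HumanLoopConfig={
        'HumanLoopName': 'moderation-review-loop-001',
        'FlowDefinitionArn': 'arn:aws:sagemaker:us-east-1:111122223333:flow-definition/example-flow',
        'DataAttributes': {
            'ContentClassifiers': ['FreeOfPersonallyIdentifiableInformation']
        }
    },
    # Optionally use a custom moderation adapter (placeholder ProjectVersionArn).
    ProjectVersion='arn:aws:rekognition:us-east-1:111122223333:project/example-adapter/version/example-version/1234567890123'
)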

Return type:

dict

Returns:

Response Syntax

{
    'ModerationLabels': [
        {
            'Confidence': ...,
            'Name': 'string',
            'ParentName': 'string',
            'TaxonomyLevel': 123
        },
    ],
    'ModerationModelVersion': 'string',
    'HumanLoopActivationOutput': {
        'HumanLoopArn': 'string',
        'HumanLoopActivationReasons': [
            'string',
        ],
        'HumanLoopActivationConditionsEvaluationResults': 'string'
    },
    'ProjectVersion': 'string',
    'ContentTypes': [
        {
            'Confidence': ...,
            'Name': 'string'
        },
    ]
}

Response Structure

  • (dict) –

    • ModerationLabels (list) –

      Array of detected Moderation labels. For video operations, this includes the time, in milliseconds from the start of the video, at which the label was detected. A short sketch of walking these fields follows the response structure.

      • (dict) –

        Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.

        • Confidence (float) –

          Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.

          If you don’t specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.

        • Name (string) –

          The label name for the type of unsafe content detected in the image.

        • ParentName (string) –

          The name for the parent label. Labels at the top level of the hierarchy have the parent label "".

        • TaxonomyLevel (integer) –

          The level of the moderation label with regard to its taxonomy, from 1 to 3.

    • ModerationModelVersion (string) –

      Version number of the base moderation detection model that was used to detect unsafe content.

    • HumanLoopActivationOutput (dict) –

      Shows the results of the human-in-the-loop evaluation.

      • HumanLoopArn (string) –

        The Amazon Resource Name (ARN) of the HumanLoop created.

      • HumanLoopActivationReasons (list) –

        Shows if and why human review was needed.

        • (string) –

      • HumanLoopActivationConditionsEvaluationResults (string) –

        Shows the result of condition evaluations, including those conditions which activated a human review.

    • ProjectVersion (string) –

      Identifier of the custom adapter that was used during inference. If the adapter was EXPIRED at the time of inference, this parameter is not returned, indicating that a base moderation detection project version was used.

    • ContentTypes (list) –

      A list of predicted results for the type of content an image contains. For example, the image content might be from animation, sports, or a video game.

      • (dict) –

        Contains information regarding the confidence and name of a detected content type.

        • Confidence (float) –

          The confidence level of the detected content type label.

        • Name (string) –

          The name of the detected content type label.
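
As referenced above, a short sketch of walking the response. It assumes the response variable from the earlier sketches; field access follows the structure described in this section.

labels = response.get('ModerationLabels', [])

# Top-level categories (TaxonomyLevel 1) have an empty ParentName.
top_level = [label['Name'] for label in labels if label.get('ParentName') == '']
print('Top-level categories:', top_level)

# Print second- and third-level labels with their parent category.
for label in labels:
    if label.get('ParentName'):
        print(f"{label['ParentName']} > {label['Name']} ({label['Confidence']:.1f}%)")

# Predicted content types for the image, if any were returned.
for content_type in response.get('ContentTypes', []):
    print(content_type['Name'], content_type['Confidence'])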

Exceptions

  • Rekognition.Client.exceptions.InvalidS3ObjectException

  • Rekognition.Client.exceptions.InvalidParameterException

  • Rekognition.Client.exceptions.ImageTooLargeException

  • Rekognition.Client.exceptions.AccessDeniedException

  • Rekognition.Client.exceptions.InternalServerError

  • Rekognition.Client.exceptions.ThrottlingException

  • Rekognition.Client.exceptions.ProvisionedThroughputExceededException

  • Rekognition.Client.exceptions.InvalidImageFormatException

  • Rekognition.Client.exceptions.HumanLoopQuotaExceededException

  • Rekognition.Client.exceptions.ResourceNotFoundException

  • Rekognition.Client.exceptions.ResourceNotReadyException
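
A sketch of handling a few of these modeled exceptions with boto3, reusing the client from the earlier sketches; the S3 bucket and key are placeholders.

try:
    response = client.detect_moderation_labels(
        Image={'S3Object': {'Bucket': 'amzn-s3-demo-bucket', 'Name': 'photos/input.jpg'}}
    )
except client.exceptions.InvalidImageFormatException:
    print('The image is not a supported JPEG or PNG file.')
except client.exceptions.ImageTooLargeException:
    print('The image exceeds the size limit for this operation.')
except client.exceptions.InvalidS3ObjectException as error:
    print('The S3 object could not be accessed:', error)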