get_content_moderation(**kwargs)
Gets the inappropriate, unwanted, or offensive content analysis results for an Amazon Rekognition Video analysis started by StartContentModeration. For a list of moderation labels in Amazon Rekognition, see Using the image and video moderation APIs.
Amazon Rekognition Video inappropriate or offensive content detection in a stored video is an asynchronous operation. You start analysis by calling StartContentModeration, which returns a job identifier (JobId). When analysis finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartContentModeration. To get the results of the content analysis, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetContentModeration and pass the job identifier (JobId) from the initial call to StartContentModeration.
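A minimal sketch of this workflow, assuming the video has already been uploaded to an S3 bucket (the bucket and object names below are placeholders) and polling the job status directly rather than subscribing to the Amazon SNS topic:

import time

import boto3

rekognition = boto3.client('rekognition')

# Start the asynchronous analysis of a stored video.
# 'my-bucket' and 'videos/sample.mp4' are placeholder names.
start_response = rekognition.start_content_moderation(
    Video={'S3Object': {'Bucket': 'my-bucket', 'Name': 'videos/sample.mp4'}}
)
job_id = start_response['JobId']

# Poll until the job leaves the IN_PROGRESS state. In production you would
# normally react to the completion status published to the SNS topic instead.
while True:
    result = rekognition.get_content_moderation(JobId=job_id)
    if result['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(10)

if result['JobStatus'] == 'SUCCEEDED':
    for detection in result['ModerationLabels']:
        print(detection['Timestamp'], detection['ModerationLabel']['Name'])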
For more information, see Working with Stored Videos in the Amazon Rekognition Developer Guide.
GetContentModeration returns detected inappropriate, unwanted, or offensive content moderation labels, and the time they are detected, in an array, ModerationLabels, of ContentModerationDetection objects.
By default, the moderated labels are returned sorted by time, in milliseconds from the start of the video. You can also sort them by moderated label by specifying NAME for the SortBy input parameter.
Since video analysis can return a large number of results, use the MaxResults parameter to limit the number of labels returned in a single call to GetContentModeration. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetContentModeration and populate the NextToken request parameter with the value of NextToken returned from the previous call to GetContentModeration.
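A sketch of that pagination pattern (the JobId value is a placeholder for the identifier returned by StartContentModeration), requesting up to 100 labels per call and grouping them alphabetically with SortBy='NAME':

import boto3

rekognition = boto3.client('rekognition')
job_id = 'job-id-from-start-content-moderation'  # placeholder JobId

labels = []
next_token = None
while True:
    kwargs = {'JobId': job_id, 'MaxResults': 100, 'SortBy': 'NAME'}
    if next_token:
        kwargs['NextToken'] = next_token
    response = rekognition.get_content_moderation(**kwargs)
    labels.extend(response['ModerationLabels'])
    next_token = response.get('NextToken')
    if not next_token:
        break

print(f"Collected {len(labels)} moderation label detections")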
For more information, see moderating content in the Amazon Rekognition Developer Guide.
See also: AWS API Documentation
Request Syntax
response = client.get_content_moderation(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='NAME'|'TIMESTAMP'
)
Parameters
JobId (string) --
[REQUIRED]
The identifier for the inappropriate, unwanted, or offensive content moderation job. Use JobId to identify the job in a subsequent call to GetContentModeration.
MaxResults (integer) --
Maximum number of results to return per paginated call to GetContentModeration.
NextToken (string) --
If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of content moderation labels.
SortBy (string) --
Sort to use for elements in the ModerationLabelDetections array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
Return type
dict
Response Syntax
{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'ColorRange': 'FULL'|'LIMITED'
    },
    'ModerationLabels': [
        {
            'Timestamp': 123,
            'ModerationLabel': {
                'Confidence': ...,
                'Name': 'string',
                'ParentName': 'string'
            }
        },
    ],
    'NextToken': 'string',
    'ModerationModelVersion': 'string'
}
Response Structure
(dict) --
JobStatus (string) --
The current status of the content moderation analysis job.
StatusMessage (string) --
If the job fails, StatusMessage provides a descriptive error message.
VideoMetadata (dict) --
Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from GetContentModeration.
Codec (string) --
Type of compression used in the analyzed video.
DurationMillis (integer) --
Length of the video in milliseconds.
Format (string) --
Format of the analyzed video. Possible values are MP4, MOV and AVI.
FrameRate (float) --
Number of frames per second in the video.
FrameHeight (integer) --
Vertical pixel dimension of the video.
FrameWidth (integer) --
Horizontal pixel dimension of the video.
ColorRange (string) --
A description of the range of luminance values in a video, either LIMITED (16 to 235) or FULL (0 to 255).
ModerationLabels (list) --
The detected inappropriate, unwanted, or offensive content moderation labels and the time(s) they were detected.
(dict) --
Information about an inappropriate, unwanted, or offensive content label detection in a stored video.
Timestamp (integer) --
Time, in milliseconds from the beginning of the video, that the content moderation label was detected. Note that Timestamp is not guaranteed to be accurate to the individual frame where the moderated content first appears.
ModerationLabel (dict) --
The content moderation label detected in the stored video.
Confidence (float) --
Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
If you don't specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.
Name (string) --
The label name for the type of unsafe content detected in the image.
ParentName (string) --
The name for the parent label. Labels at the top level of the hierarchy have the parent label ""
.
NextToken (string) --
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of content moderation labels.
ModerationModelVersion (string) --
Version number of the moderation detection model that was used to detect inappropriate, unwanted, or offensive content.
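As an illustration, a short sketch that walks the response structure above (response is assumed to hold the dict returned by get_content_moderation) and prints the video metadata and each detection:

# response is assumed to be the dict returned by get_content_moderation.
metadata = response['VideoMetadata']
print(f"Model version: {response['ModerationModelVersion']}")
print(f"Video: {metadata['DurationMillis']} ms, "
      f"{metadata['FrameWidth']}x{metadata['FrameHeight']} at {metadata['FrameRate']} fps")

for detection in response['ModerationLabels']:
    label = detection['ModerationLabel']
    parent = label['ParentName'] or '(top level)'  # top-level labels have ParentName ""
    print(f"{detection['Timestamp']} ms: {label['Name']} "
          f"(parent: {parent}, confidence: {label['Confidence']:.1f}%)")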
Exceptions
Rekognition.Client.exceptions.AccessDeniedException
Rekognition.Client.exceptions.InternalServerError
Rekognition.Client.exceptions.InvalidParameterException
Rekognition.Client.exceptions.InvalidPaginationTokenException
Rekognition.Client.exceptions.ProvisionedThroughputExceededException
Rekognition.Client.exceptions.ResourceNotFoundException
Rekognition.Client.exceptions.ThrottlingException
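These exception classes are available on the client object; a minimal sketch of catching the ones most likely to arise when retrieving results (the JobId value is a placeholder):

import boto3

rekognition = boto3.client('rekognition')
job_id = 'job-id-from-start-content-moderation'  # placeholder JobId

try:
    response = rekognition.get_content_moderation(JobId=job_id)
except rekognition.exceptions.ResourceNotFoundException:
    print("No content moderation job found for this JobId")
except rekognition.exceptions.InvalidPaginationTokenException:
    print("The NextToken value is invalid or has expired; restart pagination")
except rekognition.exceptions.ThrottlingException:
    print("Request rate exceeded; retry with backoff")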