get_face_liveness_session_results
- Rekognition.Client.get_face_liveness_session_results(**kwargs)
Retrieves the results of a specific Face Liveness session. It requires the sessionId as input, which was created using CreateFaceLivenessSession. Returns the corresponding Face Liveness confidence score, a reference image that includes a face bounding box, and audit images that also contain face bounding boxes. The Face Liveness confidence score ranges from 0 to 100.
The number of audit images returned by GetFaceLivenessSessionResults is defined by the AuditImagesLimit parameter when calling CreateFaceLivenessSession. Reference images are always returned when possible.
See also: AWS API Documentation
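A minimal usage sketch (not part of the AWS documentation) is shown below. It assumes a session was created earlier with CreateFaceLivenessSession and that the client-side Face Liveness check has already completed; the session ID and the confidence threshold of 90 are illustrative placeholders.

import boto3

rekognition = boto3.client("rekognition")

# The session ID returned by an earlier create_face_liveness_session call
# (placeholder value shown here).
session_id = "11111111-2222-3333-4444-555555555555"

response = rekognition.get_face_liveness_session_results(SessionId=session_id)

print("Status:", response["Status"])
if response["Status"] == "SUCCEEDED":
    # Confidence ranges from 0 to 100; 90 is an illustrative threshold,
    # not a service default.
    print("Live:", response.get("Confidence", 0.0) >= 90.0)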
Request Syntax
response = client.get_face_liveness_session_results(
    SessionId='string'
)
- Parameters:
SessionId (string) –
[REQUIRED]
A unique 128-bit UUID. This is used to uniquely identify the session and also acts as an idempotency token for all operations associated with the session.
- Return type:
dict
- Returns:
Response Syntax
{
    'SessionId': 'string',
    'Status': 'CREATED'|'IN_PROGRESS'|'SUCCEEDED'|'FAILED'|'EXPIRED',
    'Confidence': ...,
    'ReferenceImage': {
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        },
        'BoundingBox': {
            'Width': ...,
            'Height': ...,
            'Left': ...,
            'Top': ...
        }
    },
    'AuditImages': [
        {
            'Bytes': b'bytes',
            'S3Object': {
                'Bucket': 'string',
                'Name': 'string',
                'Version': 'string'
            },
            'BoundingBox': {
                'Width': ...,
                'Height': ...,
                'Left': ...,
                'Top': ...
            }
        },
    ]
}
Response Structure
(dict) –
SessionId (string) –
The sessionId for which this request was called.
Status (string) –
Represents a status corresponding to the state of the session. Possible statuses are: CREATED, IN_PROGRESS, SUCCEEDED, FAILED, EXPIRED.
Confidence (float) –
Probabilistic confidence score indicating whether the person in the given video was live, represented as a float value between 0 and 100.
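As an illustration of how an application might act on the Status and Confidence fields, the sketch below maps each status value to an application-level outcome; the retry policy and the 90.0 threshold are assumptions, not service behaviour.

def evaluate_liveness(result, min_confidence=90.0):
    """Map GetFaceLivenessSessionResults output to an application decision."""
    status = result["Status"]
    if status in ("CREATED", "IN_PROGRESS"):
        return "pending"   # the client-side check has not finished yet
    if status in ("FAILED", "EXPIRED"):
        return "retry"     # start a new session and run the check again
    # Status is SUCCEEDED: compare the score against the caller's threshold.
    return "live" if result.get("Confidence", 0.0) >= min_confidence else "not-live"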
ReferenceImage (dict) –
A high-quality image from the Face Liveness video that can be used for face comparison or search. It includes a bounding box of the face and the Base64-encoded bytes that return an image. If the CreateFaceLivenessSession request included an OutputConfig argument, the image will be uploaded to an S3Object specified in the output configuration. If the reference image is not returned, it is recommended to retry the Liveness check.
Bytes (bytes) –
The Base64-encoded bytes representing an image selected from the Face Liveness video and returned for audit purposes.
S3Object (dict) –
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
Bucket (string) –
Name of the S3 bucket.
Name (string) –
S3 object key name.
Version (string) –
If the bucket has versioning enabled, you can specify the object version.
BoundingBox (dict) –
Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).
The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
Note
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
Width (float) –
Width of the bounding box as a ratio of the overall image width.
Height (float) –
Height of the bounding box as a ratio of the overall image height.
Left (float) –
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) –
Top coordinate of the bounding box as a ratio of overall image height.
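A small sketch, assuming a known source image size, that converts these ratio-based values to pixel coordinates; clamping out-of-range values is an application choice, since (as noted above) coordinates can fall outside [0, 1] for partially visible faces.

def bounding_box_to_pixels(box, image_width, image_height):
    """Convert a ratio-based BoundingBox dict to clamped pixel coordinates."""
    def clamp(value, upper):
        return max(0, min(value, upper))
    left = int(box["Left"] * image_width)
    top = int(box["Top"] * image_height)
    right = int((box["Left"] + box["Width"]) * image_width)
    bottom = int((box["Top"] + box["Height"]) * image_height)
    return (clamp(left, image_width), clamp(top, image_height),
            clamp(right, image_width), clamp(bottom, image_height))

# For a 700x200 image with Left=0.5, Top=0.25, Width=0.1, Height=0.25,
# this returns (350, 50, 420, 100).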
AuditImages (list) –
A set of images from the Face Liveness video that can be used for audit purposes. It includes a bounding box of the face and the Base64-encoded bytes that return an image. If the CreateFaceLivenessSession request included an OutputConfig argument, the image will be uploaded to an S3Object specified in the output configuration. If no Amazon S3 bucket is defined, raw bytes are sent instead.
(dict) –
An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.
Bytes (bytes) –
The Base64-encoded bytes representing an image selected from the Face Liveness video and returned for audit purposes.
S3Object (dict) –
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
Bucket (string) –
Name of the S3 bucket.
Name (string) –
S3 object key name.
Version (string) –
If the bucket has versioning enabled, you can specify the object version.
BoundingBox (dict) –
Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).
The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
Note
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
Width (float) –
Width of the bounding box as a ratio of the overall image width.
Height (float) –
Height of the bounding box as a ratio of the overall image height.
Left (float) –
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) –
Top coordinate of the bounding box as a ratio of overall image height.
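A sketch of persisting the returned images, assuming each image carries either raw Bytes or an S3Object pointer (the latter when CreateFaceLivenessSession was called with an OutputConfig); the local file names are illustrative.

import boto3

s3 = boto3.client("s3")

def image_bytes(image):
    """Return the image data from raw Bytes or by fetching the S3 copy."""
    if image.get("Bytes"):
        return image["Bytes"]
    obj = image.get("S3Object")
    if obj:
        kwargs = {"Bucket": obj["Bucket"], "Key": obj["Name"]}
        if obj.get("Version"):
            kwargs["VersionId"] = obj["Version"]
        return s3.get_object(**kwargs)["Body"].read()
    return None

def save_session_images(result):
    """Write the reference image and each audit image to local JPEG files."""
    if result.get("ReferenceImage"):
        data = image_bytes(result["ReferenceImage"])
        if data:
            with open("reference.jpg", "wb") as f:
                f.write(data)
    for index, audit_image in enumerate(result.get("AuditImages", [])):
        data = image_bytes(audit_image)
        if data:
            with open(f"audit_{index}.jpg", "wb") as f:
                f.write(data)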
Exceptions