get_trained_model_inference_job
- CleanRoomsML.Client.get_trained_model_inference_job(**kwargs)
Returns information about a trained model inference job.
See also: AWS API Documentation
Request Syntax
response = client.get_trained_model_inference_job(
    membershipIdentifier='string',
    trainedModelInferenceJobArn='string'
)
- Parameters:
membershipIdentifier (string) –
[REQUIRED]
Provides the membership ID of the membership that contains the trained model inference job that you are interested in.
trainedModelInferenceJobArn (string) –
[REQUIRED]
Provides the Amazon Resource Name (ARN) of the trained model inference job that you are interested in.
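For example, a minimal call can be sketched as a small wrapper; the membership ID and ARN shown in the comments are placeholders, not real identifiers:

```python
def get_inference_job(client, membership_id, job_arn):
    """Fetch details for a trained model inference job.

    Both parameters are required by the API; the caller supplies a
    boto3 CleanRoomsML client.
    """
    return client.get_trained_model_inference_job(
        membershipIdentifier=membership_id,
        trainedModelInferenceJobArn=job_arn,
    )

# Live usage (requires boto3 and AWS credentials; identifiers are placeholders):
#   import boto3
#   client = boto3.client("cleanroomsml")
#   job = get_inference_job(client, "<membership-id>", "<inference-job-arn>")
#   print(job["status"])
```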
- Return type:
dict
- Returns:
Response Syntax
{
    'createTime': datetime(2015, 1, 1),
    'updateTime': datetime(2015, 1, 1),
    'trainedModelInferenceJobArn': 'string',
    'configuredModelAlgorithmAssociationArn': 'string',
    'name': 'string',
    'status': 'CREATE_PENDING'|'CREATE_IN_PROGRESS'|'CREATE_FAILED'|'ACTIVE'|'CANCEL_PENDING'|'CANCEL_IN_PROGRESS'|'CANCEL_FAILED'|'INACTIVE',
    'trainedModelArn': 'string',
    'resourceConfig': {
        'instanceType': 'ml.r7i.48xlarge'|'ml.r6i.16xlarge'|'ml.m6i.xlarge'|'ml.m5.4xlarge'|'ml.p2.xlarge'|'ml.m4.16xlarge'|'ml.r7i.16xlarge'|'ml.m7i.xlarge'|'ml.m6i.12xlarge'|'ml.r7i.8xlarge'|'ml.r7i.large'|'ml.m7i.12xlarge'|'ml.m6i.24xlarge'|'ml.m7i.24xlarge'|'ml.r6i.8xlarge'|'ml.r6i.large'|'ml.g5.2xlarge'|'ml.m5.large'|'ml.p3.16xlarge'|'ml.m7i.48xlarge'|'ml.m6i.16xlarge'|'ml.p2.16xlarge'|'ml.g5.4xlarge'|'ml.m7i.16xlarge'|'ml.c4.2xlarge'|'ml.c5.2xlarge'|'ml.c6i.32xlarge'|'ml.c4.4xlarge'|'ml.g5.8xlarge'|'ml.c6i.xlarge'|'ml.c5.4xlarge'|'ml.g4dn.xlarge'|'ml.c7i.xlarge'|'ml.c6i.12xlarge'|'ml.g4dn.12xlarge'|'ml.c7i.12xlarge'|'ml.c6i.24xlarge'|'ml.g4dn.2xlarge'|'ml.c7i.24xlarge'|'ml.c7i.2xlarge'|'ml.c4.8xlarge'|'ml.c6i.2xlarge'|'ml.g4dn.4xlarge'|'ml.c7i.48xlarge'|'ml.c7i.4xlarge'|'ml.c6i.16xlarge'|'ml.c5.9xlarge'|'ml.g4dn.16xlarge'|'ml.c7i.16xlarge'|'ml.c6i.4xlarge'|'ml.c5.xlarge'|'ml.c4.xlarge'|'ml.g4dn.8xlarge'|'ml.c7i.8xlarge'|'ml.c7i.large'|'ml.g5.xlarge'|'ml.c6i.8xlarge'|'ml.c6i.large'|'ml.g5.12xlarge'|'ml.g5.24xlarge'|'ml.m7i.2xlarge'|'ml.c5.18xlarge'|'ml.g5.48xlarge'|'ml.m6i.2xlarge'|'ml.g5.16xlarge'|'ml.m7i.4xlarge'|'ml.p3.2xlarge'|'ml.r6i.32xlarge'|'ml.m6i.4xlarge'|'ml.m5.xlarge'|'ml.m4.10xlarge'|'ml.r6i.xlarge'|'ml.m5.12xlarge'|'ml.m4.xlarge'|'ml.r7i.2xlarge'|'ml.r7i.xlarge'|'ml.r6i.12xlarge'|'ml.m5.24xlarge'|'ml.r7i.12xlarge'|'ml.m7i.8xlarge'|'ml.m7i.large'|'ml.r6i.24xlarge'|'ml.r6i.2xlarge'|'ml.m4.2xlarge'|'ml.r7i.24xlarge'|'ml.r7i.4xlarge'|'ml.m6i.8xlarge'|'ml.m6i.large'|'ml.m5.2xlarge'|'ml.p2.8xlarge'|'ml.r6i.4xlarge'|'ml.m6i.32xlarge'|'ml.p3.8xlarge'|'ml.m4.4xlarge',
        'instanceCount': 123
    },
    'outputConfiguration': {
        'accept': 'string',
        'members': [
            {
                'accountId': 'string'
            },
        ]
    },
    'membershipIdentifier': 'string',
    'dataSource': {
        'mlInputChannelArn': 'string'
    },
    'containerExecutionParameters': {
        'maxPayloadInMB': 123
    },
    'statusDetails': {
        'statusCode': 'string',
        'message': 'string'
    },
    'description': 'string',
    'inferenceContainerImageDigest': 'string',
    'environment': {
        'string': 'string'
    },
    'kmsKeyArn': 'string',
    'metricsStatus': 'PUBLISH_SUCCEEDED'|'PUBLISH_FAILED',
    'metricsStatusDetails': 'string',
    'logsStatus': 'PUBLISH_SUCCEEDED'|'PUBLISH_FAILED',
    'logsStatusDetails': 'string',
    'tags': {
        'string': 'string'
    }
}
Response Structure
(dict) –
createTime (datetime) –
The time at which the trained model inference job was created.
updateTime (datetime) –
The most recent time at which the trained model inference job was updated.
trainedModelInferenceJobArn (string) –
The Amazon Resource Name (ARN) of the trained model inference job.
configuredModelAlgorithmAssociationArn (string) –
The Amazon Resource Name (ARN) of the configured model algorithm association that was used for the trained model inference job.
name (string) –
The name of the trained model inference job.
status (string) –
The status of the trained model inference job.
trainedModelArn (string) –
The Amazon Resource Name (ARN) for the trained model that was used for the trained model inference job.
resourceConfig (dict) –
The resource configuration information for the trained model inference job.
instanceType (string) –
The type of instance that is used to perform model inference.
instanceCount (integer) –
The number of instances to use.
outputConfiguration (dict) –
The output configuration information for the trained model inference job.
accept (string) –
The MIME type used to specify the output data.
members (list) –
Defines the members that can receive inference output.
(dict) –
Defines who will receive inference results.
accountId (string) –
The account ID of the member that can receive inference results.
membershipIdentifier (string) –
The membership ID of the membership that contains the trained model inference job.
dataSource (dict) –
The data source that was used for the trained model inference job.
mlInputChannelArn (string) –
The Amazon Resource Name (ARN) of the ML input channel for this model inference data source.
containerExecutionParameters (dict) –
The execution parameters for the model inference job container.
maxPayloadInMB (integer) –
The maximum size of the inference container payload, specified in MB.
statusDetails (dict) –
Details about the status of a resource.
statusCode (string) –
The status code that was returned. The status code is intended for programmatic error handling. Clean Rooms ML will not change the status code for existing error conditions.
message (string) –
The error message that was returned. The message is intended for human consumption and can change at any time. Use the statusCode for programmatic error handling.
description (string) –
The description of the trained model inference job.
inferenceContainerImageDigest (string) –
Information about the inference container image.
environment (dict) –
The environment variables to set in the Docker container.
(string) –
(string) –
kmsKeyArn (string) –
The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the ML inference job and associated data.
metricsStatus (string) –
The metrics status for the trained model inference job.
metricsStatusDetails (string) –
Details about the metrics status for the trained model inference job.
logsStatus (string) –
The logs status for the trained model inference job.
logsStatusDetails (string) –
Details about the logs status for the trained model inference job.
tags (dict) –
The optional metadata that you applied to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50.
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8.
Maximum value length - 256 Unicode characters in UTF-8.
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for keys; this prefix is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
(string) –
(string) –
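Since boto3 does not appear to ship a built-in waiter for this operation (an assumption worth checking against your boto3 version), a polling loop over the status field above can be sketched as follows. Treating ACTIVE as the success state and the other statuses below as terminal is an interpretation of the enum, not something this page states:

```python
import time

# Terminal values of the 'status' field, per the response syntax above.
# Treating ACTIVE as success is an assumption; confirm against the service docs.
TERMINAL_STATUSES = {"ACTIVE", "CREATE_FAILED", "CANCEL_FAILED", "INACTIVE"}

def wait_for_inference_job(client, membership_id, job_arn,
                           poll_seconds=30, max_polls=120):
    """Poll get_trained_model_inference_job until the job settles."""
    for _ in range(max_polls):
        job = client.get_trained_model_inference_job(
            membershipIdentifier=membership_id,
            trainedModelInferenceJobArn=job_arn,
        )
        if job["status"] in TERMINAL_STATUSES:
            return job
        time.sleep(poll_seconds)
    raise TimeoutError("inference job did not reach a terminal status")
```

On failure statuses, branch on statusDetails.statusCode rather than the message text, since the message can change at any time.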
Exceptions
CleanRoomsML.Client.exceptions.ValidationException
CleanRoomsML.Client.exceptions.AccessDeniedException
CleanRoomsML.Client.exceptions.ResourceNotFoundException
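These are raised as exception classes on the client. One way to distinguish a missing job from other failures can be sketched like this (the helper name is illustrative):

```python
def describe_job_or_none(client, membership_id, job_arn):
    """Return job details, or None when the job does not exist.

    ValidationException and AccessDeniedException are left to propagate,
    since they usually indicate a caller bug or a permissions problem.
    """
    try:
        return client.get_trained_model_inference_job(
            membershipIdentifier=membership_id,
            trainedModelInferenceJobArn=job_arn,
        )
    except client.exceptions.ResourceNotFoundException:
        return None
```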