get_model_invocation_job
- Bedrock.Client.get_model_invocation_job(**kwargs)
- Gets details about a batch inference job. For more information, see Monitor batch inference jobs.
- See also: AWS API Documentation
- Request Syntax

      response = client.get_model_invocation_job(
          jobIdentifier='string'
      )

- Parameters:
- jobIdentifier (string) – [REQUIRED] The Amazon Resource Name (ARN) of the batch inference job.
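As a sketch of calling this operation from boto3 (the region, credentials, and job ARN below are placeholder assumptions, not values from this page):

```python
def get_job_status(client, job_arn):
    """Call get_model_invocation_job and return the job's status string."""
    response = client.get_model_invocation_job(jobIdentifier=job_arn)
    return response["status"]

if __name__ == "__main__":
    import boto3  # imported here so the helper itself needs no AWS setup

    bedrock = boto3.client("bedrock")  # assumes credentials and a default region
    # Hypothetical ARN for illustration only; substitute your job's ARN.
    job_arn = "arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123"
    print(get_job_status(bedrock, job_arn))
```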
- Return type:
- dict 
- Returns:
- Response Syntax

      {
          'jobArn': 'string',
          'jobName': 'string',
          'modelId': 'string',
          'clientRequestToken': 'string',
          'roleArn': 'string',
          'status': 'Submitted'|'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped'|'PartiallyCompleted'|'Expired'|'Validating'|'Scheduled',
          'message': 'string',
          'submitTime': datetime(2015, 1, 1),
          'lastModifiedTime': datetime(2015, 1, 1),
          'endTime': datetime(2015, 1, 1),
          'inputDataConfig': {
              's3InputDataConfig': {
                  's3InputFormat': 'JSONL',
                  's3Uri': 'string',
                  's3BucketOwner': 'string'
              }
          },
          'outputDataConfig': {
              's3OutputDataConfig': {
                  's3Uri': 'string',
                  's3EncryptionKeyId': 'string',
                  's3BucketOwner': 'string'
              }
          },
          'vpcConfig': {
              'subnetIds': [
                  'string',
              ],
              'securityGroupIds': [
                  'string',
              ]
          },
          'timeoutDurationInHours': 123,
          'jobExpirationTime': datetime(2015, 1, 1)
      }

- Response Structure
- (dict) –
- jobArn (string) – The Amazon Resource Name (ARN) of the batch inference job.
- jobName (string) – The name of the batch inference job.
- modelId (string) – The unique identifier of the foundation model used for model inference.
- clientRequestToken (string) – A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
- roleArn (string) – The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
- status (string) – The status of the batch inference job. The following statuses are possible:
  - Submitted – This job has been submitted to a queue for validation.
  - Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
    - Your IAM service role has access to the Amazon S3 buckets containing your files.
    - Your files are .jsonl files, and each individual record is a JSON object in the correct format. Note that validation doesn't check whether the modelInput value matches the request body for the model.
    - Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
  - Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
  - Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
  - InProgress – This job has begun. You can start viewing the results in the output S3 location.
  - Completed – This job has successfully completed. View the output files in the output S3 location.
  - PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
  - Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
  - Stopped – This job was stopped by a user.
  - Stopping – This job is being stopped by a user.
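Putting the statuses above to use: Completed, PartiallyCompleted, Failed, Stopped, and Expired describe jobs that will not change state again, while the rest mean the job is still queued, validating, or running. A polling sketch under that assumption (the terminal grouping is inferred from the descriptions above, not stated by the API):

```python
import time

# Statuses after which the job will not change state again
# (inferred from the status descriptions; an assumption, not API text).
TERMINAL_STATUSES = {"Completed", "PartiallyCompleted", "Failed", "Stopped", "Expired"}

def is_terminal(status):
    """True once a batch inference job has reached a final state."""
    return status in TERMINAL_STATUSES

def wait_for_job(client, job_arn, poll_seconds=60):
    """Poll get_model_invocation_job until the job reaches a terminal status."""
    while True:
        response = client.get_model_invocation_job(jobIdentifier=job_arn)
        if is_terminal(response["status"]):
            return response
        time.sleep(poll_seconds)
```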
 
- message (string) – If the batch inference job failed, this field contains a message describing why the job failed.
- submitTime (datetime) – The time at which the batch inference job was submitted.
- lastModifiedTime (datetime) – The time at which the batch inference job was last modified.
- endTime (datetime) – The time at which the batch inference job ended.
- inputDataConfig (dict) – Details about the location of the input to the batch inference job.
  Note: This is a Tagged Union structure. Only one of the following top-level keys will be set: s3InputDataConfig. If a client receives an unknown member, it will set SDK_UNKNOWN_MEMBER as the top-level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows:

      'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}

  - s3InputDataConfig (dict) – Contains the configuration of the S3 location of the input data.
    - s3InputFormat (string) – The format of the input data.
    - s3Uri (string) – The S3 location of the input data.
    - s3BucketOwner (string) – The ID of the Amazon Web Services account that owns the S3 bucket containing the input data.
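Because inputDataConfig (and outputDataConfig below) are tagged unions, client code should tolerate the SDK_UNKNOWN_MEMBER case. A small helper sketch (the function name is ours, not part of the SDK):

```python
def unpack_tagged_union(union_dict):
    """Return (member_name, member) from a tagged-union dict such as
    inputDataConfig, where exactly one top-level key is set."""
    (member_name, member), = union_dict.items()
    if member_name == "SDK_UNKNOWN_MEMBER":
        # Union members newer than this SDK version arrive in this shape.
        raise ValueError(f"unknown union member: {member['name']}")
    return member_name, member
```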
 
 
- outputDataConfig (dict) – Details about the location of the output of the batch inference job.
  Note: This is a Tagged Union structure. Only one of the following top-level keys will be set: s3OutputDataConfig. If a client receives an unknown member, it will set SDK_UNKNOWN_MEMBER as the top-level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows:

      'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}

  - s3OutputDataConfig (dict) – Contains the configuration of the S3 location of the output data.
    - s3Uri (string) – The S3 location of the output data.
    - s3EncryptionKeyId (string) – The unique identifier of the key that encrypts the S3 location of the output data.
    - s3BucketOwner (string) – The ID of the Amazon Web Services account that owns the S3 bucket containing the output data.
 
 
- vpcConfig (dict) – The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
  - subnetIds (list) – An array of IDs for each subnet in the VPC to use.
    - (string) –
  - securityGroupIds (list) – An array of IDs for each security group in the VPC to use.
    - (string) –
 
 
- timeoutDurationInHours (integer) – The number of hours after which the batch inference job was set to time out.
- jobExpirationTime (datetime) – The time at which the batch inference job times out (or timed out).
 
 
- Exceptions
- Bedrock.Client.exceptions.ResourceNotFoundException
- Bedrock.Client.exceptions.AccessDeniedException
- Bedrock.Client.exceptions.ValidationException
- Bedrock.Client.exceptions.InternalServerException
- Bedrock.Client.exceptions.ThrottlingException
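One way to handle these exceptions when calling the operation; the retryable/non-retryable split below is a common convention, an assumption rather than API guidance:

```python
# Transient errors usually worth retrying with backoff; this grouping is a
# convention, not something the API prescribes.
RETRYABLE_CODES = {"ThrottlingException", "InternalServerException"}

def should_retry(error_code):
    """True for error codes that typically indicate a transient failure."""
    return error_code in RETRYABLE_CODES

if __name__ == "__main__":
    import boto3
    from botocore.exceptions import ClientError

    client = boto3.client("bedrock")  # assumes credentials and a region
    # Hypothetical ARN for illustration only.
    job_arn = "arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123"
    try:
        job = client.get_model_invocation_job(jobIdentifier=job_arn)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if should_retry(code):
            print(f"transient error {code}; retry with backoff")
        else:
            raise
```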