ListModelInvocationJobs
- class Bedrock.Paginator.ListModelInvocationJobs
paginator = client.get_paginator('list_model_invocation_jobs')
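Before calling paginate, you need the paginator itself; a minimal sketch, assuming standard AWS credentials and a region where Amazon Bedrock is available (the region shown is only an example):

import boto3

# The paginator is obtained from the Bedrock control-plane client.
client = boto3.client('bedrock', region_name='us-east-1')
paginator = client.get_paginator('list_model_invocation_jobs')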
- paginate(**kwargs)
Creates an iterator that will paginate through responses from Bedrock.Client.list_model_invocation_jobs(). See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
    submitTimeAfter=datetime(2015, 1, 1),
    submitTimeBefore=datetime(2015, 1, 1),
    statusEquals='Submitted'|'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped'|'PartiallyCompleted'|'Expired'|'Validating'|'Scheduled',
    nameContains='string',
    sortBy='CreationTime',
    sortOrder='Ascending'|'Descending',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
- Parameters:
submitTimeAfter (datetime) – Specify a time to filter for batch inference jobs that were submitted after the time you specify.
submitTimeBefore (datetime) – Specify a time to filter for batch inference jobs that were submitted before the time you specify.
statusEquals (string) – Specify a status to filter for batch inference jobs whose statuses match the string you specify.
nameContains (string) – Specify a string to filter for batch inference jobs whose names contain the string.
sortBy (string) – An attribute by which to sort the results.
sortOrder (string) – Specifies whether to sort the results by ascending or descending order.
PaginationConfig (dict) –
A dictionary that provides parameters to control pagination.
MaxItems (integer) –
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
PageSize (integer) –
The size of each page.
StartingToken (string) –
A token to specify where to start paginating. This is the NextToken from a previous response.
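As a usage sketch, the filter values and page sizes below are illustrative assumptions rather than required values; any combination of the parameters above may be passed:

from datetime import datetime

response_iterator = paginator.paginate(
    submitTimeAfter=datetime(2024, 1, 1),
    statusEquals='Completed',
    sortBy='CreationTime',
    sortOrder='Descending',
    PaginationConfig={'MaxItems': 50, 'PageSize': 10}
)
for page in response_iterator:
    for summary in page['invocationJobSummaries']:
        print(summary['jobName'], summary['status'])

If MaxItems truncates the result set, the iterator's resume_token attribute (provided by botocore's PageIterator) can be passed as StartingToken on a later call to continue where this one stopped.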
- Return type:
dict
- Returns:
Response Syntax
{
    'invocationJobSummaries': [
        {
            'jobArn': 'string',
            'jobName': 'string',
            'modelId': 'string',
            'clientRequestToken': 'string',
            'roleArn': 'string',
            'status': 'Submitted'|'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped'|'PartiallyCompleted'|'Expired'|'Validating'|'Scheduled',
            'message': 'string',
            'submitTime': datetime(2015, 1, 1),
            'lastModifiedTime': datetime(2015, 1, 1),
            'endTime': datetime(2015, 1, 1),
            'inputDataConfig': {
                's3InputDataConfig': {
                    's3InputFormat': 'JSONL',
                    's3Uri': 'string'
                }
            },
            'outputDataConfig': {
                's3OutputDataConfig': {
                    's3Uri': 'string',
                    's3EncryptionKeyId': 'string'
                }
            },
            'timeoutDurationInHours': 123,
            'jobExpirationTime': datetime(2015, 1, 1)
        },
    ],
    'NextToken': 'string'
}
Response Structure
(dict) –
invocationJobSummaries (list) –
A list of items, each of which contains a summary about a batch inference job.
(dict) –
A summary of a batch inference job.
jobArn (string) –
The Amazon Resource Name (ARN) of the batch inference job.
jobName (string) –
The name of the batch inference job.
modelId (string) –
The unique identifier of the foundation model used for model inference.
clientRequestToken (string) –
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
roleArn (string) –
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
status (string) –
The status of the batch inference job.
message (string) –
If the batch inference job failed, this field contains a message describing why the job failed.
submitTime (datetime) –
The time at which the batch inference job was submitted.
lastModifiedTime (datetime) –
The time at which the batch inference job was last modified.
endTime (datetime) –
The time at which the batch inference job ended.
inputDataConfig (dict) –
Details about the location of the input to the batch inference job.
Note
This is a Tagged Union structure. Only one of the following top level keys will be set: s3InputDataConfig. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows: 'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}
s3InputDataConfig (dict) –
Contains the configuration of the S3 location of the input data.
s3InputFormat (string) –
The format of the input data.
s3Uri (string) –
The S3 location of the input data.
outputDataConfig (dict) –
Details about the location of the output of the batch inference job.
Note
This is a Tagged Union structure. Only one of the following top level keys will be set: s3OutputDataConfig. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows: 'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}
s3OutputDataConfig (dict) –
Contains the configuration of the S3 location of the output data.
s3Uri (string) –
The S3 location of the output data.
s3EncryptionKeyId (string) –
The unique identifier of the key that encrypts the S3 location of the output data.
timeoutDurationInHours (integer) –
The number of hours after which the batch inference job was set to time out.
jobExpirationTime (datetime) –
The time at which the batch inference job times out, or the time at which it timed out if it already has.
NextToken (string) –
A token to resume pagination.
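Because inputDataConfig and outputDataConfig are tagged unions, a consumer should check which member is present before reading it. A defensive sketch of iterating the summaries (the fallback strings are illustrative assumptions):

for page in paginator.paginate():
    for summary in page['invocationJobSummaries']:
        input_cfg = summary.get('inputDataConfig', {})
        if 's3InputDataConfig' in input_cfg:
            source = input_cfg['s3InputDataConfig']['s3Uri']
        elif 'SDK_UNKNOWN_MEMBER' in input_cfg:
            # A union member newer than this SDK version recognizes.
            source = 'unknown member: ' + input_cfg['SDK_UNKNOWN_MEMBER']['name']
        else:
            source = 'not reported'
        print(summary['jobArn'], summary['status'], source)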