ListInferenceProfiles

- class Bedrock.Paginator.ListInferenceProfiles
- paginator = client.get_paginator('list_inference_profiles')

- paginate(**kwargs)

  Creates an iterator that will paginate through responses from Bedrock.Client.list_inference_profiles().

  See also: AWS API Documentation

  Request Syntax

      response_iterator = paginator.paginate(
          typeEquals='SYSTEM_DEFINED'|'APPLICATION',
          PaginationConfig={
              'MaxItems': 123,
              'PageSize': 123,
              'StartingToken': 'string'
          }
      )

- Parameters:
- typeEquals (string) – Filters for inference profiles that match the type you specify.
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
 
- PaginationConfig (dict) – A dictionary that provides parameters to control pagination.
  - MaxItems (integer) – The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
  - PageSize (integer) – The size of each page.
  - StartingToken (string) – A token to specify where to start paginating. This is the NextToken from a previous response.
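As a sketch of how the paginate call above is typically consumed, the helper below yields every summary across all pages; the `iter_inference_profiles` name and its defaults are illustrative assumptions, not part of the API. It takes an already-constructed Bedrock client, e.g. `boto3.client("bedrock")`.

```python
def iter_inference_profiles(client, profile_type="SYSTEM_DEFINED", page_size=10):
    """Yield every inference profile summary, letting the paginator
    follow NextToken across pages transparently.

    `client` is a Bedrock client, e.g. boto3.client("bedrock").
    The helper name and its defaults are illustrative, not part of the API.
    """
    paginator = client.get_paginator("list_inference_profiles")
    pages = paginator.paginate(
        typeEquals=profile_type,
        PaginationConfig={"PageSize": page_size},
    )
    for page in pages:
        # Each page carries a list of summaries; NextToken handling
        # is done by the paginator itself.
        yield from page["inferenceProfileSummaries"]
```

Because the paginator resumes from NextToken automatically, callers never touch the token unless they want to checkpoint progress via StartingToken.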
 
 
- Return type:
- dict 
- Returns:
  Response Syntax

      {
          'inferenceProfileSummaries': [
              {
                  'inferenceProfileName': 'string',
                  'description': 'string',
                  'createdAt': datetime(2015, 1, 1),
                  'updatedAt': datetime(2015, 1, 1),
                  'inferenceProfileArn': 'string',
                  'models': [
                      {
                          'modelArn': 'string'
                      },
                  ],
                  'inferenceProfileId': 'string',
                  'status': 'ACTIVE',
                  'type': 'SYSTEM_DEFINED'|'APPLICATION'
              },
          ],
          'NextToken': 'string'
      }

  Response Structure

- (dict) –
- inferenceProfileSummaries (list) – A list of information about each inference profile that you can use.
  - (dict) – Contains information about an inference profile.
    - inferenceProfileName (string) – The name of the inference profile.
- description (string) – The description of the inference profile.
- createdAt (datetime) – The time at which the inference profile was created.
- updatedAt (datetime) – The time at which the inference profile was last updated.
- inferenceProfileArn (string) – The Amazon Resource Name (ARN) of the inference profile.
- models (list) – A list of information about each model in the inference profile.
  - (dict) – Contains information about a model.
    - modelArn (string) – The Amazon Resource Name (ARN) of the model.
 
 
- inferenceProfileId (string) – The unique identifier of the inference profile.
- status (string) – The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- type (string) – The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
 
 
 
- NextToken (string) – A token to resume pagination.
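A small sketch of working with the response shape documented above: the hypothetical helper below (the `model_arns_by_profile` name is not part of the boto3 API) maps each profile name to the model ARNs its `models` list carries.

```python
def model_arns_by_profile(summaries):
    """Given inferenceProfileSummaries entries shaped as documented
    above, map each profile name to the ARNs of its models.

    This helper is illustrative, not part of the boto3 API.
    """
    return {
        summary["inferenceProfileName"]: [
            model["modelArn"] for model in summary["models"]
        ]
        for summary in summaries
    }
```

A SYSTEM_DEFINED profile may map to several ARNs (one per region it routes across), so the values are kept as lists rather than flattened to a single ARN.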