Bedrock / Client / list_inference_profiles
list_inference_profiles#
- Bedrock.Client.list_inference_profiles(**kwargs)#
- Returns a list of inference profiles that you can use. For more information, see Increase throughput and resilience with cross-region inference in Amazon Bedrock in the Amazon Bedrock User Guide.
- See also: AWS API Documentation
- Request Syntax

  response = client.list_inference_profiles(
      maxResults=123,
      nextToken='string',
      typeEquals='SYSTEM_DEFINED'|'APPLICATION'
  )

- Parameters:
- maxResults (integer) – The maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the nextToken field when making another request to return the next batch of results.
- nextToken (string) – If the total number of results is greater than the maxResults value provided in the request, enter the token returned in the nextToken field in the response in this field to return the next batch of results.
- typeEquals (string) – Filters for inference profiles that match the type you specify.
- SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
- APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
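The maxResults and nextToken parameters above work as a pair: keep calling the operation, feeding each response's nextToken back into the next request, until a response omits nextToken. A minimal pagination sketch (the helper name and stand-alone structure are illustrative, not part of the API; with boto3 the client would come from boto3.client('bedrock')):

```python
def list_all_inference_profiles(client, type_filter=None, page_size=50):
    """Collect every inference profile summary, following nextToken pages."""
    kwargs = {"maxResults": page_size}
    if type_filter is not None:
        kwargs["typeEquals"] = type_filter  # e.g. 'SYSTEM_DEFINED' or 'APPLICATION'
    summaries = []
    while True:
        response = client.list_inference_profiles(**kwargs)
        summaries.extend(response.get("inferenceProfileSummaries", []))
        token = response.get("nextToken")
        if not token:
            break  # no nextToken means this was the last page
        kwargs["nextToken"] = token  # feed the token back for the next page
    return summaries
```

Any client object exposing list_inference_profiles with this request/response shape works with the loop.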
 
 
- Return type:
- dict 
- Returns:
- Response Syntax

  {
      'inferenceProfileSummaries': [
          {
              'inferenceProfileName': 'string',
              'description': 'string',
              'createdAt': datetime(2015, 1, 1),
              'updatedAt': datetime(2015, 1, 1),
              'inferenceProfileArn': 'string',
              'models': [
                  {
                      'modelArn': 'string'
                  },
              ],
              'inferenceProfileId': 'string',
              'status': 'ACTIVE',
              'type': 'SYSTEM_DEFINED'|'APPLICATION'
          },
      ],
      'nextToken': 'string'
  }

- Response Structure
- (dict) –
- inferenceProfileSummaries (list) – A list of information about each inference profile that you can use.
- (dict) – Contains information about an inference profile.
- inferenceProfileName (string) – The name of the inference profile.
- description (string) – The description of the inference profile.
- createdAt (datetime) – The time at which the inference profile was created.
- updatedAt (datetime) – The time at which the inference profile was last updated.
- inferenceProfileArn (string) – The Amazon Resource Name (ARN) of the inference profile.
- models (list) – A list of information about each model in the inference profile.
- (dict) – Contains information about a model.
- modelArn (string) – The Amazon Resource Name (ARN) of the model.
 
 
- inferenceProfileId (string) – The unique identifier of the inference profile.
- status (string) – The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- type (string) – The type of the inference profile. The following types are possible:
- SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
- APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
 
 
 
- nextToken (string) – If the total number of results is greater than the maxResults value provided in the request, use this token in the nextToken field when making another request to return the next batch of results.
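The response structure above can be consumed directly as a dict. For example, to map each ACTIVE profile's ID to the model ARNs it routes to (the helper name is illustrative, not part of the API):

```python
def active_model_arns(response):
    """Map each ACTIVE inference profile's ID to the model ARNs it contains."""
    arns = {}
    for summary in response.get("inferenceProfileSummaries", []):
        if summary.get("status") != "ACTIVE":
            continue  # skip profiles that are not ready to be used
        arns[summary["inferenceProfileId"]] = [
            m["modelArn"] for m in summary.get("models", [])
        ]
    return arns
```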
 
 
 - Exceptions
- Bedrock.Client.exceptions.AccessDeniedException
- Bedrock.Client.exceptions.ValidationException
- Bedrock.Client.exceptions.InternalServerException
- Bedrock.Client.exceptions.ThrottlingException
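Of the exceptions above, ThrottlingException and InternalServerException are typically transient, so callers often retry them with exponential backoff. A generic sketch (the helper, the retry limits, and the error-code extraction hook are assumptions, not part of the API; with botocore you would pass get_error_code=lambda e: e.response["Error"]["Code"]):

```python
import time

RETRYABLE_CODES = {"ThrottlingException", "InternalServerException"}


def call_with_backoff(operation, max_attempts=5, base_delay=0.5,
                      get_error_code=None):
    """Invoke operation(), retrying with exponential backoff on retryable errors.

    get_error_code maps a raised exception to its service error code; errors
    whose code is not in RETRYABLE_CODES are re-raised immediately.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as exc:
            code = get_error_code(exc) if get_error_code else None
            if code not in RETRYABLE_CODES or attempt == max_attempts - 1:
                raise  # non-retryable, or out of attempts
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

For example, call_with_backoff(lambda: client.list_inference_profiles(maxResults=50), ...) wraps a single page request.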