get_inference_profile
- Bedrock.Client.get_inference_profile(**kwargs)
Gets information about an inference profile. For more information, see Increase throughput and resilience with cross-region inference in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
response = client.get_inference_profile(
    inferenceProfileIdentifier='string'
)
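A minimal end-to-end sketch of the request, assuming a Bedrock control-plane client in us-east-1 and a hypothetical system-defined profile ID; substitute your own inference profile ID or ARN:

```python
import boto3

# Control-plane Bedrock client; the region is an assumption for this sketch.
client = boto3.client("bedrock", region_name="us-east-1")

# The identifier below is a hypothetical cross-region profile ID;
# pass your own inference profile ID or ARN instead.
response = client.get_inference_profile(
    inferenceProfileIdentifier="us.anthropic.claude-3-5-sonnet-20240620-v1:0"
)
print(response["inferenceProfileArn"])
```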
- Parameters:
inferenceProfileIdentifier (string) –
[REQUIRED]
The ID or Amazon Resource Name (ARN) of the inference profile.
- Return type:
dict
- Returns:
Response Syntax
{
    'inferenceProfileName': 'string',
    'description': 'string',
    'createdAt': datetime(2015, 1, 1),
    'updatedAt': datetime(2015, 1, 1),
    'inferenceProfileArn': 'string',
    'models': [
        {
            'modelArn': 'string'
        },
    ],
    'inferenceProfileId': 'string',
    'status': 'ACTIVE',
    'type': 'SYSTEM_DEFINED'|'APPLICATION'
}
Response Structure
(dict) –
inferenceProfileName (string) –
The name of the inference profile.
description (string) –
The description of the inference profile.
createdAt (datetime) –
The time at which the inference profile was created.
updatedAt (datetime) –
The time at which the inference profile was last updated.
inferenceProfileArn (string) –
The Amazon Resource Name (ARN) of the inference profile.
models (list) –
A list of information about each model in the inference profile.
(dict) –
Contains information about a model.
modelArn (string) –
The Amazon Resource Name (ARN) of the model.
inferenceProfileId (string) –
The unique identifier of the inference profile.
status (string) –
The status of the inference profile.
ACTIVE means that the inference profile is ready to be used.
type (string) –
The type of the inference profile. The following types are possible:
SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
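As a sketch of how the returned fields might be consumed, assuming `response` is the dict produced by the request shown earlier:

```python
# `response` is assumed to be the dict returned by get_inference_profile above.
print(f"Name:   {response['inferenceProfileName']}")
print(f"Status: {response['status']}")  # e.g. 'ACTIVE'
print(f"Type:   {response['type']}")    # 'SYSTEM_DEFINED' or 'APPLICATION'

# Each entry in 'models' carries the ARN of a model the profile can route to.
for model in response["models"]:
    print(f"  model: {model['modelArn']}")
```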
Exceptions
Bedrock.Client.exceptions.ResourceNotFoundException
Bedrock.Client.exceptions.AccessDeniedException
Bedrock.Client.exceptions.ValidationException
Bedrock.Client.exceptions.InternalServerException
Bedrock.Client.exceptions.ThrottlingException
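A hedged sketch of catching the exceptions listed above through the client's `exceptions` attribute; only the exception names come from this page, and the handling shown (messages, retry suggestion) is an assumption:

```python
profile_id = "us.anthropic.claude-3-5-sonnet-20240620-v1:0"  # hypothetical ID

try:
    response = client.get_inference_profile(
        inferenceProfileIdentifier=profile_id
    )
except client.exceptions.ResourceNotFoundException:
    # No inference profile matches the supplied ID or ARN.
    print(f"Inference profile {profile_id} was not found.")
except client.exceptions.ThrottlingException:
    # Request rate was exceeded; retrying with backoff is one option (assumption).
    print("Request was throttled; retry later.")
except (client.exceptions.AccessDeniedException,
        client.exceptions.ValidationException,
        client.exceptions.InternalServerException) as err:
    print(f"get_inference_profile failed: {err}")
```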