stop_inference_experiment
- SageMaker.Client.stop_inference_experiment(**kwargs)
Stops an inference experiment.
See also: AWS API Documentation
Request Syntax
response = client.stop_inference_experiment(
    Name='string',
    ModelVariantActions={
        'string': 'Retain'|'Remove'|'Promote'
    },
    DesiredModelVariants=[
        {
            'ModelName': 'string',
            'VariantName': 'string',
            'InfrastructureConfig': {
                'InfrastructureType': 'RealTimeInference',
                'RealTimeInferenceConfig': {
                    'InstanceType': 'ml.t2.medium'|'ml.t2.large'|'ml.t2.xlarge'|'ml.t2.2xlarge'|'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.m5d.large'|'ml.m5d.xlarge'|'ml.m5d.2xlarge'|'ml.m5d.4xlarge'|'ml.m5d.8xlarge'|'ml.m5d.12xlarge'|'ml.m5d.16xlarge'|'ml.m5d.24xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.c5d.xlarge'|'ml.c5d.2xlarge'|'ml.c5d.4xlarge'|'ml.c5d.9xlarge'|'ml.c5d.18xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.p3dn.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g5.xlarge'|'ml.g5.2xlarge'|'ml.g5.4xlarge'|'ml.g5.8xlarge'|'ml.g5.16xlarge'|'ml.g5.12xlarge'|'ml.g5.24xlarge'|'ml.g5.48xlarge'|'ml.inf1.xlarge'|'ml.inf1.2xlarge'|'ml.inf1.6xlarge'|'ml.inf1.24xlarge'|'ml.p4d.24xlarge'|'ml.p4de.24xlarge'|'ml.p5.48xlarge'|'ml.m6i.large'|'ml.m6i.xlarge'|'ml.m6i.2xlarge'|'ml.m6i.4xlarge'|'ml.m6i.8xlarge'|'ml.m6i.12xlarge'|'ml.m6i.16xlarge'|'ml.m6i.24xlarge'|'ml.m6i.32xlarge'|'ml.m7i.large'|'ml.m7i.xlarge'|'ml.m7i.2xlarge'|'ml.m7i.4xlarge'|'ml.m7i.8xlarge'|'ml.m7i.12xlarge'|'ml.m7i.16xlarge'|'ml.m7i.24xlarge'|'ml.m7i.48xlarge'|'ml.c6i.large'|'ml.c6i.xlarge'|'ml.c6i.2xlarge'|'ml.c6i.4xlarge'|'ml.c6i.8xlarge'|'ml.c6i.12xlarge'|'ml.c6i.16xlarge'|'ml.c6i.24xlarge'|'ml.c6i.32xlarge'|'ml.c7i.large'|'ml.c7i.xlarge'|'ml.c7i.2xlarge'|'ml.c7i.4xlarge'|'ml.c7i.8xlarge'|'ml.c7i.12xlarge'|'ml.c7i.16xlarge'|'ml.c7i.24xlarge'|'ml.c7i.48xlarge'|'ml.r6i.large'|'ml.r6i.xlarge'|'ml.r6i.2xlarge'|'ml.r6i.4xlarge'|'ml.r6i.8xlarge'|'ml.r6i.12xlarge'|'ml.r6i.16xlarge'|'ml.r6i.24xlarge'|'ml.r6i.32xlarge'|'ml.r7i.large'|'ml.r7i.xlarge'|'ml.r7i.2xlarge'|'ml.r7i.4xlarge'|'ml.r7i.8xlarge'|'ml.r7i.12xlarge'|'ml.r7i.16xlarge'|'ml.r7i.24xlarge'|'ml.r7i.48xlarge'|'ml.m6id.large'|'ml.m6id.xlarge'|'ml.m6id.2xlarge'|'ml.m6id.4xlarge'|'ml.m6id.8xlarge'|'ml.m6id.12xlarge'|'ml.m6id.16xlarge'|'ml.m6id.24xlarge'|'ml.m6id.32xlarge'|'ml.c6id.large'|'ml.c6id.xlarge'|'ml.c6id.2xlarge'|'ml.c6id.4xlarge'|'ml.c6id.8xlarge'|'ml.c6id.12xlarge'|'ml.c6id.16xlarge'|'ml.c6id.24xlarge'|'ml.c6id.32xlarge'|'ml.r6id.large'|'ml.r6id.xlarge'|'ml.r6id.2xlarge'|'ml.r6id.4xlarge'|'ml.r6id.8xlarge'|'ml.r6id.12xlarge'|'ml.r6id.16xlarge'|'ml.r6id.24xlarge'|'ml.r6id.32xlarge'|'ml.g6.xlarge'|'ml.g6.2xlarge'|'ml.g6.4xlarge'|'ml.g6.8xlarge'|'ml.g6.12xlarge'|'ml.g6.16xlarge'|'ml.g6.24xlarge'|'ml.g6.48xlarge',
                    'InstanceCount': 123
                }
            }
        },
    ],
    DesiredState='Completed'|'Cancelled',
    Reason='string'
)
- Parameters:
Name (string) –
[REQUIRED]
The name of the inference experiment to stop.
ModelVariantActions (dict) –
[REQUIRED]
Array of key-value pairs, with names of variants mapped to actions. The possible actions are the following (a typical mapping is shown in the example call after this parameter list):
Promote - Promote the shadow variant to a production variant
Remove - Delete the variant
Retain - Keep the variant as it is
(string) –
(string) –
DesiredModelVariants (list) –
An array of ModelVariantConfig objects. There is one for each variant that you want to deploy after the inference experiment stops. Each ModelVariantConfig describes the infrastructure configuration for deploying the corresponding variant.
(dict) –
Contains information about the deployment options of a model.
ModelName (string) – [REQUIRED]
The name of the Amazon SageMaker Model entity.
VariantName (string) – [REQUIRED]
The name of the variant.
InfrastructureConfig (dict) – [REQUIRED]
The configuration for the infrastructure that the model will be deployed to.
InfrastructureType (string) – [REQUIRED]
The inference option to which to deploy your model. Possible values are the following:
RealTime: Deploy to real-time inference.
RealTimeInferenceConfig (dict) – [REQUIRED]
The infrastructure configuration for deploying the model to real-time inference.
InstanceType (string) – [REQUIRED]
The instance type the model is deployed to.
InstanceCount (integer) – [REQUIRED]
The number of instances of the type specified by InstanceType.
DesiredState (string) –
The desired state of the experiment after stopping. The possible states are the following:
Completed: The experiment completed successfully
Cancelled: The experiment was canceled
Reason (string) – The reason for stopping the experiment.
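The following is a minimal sketch of a call that promotes a shadow variant after a successful shadow test. The experiment, model, and variant names are hypothetical placeholders, and the two-variant action mapping is an assumption about how the experiment was set up; adjust both to your own resources.
import boto3

sagemaker = boto3.client('sagemaker')

# Promote the shadow variant and deploy it on two ml.m5.xlarge instances
# once the experiment stops. All resource names below are placeholders.
request = {
    'Name': 'my-shadow-test',
    'ModelVariantActions': {
        'ProductionVariant': 'Remove',   # delete the old production variant
        'ShadowVariant': 'Promote'       # promote the shadow variant
    },
    'DesiredModelVariants': [
        {
            'ModelName': 'my-model-v2',
            'VariantName': 'ShadowVariant',
            'InfrastructureConfig': {
                'InfrastructureType': 'RealTimeInference',
                'RealTimeInferenceConfig': {
                    'InstanceType': 'ml.m5.xlarge',
                    'InstanceCount': 2
                }
            }
        }
    ],
    'DesiredState': 'Completed',
    'Reason': 'Shadow variant met latency and accuracy targets'
}

response = sagemaker.stop_inference_experiment(**request)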
- Return type:
dict
- Returns:
Response Syntax
{ 'InferenceExperimentArn': 'string' }
Response Structure
(dict) –
InferenceExperimentArn (string) –
The ARN of the stopped inference experiment.
Exceptions
SageMaker.Client.exceptions.ConflictException
SageMaker.Client.exceptions.ResourceNotFound
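As a hedged illustration, the snippet below repeats the call from the sketch under the parameter list (reusing the sagemaker client and request dictionary defined there) to show how the returned ARN and the two documented exceptions can be handled through the client's exceptions attribute.
try:
    response = sagemaker.stop_inference_experiment(**request)
    print('Stopped experiment:', response['InferenceExperimentArn'])
except sagemaker.exceptions.ConflictException as err:
    # For example, the experiment is not in a state from which it can be stopped.
    print('Conflict:', err)
except sagemaker.exceptions.ResourceNotFound as err:
    # No inference experiment with the given name exists.
    print('Not found:', err)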