update_inference_experiment
- SageMaker.Client.update_inference_experiment(**kwargs)
- Updates an inference experiment that you created. The status of the inference experiment has to be either Created or Running. For more information on the status of an inference experiment, see DescribeInferenceExperiment.

  See also: AWS API Documentation

  Request Syntax

    response = client.update_inference_experiment(
        Name='string',
        Schedule={
            'StartTime': datetime(2015, 1, 1),
            'EndTime': datetime(2015, 1, 1)
        },
        Description='string',
        ModelVariants=[
            {
                'ModelName': 'string',
                'VariantName': 'string',
                'InfrastructureConfig': {
                    'InfrastructureType': 'RealTimeInference',
                    'RealTimeInferenceConfig': {
                        'InstanceType': 'ml.t2.medium'|'ml.t2.large'|'ml.t2.xlarge'|'ml.t2.2xlarge'|'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.m5d.large'|'ml.m5d.xlarge'|'ml.m5d.2xlarge'|'ml.m5d.4xlarge'|'ml.m5d.8xlarge'|'ml.m5d.12xlarge'|'ml.m5d.16xlarge'|'ml.m5d.24xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.c5d.xlarge'|'ml.c5d.2xlarge'|'ml.c5d.4xlarge'|'ml.c5d.9xlarge'|'ml.c5d.18xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.p3dn.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g5.xlarge'|'ml.g5.2xlarge'|'ml.g5.4xlarge'|'ml.g5.8xlarge'|'ml.g5.16xlarge'|'ml.g5.12xlarge'|'ml.g5.24xlarge'|'ml.g5.48xlarge'|'ml.inf1.xlarge'|'ml.inf1.2xlarge'|'ml.inf1.6xlarge'|'ml.inf1.24xlarge'|'ml.p4d.24xlarge'|'ml.p4de.24xlarge',
                        'InstanceCount': 123
                    }
                }
            },
        ],
        DataStorageConfig={
            'Destination': 'string',
            'KmsKey': 'string',
            'ContentType': {
                'CsvContentTypes': [
                    'string',
                ],
                'JsonContentTypes': [
                    'string',
                ]
            }
        },
        ShadowModeConfig={
            'SourceModelVariantName': 'string',
            'ShadowModelVariants': [
                {
                    'ShadowModelVariantName': 'string',
                    'SamplingPercentage': 123
                },
            ]
        }
    )

- Parameters:
- Name (string) – [REQUIRED] The name of the inference experiment to be updated.
- Schedule (dict) – The duration for which the inference experiment will run. If the status of the inference experiment is Created, then you can update both the start and end dates. If the status of the inference experiment is Running, then you can update only the end date.
  - StartTime (datetime) – The timestamp at which the inference experiment started or will start.
  - EndTime (datetime) – The timestamp at which the inference experiment ended or will end.
 
- Description (string) – The description of the inference experiment. 
- ModelVariants (list) – An array of ModelVariantConfig objects. There is one for each variant whose infrastructure configuration you want to update.
  - (dict) – Contains information about the deployment options of a model.
    - ModelName (string) – [REQUIRED] The name of the Amazon SageMaker Model entity.
    - VariantName (string) – [REQUIRED] The name of the variant.
    - InfrastructureConfig (dict) – [REQUIRED] The configuration for the infrastructure that the model will be deployed to.
      - InfrastructureType (string) – [REQUIRED] The inference option to which to deploy your model. Possible values are the following:
        - RealTimeInference: Deploy the model to real-time inference.
 
      - RealTimeInferenceConfig (dict) – [REQUIRED] The infrastructure configuration for deploying the model to real-time inference.
        - InstanceType (string) – [REQUIRED] The instance type the model is deployed to.
        - InstanceCount (integer) – [REQUIRED] The number of instances of the type specified by InstanceType.
 
 
 
 
- DataStorageConfig (dict) – The Amazon S3 location and configuration for storing inference request and response data.
  - Destination (string) – [REQUIRED] The Amazon S3 bucket where the inference request and response data is stored.
  - KmsKey (string) – The Amazon Web Services Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
  - ContentType (dict) – Configuration specifying how to treat different headers. If no headers are specified, Amazon SageMaker will by default base64 encode the data when capturing it.
    - CsvContentTypes (list) – The list of all content type headers that Amazon SageMaker will treat as CSV and capture accordingly.
      - (string) –
 
    - JsonContentTypes (list) – The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
      - (string) –
 
 
 
- ShadowModeConfig (dict) – The configuration of the ShadowMode inference experiment type. Use this field to specify a production variant which takes all the inference requests, and a shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant, also specify the percentage of requests that Amazon SageMaker replicates. An example request using this field appears after this parameter list.
  - SourceModelVariantName (string) – [REQUIRED] The name of the production variant, which takes all the inference requests.
  - ShadowModelVariants (list) – [REQUIRED] A list of shadow variant configurations.
    - (dict) – The name and sampling percentage of a shadow variant.
      - ShadowModelVariantName (string) – [REQUIRED] The name of the shadow variant.
      - SamplingPercentage (integer) – [REQUIRED] The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
 
 
 
 
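As a minimal sketch of how these parameters fit together, the following call updates the end time and the shadow-variant sampling of a running experiment. It assumes boto3 credentials and a Region are already configured; the experiment name, variant names, and sampling percentage are hypothetical placeholders, not values defined by this API.

    import datetime

    import boto3

    sagemaker = boto3.client("sagemaker")

    # Hypothetical experiment and variant names; substitute your own.
    response = sagemaker.update_inference_experiment(
        Name="my-shadow-experiment",
        Schedule={
            # For a Running experiment, only the end date can be changed.
            "EndTime": datetime.datetime(2025, 1, 31, 23, 59, 59),
        },
        ShadowModeConfig={
            "SourceModelVariantName": "production-variant",
            "ShadowModelVariants": [
                {
                    "ShadowModelVariantName": "shadow-variant",
                    # Replicate a quarter of production traffic to the shadow variant.
                    "SamplingPercentage": 25,
                },
            ],
        },
    )
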
- Return type:
- dict 
- Returns:
- Response Syntax

    {
        'InferenceExperimentArn': 'string'
    }

  Response Structure

  - (dict) –
    - InferenceExperimentArn (string) – The ARN of the updated inference experiment.
 
 
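If the ARN is needed downstream, it can be read directly from the returned dictionary; a short sketch, continuing the hypothetical call shown after the parameter list:

    # update_inference_experiment returns a plain dict.
    experiment_arn = response["InferenceExperimentArn"]
    print(f"Updated inference experiment: {experiment_arn}")
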
Exceptions

- SageMaker.Client.exceptions.ConflictException
- SageMaker.Client.exceptions.ResourceNotFound
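
Both exceptions are modeled on the client, so they can be caught directly. A hedged sketch follows; the experiment name and description are hypothetical placeholders.

    import boto3

    sagemaker = boto3.client("sagemaker")

    try:
        sagemaker.update_inference_experiment(
            Name="my-shadow-experiment",  # hypothetical name
            Description="Extended shadow test window",
        )
    except sagemaker.exceptions.ResourceNotFound:
        # No inference experiment with this name exists in the account/Region.
        print("Inference experiment not found.")
    except sagemaker.exceptions.ConflictException:
        # The update conflicts with the experiment's current state, for example
        # when its status is no longer Created or Running.
        print("Experiment cannot be updated in its current state.")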