LookoutforVision / Client / start_model
start_model
- LookoutforVision.Client.start_model(**kwargs)
Starts the running of the version of an Amazon Lookout for Vision model. Starting a model takes a while to complete. To check the current state of the model, use DescribeModel.
A model is ready to use when its status is HOSTED. Once the model is running, you can detect anomalies in new images by calling DetectAnomalies.
Note
You are charged for the amount of time that the model is running. To stop a running model, call StopModel.
This operation requires permissions to perform the lookoutvision:StartModel operation.
See also: AWS API Documentation
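Because StartModel returns before hosting finishes, a common pattern is to call it and then poll DescribeModel until the status reaches HOSTED. The sketch below assumes a client with the `start_model` and `describe_model` operations shown in this page; the helper name, polling interval, and timeout are illustrative, not part of the SDK.

```python
import time

def start_model_and_wait(client, project_name, model_version,
                         min_inference_units=1, poll_seconds=30, timeout=1800):
    """Start a Lookout for Vision model, then poll DescribeModel until the
    model reaches HOSTED. Raises RuntimeError on failure or timeout."""
    client.start_model(
        ProjectName=project_name,
        ModelVersion=model_version,
        MinInferenceUnits=min_inference_units,
    )
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = client.describe_model(
            ProjectName=project_name,
            ModelVersion=model_version,
        )['ModelDescription']['Status']
        if status == 'HOSTED':
            return status          # model is ready for DetectAnomalies
        if status == 'HOSTING_FAILED':
            raise RuntimeError('model hosting failed')
        time.sleep(poll_seconds)   # still STARTING_HOSTING; wait and re-check
    raise RuntimeError('timed out waiting for HOSTED status')
```

Remember that billing starts as soon as the model is running, so pair this with a StopModel call when you are done.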
Request Syntax
response = client.start_model(
    ProjectName='string',
    ModelVersion='string',
    MinInferenceUnits=123,
    ClientToken='string',
    MaxInferenceUnits=123
)
- Parameters:
ProjectName (string) –
[REQUIRED]
The name of the project that contains the model that you want to start.
ModelVersion (string) –
[REQUIRED]
The version of the model that you want to start.
MinInferenceUnits (integer) –
[REQUIRED]
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
ClientToken (string) –
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don’t supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You’ll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
This field is autopopulated if not provided.
MaxInferenceUnits (integer) – The maximum number of inference units to use for auto-scaling the model. If you don’t specify a value, Amazon Lookout for Vision doesn’t auto-scale the model.
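The ClientToken behavior described above can be sketched as a retry wrapper: generate the token once and reuse it on every attempt, so a retry after a network error cannot start the model twice. The helper name and the choice of retryable exceptions below are assumptions for illustration; the SDK also does this for you automatically when you omit ClientToken.

```python
import uuid

def start_model_idempotent(client, project_name, model_version,
                           min_inference_units=1, max_attempts=3,
                           retryable=(ConnectionError,)):
    """Call StartModel, retrying on network errors with the SAME ClientToken
    so repeated attempts count as one call."""
    token = str(uuid.uuid4())  # chosen once, reused on every attempt
    for attempt in range(max_attempts):
        try:
            return client.start_model(
                ProjectName=project_name,
                ModelVersion=model_version,
                MinInferenceUnits=min_inference_units,
                ClientToken=token,
            )
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
```

Note that the other parameters must be identical on every retry, or the service reports an error.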
- Return type:
dict
- Returns:
Response Syntax
{
    'Status': 'STARTING_HOSTING'|'HOSTED'|'HOSTING_FAILED'|'STOPPING_HOSTING'|'SYSTEM_UPDATING'
}
Response Structure
(dict) –
Status (string) –
The current running status of the model.
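A small classifier for the Status values listed in the response syntax can make calling code clearer. Grouping SYSTEM_UPDATING with the transitional states is an assumption here; only HOSTED is documented as ready for inference.

```python
# Status values from the StartModel response syntax above.
READY = {'HOSTED'}
TRANSITIONAL = {'STARTING_HOSTING', 'STOPPING_HOSTING', 'SYSTEM_UPDATING'}
FAILED = {'HOSTING_FAILED'}

def classify_status(status):
    """Map a model Status value to 'ready', 'transitional', or 'failed'."""
    if status in READY:
        return 'ready'
    if status in TRANSITIONAL:
        return 'transitional'
    if status in FAILED:
        return 'failed'
    raise ValueError(f'unknown model status: {status!r}')
```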
Exceptions