start_inference_scheduler
- LookoutEquipment.Client.start_inference_scheduler(**kwargs)
Starts an inference scheduler.
See also: AWS API Documentation
Request Syntax
response = client.start_inference_scheduler(
    InferenceSchedulerName='string'
)
- Parameters:
InferenceSchedulerName (string) –
[REQUIRED]
The name of the inference scheduler to be started.
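For example, a minimal call might look like the sketch below, which assumes default AWS credentials and region configuration and an existing scheduler named 'my-inference-scheduler' (a placeholder name):

```python
import boto3

# Region and credentials are resolved through the standard AWS configuration chain.
client = boto3.client('lookoutequipment')

# 'my-inference-scheduler' is a placeholder; use the name of an existing
# inference scheduler in your account.
response = client.start_inference_scheduler(
    InferenceSchedulerName='my-inference-scheduler'
)

print(response['Status'])  # typically 'PENDING' right after the start request
```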
- Return type:
dict
- Returns:
Response Syntax
{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}
Response Structure
- (dict) –
  - ModelArn (string) – The Amazon Resource Name (ARN) of the ML model being used by the inference scheduler.
  - ModelName (string) – The name of the ML model being used by the inference scheduler.
  - InferenceSchedulerName (string) – The name of the inference scheduler being started.
  - InferenceSchedulerArn (string) – The Amazon Resource Name (ARN) of the inference scheduler being started.
  - Status (string) – Indicates the status of the inference scheduler.
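The call returns once the start request has been accepted, so the returned Status is typically PENDING rather than RUNNING. A sketch of waiting for the scheduler to come up by polling describe_inference_scheduler (the scheduler name is a placeholder) might look like this:

```python
import time

import boto3

client = boto3.client('lookoutequipment')
scheduler_name = 'my-inference-scheduler'  # placeholder

status = client.start_inference_scheduler(
    InferenceSchedulerName=scheduler_name
)['Status']

# Poll until the scheduler leaves the PENDING state.
while status == 'PENDING':
    time.sleep(15)
    status = client.describe_inference_scheduler(
        InferenceSchedulerName=scheduler_name
    )['Status']

print(f'Inference scheduler is now {status}')
```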
Exceptions
- LookoutEquipment.Client.exceptions.ValidationException
- LookoutEquipment.Client.exceptions.ConflictException
- LookoutEquipment.Client.exceptions.ResourceNotFoundException
- LookoutEquipment.Client.exceptions.ThrottlingException
- LookoutEquipment.Client.exceptions.AccessDeniedException
- LookoutEquipment.Client.exceptions.InternalServerException
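These are modeled errors, so they can be caught through the client's exceptions attribute. A brief sketch (the scheduler name is a placeholder):

```python
import boto3

client = boto3.client('lookoutequipment')

try:
    client.start_inference_scheduler(
        InferenceSchedulerName='my-inference-scheduler'  # placeholder
    )
except client.exceptions.ResourceNotFoundException:
    print('No inference scheduler with that name was found.')
except client.exceptions.ConflictException:
    print('The scheduler is in a state that conflicts with being started.')
```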