LookoutEquipment

Table of Contents

Client

class LookoutEquipment.Client

A low-level client representing Amazon Lookout for Equipment (LookoutEquipment)

Amazon Lookout for Equipment is a machine learning service that uses advanced analytics to identify anomalies in machines from sensor data for use in predictive maintenance.

import boto3

client = boto3.client('lookoutequipment')

These are the available methods:

can_paginate(operation_name)

Check if an operation can be paginated.

Parameters
operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns
True if the operation can be paginated, False otherwise.
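
As a minimal sketch, the following checks whether an operation is pageable before requesting a paginator; whether this client exposes paginators for its list operations depends on the installed botocore service model, so a manual NextToken fallback (an example appears under list_data_ingestion_jobs below) may still be needed.

# Check pageability first; fall back to a single call otherwise.
if client.can_paginate('list_models'):
    paginator = client.get_paginator('list_models')
    for page in paginator.paginate():
        for summary in page.get('ModelSummaries', []):
            print(summary['ModelName'], summary['Status'])
else:
    response = client.list_models()
    for summary in response.get('ModelSummaries', []):
        print(summary['ModelName'], summary['Status'])
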
create_dataset(**kwargs)

Creates a container for a collection of data being ingested for analysis. The dataset contains the metadata describing where the data is and what the data actually looks like. In other words, it contains the location of the data source, the data schema, and other information. A dataset also contains any tags associated with the ingested data.

See also: AWS API Documentation

Request Syntax

response = client.create_dataset(
    DatasetName='string',
    DatasetSchema={
        'InlineDataSchema': 'string'
    },
    ServerSideKmsKeyId='string',
    ClientToken='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
Parameters
  • DatasetName (string) --

    [REQUIRED]

    The name of the dataset being created.

  • DatasetSchema (dict) --

    [REQUIRED]

    A JSON description of the data that is in each time series dataset, including names, column names, and data types.

    • InlineDataSchema (string) --
  • ServerSideKmsKeyId (string) -- Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt dataset data by Amazon Lookout for Equipment.
  • ClientToken (string) --

    [REQUIRED]

    A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

    This field is autopopulated if not provided.

  • Tags (list) --

    Any tags associated with the ingested data described in the dataset.

    • (dict) --

      A tag is a key-value pair that can be added to a resource as metadata.

      • Key (string) -- [REQUIRED]

        The key for the specified tag.

      • Value (string) -- [REQUIRED]

        The value for the specified tag.

Return type

dict

Returns

Response Syntax

{
    'DatasetName': 'string',
    'DatasetArn': 'string',
    'Status': 'CREATED'|'INGESTION_IN_PROGRESS'|'ACTIVE'
}

Response Structure

  • (dict) --

    • DatasetName (string) --

      The name of the dataset being created.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset being created.

    • Status (string) --

      Indicates the status of the CreateDataset operation.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.ServiceQuotaExceededException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
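
A minimal usage sketch follows. The dataset name and the layout of the inline schema (a JSON string describing components and their columns) are illustrative assumptions, not values taken from this reference; ClientToken is omitted because boto3 autopopulates it.

import json

# Hypothetical component/column layout for the inline data schema (assumption).
schema = {
    "Components": [
        {
            "ComponentName": "pump-01",
            "Columns": [
                {"Name": "Timestamp", "Type": "DATETIME"},
                {"Name": "Pressure", "Type": "DOUBLE"},
                {"Name": "Temperature", "Type": "DOUBLE"}
            ]
        }
    ]
}

response = client.create_dataset(
    DatasetName='pump-sensor-dataset',                      # hypothetical name
    DatasetSchema={'InlineDataSchema': json.dumps(schema)}
)
print(response['DatasetArn'], response['Status'])
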
create_inference_scheduler(**kwargs)

Creates a scheduled inference. Scheduling an inference is setting up a continuous real-time inference plan to analyze new measurement data. When setting up the schedule, you provide an S3 bucket location for the input data, assign it a delimiter between separate entries in the data, set an offset delay if desired, and set the frequency of inferencing. You must also provide an S3 bucket location for the output data.

See also: AWS API Documentation

Request Syntax

response = client.create_inference_scheduler(
    ModelName='string',
    InferenceSchedulerName='string',
    DataDelayOffsetInMinutes=123,
    DataUploadFrequency='PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H',
    DataInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'InputTimeZoneOffset': 'string',
        'InferenceInputNameConfiguration': {
            'TimestampFormat': 'string',
            'ComponentTimestampDelimiter': 'string'
        }
    },
    DataOutputConfiguration={
        'S3OutputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'KmsKeyId': 'string'
    },
    RoleArn='string',
    ServerSideKmsKeyId='string',
    ClientToken='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
Parameters
  • ModelName (string) --

    [REQUIRED]

    The name of the previously trained ML model being used to create the inference scheduler.

  • InferenceSchedulerName (string) --

    [REQUIRED]

    The name of the inference scheduler being created.

  • DataDelayOffsetInMinutes (integer) -- A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, if you select an offset delay time of five minutes, inference will not begin on the data until the first data measurement after the five minute mark; the inference scheduler wakes up at the configured frequency but waits an additional five minutes before checking the customer S3 bucket. This lets you upload data at the same frequency without stopping and restarting the scheduler when uploading new data.
  • DataUploadFrequency (string) --

    [REQUIRED]

    How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.

  • DataInputConfiguration (dict) --

    [REQUIRED]

    Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

    • S3InputConfiguration (dict) --

      Specifies configuration information for the input data for the inference, including the S3 location of the input data.

      • Bucket (string) -- [REQUIRED]

        The bucket containing the input dataset for the inference.

      • Prefix (string) --

        The prefix for the S3 bucket used for the input data for the inference.

    • InputTimeZoneOffset (string) --

      Indicates the difference between your time zone and Greenwich Mean Time (GMT).

    • InferenceInputNameConfiguration (dict) --

      Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

      • TimestampFormat (string) --

        The format of the timestamp, whether Epoch time, or standard, with or without hyphens (-).

      • ComponentTimestampDelimiter (string) --

        Indicates the delimiter character used between items in the data.

  • DataOutputConfiguration (dict) --

    [REQUIRED]

    Specifies configuration information for the output results for the inference scheduler, including the S3 location for the output.

    • S3OutputConfiguration (dict) -- [REQUIRED]

      Specifies configuration information for the output results from the inference, including the output S3 location.

      • Bucket (string) -- [REQUIRED]

        The bucket containing the output results from the inference.

      • Prefix (string) --

        The prefix for the S3 bucket used for the output results from the inference.

    • KmsKeyId (string) --

      The ID number for the AWS KMS key used to encrypt the inference output.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of a role with permission to access the data source being used for the inference.

  • ServerSideKmsKeyId (string) -- Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt inference scheduler data by Amazon Lookout for Equipment.
  • ClientToken (string) --

    [REQUIRED]

    A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

    This field is autopopulated if not provided.

  • Tags (list) --

    Any tags associated with the inference scheduler.

    • (dict) --

      A tag is a key-value pair that can be added to a resource as metadata.

      • Key (string) -- [REQUIRED]

        The key for the specified tag.

      • Value (string) -- [REQUIRED]

        The value for the specified tag.

Return type

dict

Returns

Response Syntax

{
    'InferenceSchedulerArn': 'string',
    'InferenceSchedulerName': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}

Response Structure

  • (dict) --

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being created.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being created.

    • Status (string) --

      Indicates the status of the CreateInferenceScheduler operation.

Exceptions

  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ServiceQuotaExceededException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
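
The sketch below wires a previously trained model to hypothetical input and output buckets; every name, bucket, and the role ARN are placeholders, and the scheduler reads new data every five minutes with a five-minute delay offset.

# All names, buckets, and the role ARN are hypothetical placeholders.
response = client.create_inference_scheduler(
    ModelName='pump-anomaly-model',
    InferenceSchedulerName='pump-anomaly-scheduler',
    DataDelayOffsetInMinutes=5,
    DataUploadFrequency='PT5M',
    DataInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'example-inference-input',
            'Prefix': 'pump-01/input/'
        }
    },
    DataOutputConfiguration={
        'S3OutputConfiguration': {
            'Bucket': 'example-inference-output',
            'Prefix': 'pump-01/output/'
        }
    },
    RoleArn='arn:aws:iam::123456789012:role/LookoutEquipmentAccess'
)
print(response['InferenceSchedulerArn'], response['Status'])
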
create_model(**kwargs)

Creates an ML model for data inference.

A machine-learning (ML) model is a mathematical model that finds patterns in your data. In Amazon Lookout for Equipment, the model learns the patterns of normal behavior and detects abnormal behavior that could be potential equipment failure (or maintenance events). The models are made by analyzing normal data and abnormalities in machine behavior that have already occurred.

Your model is trained using a portion of the data from your dataset and uses that data to learn patterns of normal behavior and abnormal patterns that lead to equipment failure. Another portion of the data is used to evaluate the model's accuracy.

See also: AWS API Documentation

Request Syntax

response = client.create_model(
    ModelName='string',
    DatasetName='string',
    DatasetSchema={
        'InlineDataSchema': 'string'
    },
    LabelsInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    ClientToken='string',
    TrainingDataStartTime=datetime(2015, 1, 1),
    TrainingDataEndTime=datetime(2015, 1, 1),
    EvaluationDataStartTime=datetime(2015, 1, 1),
    EvaluationDataEndTime=datetime(2015, 1, 1),
    RoleArn='string',
    DataPreProcessingConfiguration={
        'TargetSamplingRate': 'PT1S'|'PT5S'|'PT10S'|'PT15S'|'PT30S'|'PT1M'|'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H'
    },
    ServerSideKmsKeyId='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
Parameters
  • ModelName (string) --

    [REQUIRED]

    The name for the ML model to be created.

  • DatasetName (string) --

    [REQUIRED]

    The name of the dataset for the ML model being created.

  • DatasetSchema (dict) --

    The data schema for the ML model being created.

    • InlineDataSchema (string) --
  • LabelsInputConfiguration (dict) --

    The input configuration for the labels being used for the ML model that's being created.

    • S3InputConfiguration (dict) -- [REQUIRED]

      Contains location information for the S3 location being used for label data.

      • Bucket (string) -- [REQUIRED]

        The name of the S3 bucket holding the label data.

      • Prefix (string) --

        The prefix for the S3 bucket used for the label data.

  • ClientToken (string) --

    [REQUIRED]

    A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

    This field is autopopulated if not provided.

  • TrainingDataStartTime (datetime) -- Indicates the time reference in the dataset that should be used to begin the subset of training data for the ML model.
  • TrainingDataEndTime (datetime) -- Indicates the time reference in the dataset that should be used to end the subset of training data for the ML model.
  • EvaluationDataStartTime (datetime) -- Indicates the time reference in the dataset that should be used to begin the subset of evaluation data for the ML model.
  • EvaluationDataEndTime (datetime) -- Indicates the time reference in the dataset that should be used to end the subset of evaluation data for the ML model.
  • RoleArn (string) -- The Amazon Resource Name (ARN) of a role with permission to access the data source being used to create the ML model.
  • DataPreProcessingConfiguration (dict) --

    The configuration is the TargetSamplingRate, which is the sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

    When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

    • TargetSamplingRate (string) --

      The sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

      When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

  • ServerSideKmsKeyId (string) -- Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt model data by Amazon Lookout for Equipment.
  • Tags (list) --

    Any tags associated with the ML model being created.

    • (dict) --

      A tag is a key-value pair that can be added to a resource as metadata.

      • Key (string) -- [REQUIRED]

        The key for the specified tag.

      • Value (string) -- [REQUIRED]

        The value for the specified tag.

Return type

dict

Returns

Response Syntax

{
    'ModelArn': 'string',
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED'
}

Response Structure

  • (dict) --

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the model being created.

    • Status (string) --

      Indicates the status of the CreateModel operation.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.ServiceQuotaExceededException
  • LookoutEquipment.Client.exceptions.InternalServerException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
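
A minimal training request might look like the following; the model and dataset names, the training and evaluation windows, and the role ARN are assumptions chosen for illustration.

from datetime import datetime

# Hypothetical names, time windows, and role ARN.
response = client.create_model(
    ModelName='pump-anomaly-model',
    DatasetName='pump-sensor-dataset',
    TrainingDataStartTime=datetime(2020, 1, 1),
    TrainingDataEndTime=datetime(2020, 9, 30),
    EvaluationDataStartTime=datetime(2020, 10, 1),
    EvaluationDataEndTime=datetime(2020, 12, 31),
    DataPreProcessingConfiguration={'TargetSamplingRate': 'PT1M'},
    RoleArn='arn:aws:iam::123456789012:role/LookoutEquipmentAccess'
)
print(response['ModelArn'], response['Status'])
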
delete_dataset(**kwargs)

Deletes a dataset and associated artifacts. The operation checks whether any inference scheduler or data ingestion job is currently using the dataset; if none is, the dataset, its metadata, and any associated data stored in S3 are deleted. This does not affect any models that used this dataset for training and evaluation, but does prevent it from being used in the future.

See also: AWS API Documentation

Request Syntax

response = client.delete_dataset(
    DatasetName='string'
)
Parameters
DatasetName (string) --

[REQUIRED]

The name of the dataset to be deleted.

Returns
None

Exceptions

  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.InternalServerException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.ConflictException
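
Because the delete is rejected while an inference scheduler or data ingestion job is still using the dataset, a common pattern is to catch the modeled ConflictException; the dataset name below is hypothetical.

try:
    client.delete_dataset(DatasetName='pump-sensor-dataset')
except client.exceptions.ConflictException as err:
    # The dataset is still in use; retry after the scheduler or job is removed.
    print('Dataset is still in use:', err)
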
delete_inference_scheduler(**kwargs)

Deletes an inference scheduler that has been set up. Already processed output results are not affected.

See also: AWS API Documentation

Request Syntax

response = client.delete_inference_scheduler(
    InferenceSchedulerName='string'
)
Parameters
InferenceSchedulerName (string) --

[REQUIRED]

The name of the inference scheduler to be deleted.

Returns
None

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
delete_model(**kwargs)

Deletes an ML model currently available for Amazon Lookout for Equipment. This will prevent it from being used with an inference scheduler, even one that is already set up.

See also: AWS API Documentation

Request Syntax

response = client.delete_model(
    ModelName='string'
)
Parameters
ModelName (string) --

[REQUIRED]

The name of the ML model to be deleted.

Returns
None

Exceptions

  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.InternalServerException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
describe_data_ingestion_job(**kwargs)

Provides information on a specific data ingestion job such as creation time, dataset ARN, status, and so on.

See also: AWS API Documentation

Request Syntax

response = client.describe_data_ingestion_job(
    JobId='string'
)
Parameters
JobId (string) --

[REQUIRED]

The job ID of the data ingestion job.

Return type
dict
Returns
Response Syntax
{
    'JobId': 'string',
    'DatasetArn': 'string',
    'IngestionInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    'RoleArn': 'string',
    'CreatedAt': datetime(2015, 1, 1),
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
    'FailedReason': 'string'
}

Response Structure

  • (dict) --
    • JobId (string) --

      Indicates the job ID of the data ingestion job.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset being used in the data ingestion job.

    • IngestionInputConfiguration (dict) --

      Specifies the S3 location configuration for the data input for the data ingestion job.

      • S3InputConfiguration (dict) --

        The location information for the S3 bucket used for input data for the data ingestion.

        • Bucket (string) --

          The name of the S3 bucket used for the input data for the data ingestion.

        • Prefix (string) --

          The prefix for the S3 location being used for the input data for the data ingestion.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of an IAM role with permission to access the data source being ingested.

    • CreatedAt (datetime) --

      The time at which the data ingestion job was created.

    • Status (string) --

      Indicates the status of the DataIngestionJob operation.

    • FailedReason (string) --

      Specifies the reason for failure when a data ingestion job has failed.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
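
Because ingestion runs asynchronously, a typical pattern is to poll this operation until the job finishes; the job ID below is a placeholder for the value returned by start_data_ingestion_job.

import time

job_id = 'example-job-id'  # hypothetical; returned by start_data_ingestion_job
while True:
    job = client.describe_data_ingestion_job(JobId=job_id)
    if job['Status'] in ('SUCCESS', 'FAILED'):
        break
    time.sleep(30)  # wait before polling again

if job['Status'] == 'FAILED':
    print('Ingestion failed:', job.get('FailedReason'))
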
describe_dataset(**kwargs)

Provides information on a specified dataset such as the schema location, status, and so on.

See also: AWS API Documentation

Request Syntax

response = client.describe_dataset(
    DatasetName='string'
)
Parameters
DatasetName (string) --

[REQUIRED]

The name of the dataset to be described.

Return type
dict
Returns
Response Syntax
{
    'DatasetName': 'string',
    'DatasetArn': 'string',
    'CreatedAt': datetime(2015, 1, 1),
    'LastUpdatedAt': datetime(2015, 1, 1),
    'Status': 'CREATED'|'INGESTION_IN_PROGRESS'|'ACTIVE',
    'Schema': 'string',
    'ServerSideKmsKeyId': 'string',
    'IngestionInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    }
}

Response Structure

  • (dict) --
    • DatasetName (string) --

      The name of the dataset being described.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset being described.

    • CreatedAt (datetime) --

      Specifies the time the dataset was created in Amazon Lookout for Equipment.

    • LastUpdatedAt (datetime) --

      Specifies the time the dataset was last updated, if it was.

    • Status (string) --

      Indicates the status of the dataset.

    • Schema (string) --

      A JSON description of the data that is in each time series dataset, including names, column names, and data types.

    • ServerSideKmsKeyId (string) --

      Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt dataset data by Amazon Lookout for Equipment.

    • IngestionInputConfiguration (dict) --

      Specifies the S3 location configuration for the data input for the data ingestion job.

      • S3InputConfiguration (dict) --

        The location information for the S3 bucket used for input data for the data ingestion.

        • Bucket (string) --

          The name of the S3 bucket used for the input data for the data ingestion.

        • Prefix (string) --

          The prefix for the S3 location being used for the input data for the data ingestion.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
describe_inference_scheduler(**kwargs)

Specifies information about the inference scheduler being used, including name, model, status, and associated metadata.

See also: AWS API Documentation

Request Syntax

response = client.describe_inference_scheduler(
    InferenceSchedulerName='string'
)
Parameters
InferenceSchedulerName (string) --

[REQUIRED]

The name of the inference scheduler being described.

Return type
dict
Returns
Response Syntax
{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED',
    'DataDelayOffsetInMinutes': 123,
    'DataUploadFrequency': 'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H',
    'CreatedAt': datetime(2015, 1, 1),
    'UpdatedAt': datetime(2015, 1, 1),
    'DataInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'InputTimeZoneOffset': 'string',
        'InferenceInputNameConfiguration': {
            'TimestampFormat': 'string',
            'ComponentTimestampDelimiter': 'string'
        }
    },
    'DataOutputConfiguration': {
        'S3OutputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'KmsKeyId': 'string'
    },
    'RoleArn': 'string',
    'ServerSideKmsKeyId': 'string'
}

Response Structure

  • (dict) --
    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model of the inference scheduler being described.

    • ModelName (string) --

      The name of the ML model of the inference scheduler being described.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being described.

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being described.

    • Status (string) --

      Indicates the status of the inference scheduler.

    • DataDelayOffsetInMinutes (integer) --

      A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, if you select an offset delay time of five minutes, inference will not begin on the data until the first data measurement after the five minute mark; the inference scheduler wakes up at the configured frequency but waits an additional five minutes before checking the customer S3 bucket. This lets you upload data at the same frequency without stopping and restarting the scheduler when uploading new data.

    • DataUploadFrequency (string) --

      Specifies how often data is uploaded to the source S3 bucket for the input data. This value is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.

    • CreatedAt (datetime) --

      Specifies the time at which the inference scheduler was created.

    • UpdatedAt (datetime) --

      Specifies the time at which the inference scheduler was last updated, if it was.

    • DataInputConfiguration (dict) --

      Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

      • S3InputConfiguration (dict) --

        Specifies configuration information for the input data for the inference, including the S3 location of the input data.

        • Bucket (string) --

          The bucket containing the input dataset for the inference.

        • Prefix (string) --

          The prefix for the S3 bucket used for the input data for the inference.

      • InputTimeZoneOffset (string) --

        Indicates the difference between your time zone and Greenwich Mean Time (GMT).

      • InferenceInputNameConfiguration (dict) --

        Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

        • TimestampFormat (string) --

          The format of the timestamp, whether Epoch time, or standard, with or without hyphens (-).

        • ComponentTimestampDelimiter (string) --

          Indicates the delimiter character used between items in the data.

    • DataOutputConfiguration (dict) --

      Specifies information for the output results for the inference scheduler, including the output S3 location.

      • S3OutputConfiguration (dict) --

        Specifies configuration information for the output results from the inference, including the output S3 location.

        • Bucket (string) --

          The bucket containing the output results from the inference.

        • Prefix (string) --

          The prefix for the S3 bucket used for the output results from the inference.

      • KmsKeyId (string) --

        The ID number for the AWS KMS key used to encrypt the inference output.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of a role with permission to access the data source for the inference scheduler being described.

    • ServerSideKmsKeyId (string) --

      Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt inference scheduler data by Amazon Lookout for Equipment.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
describe_model(**kwargs)

Provides overall information about a specific ML model, including model name and ARN, dataset, training and evaluation information, status, and so on.

See also: AWS API Documentation

Request Syntax

response = client.describe_model(
    ModelName='string'
)
Parameters
ModelName (string) --

[REQUIRED]

The name of the ML model to be described.

Return type
dict
Returns
Response Syntax
{
    'ModelName': 'string',
    'ModelArn': 'string',
    'DatasetName': 'string',
    'DatasetArn': 'string',
    'Schema': 'string',
    'LabelsInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    'TrainingDataStartTime': datetime(2015, 1, 1),
    'TrainingDataEndTime': datetime(2015, 1, 1),
    'EvaluationDataStartTime': datetime(2015, 1, 1),
    'EvaluationDataEndTime': datetime(2015, 1, 1),
    'RoleArn': 'string',
    'DataPreProcessingConfiguration': {
        'TargetSamplingRate': 'PT1S'|'PT5S'|'PT10S'|'PT15S'|'PT30S'|'PT1M'|'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H'
    },
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
    'TrainingExecutionStartTime': datetime(2015, 1, 1),
    'TrainingExecutionEndTime': datetime(2015, 1, 1),
    'FailedReason': 'string',
    'ModelMetrics': 'string',
    'LastUpdatedTime': datetime(2015, 1, 1),
    'CreatedAt': datetime(2015, 1, 1),
    'ServerSideKmsKeyId': 'string'
}

Response Structure

  • (dict) --
    • ModelName (string) --

      The name of the ML model being described.

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model being described.

    • DatasetName (string) --

      The name of the dataset being used by the ML model being described.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset used to create the ML model being described.

    • Schema (string) --

      A JSON description of the data that is in each time series dataset, including names, column names, and data types.

    • LabelsInputConfiguration (dict) --

      Specifies configuration information about the labels input, including its S3 location.

      • S3InputConfiguration (dict) --

        Contains location information for the S3 location being used for label data.

        • Bucket (string) --

          The name of the S3 bucket holding the label data.

        • Prefix (string) --

          The prefix for the S3 bucket used for the label data.

    • TrainingDataStartTime (datetime) --

      Indicates the time reference in the dataset that was used to begin the subset of training data for the ML model.

    • TrainingDataEndTime (datetime) --

      Indicates the time reference in the dataset that was used to end the subset of training data for the ML model.

    • EvaluationDataStartTime (datetime) --

      Indicates the time reference in the dataset that was used to begin the subset of evaluation data for the ML model.

    • EvaluationDataEndTime (datetime) --

      Indicates the time reference in the dataset that was used to end the subset of evaluation data for the ML model.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of a role with permission to access the data source for the ML model being described.

    • DataPreProcessingConfiguration (dict) --

      The configuration is the TargetSamplingRate, which is the sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

      When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

      • TargetSamplingRate (string) --

        The sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

        When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

    • Status (string) --

      Specifies the current status of the model being described. The status reflects the most recent action performed on the model.

    • TrainingExecutionStartTime (datetime) --

      Indicates the time at which the training of the ML model began.

    • TrainingExecutionEndTime (datetime) --

      Indicates the time at which the training of the ML model was completed.

    • FailedReason (string) --

      If the training of the ML model failed, this indicates the reason for that failure.

    • ModelMetrics (string) --

      The Model Metrics show an aggregated summary of the model's performance within the evaluation time range. This is the JSON content of the metrics created when evaluating the model.

    • LastUpdatedTime (datetime) --

      Indicates the last time the ML model was updated. The type of update is not specified.

    • CreatedAt (datetime) --

      Indicates the time and date at which the ML model was created.

    • ServerSideKmsKeyId (string) --

      Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt model data by Amazon Lookout for Equipment.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
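
ModelMetrics is returned as a JSON string, so a typical pattern is to decode it once training has succeeded; the model name is hypothetical, and because the structure of the decoded metrics document is not specified in this reference, the sketch only pretty-prints it.

import json

model = client.describe_model(ModelName='pump-anomaly-model')  # hypothetical name
if model['Status'] == 'SUCCESS' and 'ModelMetrics' in model:
    metrics = json.loads(model['ModelMetrics'])  # JSON content of the evaluation metrics
    print(json.dumps(metrics, indent=2))
else:
    print(model['Status'], model.get('FailedReason'))
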
generate_presigned_url(ClientMethod, Params=None, ExpiresIn=3600, HttpMethod=None)

Generate a presigned url given a client, its method, and arguments

Parameters
  • ClientMethod (string) -- The client method to presign for
  • Params (dict) -- The parameters normally passed to ClientMethod.
  • ExpiresIn (int) -- The number of seconds the presigned url is valid for. By default it expires in an hour (3600 seconds)
  • HttpMethod (string) -- The http method to use on the generated url. By default, the http method is whatever is used in the method's model.
Returns

The presigned url

get_paginator(operation_name)

Create a paginator for an operation.

Parameters
operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Raises OperationNotPageableError
Raised if the operation is not pageable. You can use the client.can_paginate method to check if an operation is pageable.
Return type
L{botocore.paginate.Paginator}
Returns
A paginator object.
get_waiter(waiter_name)

Returns an object that can wait for some condition.

Parameters
waiter_name (str) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters.
Returns
The specified waiter object.
Return type
botocore.waiter.Waiter
list_data_ingestion_jobs(**kwargs)

Provides a list of all data ingestion jobs, including dataset name and ARN, S3 location of the input data, status, and so on.

See also: AWS API Documentation

Request Syntax

response = client.list_data_ingestion_jobs(
    DatasetName='string',
    NextToken='string',
    MaxResults=123,
    Status='IN_PROGRESS'|'SUCCESS'|'FAILED'
)
Parameters
  • DatasetName (string) -- The name of the dataset being used for the data ingestion job.
  • NextToken (string) -- An opaque pagination token indicating where to continue the listing of data ingestion jobs.
  • MaxResults (integer) -- Specifies the maximum number of data ingestion jobs to list.
  • Status (string) -- Indicates the status of the data ingestion job.
Return type

dict

Returns

Response Syntax

{
    'NextToken': 'string',
    'DataIngestionJobSummaries': [
        {
            'JobId': 'string',
            'DatasetName': 'string',
            'DatasetArn': 'string',
            'IngestionInputConfiguration': {
                'S3InputConfiguration': {
                    'Bucket': 'string',
                    'Prefix': 'string'
                }
            },
            'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of data ingestion jobs.

    • DataIngestionJobSummaries (list) --

      Specifies information about the specific data ingestion job, including dataset name and status.

      • (dict) --

        Provides information about a specified data ingestion job, including dataset information, data ingestion configuration, and status.

        • JobId (string) --

          Indicates the job ID of the data ingestion job.

        • DatasetName (string) --

          The name of the dataset used for the data ingestion job.

        • DatasetArn (string) --

          The Amazon Resource Name (ARN) of the dataset used in the data ingestion job.

        • IngestionInputConfiguration (dict) --

          Specifies information for the input data for the data ingestion job, including data S3 location parameters.

          • S3InputConfiguration (dict) --

            The location information for the S3 bucket used for input data for the data ingestion.

            • Bucket (string) --

              The name of the S3 bucket used for the input data for the data ingestion.

            • Prefix (string) --

              The prefix for the S3 location being used for the input data for the data ingestion.

        • Status (string) --

          Indicates the status of the data ingestion job.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
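
A manual pagination loop using NextToken might look like the following; the dataset name is a placeholder.

# Collect all ingestion job summaries for a hypothetical dataset by following NextToken.
kwargs = {'DatasetName': 'pump-sensor-dataset', 'MaxResults': 50}
summaries = []
while True:
    page = client.list_data_ingestion_jobs(**kwargs)
    summaries.extend(page.get('DataIngestionJobSummaries', []))
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token

for job in summaries:
    print(job['JobId'], job['Status'])
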
list_datasets(**kwargs)

Lists all datasets currently available in your account, filtering on the dataset name.

See also: AWS API Documentation

Request Syntax

response = client.list_datasets(
    NextToken='string',
    MaxResults=123,
    DatasetNameBeginsWith='string'
)
Parameters
  • NextToken (string) -- An opaque pagination token indicating where to continue the listing of datasets.
  • MaxResults (integer) -- Specifies the maximum number of datasets to list.
  • DatasetNameBeginsWith (string) -- The beginning of the name of the datasets to be listed.
Return type

dict

Returns

Response Syntax

{
    'NextToken': 'string',
    'DatasetSummaries': [
        {
            'DatasetName': 'string',
            'DatasetArn': 'string',
            'Status': 'CREATED'|'INGESTION_IN_PROGRESS'|'ACTIVE',
            'CreatedAt': datetime(2015, 1, 1)
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of datasets.

    • DatasetSummaries (list) --

      Provides information about the specified dataset, including creation time, dataset ARN, and status.

      • (dict) --

        Contains information about the specific data set, including name, ARN, and status.

        • DatasetName (string) --

          The name of the dataset.

        • DatasetArn (string) --

          The Amazon Resource Name (ARN) of the specified dataset.

        • Status (string) --

          Indicates the status of the dataset.

        • CreatedAt (datetime) --

          The time at which the dataset was created in Amazon Lookout for Equipment.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
list_inference_executions(**kwargs)

Lists all inference executions that have been performed by the specified inference scheduler.

See also: AWS API Documentation

Request Syntax

response = client.list_inference_executions(
    NextToken='string',
    MaxResults=123,
    InferenceSchedulerName='string',
    DataStartTimeAfter=datetime(2015, 1, 1),
    DataEndTimeBefore=datetime(2015, 1, 1),
    Status='IN_PROGRESS'|'SUCCESS'|'FAILED'
)
Parameters
  • NextToken (string) -- An opaque pagination token indicating where to continue the listing of inference executions.
  • MaxResults (integer) -- Specifies the maximum number of inference executions to list.
  • InferenceSchedulerName (string) --

    [REQUIRED]

    The name of the inference scheduler for the inference execution listed.

  • DataStartTimeAfter (datetime) -- The time reference in the inferenced dataset after which Amazon Lookout for Equipment started the inference execution.
  • DataEndTimeBefore (datetime) -- The time reference in the inferenced dataset before which Amazon Lookout for Equipment stopped the inference execution.
  • Status (string) -- The status of the inference execution.
Return type

dict

Returns

Response Syntax

{
    'NextToken': 'string',
    'InferenceExecutionSummaries': [
        {
            'ModelName': 'string',
            'ModelArn': 'string',
            'InferenceSchedulerName': 'string',
            'InferenceSchedulerArn': 'string',
            'ScheduledStartTime': datetime(2015, 1, 1),
            'DataStartTime': datetime(2015, 1, 1),
            'DataEndTime': datetime(2015, 1, 1),
            'DataInputConfiguration': {
                'S3InputConfiguration': {
                    'Bucket': 'string',
                    'Prefix': 'string'
                },
                'InputTimeZoneOffset': 'string',
                'InferenceInputNameConfiguration': {
                    'TimestampFormat': 'string',
                    'ComponentTimestampDelimiter': 'string'
                }
            },
            'DataOutputConfiguration': {
                'S3OutputConfiguration': {
                    'Bucket': 'string',
                    'Prefix': 'string'
                },
                'KmsKeyId': 'string'
            },
            'CustomerResultObject': {
                'Bucket': 'string',
                'Key': 'string'
            },
            'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
            'FailedReason': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of inference executions.

    • InferenceExecutionSummaries (list) --

      Provides an array of information about the individual inference executions returned from the ListInferenceExecutions operation, including model used, inference scheduler, data configuration, and so on.

      • (dict) --

        Contains information about the specific inference execution, including input and output data configuration, inference scheduling information, status, and so on.

        • ModelName (string) --

          The name of the ML model being used for the inference execution.

        • ModelArn (string) --

          The Amazon Resource Name (ARN) of the ML model used for the inference execution.

        • InferenceSchedulerName (string) --

          The name of the inference scheduler being used for the inference execution.

        • InferenceSchedulerArn (string) --

          The Amazon Resource Name (ARN) of the inference scheduler being used for the inference execution.

        • ScheduledStartTime (datetime) --

          Indicates the start time at which the inference scheduler began the specific inference execution.

        • DataStartTime (datetime) --

          Indicates the time reference in the dataset at which the inference execution began.

        • DataEndTime (datetime) --

          Indicates the time reference in the dataset at which the inference execution stopped.

        • DataInputConfiguration (dict) --

          Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

          • S3InputConfiguration (dict) --

            Specifies configuration information for the input data for the inference, including the S3 location of the input data.

            • Bucket (string) --

              The bucket containing the input dataset for the inference.

            • Prefix (string) --

              The prefix for the S3 bucket used for the input data for the inference.

          • InputTimeZoneOffset (string) --

            Indicates the difference between your time zone and Greenwich Mean Time (GMT).

          • InferenceInputNameConfiguration (dict) --

            Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

            • TimestampFormat (string) --

              The format of the timestamp, whether Epoch time, or standard, with or without hyphens (-).

            • ComponentTimestampDelimiter (string) --

              Indicates the delimiter character used between items in the data.

        • DataOutputConfiguration (dict) --

          Specifies configuration information for the output results from the inference execution, including the output S3 location.

          • S3OutputConfiguration (dict) --

            Specifies configuration information for the output results from the inference, including the output S3 location.

            • Bucket (string) --

              The bucket containing the output results from the inference.

            • Prefix (string) --

              The prefix for the S3 bucket used for the output results from the inference.

          • KmsKeyId (string) --

            The ID number for the AWS KMS key used to encrypt the inference output.

        • CustomerResultObject (dict) --

          • Bucket (string) --

            The name of the specific S3 bucket.

          • Key (string) --

            The key (object name) of the S3 object that contains the results of the inference execution.

        • Status (string) --

          Indicates the status of the inference execution.

        • FailedReason (string) --

          Specifies the reason for failure when an inference execution has failed.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
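
As a sketch, the call below lists only failed executions from roughly the last day for a hypothetical scheduler; additional pages, if any, would be fetched with NextToken as shown for list_data_ingestion_jobs above.

from datetime import datetime, timedelta

response = client.list_inference_executions(
    InferenceSchedulerName='pump-anomaly-scheduler',   # hypothetical name
    DataStartTimeAfter=datetime.utcnow() - timedelta(days=1),
    Status='FAILED'
)
for execution in response.get('InferenceExecutionSummaries', []):
    print(execution['DataStartTime'], execution.get('FailedReason'))
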
list_inference_schedulers(**kwargs)

Retrieves a list of all inference schedulers currently available for your account.

See also: AWS API Documentation

Request Syntax

response = client.list_inference_schedulers(
    NextToken='string',
    MaxResults=123,
    InferenceSchedulerNameBeginsWith='string',
    ModelName='string'
)
Parameters
  • NextToken (string) -- An opaque pagination token indicating where to continue the listing of inference schedulers.
  • MaxResults (integer) -- Specifies the maximum number of inference schedulers to list.
  • InferenceSchedulerNameBeginsWith (string) -- The beginning of the name of the inference schedulers to be listed.
  • ModelName (string) -- The name of the ML model used by the inference scheduler to be listed.
Return type

dict

Returns

Response Syntax

{
    'NextToken': 'string',
    'InferenceSchedulerSummaries': [
        {
            'ModelName': 'string',
            'ModelArn': 'string',
            'InferenceSchedulerName': 'string',
            'InferenceSchedulerArn': 'string',
            'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED',
            'DataDelayOffsetInMinutes': 123,
            'DataUploadFrequency': 'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of inference schedulers.

    • InferenceSchedulerSummaries (list) --

      Provides information about the specified inference scheduler, including data upload frequency, model name and ARN, and status.

      • (dict) --

        Contains information about the specific inference scheduler, including data delay offset, model name and ARN, status, and so on.

        • ModelName (string) --

          The name of the ML model used for the inference scheduler.

        • ModelArn (string) --

          The Amazon Resource Name (ARN) of the ML model used by the inference scheduler.

        • InferenceSchedulerName (string) --

          The name of the inference scheduler.

        • InferenceSchedulerArn (string) --

          The Amazon Resource Name (ARN) of the inference scheduler.

        • Status (string) --

          Indicates the status of the inference scheduler.

        • DataDelayOffsetInMinutes (integer) --

          A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, if an offset delay time of five minutes was selected, inference will not begin on the data until the first data measurement after the five minute mark; the inference scheduler wakes up at the configured frequency but waits an additional five minutes before checking the customer S3 bucket. Data can be uploaded at the same frequency without stopping and restarting the scheduler when new data arrives.

        • DataUploadFrequency (string) --

          How often data is uploaded to the source S3 bucket for the input data. This value is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
list_models(**kwargs)

Generates a list of all models in the account, including model name and ARN, dataset, and status.

See also: AWS API Documentation

Request Syntax

response = client.list_models(
    NextToken='string',
    MaxResults=123,
    Status='IN_PROGRESS'|'SUCCESS'|'FAILED',
    ModelNameBeginsWith='string',
    DatasetNameBeginsWith='string'
)
Parameters
  • NextToken (string) -- An opaque pagination token indicating where to continue the listing of ML models.
  • MaxResults (integer) -- Specifies the maximum number of ML models to list.
  • Status (string) -- The status of the ML model.
  • ModelNameBeginsWith (string) -- The beginning of the name of the ML models being listed.
  • DatasetNameBeginsWith (string) -- The beginning of the name of the dataset of the ML models to be listed.
Return type

dict

Returns

Response Syntax

{
    'NextToken': 'string',
    'ModelSummaries': [
        {
            'ModelName': 'string',
            'ModelArn': 'string',
            'DatasetName': 'string',
            'DatasetArn': 'string',
            'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
            'CreatedAt': datetime(2015, 1, 1)
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of ML models.

    • ModelSummaries (list) --

      Provides information on the specified model, including created time, model and dataset ARNs, and status.

      • (dict) --

        Provides information about the specified ML model, including dataset and model names and ARNs, as well as status.

        • ModelName (string) --

          The name of the ML model.

        • ModelArn (string) --

          The Amazon Resource Name (ARN) of the ML model.

        • DatasetName (string) --

          The name of the dataset being used for the ML model.

        • DatasetArn (string) --

          The Amazon Resource Name (ARN) of the dataset used to create the model.

        • Status (string) --

          Indicates the status of the ML model.

        • CreatedAt (datetime) --

          The time at which the specific model was created.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
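
The filters can be combined; the sketch below lists only successfully trained models whose names begin with a hypothetical prefix.

response = client.list_models(
    ModelNameBeginsWith='pump-',   # hypothetical prefix
    Status='SUCCESS'
)
for model in response.get('ModelSummaries', []):
    print(model['ModelName'], model['DatasetName'], model['CreatedAt'])
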
list_tags_for_resource(**kwargs)

Lists all the tags for a specified resource, including key and value.

See also: AWS API Documentation

Request Syntax

response = client.list_tags_for_resource(
    ResourceArn='string'
)
Parameters
ResourceArn (string) --

[REQUIRED]

The Amazon Resource Name (ARN) of the resource (such as the dataset or model) that is the focus of the ListTagsForResource operation.

Return type
dict
Returns
Response Syntax
{
    'Tags': [
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
}

Response Structure

  • (dict) --
    • Tags (list) --

      Any tags associated with the resource.

      • (dict) --

        A tag is a key-value pair that can be added to a resource as metadata.

        • Key (string) --

          The key for the specified tag.

        • Value (string) --

          The value for the specified tag.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
start_data_ingestion_job(**kwargs)

Starts a data ingestion job. Amazon Lookout for Equipment returns the job status.

See also: AWS API Documentation

Request Syntax

response = client.start_data_ingestion_job(
    DatasetName='string',
    IngestionInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    RoleArn='string',
    ClientToken='string'
)
Parameters
  • DatasetName (string) --

    [REQUIRED]

    The name of the dataset being used by the data ingestion job.

  • IngestionInputConfiguration (dict) --

    [REQUIRED]

    Specifies information for the input data for the data ingestion job, including dataset S3 location.

    • S3InputConfiguration (dict) -- [REQUIRED]

      The location information for the S3 bucket used for input data for the data ingestion.

      • Bucket (string) -- [REQUIRED]

        The name of the S3 bucket used for the input data for the data ingestion.

      • Prefix (string) --

        The prefix for the S3 location being used for the input data for the data ingestion.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of a role with permission to access the data source for the data ingestion job.

  • ClientToken (string) --

    [REQUIRED]

    A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

    This field is autopopulated if not provided.

Return type

dict

Returns

Response Syntax

{
    'JobId': 'string',
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED'
}

Response Structure

  • (dict) --

    • JobId (string) --

      Indicates the job ID of the data ingestion job.

    • Status (string) --

      Indicates the status of the StartDataIngestionJob operation.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.ServiceQuotaExceededException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
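The sketch below starts an ingestion job for a hypothetical dataset; the dataset name, bucket, prefix, and role ARN are placeholders. ClientToken is omitted here because boto3 autopopulates it when it is not provided.

import boto3

client = boto3.client('lookoutequipment')

# All names below are placeholders for illustration only.
response = client.start_data_ingestion_job(
    DatasetName='my-dataset',
    IngestionInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'my-sensor-data-bucket',
            'Prefix': 'ingest/'
        }
    },
    RoleArn='arn:aws:iam::123456789012:role/LookoutEquipmentAccessRole'
)
print(response['JobId'], response['Status'])
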
start_inference_scheduler(**kwargs)

Starts an inference scheduler.

See also: AWS API Documentation

Request Syntax

response = client.start_inference_scheduler(
    InferenceSchedulerName='string'
)
Parameters
InferenceSchedulerName (string) --

[REQUIRED]

The name of the inference scheduler to be started.

Return type
dict
Returns
Response Syntax
{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}

Response Structure

  • (dict) --
    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model being used by the inference scheduler.

    • ModelName (string) --

      The name of the ML model being used by the inference scheduler.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being started.

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being started.

    • Status (string) --

      Indicates the status of the inference scheduler.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
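For example, a minimal sketch that starts a scheduler named 'my-scheduler' (a placeholder) and prints the status returned by the call:

import boto3

client = boto3.client('lookoutequipment')

response = client.start_inference_scheduler(
    InferenceSchedulerName='my-scheduler'  # placeholder name
)
# The returned status reflects the scheduler's state at the time of the call,
# e.g. PENDING while the scheduler is starting up.
print(response['Status'])
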
stop_inference_scheduler(**kwargs)

Stops an inference scheduler.

See also: AWS API Documentation

Request Syntax

response = client.stop_inference_scheduler(
    InferenceSchedulerName='string'
)
Parameters
InferenceSchedulerName (string) --

[REQUIRED]

The name of the inference scheduler to be stopped.

Return type
dict
Returns
Response Syntax
{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}

Response Structure

  • (dict) --
    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model used by the inference scheduler being stopped.

    • ModelName (string) --

      The name of the ML model used by the inference scheduler being stopped.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being stopped.

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being stopped.

    • Status (string) --

      Indicates the status of the inference scheduler.

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
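A matching sketch for stopping the same placeholder scheduler; the returned status is typically STOPPING until the scheduler has fully stopped.

import boto3

client = boto3.client('lookoutequipment')

response = client.stop_inference_scheduler(
    InferenceSchedulerName='my-scheduler'  # placeholder name
)
print(response['Status'])
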
tag_resource(**kwargs)

Associates a given tag with a resource in your account. A tag is a key-value pair that can be added to an Amazon Lookout for Equipment resource as metadata. Tags can be used to organize your resources and to search and filter by tag. Multiple tags can be added to a resource, either when you create it or later. Up to 50 tags can be associated with each resource.

See also: AWS API Documentation

Request Syntax

response = client.tag_resource(
    ResourceArn='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
Parameters
  • ResourceArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the specific resource with which the tag should be associated.

  • Tags (list) --

    [REQUIRED]

    The tag or tags to be associated with a specific resource. Both the tag key and value are specified.

    • (dict) --

      A tag is a key-value pair that can be added to a resource as metadata.

      • Key (string) -- [REQUIRED]

        The key for the specified tag.

      • Value (string) -- [REQUIRED]

        The value for the specified tag.

Return type

dict

Returns

Response Syntax

{}

Response Structure

  • (dict) --

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ServiceQuotaExceededException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
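As an illustration, the sketch below adds two tags to a dataset; the ARN and the tag keys and values are placeholders.

import boto3

client = boto3.client('lookoutequipment')

# Placeholder ARN and tags for illustration only.
client.tag_resource(
    ResourceArn='arn:aws:lookoutequipment:us-east-1:123456789012:dataset/my-dataset/xxxxxxxx',
    Tags=[
        {'Key': 'Team', 'Value': 'ReliabilityEngineering'},
        {'Key': 'Environment', 'Value': 'Production'}
    ]
)
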
untag_resource(**kwargs)

Removes a specific tag from a given resource. The tag is specified by its key.

See also: AWS API Documentation

Request Syntax

response = client.untag_resource(
    ResourceArn='string',
    TagKeys=[
        'string',
    ]
)
Parameters
  • ResourceArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the resource with which the tag is currently associated.

  • TagKeys (list) --

    [REQUIRED]

    Specifies the key of the tag to be removed from a specified resource.

    • (string) --
Return type

dict

Returns

Response Syntax

{}

Response Structure

  • (dict) --

Exceptions

  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
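The following sketch removes one of the placeholder tags added above; only the tag keys are supplied.

import boto3

client = boto3.client('lookoutequipment')

# Placeholder ARN; 'Environment' is the key of the tag being removed.
client.untag_resource(
    ResourceArn='arn:aws:lookoutequipment:us-east-1:123456789012:dataset/my-dataset/xxxxxxxx',
    TagKeys=['Environment']
)
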
update_inference_scheduler(**kwargs)

Updates an inference scheduler.

See also: AWS API Documentation

Request Syntax

response = client.update_inference_scheduler(
    InferenceSchedulerName='string',
    DataDelayOffsetInMinutes=123,
    DataUploadFrequency='PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H',
    DataInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'InputTimeZoneOffset': 'string',
        'InferenceInputNameConfiguration': {
            'TimestampFormat': 'string',
            'ComponentTimestampDelimiter': 'string'
        }
    },
    DataOutputConfiguration={
        'S3OutputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'KmsKeyId': 'string'
    },
    RoleArn='string'
)
Parameters
  • InferenceSchedulerName (string) --

    [REQUIRED]

    The name of the inference scheduler to be updated.

  • DataDelayOffsetInMinutes (integer) -- A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, if an offset delay of five minutes is selected, inference does not begin until five minutes after the first data measurement; the inference scheduler wakes up at the configured frequency plus the five-minute delay to check the customer's S3 bucket. You can keep uploading data at the same frequency and do not need to stop and restart the scheduler when uploading new data.
  • DataUploadFrequency (string) -- How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.
  • DataInputConfiguration (dict) --

    Specifies information for the input data for the inference scheduler, including delimiter, format, and dataset location.

    • S3InputConfiguration (dict) --

      Specifies configuration information for the input data for the inference, including the S3 location of the input data.

      • Bucket (string) -- [REQUIRED]

        The bucket containing the input dataset for the inference.

      • Prefix (string) --

        The prefix for the S3 bucket used for the input data for the inference.

    • InputTimeZoneOffset (string) --

      Indicates the difference between your time zone and Greenwich Mean Time (GMT).

    • InferenceInputNameConfiguration (dict) --

      Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

      • TimestampFormat (string) --

        The format of the timestamp, whether Epoch time or standard, with or without hyphens (-).

      • ComponentTimestampDelimiter (string) --

        Indicates the delimiter character used between items in the data.

  • DataOutputConfiguration (dict) --

    Specifies information for the output results from the inference scheduler, including the output S3 location.

    • S3OutputConfiguration (dict) -- [REQUIRED]

      Specifies configuration information for the output results from the inference, including the output S3 location.

      • Bucket (string) -- [REQUIRED]

        The bucket containing the output results from the inference.

      • Prefix (string) --

        The prefix for the S3 bucket used for the output results from the inference.

    • KmsKeyId (string) --

      The ID number for the AWS KMS key used to encrypt the inference output.

  • RoleArn (string) -- The Amazon Resource Name (ARN) of a role with permission to access the data source for the inference scheduler.
Returns

None

Exceptions

  • LookoutEquipment.Client.exceptions.ConflictException
  • LookoutEquipment.Client.exceptions.ResourceNotFoundException
  • LookoutEquipment.Client.exceptions.ValidationException
  • LookoutEquipment.Client.exceptions.ThrottlingException
  • LookoutEquipment.Client.exceptions.AccessDeniedException
  • LookoutEquipment.Client.exceptions.InternalServerException
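Since only InferenceSchedulerName is required, a sketch that changes just the delay offset and upload frequency of a placeholder scheduler might look like the following; the call itself returns None, so describe_inference_scheduler can be used to confirm the new settings.

import boto3

client = boto3.client('lookoutequipment')

client.update_inference_scheduler(
    InferenceSchedulerName='my-scheduler',  # placeholder name
    DataDelayOffsetInMinutes=5,
    DataUploadFrequency='PT15M'
)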

Paginators

The available paginators are: