IoTAnalytics.Client
A low-level client representing AWS IoT Analytics
IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight.
Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources.
IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices.
import boto3
client = boto3.client('iotanalytics')
These are the available methods:
batch_put_message()
can_paginate()
cancel_pipeline_reprocessing()
close()
create_channel()
create_dataset()
create_dataset_content()
create_datastore()
create_pipeline()
delete_channel()
delete_dataset()
delete_dataset_content()
delete_datastore()
delete_pipeline()
describe_channel()
describe_dataset()
describe_datastore()
describe_logging_options()
describe_pipeline()
get_dataset_content()
get_paginator()
get_waiter()
list_channels()
list_dataset_contents()
list_datasets()
list_datastores()
list_pipelines()
list_tags_for_resource()
put_logging_options()
run_pipeline_activity()
sample_channel_data()
start_pipeline_reprocessing()
tag_resource()
untag_resource()
update_channel()
update_dataset()
update_datastore()
update_pipeline()
batch_put_message(**kwargs)
Sends messages to a channel.
See also: AWS API Documentation
Request Syntax
response = client.batch_put_message(
channelName='string',
messages=[
{
'messageId': 'string',
'payload': b'bytes'
},
]
)
[REQUIRED]
The name of the channel where the messages are sent.
[REQUIRED]
The list of messages to be sent. Each message has the format: { "messageId": "string", "payload": "string"}.
The field names of message payloads (data) that you send to IoT Analytics:
Must contain only alphanumeric characters and underscores (_); no other special characters are allowed.
Must begin with an alphabetic character or single underscore (_).
Cannot contain hyphens (-).
For example, {"temp_01": 29} or {"_temp_01": 29} are valid, but {"temp-01": 29}, {"01_temp": 29} or {"__temp_01": 29} are invalid in message payloads.
Information about a message.
The ID you want to assign to the message. Each messageId
must be unique within each batch sent.
The payload of the message. This can be a JSON string or a base64-encoded string representing binary data, in which case you must decode it by means of a pipeline activity.
dict
Response Syntax
{
'batchPutMessageErrorEntries': [
{
'messageId': 'string',
'errorCode': 'string',
'errorMessage': 'string'
},
]
}
Response Structure
(dict) --
batchPutMessageErrorEntries (list) --
A list of any errors encountered when sending the messages to the channel.
(dict) --
Contains information about errors.
messageId (string) --
The ID of the message that caused the error. See the value corresponding to the messageId
key in the message object.
errorCode (string) --
The code associated with the error.
errorMessage (string) --
The message associated with the error.
Exceptions
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
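For example, a minimal sketch of sending two JSON messages to a channel; the channel name 'mychannel' and the payload fields are illustrative and assume the channel already exists:
import json

response = client.batch_put_message(
    channelName='mychannel',  # hypothetical channel name
    messages=[
        {'messageId': '1', 'payload': json.dumps({'temp_01': 29}).encode('utf-8')},
        {'messageId': '2', 'payload': json.dumps({'temp_01': 31}).encode('utf-8')}
    ]
)
# Per-message failures are reported in the response rather than raised.
print(response['batchPutMessageErrorEntries'])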
can_paginate(operation_name)
Check if an operation can be paginated. operation_name is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns True if the operation can be paginated, False otherwise.
cancel_pipeline_reprocessing(**kwargs)
Cancels the reprocessing of data through the pipeline.
See also: AWS API Documentation
Request Syntax
response = client.cancel_pipeline_reprocessing(
pipelineName='string',
reprocessingId='string'
)
[REQUIRED]
The name of the pipeline for which data reprocessing is canceled.
[REQUIRED]
The ID of the reprocessing task (returned by StartPipelineReprocessing
).
dict
Response Syntax
{}
Response Structure
Exceptions
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
close()
Closes underlying endpoint connections.
create_channel(**kwargs)
Used to create a channel. A channel collects data from an MQTT topic and archives the raw, unprocessed messages before publishing the data to a pipeline.
See also: AWS API Documentation
Request Syntax
response = client.create_channel(
channelName='string',
channelStorage={
'serviceManagedS3': {}
,
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
}
},
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The name of the channel.
Where channel data is stored. You can choose one of serviceManagedS3
or customerManagedS3
storage. If not specified, the default is serviceManagedS3
. You can't change this storage option after the channel is created.
Used to store channel data in an S3 bucket managed by IoT Analytics. You can't change the choice of S3 storage after the channel is created.
Used to store channel data in an S3 bucket that you manage. If customer-managed storage is selected, the retentionPeriod parameter is ignored. You can't change the choice of S3 storage after the channel is created.
The name of the S3 bucket in which channel data is stored.
(Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
How long, in days, message data is kept for the channel. When customerManagedS3
storage is selected, this parameter is ignored.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
Metadata which can be used to manage the channel.
A set of key-value pairs that are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{
'channelName': 'string',
'channelArn': 'string',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
Response Structure
(dict) --
channelName (string) --
The name of the channel.
channelArn (string) --
The ARN of the channel.
retentionPeriod (dict) --
How long, in days, message data is kept for the channel.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The unlimited
parameter must be false.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceAlreadyExistsException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
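For example, a minimal sketch that creates a channel with service-managed storage and a 30-day retention period; the channel name and tag values are illustrative:
response = client.create_channel(
    channelName='mychannel',  # hypothetical name
    channelStorage={'serviceManagedS3': {}},
    retentionPeriod={'unlimited': False, 'numberOfDays': 30},
    tags=[{'key': 'project', 'value': 'demo'}]
)
print(response['channelArn'])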
create_dataset(**kwargs)
Used to create a dataset. A dataset stores data retrieved from a data store by applying a queryAction (a SQL query) or a containerAction (executing a containerized application). This operation creates the skeleton of a dataset. The dataset can be populated manually by calling CreateDatasetContent or automatically according to a trigger you specify.
See also: AWS API Documentation
Request Syntax
response = client.create_dataset(
datasetName='string',
actions=[
{
'actionName': 'string',
'queryAction': {
'sqlQuery': 'string',
'filters': [
{
'deltaTime': {
'offsetSeconds': 123,
'timeExpression': 'string'
}
},
]
},
'containerAction': {
'image': 'string',
'executionRoleArn': 'string',
'resourceConfiguration': {
'computeType': 'ACU_1'|'ACU_2',
'volumeSizeInGB': 123
},
'variables': [
{
'name': 'string',
'stringValue': 'string',
'doubleValue': 123.0,
'datasetContentVersionValue': {
'datasetName': 'string'
},
'outputFileUriValue': {
'fileName': 'string'
}
},
]
}
},
],
triggers=[
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
contentDeliveryRules=[
{
'entryName': 'string',
'destination': {
'iotEventsDestinationConfiguration': {
'inputName': 'string',
'roleArn': 'string'
},
's3DestinationConfiguration': {
'bucket': 'string',
'key': 'string',
'glueConfiguration': {
'tableName': 'string',
'databaseName': 'string'
},
'roleArn': 'string'
}
}
},
],
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
versioningConfiguration={
'unlimited': True|False,
'maxVersions': 123
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
lateDataRules=[
{
'ruleName': 'string',
'ruleConfiguration': {
'deltaTimeSessionWindowConfiguration': {
'timeoutInMinutes': 123
}
}
},
]
)
[REQUIRED]
The name of the dataset.
[REQUIRED]
A list of actions that create the dataset contents.
A DatasetAction
object that specifies how dataset contents are automatically created.
The name of the dataset action by which dataset contents are automatically created.
An SqlQueryDatasetAction
object that uses an SQL query to automatically create dataset contents.
A SQL query string.
Prefilters applied to message data.
Information that is used to filter message data, to segregate it according to the timeframe in which it arrives.
Used to limit data to that which has arrived since the last execution of the action.
The number of seconds of estimated in-flight lag time of message data. When you create dataset contents using message data from a specified timeframe, some message data might still be in flight when processing begins, and so do not arrive in time to be processed. Use this field to make allowances for the in flight time of your message data, so that data not processed from a previous timeframe is included with the next timeframe. Otherwise, missed message data would be excluded from processing during the next timeframe too, because its timestamp places it within the previous timeframe.
An expression by which the time of the message data might be determined. This can be the name of a timestamp field or a SQL expression that is used to derive the time the message data was generated.
Information that allows the system to run a containerized application to create the dataset contents. The application must be in a Docker container along with any required support libraries.
The ARN of the Docker container stored in your account. The Docker container contains an application and required support libraries and is used to generate dataset contents.
The ARN of the role that gives permission to the system to access required resources to run the containerAction
. This includes, at minimum, permission to retrieve the dataset contents that are the input to the containerized application.
Configuration of the resource that executes the containerAction
.
The type of the compute resource used to execute the containerAction
. Possible values are: ACU_1
(vCPU=4, memory=16 GiB) or ACU_2
(vCPU=8, memory=32 GiB).
The size, in GB, of the persistent storage available to the resource instance used to execute the containerAction
(min: 1, max: 50).
The values of variables used in the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of stringValue
, datasetContentVersionValue
, or outputFileUriValue
.
An instance of a variable to be passed to the containerAction
execution. Each variable must have a name and a value given by one of stringValue
, datasetContentVersionValue
, or outputFileUriValue
.
The name of the variable.
The value of the variable as a string.
The value of the variable as a double (numeric).
The value of the variable as a structure that specifies a dataset content version.
The name of the dataset whose latest contents are used as input to the notebook or application.
The value of the variable as a structure that specifies an output file URI.
The URI of the location where dataset contents are stored, usually the URI of a file in an S3 bucket.
A list of triggers. A trigger causes dataset contents to be populated at a specified time interval or when another dataset's contents are created. The list of triggers can be empty or contain up to five DataSetTrigger
objects.
The DatasetTrigger
that specifies when the dataset is automatically updated.
The schedule when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide.
The dataset whose content creation triggers the creation of this dataset's contents.
The name of the dataset whose content generation triggers the new dataset content generation.
When dataset contents are created, they are delivered to destinations specified here.
When dataset contents are created, they are delivered to the destination specified here.
The name of the dataset content delivery rules entry.
The destination to which dataset contents are delivered.
Configuration information for delivery of dataset contents to IoT Events.
The name of the IoT Events input to which dataset contents are delivered.
The ARN of the role that grants IoT Analytics permission to deliver dataset contents to an IoT Events input.
Configuration information for delivery of dataset contents to Amazon S3.
The name of the S3 bucket to which dataset contents are delivered.
The key of the dataset contents object in an S3 bucket. Each object has a key that is a unique identifier. Each object has exactly one key.
You can create a unique key with the following options:
!{iotanalytics:scheduleTime} to insert the time of a scheduled SQL query run.
!{iotanalytics:versionId} to insert a unique hash that identifies a dataset content.
!{iotanalytics:creationTime} to insert the creation time of a dataset content.
The following example creates a unique key for a CSV file: dataset/mydataset/!{iotanalytics:scheduleTime}/!{iotanalytics:versionId}.csv
Note
If you don't use !{iotanalytics:versionId} to specify the key, you might get duplicate keys. For example, you might have two dataset contents with the same scheduleTime but different versionIds. This means that one dataset content overwrites the other.
Configuration information for coordination with Glue, a fully managed extract, transform and load (ETL) service.
The name of the table in your Glue Data Catalog that is used to perform the ETL operations. A Glue Data Catalog table contains partitioned data and descriptions of data sources and targets.
The name of the database in your Glue Data Catalog in which the table is located. A Glue Data Catalog database contains metadata tables.
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 and Glue resources.
Optional. How long, in days, versions of dataset contents are kept for the dataset. If not specified or set to null
, versions of dataset contents are retained for at most 90 days. The number of versions of dataset contents retained is determined by the versioningConfiguration
parameter. For more information, see Keeping Multiple Versions of IoT Analytics datasets in the IoT Analytics User Guide.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
Optional. How many versions of dataset contents are kept. If not specified or set to null, only the latest version plus the latest succeeded version (if they are different) are kept for the time period specified by the retentionPeriod
parameter. For more information, see Keeping Multiple Versions of IoT Analytics datasets in the IoT Analytics User Guide.
If true, unlimited versions of dataset contents are kept.
How many versions of dataset contents are kept. The unlimited
parameter must be false
.
Metadata which can be used to manage the dataset.
A set of key-value pairs that are used to manage the resource.
The tag's key.
The tag's value.
A list of data rules that send notifications to CloudWatch when data arrives late. To specify lateDataRules, the dataset must use a DeltaTime filter.
A structure that contains the name and configuration information of a late data rule.
The name of the late data rule.
The information needed to configure the late data rule.
The information needed to configure a delta time session window.
A time interval. You can use timeoutInMinutes
so that IoT Analytics can batch up late data notifications that have been generated since the last execution. IoT Analytics sends one batch of notifications to Amazon CloudWatch Events at one time.
For more information about how to write a timestamp expression, see Date and Time Functions and Operators in the Presto 0.172 documentation.
dict
Response Syntax
{
'datasetName': 'string',
'datasetArn': 'string',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
Response Structure
(dict) --
datasetName (string) --
The name of the dataset.
datasetArn (string) --
The ARN of the dataset.
retentionPeriod (dict) --
How long, in days, dataset contents are kept for the dataset.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The unlimited
parameter must be false.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceAlreadyExistsException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
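For example, a minimal sketch that creates a SQL dataset refreshed hourly from a data store; the dataset name, data store name, and cron expression are illustrative:
response = client.create_dataset(
    datasetName='mydataset',  # hypothetical name
    actions=[
        {
            'actionName': 'hourly_query',
            'queryAction': {'sqlQuery': 'SELECT * FROM mydatastore'}
        }
    ],
    # Run at the top of every hour (CloudWatch Events schedule syntax).
    triggers=[{'schedule': {'expression': 'cron(0 * * * ? *)'}}],
    retentionPeriod={'unlimited': False, 'numberOfDays': 90}
)
print(response['datasetArn'])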
create_dataset_content(**kwargs)
Creates the content of a dataset by applying a queryAction (a SQL query) or a containerAction (executing a containerized application).
See also: AWS API Documentation
Request Syntax
response = client.create_dataset_content(
datasetName='string',
versionId='string'
)
[REQUIRED]
The name of the dataset.
The version ID of the dataset content. To specify a versionId for a dataset content, the dataset must use a DeltaTime filter.
dict
Response Syntax
{
'versionId': 'string'
}
Response Structure
(dict) --
versionId (string) --
The version ID of the dataset contents that are being created.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
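For example, a minimal sketch that generates dataset content on demand; the dataset name is illustrative:
response = client.create_dataset_content(datasetName='mydataset')  # hypothetical name
print(response['versionId'])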
create_datastore(**kwargs)
Creates a data store, which is a repository for messages.
See also: AWS API Documentation
Request Syntax
response = client.create_datastore(
datastoreName='string',
datastoreStorage={
'serviceManagedS3': {}
,
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
},
'iotSiteWiseMultiLayerStorage': {
'customerManagedS3Storage': {
'bucket': 'string',
'keyPrefix': 'string'
}
}
},
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
fileFormatConfiguration={
'jsonConfiguration': {}
,
'parquetConfiguration': {
'schemaDefinition': {
'columns': [
{
'name': 'string',
'type': 'string'
},
]
}
}
},
datastorePartitions={
'partitions': [
{
'attributePartition': {
'attributeName': 'string'
},
'timestampPartition': {
'attributeName': 'string',
'timestampFormat': 'string'
}
},
]
}
)
[REQUIRED]
The name of the data store.
Where data in a data store is stored. You can choose serviceManagedS3
storage, customerManagedS3
storage, or iotSiteWiseMultiLayerStorage
storage. The default is serviceManagedS3
. You can't change the choice of Amazon S3 storage after your data store is created.
Used to store data in an Amazon S3 bucket managed by IoT Analytics. You can't change the choice of Amazon S3 storage after your data store is created.
Used to store data in an Amazon S3 bucket that you manage. When you choose customer-managed storage, the retentionPeriod parameter is ignored. You can't change the choice of Amazon S3 storage after your data store is created.
The name of the Amazon S3 bucket where your data is stored.
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. You can't change the choice of Amazon S3 storage after your data store is created.
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
The name of the Amazon S3 bucket where your data is stored.
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
How long, in days, message data is kept for the data store. When customerManagedS3
storage is selected, this parameter is ignored.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
Metadata which can be used to manage the data store.
A set of key-value pairs that are used to manage the resource.
The tag's key.
The tag's value.
Contains the configuration information of file formats. IoT Analytics data stores support JSON and Parquet.
The default file format is JSON. You can specify only one format.
You can't change the file format after you create the data store.
Contains the configuration information of the JSON format.
Contains the configuration information of the Parquet format.
Information needed to define a schema.
Specifies one or more columns that store your data.
Each schema can have up to 100 columns. Each column can have up to 100 nested types.
Contains information about a column that stores your data.
The name of the column.
The type of data. For more information about the supported data types, see Common data types in the Glue Developer Guide.
Contains information about the partition dimensions in a data store.
A list of partition dimensions in a data store.
A single dimension to partition a data store. The dimension must be an AttributePartition
or a TimestampPartition
.
A partition dimension defined by an attributeName
.
The name of the attribute that defines a partition dimension.
A partition dimension defined by a timestamp attribute.
The attribute name of the partition defined by a timestamp.
The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time).
dict
Response Syntax
{
'datastoreName': 'string',
'datastoreArn': 'string',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
Response Structure
(dict) --
datastoreName (string) --
The name of the data store.
datastoreArn (string) --
The ARN of the data store.
retentionPeriod (dict) --
How long, in days, message data is kept for the data store.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The unlimited
parameter must be false.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceAlreadyExistsException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
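For example, a minimal sketch that creates a service-managed data store with the default JSON file format and unlimited retention; the data store name is illustrative:
response = client.create_datastore(
    datastoreName='mydatastore',  # hypothetical name
    datastoreStorage={'serviceManagedS3': {}},
    retentionPeriod={'unlimited': True},
    fileFormatConfiguration={'jsonConfiguration': {}}
)
print(response['datastoreArn'])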
create_pipeline(**kwargs)
Creates a pipeline. A pipeline consumes messages from a channel and allows you to process the messages before storing them in a data store. You must specify both a channel and a datastore activity and, optionally, as many as 23 additional activities in the pipelineActivities array.
See also: AWS API Documentation
Request Syntax
response = client.create_pipeline(
pipelineName='string',
pipelineActivities=[
{
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
],
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The name of the pipeline.
[REQUIRED]
A list of PipelineActivity
objects. Activities perform transformations on your messages, such as removing, renaming, or adding message attributes; filtering messages based on attribute values; invoking your Lambda functions on messages for advanced processing; or performing mathematical transformations to normalize device data.
The list can be 2-25 PipelineActivity
objects and must contain both a channel
and a datastore
activity. Each entry in the list must contain only one activity. For example:
pipelineActivities = [ { "channel": { ... } }, { "lambda": { ... } }, ... ]
An activity that performs a transformation on a message.
Determines the source of the messages to be processed.
The name of the channel activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the lambda activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the datastore activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the addAttributes activity.
A list of 1-50 AttributeNameMapping
objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use RemoveAttributeActivity
.
The next activity in the pipeline.
Removes attributes from a message.
The name of the removeAttributes
activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Used to create a new message using only the specified attributes from the original message.
The name of the selectAttributes
activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the filter activity.
An expression that looks like a SQL WHERE clause that must return a Boolean value. Messages that satisfy the condition are passed to the next activity.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the math activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the IoT device registry to your message.
The name of the deviceRegistryEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the IoT Device Shadow service to a message.
The name of the deviceShadowEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
Metadata which can be used to manage the pipeline.
A set of key-value pairs that are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{
'pipelineName': 'string',
'pipelineArn': 'string'
}
Response Structure
(dict) --
pipelineName (string) --
The name of the pipeline.
pipelineArn (string) --
The ARN of the pipeline.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceAlreadyExistsException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
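For example, a minimal sketch of a three-activity pipeline that reads from a channel, removes an attribute, and writes to a data store; the channel, data store, and attribute names are illustrative:
response = client.create_pipeline(
    pipelineName='mypipeline',  # hypothetical name
    pipelineActivities=[
        # The required channel activity: the message source.
        {'channel': {'name': 'read_channel', 'channelName': 'mychannel', 'next': 'drop_debug'}},
        # An optional transform: strip a noisy attribute.
        {'removeAttributes': {'name': 'drop_debug', 'attributes': ['debug_info'], 'next': 'store'}},
        # The required datastore activity: where processed messages land.
        {'datastore': {'name': 'store', 'datastoreName': 'mydatastore'}}
    ]
)
print(response['pipelineArn'])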
delete_channel(**kwargs)
Deletes the specified channel.
See also: AWS API Documentation
Request Syntax
response = client.delete_channel(
channelName='string'
)
[REQUIRED]
The name of the channel to delete.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
delete_dataset(**kwargs)
Deletes the specified dataset.
You do not have to delete the content of the dataset before you perform this operation.
See also: AWS API Documentation
Request Syntax
response = client.delete_dataset(
datasetName='string'
)
[REQUIRED]
The name of the dataset to delete.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
delete_dataset_content(**kwargs)
Deletes the content of the specified dataset.
See also: AWS API Documentation
Request Syntax
response = client.delete_dataset_content(
datasetName='string',
versionId='string'
)
[REQUIRED]
The name of the dataset whose content is deleted.
Returns None.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
delete_datastore(**kwargs)
Deletes the specified data store.
See also: AWS API Documentation
Request Syntax
response = client.delete_datastore(
datastoreName='string'
)
[REQUIRED]
The name of the data store to delete.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
delete_pipeline(**kwargs)
Deletes the specified pipeline.
See also: AWS API Documentation
Request Syntax
response = client.delete_pipeline(
pipelineName='string'
)
[REQUIRED]
The name of the pipeline to delete.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
describe_channel(**kwargs)
Retrieves information about a channel.
See also: AWS API Documentation
Request Syntax
response = client.describe_channel(
channelName='string',
includeStatistics=True|False
)
[REQUIRED]
The name of the channel whose information is retrieved.
If true, additional statistical information about the channel is included in the response. The default is false.
dict
Response Syntax
{
'channel': {
'name': 'string',
'storage': {
'serviceManagedS3': {},
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
}
},
'arn': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
},
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'lastMessageArrivalTime': datetime(2015, 1, 1)
},
'statistics': {
'size': {
'estimatedSizeInBytes': 123.0,
'estimatedOn': datetime(2015, 1, 1)
}
}
}
Response Structure
(dict) --
channel (dict) --
An object that contains information about the channel.
name (string) --
The name of the channel.
storage (dict) --
Where channel data is stored. You can choose one of serviceManagedS3
or customerManagedS3
storage. If not specified, the default is serviceManagedS3
. You can't change this storage option after the channel is created.
serviceManagedS3 (dict) --
Used to store channel data in an S3 bucket managed by IoT Analytics. You can't change the choice of S3 storage after the channel is created.
customerManagedS3 (dict) --
Used to store channel data in an S3 bucket that you manage. If customer-managed storage is selected, the retentionPeriod parameter is ignored. You can't change the choice of S3 storage after the channel is created.
bucket (string) --
The name of the S3 bucket in which channel data is stored.
keyPrefix (string) --
(Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
roleArn (string) --
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
arn (string) --
The ARN of the channel.
status (string) --
The status of the channel.
retentionPeriod (dict) --
How long, in days, message data is kept for the channel.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The unlimited
parameter must be false.
creationTime (datetime) --
When the channel was created.
lastUpdateTime (datetime) --
When the channel was last updated.
lastMessageArrivalTime (datetime) --
The last time when a new message arrived in the channel.
IoT Analytics updates this value at most once per minute for one channel. Hence, the lastMessageArrivalTime
value is an approximation.
This feature only applies to messages that arrived in the channel after October 23, 2020.
statistics (dict) --
Statistics about the channel. Included if the includeStatistics
parameter is set to true
in the request.
size (dict) --
The estimated size of the channel.
estimatedSizeInBytes (float) --
The estimated size of the resource, in bytes.
estimatedOn (datetime) --
The time when the estimate of the size of the resource was made.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
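For example, a minimal sketch that retrieves a channel's status along with its estimated size; the channel name is illustrative:
response = client.describe_channel(
    channelName='mychannel',  # hypothetical name
    includeStatistics=True
)
print(response['channel']['status'])
print(response['statistics']['size']['estimatedSizeInBytes'])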
describe_dataset(**kwargs)
Retrieves information about a dataset.
See also: AWS API Documentation
Request Syntax
response = client.describe_dataset(
datasetName='string'
)
[REQUIRED]
The name of the dataset whose information is retrieved.
dict
Response Syntax
{
'dataset': {
'name': 'string',
'arn': 'string',
'actions': [
{
'actionName': 'string',
'queryAction': {
'sqlQuery': 'string',
'filters': [
{
'deltaTime': {
'offsetSeconds': 123,
'timeExpression': 'string'
}
},
]
},
'containerAction': {
'image': 'string',
'executionRoleArn': 'string',
'resourceConfiguration': {
'computeType': 'ACU_1'|'ACU_2',
'volumeSizeInGB': 123
},
'variables': [
{
'name': 'string',
'stringValue': 'string',
'doubleValue': 123.0,
'datasetContentVersionValue': {
'datasetName': 'string'
},
'outputFileUriValue': {
'fileName': 'string'
}
},
]
}
},
],
'triggers': [
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
'contentDeliveryRules': [
{
'entryName': 'string',
'destination': {
'iotEventsDestinationConfiguration': {
'inputName': 'string',
'roleArn': 'string'
},
's3DestinationConfiguration': {
'bucket': 'string',
'key': 'string',
'glueConfiguration': {
'tableName': 'string',
'databaseName': 'string'
},
'roleArn': 'string'
}
}
},
],
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
},
'versioningConfiguration': {
'unlimited': True|False,
'maxVersions': 123
},
'lateDataRules': [
{
'ruleName': 'string',
'ruleConfiguration': {
'deltaTimeSessionWindowConfiguration': {
'timeoutInMinutes': 123
}
}
},
]
}
}
Response Structure
An object that contains information about the dataset.
The name of the dataset.
The ARN of the dataset.
The DatasetAction
objects that automatically create the dataset contents.
A DatasetAction
object that specifies how dataset contents are automatically created.
The name of the dataset action by which dataset contents are automatically created.
An SqlQueryDatasetAction
object that uses an SQL query to automatically create dataset contents.
A SQL query string.
Prefilters applied to message data.
Information that is used to filter message data, to segregate it according to the timeframe in which it arrives.
Used to limit data to that which has arrived since the last execution of the action.
The number of seconds of estimated in-flight lag time of message data. When you create dataset contents using message data from a specified timeframe, some message data might still be in flight when processing begins, and so do not arrive in time to be processed. Use this field to make allowances for the in flight time of your message data, so that data not processed from a previous timeframe is included with the next timeframe. Otherwise, missed message data would be excluded from processing during the next timeframe too, because its timestamp places it within the previous timeframe.
An expression by which the time of the message data might be determined. This can be the name of a timestamp field or a SQL expression that is used to derive the time the message data was generated.
Information that allows the system to run a containerized application to create the dataset contents. The application must be in a Docker container along with any required support libraries.
The ARN of the Docker container stored in your account. The Docker container contains an application and required support libraries and is used to generate dataset contents.
The ARN of the role that gives permission to the system to access required resources to run the containerAction
. This includes, at minimum, permission to retrieve the dataset contents that are the input to the containerized application.
Configuration of the resource that executes the containerAction
.
The type of the compute resource used to execute the containerAction
. Possible values are: ACU_1
(vCPU=4, memory=16 GiB) or ACU_2
(vCPU=8, memory=32 GiB).
The size, in GB, of the persistent storage available to the resource instance used to execute the containerAction
(min: 1, max: 50).
The values of variables used in the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of stringValue
, datasetContentVersionValue
, or outputFileUriValue
.
An instance of a variable to be passed to the containerAction
execution. Each variable must have a name and a value given by one of stringValue
, datasetContentVersionValue
, or outputFileUriValue
.
The name of the variable.
The value of the variable as a string.
The value of the variable as a double (numeric).
The value of the variable as a structure that specifies a dataset content version.
The name of the dataset whose latest contents are used as input to the notebook or application.
The value of the variable as a structure that specifies an output file URI.
The URI of the location where dataset contents are stored, usually the URI of a file in an S3 bucket.
The DatasetTrigger
objects that specify when the dataset is automatically updated.
The DatasetTrigger
that specifies when the dataset is automatically updated.
The schedule when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide.
The dataset whose content creation triggers the creation of this dataset's contents.
The name of the dataset whose content generation triggers the new dataset content generation.
When dataset contents are created, they are delivered to the destinations specified here.
When dataset contents are created, they are delivered to the destination specified here.
The name of the dataset content delivery rules entry.
The destination to which dataset contents are delivered.
Configuration information for delivery of dataset contents to IoT Events.
The name of the IoT Events input to which dataset contents are delivered.
The ARN of the role that grants IoT Analytics permission to deliver dataset contents to an IoT Events input.
Configuration information for delivery of dataset contents to Amazon S3.
The name of the S3 bucket to which dataset contents are delivered.
The key of the dataset contents object in an S3 bucket. Each object has a key that is a unique identifier. Each object has exactly one key.
You can create a unique key with the following options:
!{iotanalytics:scheduleTime} to insert the time of a scheduled SQL query run.
!{iotanalytics:versionId} to insert a unique hash that identifies a dataset content.
!{iotanalytics:creationTime} to insert the creation time of a dataset content.
The following example creates a unique key for a CSV file: dataset/mydataset/!{iotanalytics:scheduleTime}/!{iotanalytics:versionId}.csv
Note
If you don't use !{iotanalytics:versionId} to specify the key, you might get duplicate keys. For example, you might have two dataset contents with the same scheduleTime but different versionIds. This means that one dataset content overwrites the other.
Configuration information for coordination with Glue, a fully managed extract, transform and load (ETL) service.
The name of the table in your Glue Data Catalog that is used to perform the ETL operations. A Glue Data Catalog table contains partitioned data and descriptions of data sources and targets.
The name of the database in your Glue Data Catalog in which the table is located. A Glue Data Catalog database contains metadata tables.
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 and Glue resources.
The status of the dataset.
When the dataset was created.
The last time the dataset was updated.
Optional. How long, in days, message data is kept for the dataset.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
Optional. How many versions of dataset contents are kept. If not specified or set to null, only the latest version plus the latest succeeded version (if they are different) are kept for the time period specified by the retentionPeriod
parameter. For more information, see Keeping Multiple Versions of IoT Analytics datasets in the IoT Analytics User Guide.
If true, unlimited versions of dataset contents are kept.
How many versions of dataset contents are kept. The unlimited
parameter must be false
.
A list of data rules that send notifications to CloudWatch when data arrives late. To specify lateDataRules, the dataset must use a DeltaTime filter.
A structure that contains the name and configuration information of a late data rule.
The name of the late data rule.
The information needed to configure the late data rule.
The information needed to configure a delta time session window.
A time interval. You can use timeoutInMinutes
so that IoT Analytics can batch up late data notifications that have been generated since the last execution. IoT Analytics sends one batch of notifications to Amazon CloudWatch Events at one time.
For more information about how to write a timestamp expression, see Date and Time Functions and Operators in the Presto 0.172 documentation.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
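For example, a minimal sketch that inspects a dataset's status and its triggers; the dataset name is illustrative:
response = client.describe_dataset(datasetName='mydataset')  # hypothetical name
dataset = response['dataset']
print(dataset['status'])
for trigger in dataset.get('triggers', []):
    print(trigger)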
describe_datastore(**kwargs)
Retrieves information about a data store.
See also: AWS API Documentation
Request Syntax
response = client.describe_datastore(
datastoreName='string',
includeStatistics=True|False
)
[REQUIRED]
The name of the data store.
If true, additional statistical information about the data store is included in the response. The default is false.
dict
Response Syntax
{
'datastore': {
'name': 'string',
'storage': {
'serviceManagedS3': {},
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
},
'iotSiteWiseMultiLayerStorage': {
'customerManagedS3Storage': {
'bucket': 'string',
'keyPrefix': 'string'
}
}
},
'arn': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
},
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'lastMessageArrivalTime': datetime(2015, 1, 1),
'fileFormatConfiguration': {
'jsonConfiguration': {},
'parquetConfiguration': {
'schemaDefinition': {
'columns': [
{
'name': 'string',
'type': 'string'
},
]
}
}
},
'datastorePartitions': {
'partitions': [
{
'attributePartition': {
'attributeName': 'string'
},
'timestampPartition': {
'attributeName': 'string',
'timestampFormat': 'string'
}
},
]
}
},
'statistics': {
'size': {
'estimatedSizeInBytes': 123.0,
'estimatedOn': datetime(2015, 1, 1)
}
}
}
Response Structure
(dict) --
datastore (dict) --
Information about the data store.
name (string) --
The name of the data store.
storage (dict) --
Where data in a data store is stored. You can choose serviceManagedS3
storage, customerManagedS3
storage, or iotSiteWiseMultiLayerStorage
storage. The default is serviceManagedS3
. You can't change the choice of Amazon S3 storage after your data store is created.
serviceManagedS3 (dict) --
Used to store data in an Amazon S3 bucket managed by IoT Analytics. You can't change the choice of Amazon S3 storage after your data store is created.
customerManagedS3 (dict) --
Used to store data in an Amazon S3 bucket that you manage. When you choose customer-managed storage, the retentionPeriod parameter is ignored. You can't change the choice of Amazon S3 storage after your data store is created.
bucket (string) --
The name of the Amazon S3 bucket where your data is stored.
keyPrefix (string) --
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
roleArn (string) --
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
iotSiteWiseMultiLayerStorage (dict) --
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. You can't change the choice of Amazon S3 storage after your data store is created.
customerManagedS3Storage (dict) --
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
bucket (string) --
The name of the Amazon S3 bucket where your data is stored.
keyPrefix (string) --
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
arn (string) --
The ARN of the data store.
status (string) --
The status of a data store:
CREATING
The data store is being created.
ACTIVE
The data store has been created and can be used.
DELETING
The data store is being deleted.
retentionPeriod (dict) --
How long, in days, message data is kept for the data store. When customerManagedS3
storage is selected, this parameter is ignored.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The unlimited
parameter must be false.
creationTime (datetime) --
When the data store was created.
lastUpdateTime (datetime) --
The last time the data store was updated.
lastMessageArrivalTime (datetime) --
The last time when a new message arrived in the data store.
IoT Analytics updates this value at most once per minute for one data store. Hence, the lastMessageArrivalTime
value is an approximation.
This feature only applies to messages that arrived in the data store after October 23, 2020.
fileFormatConfiguration (dict) --
Contains the configuration information of file formats. IoT Analytics data stores support JSON and Parquet.
The default file format is JSON. You can specify only one format.
You can't change the file format after you create the data store.
jsonConfiguration (dict) --
Contains the configuration information of the JSON format.
parquetConfiguration (dict) --
Contains the configuration information of the Parquet format.
schemaDefinition (dict) --
Information needed to define a schema.
columns (list) --
Specifies one or more columns that store your data.
Each schema can have up to 100 columns. Each column can have up to 100 nested types.
(dict) --
Contains information about a column that stores your data.
name (string) --
The name of the column.
type (string) --
The type of data. For more information about the supported data types, see Common data types in the Glue Developer Guide.
datastorePartitions (dict) --
Contains information about the partition dimensions in a data store.
partitions (list) --
A list of partition dimensions in a data store.
(dict) --
A single dimension to partition a data store. The dimension must be an AttributePartition
or a TimestampPartition
.
attributePartition (dict) --
A partition dimension defined by an attributeName
.
attributeName (string) --
The name of the attribute that defines a partition dimension.
timestampPartition (dict) --
A partition dimension defined by a timestamp attribute.
attributeName (string) --
The attribute name of the partition defined by a timestamp.
timestampFormat (string) --
The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time).
statistics (dict) --
Additional statistical information about the data store. Included if the includeStatistics
parameter is set to true
in the request.
size (dict) --
The estimated size of the data store.
estimatedSizeInBytes (float) --
The estimated size of the resource, in bytes.
estimatedOn (datetime) --
The time when the estimate of the size of the resource was made.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
describe_logging_options()
Retrieves the current settings of the IoT Analytics logging options.
See also: AWS API Documentation
Request Syntax
response = client.describe_logging_options()
dict
Response Syntax
{
'loggingOptions': {
'roleArn': 'string',
'level': 'ERROR',
'enabled': True|False
}
}
Response Structure
The current settings of the IoT Analytics logging options.
The ARN of the role that grants permission to IoT Analytics to perform logging.
The logging level. Currently, only ERROR is supported.
If true, logging is enabled for IoT Analytics.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
describe_pipeline(**kwargs)
Retrieves information about a pipeline.
See also: AWS API Documentation
Request Syntax
response = client.describe_pipeline(
pipelineName='string'
)
[REQUIRED]
The name of the pipeline whose information is retrieved.
dict
Response Syntax
{
'pipeline': {
'name': 'string',
'arn': 'string',
'activities': [
{
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
],
'reprocessingSummaries': [
{
'id': 'string',
'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
'creationTime': datetime(2015, 1, 1)
},
],
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
}
}
Response Structure
A Pipeline
object that contains information about the pipeline.
The name of the pipeline.
The ARN of the pipeline.
The activities that perform transformations on the messages.
An activity that performs a transformation on a message.
Determines the source of the messages to be processed.
The name of the channel activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the lambda activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the datastore activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the addAttributes activity.
A list of 1-50 AttributeNameMapping
objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use RemoveAttributeActivity
.
The next activity in the pipeline.
Removes attributes from a message.
The name of the removeAttributes
activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Used to create a new message using only the specified attributes from the original message.
The name of the selectAttributes
activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the filter activity.
An expression that looks like a SQL WHERE clause that must return a Boolean value. Messages that satisfy the condition are passed to the next activity.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the math activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the IoT device registry to your message.
The name of the deviceRegistryEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the IoT Device Shadow service to a message.
The name of the deviceShadowEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
A summary of information about the pipeline reprocessing.
Information about pipeline reprocessing.
The reprocessingId
returned by StartPipelineReprocessing
.
The status of the pipeline reprocessing.
The time the pipeline reprocessing was created.
When the pipeline was created.
The last time the pipeline was updated.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
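For example, a minimal sketch that prints a pipeline's activity chain (the pipeline name 'mypipeline' is hypothetical):
import boto3

client = boto3.client('iotanalytics')

response = client.describe_pipeline(pipelineName='mypipeline')
pipeline = response['pipeline']
print(pipeline['name'], pipeline['arn'])

# Each entry in 'activities' is a dict with a single key naming the
# activity type ('channel', 'lambda', 'filter', 'datastore', and so on).
for activity in pipeline['activities']:
    for activity_type, config in activity.items():
        print(activity_type, config.get('name'))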
get_dataset_content
(**kwargs)¶Retrieves the contents of a dataset as presigned URIs.
See also: AWS API Documentation
Request Syntax
response = client.get_dataset_content(
datasetName='string',
versionId='string'
)
[REQUIRED]
The name of the dataset whose contents are retrieved.
The version of the dataset whose contents are retrieved. You can also use the strings "$LATEST" or "$LATEST_SUCCEEDED" to retrieve the contents of the latest or latest successfully completed dataset. If not specified, "$LATEST_SUCCEEDED" is the default.
dict
Response Syntax
{
'entries': [
{
'entryName': 'string',
'dataURI': 'string'
},
],
'timestamp': datetime(2015, 1, 1),
'status': {
'state': 'CREATING'|'SUCCEEDED'|'FAILED',
'reason': 'string'
}
}
Response Structure
(dict) --
entries (list) --
A list of DatasetEntry
objects.
(dict) --
The reference to a dataset entry.
entryName (string) --
The name of the dataset item.
dataURI (string) --
The presigned URI of the dataset item.
timestamp (datetime) --
The time when the request was made.
status (dict) --
The status of the dataset content.
state (string) --
The state of the dataset contents. Can be one of CREATING, SUCCEEDED, or FAILED.
reason (string) --
The reason the dataset contents are in this state.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
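As a sketch, each entry's presigned URI can be fetched with the standard library; 'mydataset' is a hypothetical dataset name, and '$LATEST_SUCCEEDED' is assumed as the default version string:
import urllib.request

import boto3

client = boto3.client('iotanalytics')

response = client.get_dataset_content(
    datasetName='mydataset',
    versionId='$LATEST_SUCCEEDED'
)

for entry in response['entries']:
    # The dataURI is presigned, so no additional credentials are needed.
    with urllib.request.urlopen(entry['dataURI']) as f:
        body = f.read()
    print(entry.get('entryName'), len(body), 'bytes')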
get_paginator
(operation_name)¶Create a paginator for an operation.
The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Raises OperationNotPageableError if the operation is not pageable. You can use the client.can_paginate method to check if an operation is pageable.
Returns a paginator object.
get_waiter
(waiter_name)¶Returns an object that can wait for some condition.
list_channels
(**kwargs)¶Retrieves a list of channels.
See also: AWS API Documentation
Request Syntax
response = client.list_channels(
nextToken='string',
maxResults=123
)
The token for the next set of results.
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'channelSummaries': [
{
'channelName': 'string',
'channelStorage': {
'serviceManagedS3': {},
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
}
},
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'lastMessageArrivalTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
channelSummaries (list) --
A list of ChannelSummary
objects.
(dict) --
A summary of information about a channel.
channelName (string) --
The name of the channel.
channelStorage (dict) --
Where channel data is stored.
serviceManagedS3 (dict) --
Used to store channel data in an S3 bucket managed by IoT Analytics.
customerManagedS3 (dict) --
Used to store channel data in an S3 bucket that you manage.
bucket (string) --
The name of the S3 bucket in which channel data is stored.
keyPrefix (string) --
(Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier within the bucket (each object in a bucket has exactly one key). The prefix must end with a forward slash (/).
roleArn (string) --
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
status (string) --
The status of the channel.
creationTime (datetime) --
When the channel was created.
lastUpdateTime (datetime) --
The last time the channel was updated.
lastMessageArrivalTime (datetime) --
The last time when a new message arrived in the channel.
IoT Analytics updates this value at most once per minute for one channel. Hence, the lastMessageArrivalTime
value is an approximation.
This feature only applies to messages that arrived in the data store after October 23, 2020.
nextToken (string) --
The token to retrieve the next set of results, or null
if there are no more results.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
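A minimal sketch that walks every page by handling nextToken manually (the ListChannels paginator shown later does the same token bookkeeping automatically):
import boto3

client = boto3.client('iotanalytics')

kwargs = {'maxResults': 50}
while True:
    response = client.list_channels(**kwargs)
    for summary in response['channelSummaries']:
        print(summary['channelName'], summary['status'])
    # nextToken is absent from the last page of results.
    token = response.get('nextToken')
    if not token:
        break
    kwargs['nextToken'] = token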
list_dataset_contents
(**kwargs)¶Lists information about dataset contents that have been created.
See also: AWS API Documentation
Request Syntax
response = client.list_dataset_contents(
datasetName='string',
nextToken='string',
maxResults=123,
scheduledOnOrAfter=datetime(2015, 1, 1),
scheduledBefore=datetime(2015, 1, 1)
)
[REQUIRED]
The name of the dataset whose contents information you want to list.
The token for the next set of results.
The maximum number of results to return in this request.
The default value is 100.
A filter to limit results to those dataset contents whose creation is scheduled on or after the given time. See the field triggers.schedule in the CreateDataset request. (timestamp)
A filter to limit results to those dataset contents whose creation is scheduled before the given time. See the field triggers.schedule in the CreateDataset request. (timestamp)
dict
Response Syntax
{
'datasetContentSummaries': [
{
'version': 'string',
'status': {
'state': 'CREATING'|'SUCCEEDED'|'FAILED',
'reason': 'string'
},
'creationTime': datetime(2015, 1, 1),
'scheduleTime': datetime(2015, 1, 1),
'completionTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
datasetContentSummaries (list) --
Summary information about dataset contents that have been created.
(dict) --
Summary information about dataset contents.
version (string) --
The version of the dataset contents.
status (dict) --
The status of the dataset contents.
state (string) --
The state of the dataset contents. Can be one of CREATING, SUCCEEDED, or FAILED.
reason (string) --
The reason the dataset contents are in this state.
creationTime (datetime) --
The actual time the creation of the dataset contents was started.
scheduleTime (datetime) --
The time the creation of the dataset contents was scheduled to start.
completionTime (datetime) --
The time the dataset content status was updated to SUCCEEDED or FAILED.
nextToken (string) --
The token to retrieve the next set of results, or null
if there are no more results.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
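For instance, a sketch listing the January 2021 content creations of a hypothetical dataset, using the schedule filters described above:
import datetime

import boto3

client = boto3.client('iotanalytics')

response = client.list_dataset_contents(
    datasetName='mydataset',   # hypothetical name
    scheduledOnOrAfter=datetime.datetime(2021, 1, 1),
    scheduledBefore=datetime.datetime(2021, 2, 1)
)

for summary in response['datasetContentSummaries']:
    print(summary['version'], summary['status']['state'])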
list_datasets
(**kwargs)¶Retrieves information about datasets.
See also: AWS API Documentation
Request Syntax
response = client.list_datasets(
nextToken='string',
maxResults=123
)
The token for the next set of results.
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'datasetSummaries': [
{
'datasetName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'triggers': [
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
'actions': [
{
'actionName': 'string',
'actionType': 'QUERY'|'CONTAINER'
},
]
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
datasetSummaries (list) --
A list of DatasetSummary
objects.
(dict) --
A summary of information about a dataset.
datasetName (string) --
The name of the dataset.
status (string) --
The status of the dataset.
creationTime (datetime) --
The time the dataset was created.
lastUpdateTime (datetime) --
The last time the dataset was updated.
triggers (list) --
A list of triggers. A trigger causes dataset content to be populated at a specified time interval or when another dataset is populated. The list of triggers can be empty or contain up to five DatasetTrigger
objects.
(dict) --
The DatasetTrigger
that specifies when the dataset is automatically updated.
schedule (dict) --
The Schedule when the trigger is initiated.
expression (string) --
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide .
dataset (dict) --
The dataset whose content creation triggers the creation of this dataset's contents.
name (string) --
The name of the dataset whose content generation triggers the new dataset content generation.
actions (list) --
A list of DataActionSummary
objects.
(dict) --
Information about the action that automatically creates the dataset's contents.
actionName (string) --
The name of the action that automatically creates the dataset's contents.
actionType (string) --
The type of action by which the dataset's contents are automatically created.
nextToken (string) --
The token to retrieve the next set of results, or null
if there are no more results.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
list_datastores
(**kwargs)¶Retrieves a list of data stores.
See also: AWS API Documentation
Request Syntax
response = client.list_datastores(
nextToken='string',
maxResults=123
)
The token for the next set of results.
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'datastoreSummaries': [
{
'datastoreName': 'string',
'datastoreStorage': {
'serviceManagedS3': {},
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
},
'iotSiteWiseMultiLayerStorage': {
'customerManagedS3Storage': {
'bucket': 'string',
'keyPrefix': 'string'
}
}
},
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'lastMessageArrivalTime': datetime(2015, 1, 1),
'fileFormatType': 'JSON'|'PARQUET',
'datastorePartitions': {
'partitions': [
{
'attributePartition': {
'attributeName': 'string'
},
'timestampPartition': {
'attributeName': 'string',
'timestampFormat': 'string'
}
},
]
}
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
datastoreSummaries (list) --
A list of DatastoreSummary
objects.
(dict) --
A summary of information about a data store.
datastoreName (string) --
The name of the data store.
datastoreStorage (dict) --
Where data in a data store is stored.
serviceManagedS3 (dict) --
Used to store data in an Amazon S3 bucket managed by IoT Analytics.
customerManagedS3 (dict) --
Used to store data in an Amazon S3 bucket that you manage.
bucket (string) --
The name of the Amazon S3 bucket where your data is stored.
keyPrefix (string) --
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
roleArn (string) --
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
iotSiteWiseMultiLayerStorage (dict) --
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
customerManagedS3Storage (dict) --
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
bucket (string) --
The name of the Amazon S3 bucket where your data is stored.
keyPrefix (string) --
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
status (string) --
The status of the data store.
creationTime (datetime) --
When the data store was created.
lastUpdateTime (datetime) --
The last time the data store was updated.
lastMessageArrivalTime (datetime) --
The last time when a new message arrived in the data store.
IoT Analytics updates this value at most once per minute for one data store. Hence, the lastMessageArrivalTime
value is an approximation.
This feature only applies to messages that arrived in the data store after October 23, 2020.
fileFormatType (string) --
The file format of the data in the data store.
datastorePartitions (dict) --
Contains information about the partition dimensions in a data store.
partitions (list) --
A list of partition dimensions in a data store.
(dict) --
A single dimension to partition a data store. The dimension must be an AttributePartition
or a TimestampPartition
.
attributePartition (dict) --
A partition dimension defined by an attributeName
.
attributeName (string) --
The name of the attribute that defines a partition dimension.
timestampPartition (dict) --
A partition dimension defined by a timestamp attribute.
attributeName (string) --
The attribute name of the partition defined by a timestamp.
timestampFormat (string) --
The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time).
nextToken (string) --
The token to retrieve the next set of results, or null
if there are no more results.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
list_pipelines
(**kwargs)¶Retrieves a list of pipelines.
See also: AWS API Documentation
Request Syntax
response = client.list_pipelines(
nextToken='string',
maxResults=123
)
The token for the next set of results.
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'pipelineSummaries': [
{
'pipelineName': 'string',
'reprocessingSummaries': [
{
'id': 'string',
'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
'creationTime': datetime(2015, 1, 1)
},
],
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
pipelineSummaries (list) --
A list of PipelineSummary
objects.
(dict) --
A summary of information about a pipeline.
pipelineName (string) --
The name of the pipeline.
reprocessingSummaries (list) --
A summary of information about the pipeline reprocessing.
(dict) --
Information about pipeline reprocessing.
id (string) --
The reprocessingId
returned by StartPipelineReprocessing
.
status (string) --
The status of the pipeline reprocessing.
creationTime (datetime) --
The time the pipeline reprocessing was created.
creationTime (datetime) --
When the pipeline was created.
lastUpdateTime (datetime) --
When the pipeline was last updated.
nextToken (string) --
The token to retrieve the next set of results, or null
if there are no more results.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
list_tags_for_resource
(**kwargs)¶Lists the tags (metadata) that you have assigned to the resource.
See also: AWS API Documentation
Request Syntax
response = client.list_tags_for_resource(
resourceArn='string'
)
[REQUIRED]
The ARN of the resource whose tags you want to list.
{
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
Response Structure
The tags (metadata) that you have assigned to the resource.
A set of key-value pairs that are used to manage the resource.
The tag's key.
The tag's value.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
put_logging_options
(**kwargs)¶Sets or updates the IoT Analytics logging options.
If you update the value of any loggingOptions
field, it takes up to one minute for the change to take effect. Also, if you change the policy attached to the role you specified in the roleArn
field (for example, to correct an invalid policy), it takes up to five minutes for that change to take effect.
See also: AWS API Documentation
Request Syntax
response = client.put_logging_options(
loggingOptions={
'roleArn': 'string',
'level': 'ERROR',
'enabled': True|False
}
)
[REQUIRED]
The new values of the IoT Analytics logging options.
The ARN of the role that grants permission to IoT Analytics to perform logging.
The logging level. Currently, only ERROR is supported.
If true, logging is enabled for IoT Analytics.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
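For example, a sketch that turns on ERROR-level logging; the role ARN is a placeholder for a role that grants IoT Analytics permission to write logs:
import boto3

client = boto3.client('iotanalytics')

client.put_logging_options(
    loggingOptions={
        # Placeholder account ID and role name.
        'roleArn': 'arn:aws:iam::123456789012:role/IoTAnalyticsLogging',
        'level': 'ERROR',   # ERROR is currently the only supported level
        'enabled': True
    }
)
# The change can take up to one minute to take effect.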
run_pipeline_activity
(**kwargs)¶Simulates the results of running a pipeline activity on a message payload.
See also: AWS API Documentation
Request Syntax
response = client.run_pipeline_activity(
pipelineActivity={
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
payloads=[
b'bytes',
]
)
[REQUIRED]
The pipeline activity that is run. This must not be a channel activity or a data store activity because these activities are used in a pipeline only to load the original message and to store the (possibly) transformed message. If a Lambda activity is specified, only short-running Lambda functions (those with a timeout of 30 seconds or less) can be used.
Determines the source of the messages to be processed.
The name of the channel activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the lambda activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the datastore activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the addAttributes activity.
A list of 1-50 AttributeNameMapping
objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use RemoveAttributeActivity
.
The next activity in the pipeline.
Removes attributes from a message.
The name of the removeAttributes
activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Used to create a new message using only the specified attributes from the original message.
The name of the selectAttributes
activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the filter activity.
An expression that looks like a SQL WHERE clause that must return a Boolean value. Messages that satisfy the condition are passed to the next activity.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the math activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the IoT device registry to your message.
The name of the deviceRegistryEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the IoT Device Shadow service to a message.
The name of the deviceShadowEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
[REQUIRED]
The sample message payloads on which the pipeline activity is run.
dict
Response Syntax
{
'payloads': [
b'bytes',
],
'logResult': 'string'
}
Response Structure
(dict) --
payloads (list) --
The enriched or transformed sample message payloads as base64-encoded strings. (The results of running the pipeline activity on each input sample message payload, encoded in base64.)
logResult (string) --
In case the pipeline activity fails, the log message that is generated.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
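As an illustration, a sketch simulating a filter activity against two sample payloads before adding it to a pipeline; the activity name and filter expression are hypothetical:
import json

import boto3

client = boto3.client('iotanalytics')

samples = [
    json.dumps({'temp': 40}).encode(),
    json.dumps({'temp': 20}).encode(),
]

response = client.run_pipeline_activity(
    pipelineActivity={
        'filter': {
            'name': 'hot_only',       # hypothetical activity name
            'filter': 'temp > 30'     # SQL WHERE-like expression
        }
    },
    payloads=samples
)

# Only the first sample satisfies the filter, so one payload is expected
# back; depending on the SDK's blob handling the payloads may need to be
# base64-decoded before use.
for payload in response['payloads']:
    print(payload)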
sample_channel_data
(**kwargs)¶Retrieves a sample of messages from the specified channel ingested during the specified timeframe. Up to 10 messages can be retrieved.
See also: AWS API Documentation
Request Syntax
response = client.sample_channel_data(
channelName='string',
maxMessages=123,
startTime=datetime(2015, 1, 1),
endTime=datetime(2015, 1, 1)
)
[REQUIRED]
The name of the channel whose message samples are retrieved.
The number of sample messages to be retrieved. The limit is 10. The default is also 10.
The start of the time window from which sample messages are retrieved.
The end of the time window from which sample messages are retrieved.
dict
Response Syntax
{
'payloads': [
b'bytes',
]
}
Response Structure
(dict) --
payloads (list) --
The list of message samples. Each sample message is returned as a base64-encoded string.
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
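For example, a sketch pulling up to ten messages ingested on a given day (the channel name is hypothetical):
import datetime

import boto3

client = boto3.client('iotanalytics')

response = client.sample_channel_data(
    channelName='mychannel',   # hypothetical name
    maxMessages=10,
    startTime=datetime.datetime(2021, 1, 1),
    endTime=datetime.datetime(2021, 1, 2)
)

for payload in response['payloads']:
    print(payload)   # each sample message payload, as bytes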
start_pipeline_reprocessing
(**kwargs)¶Starts the reprocessing of raw message data through the pipeline.
See also: AWS API Documentation
Request Syntax
response = client.start_pipeline_reprocessing(
pipelineName='string',
startTime=datetime(2015, 1, 1),
endTime=datetime(2015, 1, 1),
channelMessages={
's3Paths': [
'string',
]
}
)
[REQUIRED]
The name of the pipeline on which to start reprocessing.
The start time (inclusive) of raw message data that is reprocessed.
If you specify a value for the startTime
parameter, you must not use the channelMessages
object.
The end time (exclusive) of raw message data that is reprocessed.
If you specify a value for the endTime
parameter, you must not use the channelMessages
object.
Specifies one or more sets of channel messages that you want to reprocess.
If you use the channelMessages
object, you must not specify a value for startTime
and endTime
.
Specifies one or more keys that identify the Amazon Simple Storage Service (Amazon S3) objects that save your channel messages.
You must use the full path for the key.
Example path: channel/mychannel/__dt=2020-02-29 00:00:00/1582940490000_1582940520000_123456789012_mychannel_0_2118.0.json.gz
dict
Response Syntax
{
'reprocessingId': 'string'
}
Response Structure
(dict) --
reprocessingId (string) --
The ID of the pipeline reprocessing activity that was started.
Exceptions
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.ResourceAlreadyExistsException
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
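A sketch reprocessing one day of raw channel data through a hypothetical pipeline; note that startTime/endTime and channelMessages are mutually exclusive:
import datetime

import boto3

client = boto3.client('iotanalytics')

response = client.start_pipeline_reprocessing(
    pipelineName='mypipeline',                  # hypothetical name
    startTime=datetime.datetime(2021, 1, 1),    # inclusive
    endTime=datetime.datetime(2021, 1, 2)       # exclusive
)
print(response['reprocessingId'])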
tag_resource
(**kwargs)¶Adds to or modifies the tags of the given resource. Tags are metadata that can be used to manage a resource.
See also: AWS API Documentation
Request Syntax
response = client.tag_resource(
resourceArn='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The ARN of the resource whose tags you want to modify.
[REQUIRED]
The new or modified tags for the resource.
A set of key-value pairs that are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{}
Response Structure
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
untag_resource
(**kwargs)¶Removes the given tags (metadata) from the resource.
See also: AWS API Documentation
Request Syntax
response = client.untag_resource(
resourceArn='string',
tagKeys=[
'string',
]
)
[REQUIRED]
The ARN of the resource whose tags you want to remove.
[REQUIRED]
The keys of those tags which you want to remove.
dict
Response Syntax
{}
Response Structure
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
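A round-trip sketch that tags a resource, lists its tags, and removes one again; the channel ARN is a placeholder:
import boto3

client = boto3.client('iotanalytics')

arn = 'arn:aws:iotanalytics:us-east-1:123456789012:channel/mychannel'  # placeholder

client.tag_resource(
    resourceArn=arn,
    tags=[{'key': 'env', 'value': 'prod'}]
)

for tag in client.list_tags_for_resource(resourceArn=arn)['tags']:
    print(tag['key'], '=', tag['value'])

client.untag_resource(resourceArn=arn, tagKeys=['env'])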
update_channel
(**kwargs)¶Used to update the settings of a channel.
See also: AWS API Documentation
Request Syntax
response = client.update_channel(
channelName='string',
channelStorage={
'serviceManagedS3': {}
,
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
}
},
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
}
)
[REQUIRED]
The name of the channel to be updated.
Where channel data is stored. You can choose one of serviceManagedS3
or customerManagedS3
storage. If not specified, the default is serviceManagedS3
. You can't change this storage option after the channel is created.
Used to store channel data in an S3 bucket managed by IoT Analytics. You can't change the choice of S3 storage after the channel is created.
Used to store channel data in an S3 bucket that you manage. If customer-managed storage is selected, the retentionPeriod
parameter is ignored. You can't change the choice of S3 storage after the channel is created.
The name of the S3 bucket in which channel data is stored.
(Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
How long, in days, message data is kept for the channel. The retention period can't be updated if the channel's Amazon S3 storage is customer-managed.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
None
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
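For example, a sketch extending a hypothetical channel's retention period to 90 days (this only applies when the channel uses service-managed storage, since the retention period is ignored for customer-managed storage):
import boto3

client = boto3.client('iotanalytics')

# update_channel returns None on success.
client.update_channel(
    channelName='mychannel',   # hypothetical name
    retentionPeriod={
        'unlimited': False,
        'numberOfDays': 90
    }
)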
update_dataset
(**kwargs)¶Updates the settings of a dataset.
See also: AWS API Documentation
Request Syntax
response = client.update_dataset(
datasetName='string',
actions=[
{
'actionName': 'string',
'queryAction': {
'sqlQuery': 'string',
'filters': [
{
'deltaTime': {
'offsetSeconds': 123,
'timeExpression': 'string'
}
},
]
},
'containerAction': {
'image': 'string',
'executionRoleArn': 'string',
'resourceConfiguration': {
'computeType': 'ACU_1'|'ACU_2',
'volumeSizeInGB': 123
},
'variables': [
{
'name': 'string',
'stringValue': 'string',
'doubleValue': 123.0,
'datasetContentVersionValue': {
'datasetName': 'string'
},
'outputFileUriValue': {
'fileName': 'string'
}
},
]
}
},
],
triggers=[
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
contentDeliveryRules=[
{
'entryName': 'string',
'destination': {
'iotEventsDestinationConfiguration': {
'inputName': 'string',
'roleArn': 'string'
},
's3DestinationConfiguration': {
'bucket': 'string',
'key': 'string',
'glueConfiguration': {
'tableName': 'string',
'databaseName': 'string'
},
'roleArn': 'string'
}
}
},
],
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
versioningConfiguration={
'unlimited': True|False,
'maxVersions': 123
},
lateDataRules=[
{
'ruleName': 'string',
'ruleConfiguration': {
'deltaTimeSessionWindowConfiguration': {
'timeoutInMinutes': 123
}
}
},
]
)
[REQUIRED]
The name of the dataset to update.
[REQUIRED]
A list of DatasetAction
objects.
A DatasetAction
object that specifies how dataset contents are automatically created.
The name of the dataset action by which dataset contents are automatically created.
An SqlQueryDatasetAction
object that uses an SQL query to automatically create dataset contents.
A SQL query string.
Prefilters applied to message data.
Information that is used to filter message data, to segregate it according to the timeframe in which it arrives.
Used to limit data to that which has arrived since the last execution of the action.
The number of seconds of estimated in-flight lag time of message data. When you create dataset contents using message data from a specified timeframe, some message data might still be in flight when processing begins, and so might not arrive in time to be processed. Use this field to make allowance for the in-flight time of your message data, so that data not processed in a previous timeframe is included in the next timeframe. Otherwise, missed message data would be excluded from processing during the next timeframe too, because its timestamp places it within the previous timeframe.
An expression by which the time of the message data might be determined. This can be the name of a timestamp field or a SQL expression that is used to derive the time the message data was generated.
Information that allows the system to run a containerized application to create the dataset contents. The application must be in a Docker container along with any required support libraries.
The ARN of the Docker container stored in your account. The Docker container contains an application and required support libraries and is used to generate dataset contents.
The ARN of the role that gives permission to the system to access required resources to run the containerAction
. This includes, at minimum, permission to retrieve the dataset contents that are the input to the containerized application.
Configuration of the resource that executes the containerAction
.
The type of the compute resource used to execute the containerAction
. Possible values are: ACU_1
(vCPU=4, memory=16 GiB) or ACU_2
(vCPU=8, memory=32 GiB).
The size, in GB, of the persistent storage available to the resource instance used to execute the containerAction
(min: 1, max: 50).
The values of variables used in the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of stringValue
, datasetContentVersionValue
, or outputFileUriValue
.
An instance of a variable to be passed to the containerAction
execution. Each variable must have a name and a value given by one of stringValue
, datasetContentVersionValue
, or outputFileUriValue
.
The name of the variable.
The value of the variable as a string.
The value of the variable as a double (numeric).
The value of the variable as a structure that specifies a dataset content version.
The name of the dataset whose latest contents are used as input to the notebook or application.
The value of the variable as a structure that specifies an output file URI.
The URI of the location where dataset contents are stored, usually the URI of a file in an S3 bucket.
A list of DatasetTrigger
objects. The list can be empty or can contain up to five DatasetTrigger
objects.
The DatasetTrigger
that specifies when the dataset is automatically updated.
The Schedule when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide .
The dataset whose content creation triggers the creation of this dataset's contents.
The name of the dataset whose content generation triggers the new dataset content generation.
When dataset contents are created, they are delivered to destinations specified here.
When dataset contents are created, they are delivered to the destination specified here.
The name of the dataset content delivery rules entry.
The destination to which dataset contents are delivered.
Configuration information for delivery of dataset contents to IoT Events.
The name of the IoT Events input to which dataset contents are delivered.
The ARN of the role that grants IoT Analytics permission to deliver dataset contents to an IoT Events input.
Configuration information for delivery of dataset contents to Amazon S3.
The name of the S3 bucket to which dataset contents are delivered.
The key of the dataset contents object in an S3 bucket. Each object has a key that is a unique identifier. Each object has exactly one key.
You can create a unique key with the following options:
Use !{iotanalytics:scheduleTime} to insert the time of a scheduled SQL query run.
Use !{iotanalytics:versionId} to insert a unique hash that identifies a dataset content.
Use !{iotanalytics:creationTime} to insert the creation time of a dataset content.
The following example creates a unique key for a CSV file: dataset/mydataset/!{iotanalytics:scheduleTime}/!{iotanalytics:versionId}.csv
Note
If you don't use !{iotanalytics:versionId}
to specify the key, you might get duplicate keys. For example, you might have two dataset contents with the same scheduleTime
but different versionId
s. This means that one dataset content overwrites the other.
Configuration information for coordination with Glue, a fully managed extract, transform and load (ETL) service.
The name of the table in your Glue Data Catalog that is used to perform the ETL operations. A Glue Data Catalog table contains partitioned data and descriptions of data sources and targets.
The name of the database in your Glue Data Catalog in which the table is located. A Glue Data Catalog database contains metadata tables.
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 and Glue resources.
How long, in days, dataset contents are kept for the dataset.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
Optional. How many versions of dataset contents are kept. If not specified or set to null, only the latest version plus the latest succeeded version (if they are different) are kept for the time period specified by the retentionPeriod
parameter. For more information, see Keeping Multiple Versions of IoT Analytics datasets in the IoT Analytics User Guide .
If true, unlimited versions of dataset contents are kept.
How many versions of dataset contents are kept. The unlimited
parameter must be false
.
A list of data rules that send notifications to Amazon CloudWatch when data arrives late. To specify lateDataRules
, the dataset must use a DeltaTime filter.
A structure that contains the name and configuration information of a late data rule.
The name of the late data rule.
The information needed to configure the late data rule.
The information needed to configure a delta time session window.
A time interval. You can use timeoutInMinutes
so that IoT Analytics can batch up late data notifications that have been generated since the last execution. IoT Analytics sends one batch of notifications to Amazon CloudWatch Events at one time.
For more information about how to write a timestamp expression, see Date and Time Functions and Operators , in the Presto 0.172 Documentation .
None
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
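A minimal sketch replacing a dataset's query action with a delta-time filtered SQL query; the names, the offsetSeconds value, and the timeExpression are illustrative only:
import boto3

client = boto3.client('iotanalytics')

client.update_dataset(
    datasetName='mydataset',           # hypothetical name
    actions=[
        {
            'actionName': 'delta_query',
            'queryAction': {
                'sqlQuery': 'SELECT * FROM mydatastore',
                'filters': [
                    {
                        'deltaTime': {
                            # allowance for in-flight messages, in seconds
                            'offsetSeconds': -300,
                            'timeExpression': "from_unixtime(time)"
                        }
                    }
                ]
            }
        }
    ]
)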
update_datastore
(**kwargs)¶Used to update the settings of a data store.
See also: AWS API Documentation
Request Syntax
response = client.update_datastore(
datastoreName='string',
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
datastoreStorage={
'serviceManagedS3': {}
,
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
},
'iotSiteWiseMultiLayerStorage': {
'customerManagedS3Storage': {
'bucket': 'string',
'keyPrefix': 'string'
}
}
},
fileFormatConfiguration={
'jsonConfiguration': {}
,
'parquetConfiguration': {
'schemaDefinition': {
'columns': [
{
'name': 'string',
'type': 'string'
},
]
}
}
}
)
[REQUIRED]
The name of the data store to be updated.
How long, in days, message data is kept for the data store. The retention period can't be updated if the data store's Amazon S3 storage is customer-managed.
If true, message data is kept indefinitely.
The number of days that message data is kept. The unlimited
parameter must be false.
Where data in a data store is stored. You can choose serviceManagedS3
storage, customerManagedS3
storage, or iotSiteWiseMultiLayerStorage
storage. The default is serviceManagedS3
. You can't change the choice of Amazon S3 storage after your data store is created.
Used to store data in an Amazon S3 bucket managed by IoT Analytics. You can't change the choice of Amazon S3 storage after your data store is created.
Used to store data in an Amazon S3 bucket that you manage. When you choose customer-managed storage, the retentionPeriod
parameter is ignored. You can't change the choice of Amazon S3 storage after your data store is created.
The name of the Amazon S3 bucket where your data is stored.
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage. You can't change the choice of Amazon S3 storage after your data store is created.
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
The name of the Amazon S3 bucket where your data is stored.
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
Contains the configuration information of file formats. IoT Analytics data stores support JSON and Parquet .
The default file format is JSON. You can specify only one format.
You can't change the file format after you create the data store.
Contains the configuration information of the JSON format.
Contains the configuration information of the Parquet format.
Information needed to define a schema.
Specifies one or more columns that store your data.
Each schema can have up to 100 columns. Each column can have up to 100 nested types.
Contains information about a column that stores your data.
The name of the column.
The type of data. For more information about the supported data types, see Common data types in the Glue Developer Guide .
None
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
update_pipeline
(**kwargs)¶Updates the settings of a pipeline. You must specify both a channel
and a datastore
activity and, optionally, as many as 23 additional activities in the pipelineActivities
array.
See also: AWS API Documentation
Request Syntax
response = client.update_pipeline(
pipelineName='string',
pipelineActivities=[
{
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
]
)
[REQUIRED]
The name of the pipeline to update.
[REQUIRED]
A list of PipelineActivity
objects. Activities perform transformations on your messages, such as removing, renaming or adding message attributes; filtering messages based on attribute values; invoking your Lambda functions on messages for advanced processing; or performing mathematical transformations to normalize device data.
The list can be 2-25 PipelineActivity
objects and must contain both a channel
and a datastore
activity. Each entry in the list must contain only one activity. For example:
pipelineActivities = [ { "channel": { ... } }, { "lambda": { ... } }, ... ]
An activity that performs a transformation on a message.
Determines the source of the messages to be processed.
The name of the channel activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the lambda activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the datastore activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the addAttributes activity.
A list of 1-50 AttributeNameMapping
objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use RemoveAttributeActivity
.
The next activity in the pipeline.
Removes attributes from a message.
The name of the removeAttributes
activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Used to create a new message using only the specified attributes from the original message.
The name of the selectAttributes
activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the filter activity.
An expression that looks like a SQL WHERE clause that must return a Boolean value. Messages that satisfy the condition are passed to the next activity.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the math activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the IoT device registry to your message.
The name of the deviceRegistryEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the IoT Device Shadow service to a message.
The name of the deviceShadowEnrich
activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
None
Exceptions
IoTAnalytics.Client.exceptions.InvalidRequestException
IoTAnalytics.Client.exceptions.ResourceNotFoundException
IoTAnalytics.Client.exceptions.InternalFailureException
IoTAnalytics.Client.exceptions.ServiceUnavailableException
IoTAnalytics.Client.exceptions.ThrottlingException
IoTAnalytics.Client.exceptions.LimitExceededException
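Putting those rules together, a sketch of a three-activity update: a channel source, a filter, and a datastore sink, chained through their next fields (all names hypothetical):
import boto3

client = boto3.client('iotanalytics')

client.update_pipeline(
    pipelineName='mypipeline',   # hypothetical name
    pipelineActivities=[
        {
            'channel': {
                'name': 'source',
                'channelName': 'mychannel',
                'next': 'hot_only'
            }
        },
        {
            'filter': {
                'name': 'hot_only',
                'filter': 'temp > 30',
                'next': 'sink'
            }
        },
        {
            'datastore': {
                'name': 'sink',
                'datastoreName': 'mydatastore'
            }
        }
    ]
)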
Paginators
The available paginators are:
IoTAnalytics.Paginator.ListChannels
IoTAnalytics.Paginator.ListDatasetContents
IoTAnalytics.Paginator.ListDatasets
IoTAnalytics.Paginator.ListDatastores
IoTAnalytics.Paginator.ListPipelines
IoTAnalytics.Paginator.ListChannels
¶paginator = client.get_paginator('list_channels')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_channels()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'channelSummaries': [
{
'channelName': 'string',
'channelStorage': {
'serviceManagedS3': {},
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
}
},
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'lastMessageArrivalTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
A list of ChannelSummary
objects.
A summary of information about a channel.
The name of the channel.
Where channel data is stored.
Used to store channel data in an S3 bucket managed by IoT Analytics.
Used to store channel data in an S3 bucket that you manage.
The name of the S3 bucket in which channel data is stored.
(Optional) The prefix used to create the keys of the channel data objects. Each object in an S3 bucket has a key that is its unique identifier within the bucket (each object in a bucket has exactly one key). The prefix must end with a forward slash (/).
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
The status of the channel.
When the channel was created.
The last time the channel was updated.
The last time when a new message arrived in the channel.
IoT Analytics updates this value at most once per minute for one channel. Hence, the lastMessageArrivalTime
value is an approximation.
This feature only applies to messages that arrived in the data store after October 23, 2020.
A token to resume pagination.
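In practice the returned iterator is consumed like any generator; a sketch capping the walk at 200 channels, 50 per page:
import boto3

client = boto3.client('iotanalytics')
paginator = client.get_paginator('list_channels')

pages = paginator.paginate(
    PaginationConfig={'MaxItems': 200, 'PageSize': 50}
)

# Each page has the same shape as a list_channels() response.
for page in pages:
    for summary in page['channelSummaries']:
        print(summary['channelName'])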
IoTAnalytics.Paginator.ListDatasetContents
¶paginator = client.get_paginator('list_dataset_contents')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_dataset_contents()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
datasetName='string',
scheduledOnOrAfter=datetime(2015, 1, 1),
scheduledBefore=datetime(2015, 1, 1),
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
The name of the dataset whose contents information you want to list.
A filter to limit results to those dataset contents whose creation is scheduled on or after the given time. See the field triggers.schedule in the CreateDataset request. (timestamp)
A filter to limit results to those dataset contents whose creation is scheduled before the given time. See the field triggers.schedule in the CreateDataset request. (timestamp)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
dict
Response Syntax
{
'datasetContentSummaries': [
{
'version': 'string',
'status': {
'state': 'CREATING'|'SUCCEEDED'|'FAILED',
'reason': 'string'
},
'creationTime': datetime(2015, 1, 1),
'scheduleTime': datetime(2015, 1, 1),
'completionTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
datasetContentSummaries (list) --
Summary information about dataset contents that have been created.
(dict) --
Summary information about dataset contents.
version (string) --
The version of the dataset contents.
status (dict) --
The status of the dataset contents.
state (string) --
The state of the dataset contents. Can be one of CREATING, SUCCEEDED, or FAILED.
reason (string) --
The reason the dataset contents are in this state.
creationTime (datetime) --
The actual time the creation of the dataset contents was started.
scheduleTime (datetime) --
The time the creation of the dataset contents was scheduled to start.
completionTime (datetime) --
The time the dataset content status was updated to SUCCEEDED or FAILED.
NextToken (string) --
A token to resume pagination.
IoTAnalytics.Paginator.ListDatasets
¶paginator = client.get_paginator('list_datasets')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_datasets()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'datasetSummaries': [
{
'datasetName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'triggers': [
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
'actions': [
{
'actionName': 'string',
'actionType': 'QUERY'|'CONTAINER'
},
]
},
],
'NextToken': 'string'
}
Response Structure
A list of DatasetSummary
objects.
A summary of information about a dataset.
The name of the dataset.
The status of the dataset.
The time the dataset was created.
The last time the dataset was updated.
A list of triggers. A trigger causes dataset content to be populated at a specified time interval or when another dataset is populated. The list of triggers can be empty or contain up to five DatasetTrigger
objects.
The DatasetTrigger
that specifies when the dataset is automatically updated.
The Schedule when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide .
The dataset whose content creation triggers the creation of this dataset's contents.
The name of the dataset whose content generation triggers the new dataset content generation.
A list of DataActionSummary
objects.
Information about the action that automatically creates the dataset's contents.
The name of the action that automatically creates the dataset's contents.
The type of action by which the dataset's contents are automatically created.
A token to resume pagination.
IoTAnalytics.Paginator.ListDatastores
¶paginator = client.get_paginator('list_datastores')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_datastores()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'datastoreSummaries': [
{
'datastoreName': 'string',
'datastoreStorage': {
'serviceManagedS3': {},
'customerManagedS3': {
'bucket': 'string',
'keyPrefix': 'string',
'roleArn': 'string'
},
'iotSiteWiseMultiLayerStorage': {
'customerManagedS3Storage': {
'bucket': 'string',
'keyPrefix': 'string'
}
}
},
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'lastMessageArrivalTime': datetime(2015, 1, 1),
'fileFormatType': 'JSON'|'PARQUET',
'datastorePartitions': {
'partitions': [
{
'attributePartition': {
'attributeName': 'string'
},
'timestampPartition': {
'attributeName': 'string',
'timestampFormat': 'string'
}
},
]
}
},
],
'NextToken': 'string'
}
Response Structure
A list of DatastoreSummary
objects.
A summary of information about a data store.
The name of the data store.
Where data in a data store is stored.
Used to store data in an Amazon S3 bucket managed by IoT Analytics.
Used to store data in an Amazon S3 bucket that you manage.
The name of the Amazon S3 bucket where your data is stored.
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
The ARN of the role that grants IoT Analytics permission to interact with your Amazon S3 resources.
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
Used to store data used by IoT SiteWise in an Amazon S3 bucket that you manage.
The name of the Amazon S3 bucket where your data is stored.
(Optional) The prefix used to create the keys of the data store data objects. Each object in an Amazon S3 bucket has a key that is its unique identifier in the bucket. Each object in a bucket has exactly one key. The prefix must end with a forward slash (/).
The status of the data store.
When the data store was created.
The last time the data store was updated.
The last time when a new message arrived in the data store.
IoT Analytics updates this value at most once per minute for one data store. Hence, the lastMessageArrivalTime
value is an approximation.
This feature only applies to messages that arrived in the data store after October 23, 2020.
The file format of the data in the data store.
Contains information about the partition dimensions in a data store.
A list of partition dimensions in a data store.
A single dimension to partition a data store. The dimension must be an AttributePartition
or a TimestampPartition
.
A partition dimension defined by an attributeName
.
The name of the attribute that defines a partition dimension.
A partition dimension defined by a timestamp attribute.
The attribute name of the partition defined by a timestamp.
The timestamp format of a partition defined by a timestamp. The default format is seconds since epoch (January 1, 1970 at midnight UTC time).
A token to resume pagination.
IoTAnalytics.Paginator.ListPipelines
¶paginator = client.get_paginator('list_pipelines')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_pipelines()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'pipelineSummaries': [
{
'pipelineName': 'string',
'reprocessingSummaries': [
{
'id': 'string',
'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
'creationTime': datetime(2015, 1, 1)
},
],
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
A list of PipelineSummary
objects.
A summary of information about a pipeline.
The name of the pipeline.
A summary of information about the pipeline reprocessing.
Information about pipeline reprocessing.
The reprocessingId
returned by StartPipelineReprocessing
.
The status of the pipeline reprocessing.
The time the pipeline reprocessing was created.
When the pipeline was created.
When the pipeline was last updated.
A token to resume pagination.