
ListDataIntegrationFlows

class SupplyChain.Paginator.ListDataIntegrationFlows
paginator = client.get_paginator('list_data_integration_flows')
paginate(**kwargs)

Creates an iterator that will paginate through responses from SupplyChain.Client.list_data_integration_flows().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    instanceId='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters:
  • instanceId (string) –

    [REQUIRED]

    The Amazon Web Services Supply Chain instance identifier.

  • PaginationConfig (dict) –

    A dictionary that provides parameters to control pagination.

    • MaxItems (integer) –

      The total number of items to return. If the total number of items available is more than the value specified in MaxItems, then a NextToken will be provided in the output that you can use to resume pagination.

    • PageSize (integer) –

      The size of each page.

    • StartingToken (string) –

      A token to specify where to start paginating. This is the NextToken from a previous response.
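For example, a minimal sketch of calling this paginator to collect every flow name (the helper name and page size are illustrative; the client is passed in, e.g. `boto3.client('supplychain')`, so the helper is easy to test):

```python
def list_flow_names(client, instance_id, page_size=50):
    """Collect the name of every DataIntegrationFlow in an instance.

    The client is injected (e.g. boto3.client('supplychain')); the
    page size of 50 is an illustrative choice, not a required value.
    """
    paginator = client.get_paginator('list_data_integration_flows')
    names = []
    # Each page is one response dict; the paginator follows NextToken
    # automatically until the service reports no more results.
    for page in paginator.paginate(
        instanceId=instance_id,
        PaginationConfig={'PageSize': page_size},
    ):
        names.extend(flow['name'] for flow in page['flows'])
    return names
```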

Return type:

dict

Returns:

Response Syntax

{
    'flows': [
        {
            'instanceId': 'string',
            'name': 'string',
            'sources': [
                {
                    'sourceType': 'S3'|'DATASET',
                    'sourceName': 'string',
                    's3Source': {
                        'bucketName': 'string',
                        'prefix': 'string',
                        'options': {
                            'fileType': 'CSV'|'PARQUET'|'JSON'
                        }
                    },
                    'datasetSource': {
                        'datasetIdentifier': 'string',
                        'options': {
                            'loadType': 'INCREMENTAL'|'REPLACE',
                            'dedupeRecords': True|False
                        }
                    }
                },
            ],
            'transformation': {
                'transformationType': 'SQL'|'NONE',
                'sqlTransformation': {
                    'query': 'string'
                }
            },
            'target': {
                'targetType': 'S3'|'DATASET',
                's3Target': {
                    'bucketName': 'string',
                    'prefix': 'string',
                    'options': {
                        'fileType': 'CSV'|'PARQUET'|'JSON'
                    }
                },
                'datasetTarget': {
                    'datasetIdentifier': 'string',
                    'options': {
                        'loadType': 'INCREMENTAL'|'REPLACE',
                        'dedupeRecords': True|False
                    }
                }
            },
            'createdTime': datetime(2015, 1, 1),
            'lastModifiedTime': datetime(2015, 1, 1)
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) –

    The response parameters for ListDataIntegrationFlows.

    • flows (list) –

      The list of DataIntegrationFlow details.

      • (dict) –

        The DataIntegrationFlow details.

        • instanceId (string) –

          The DataIntegrationFlow instance ID.

        • name (string) –

          The DataIntegrationFlow name.

        • sources (list) –

          The DataIntegrationFlow source configurations.

          • (dict) –

            The DataIntegrationFlow source parameters.

            • sourceType (string) –

              The DataIntegrationFlow source type.

            • sourceName (string) –

              The DataIntegrationFlow source name, which can be used as a table alias in the SQL transformation query.

            • s3Source (dict) –

              The S3 DataIntegrationFlow source.

              • bucketName (string) –

                The name of the S3 bucket that holds the source objects.

              • prefix (string) –

                The prefix of the S3 source objects.

              • options (dict) –

                The additional options for the S3 DataIntegrationFlow source.

                • fileType (string) –

                  The Amazon S3 file type in S3 options.

            • datasetSource (dict) –

              The dataset DataIntegrationFlow source.

              • datasetIdentifier (string) –

                The ARN of the dataset.

              • options (dict) –

                The dataset DataIntegrationFlow source options.

                • loadType (string) –

                  The dataset data load type in dataset options.

                • dedupeRecords (boolean) –

                  The dataset load option to remove duplicates.

        • transformation (dict) –

          The DataIntegrationFlow transformation configurations.

          • transformationType (string) –

            The DataIntegrationFlow transformation type.

          • sqlTransformation (dict) –

            The SQL DataIntegrationFlow transformation configuration.

            • query (string) –

              The SQL query body of the transformation, written in Spark SQL.

        • target (dict) –

          The DataIntegrationFlow target configuration.

          • targetType (string) –

            The DataIntegrationFlow target type.

          • s3Target (dict) –

            The S3 DataIntegrationFlow target.

            • bucketName (string) –

              The name of the S3 bucket that holds the target objects.

            • prefix (string) –

              The prefix of the S3 target objects.

            • options (dict) –

              The S3 DataIntegrationFlow target options.

              • fileType (string) –

                The Amazon S3 file type in S3 options.

          • datasetTarget (dict) –

            The dataset DataIntegrationFlow target.

            • datasetIdentifier (string) –

              The dataset ARN.

            • options (dict) –

              The dataset DataIntegrationFlow target options.

              • loadType (string) –

                The dataset data load type in dataset options.

              • dedupeRecords (boolean) –

                The dataset load option to remove duplicates.

        • createdTime (datetime) –

          The DataIntegrationFlow creation timestamp.

        • lastModifiedTime (datetime) –

          The DataIntegrationFlow last modified timestamp.

    • NextToken (string) –

      A token to resume pagination.
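The NextToken above pairs with StartingToken in PaginationConfig: a caller can fetch a bounded batch, stop, and resume later. A sketch of that pattern (the function name and batch size are illustrative; `build_full_result()` is the botocore PageIterator method that aggregates pages and surfaces NextToken when the result was truncated by MaxItems):

```python
def fetch_flow_batch(client, instance_id, max_items=100, starting_token=None):
    """Fetch up to max_items flows, plus a resume token if more remain.

    Sketch of the MaxItems/StartingToken resume pattern; the client is
    injected (e.g. boto3.client('supplychain')) so the helper is testable.
    """
    config = {'MaxItems': max_items}
    if starting_token is not None:
        config['StartingToken'] = starting_token
    paginator = client.get_paginator('list_data_integration_flows')
    result = paginator.paginate(
        instanceId=instance_id,
        PaginationConfig=config,
    ).build_full_result()
    # result.get('NextToken') is None once all flows have been returned.
    return result['flows'], result.get('NextToken')
```

A caller would loop, passing each returned token back as `starting_token` until it comes back as `None`.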