list_data_integration_flows

SupplyChain.Client.list_data_integration_flows(**kwargs)

Lists all data pipelines (DataIntegrationFlows) for the provided Amazon Web Services Supply Chain instance.

See also: AWS API Documentation

Request Syntax

response = client.list_data_integration_flows(
    instanceId='string',
    nextToken='string',
    maxResults=123
)
Parameters:
  • instanceId (string) –

    [REQUIRED]

    The Amazon Web Services Supply Chain instance identifier.

  • nextToken (string) – The pagination token to fetch the next page of the DataIntegrationFlows.

  • maxResults (integer) – Specify the maximum number of DataIntegrationFlows to fetch in one paginated request.
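As a minimal sketch, assuming standard boto3 credential and Region configuration (the instance ID below is a hypothetical placeholder), all pages can be collected by following nextToken:

import boto3

client = boto3.client('supplychain')

# Collect every DataIntegrationFlow for the instance by following
# nextToken until the service stops returning one.
flows = []
kwargs = {
    'instanceId': 'example-instance-id',  # hypothetical placeholder
    'maxResults': 10,
}
while True:
    response = client.list_data_integration_flows(**kwargs)
    flows.extend(response['flows'])
    next_token = response.get('nextToken')
    if not next_token:
        break
    kwargs['nextToken'] = next_token

print(f"Found {len(flows)} data integration flows")

If your boto3 version exposes a paginator for this operation, client.get_paginator('list_data_integration_flows') can replace the manual loop.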

Return type:

dict

Returns:

Response Syntax

{
    'flows': [
        {
            'instanceId': 'string',
            'name': 'string',
            'sources': [
                {
                    'sourceType': 'S3'|'DATASET',
                    'sourceName': 'string',
                    's3Source': {
                        'bucketName': 'string',
                        'prefix': 'string',
                        'options': {
                            'fileType': 'CSV'|'PARQUET'|'JSON'
                        }
                    },
                    'datasetSource': {
                        'datasetIdentifier': 'string',
                        'options': {
                            'loadType': 'INCREMENTAL'|'REPLACE',
                            'dedupeRecords': True|False
                        }
                    }
                },
            ],
            'transformation': {
                'transformationType': 'SQL'|'NONE',
                'sqlTransformation': {
                    'query': 'string'
                }
            },
            'target': {
                'targetType': 'S3'|'DATASET',
                's3Target': {
                    'bucketName': 'string',
                    'prefix': 'string',
                    'options': {
                        'fileType': 'CSV'|'PARQUET'|'JSON'
                    }
                },
                'datasetTarget': {
                    'datasetIdentifier': 'string',
                    'options': {
                        'loadType': 'INCREMENTAL'|'REPLACE',
                        'dedupeRecords': True|False
                    }
                }
            },
            'createdTime': datetime(2015, 1, 1),
            'lastModifiedTime': datetime(2015, 1, 1)
        },
    ],
    'nextToken': 'string'
}

Response Structure

  • (dict) –

    The response parameters for ListDataIntegrationFlows.

    • flows (list) –

      The list of DataIntegrationFlows.

      • (dict) –

        The DataIntegrationFlow details.

        • instanceId (string) –

          The DataIntegrationFlow instance ID.

        • name (string) –

          The DataIntegrationFlow name.

        • sources (list) –

          The DataIntegrationFlow source configurations.

          • (dict) –

            The DataIntegrationFlow source parameters.

            • sourceType (string) –

              The DataIntegrationFlow source type.

            • sourceName (string) –

              The DataIntegrationFlow source name that can be used as a table alias in the SQL transformation query.

            • s3Source (dict) –

              The S3 DataIntegrationFlow source.

              • bucketName (string) –

                The bucket name of the S3 source objects.

              • prefix (string) –

                The prefix of the S3 source objects.

              • options (dict) –

                The other options of the S3 DataIntegrationFlow source.

                • fileType (string) –

                  The Amazon S3 file type in S3 options.

            • datasetSource (dict) –

              The dataset DataIntegrationFlow source.

              • datasetIdentifier (string) –

                The ARN of the dataset.

              • options (dict) –

                The dataset DataIntegrationFlow source options.

                • loadType (string) –

                  The dataset data load type in dataset options.

                • dedupeRecords (boolean) –

                  The dataset load option to remove duplicates.

        • transformation (dict) –

          The DataIntegrationFlow transformation configurations.

          • transformationType (string) –

            The DataIntegrationFlow transformation type.

          • sqlTransformation (dict) –

            The SQL DataIntegrationFlow transformation configuration.

            • query (string) –

              The transformation SQL query body based on SparkSQL.

        • target (dict) –

          The DataIntegrationFlow target configuration.

          • targetType (string) –

            The DataIntegrationFlow target type.

          • s3Target (dict) –

            The S3 DataIntegrationFlow target.

            • bucketName (string) –

              The bucket name of the S3 target objects.

            • prefix (string) –

              The prefix of the S3 target objects.

            • options (dict) –

              The S3 DataIntegrationFlow target options.

              • fileType (string) –

                The Amazon S3 file type in S3 options.

          • datasetTarget (dict) –

            The dataset DataIntegrationFlow target.

            • datasetIdentifier (string) –

              The dataset ARN.

            • options (dict) –

              The dataset DataIntegrationFlow target options.

              • loadType (string) –

                The dataset data load type in dataset options.

              • dedupeRecords (boolean) –

                The dataset load option to remove duplicates.

        • createdTime (datetime) –

          The DataIntegrationFlow creation timestamp.

        • lastModifiedTime (datetime) –

          The DataIntegrationFlow last modified timestamp.

    • nextToken (string) –

      The pagination token to fetch the next page of the DataIntegrationFlows.
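To illustrate how the nested structure maps to code, the sketch below walks one page of results. It assumes response holds the dict shown in Response Syntax, and that the populated sub-structures follow the corresponding sourceType, transformationType, and targetType values.

for flow in response['flows']:
    print(f"Flow {flow['name']} (instance {flow['instanceId']})")

    # Each source is either an S3 location or a dataset, indicated by sourceType.
    for source in flow['sources']:
        if source['sourceType'] == 'S3':
            s3 = source['s3Source']
            print(f"  source {source['sourceName']}: s3://{s3['bucketName']}/{s3['prefix']}")
        else:  # 'DATASET'
            print(f"  source {source['sourceName']}: {source['datasetSource']['datasetIdentifier']}")

    # A SQL transformation carries a Spark SQL query that can reference each
    # sourceName as a table alias; 'NONE' means no transformation.
    if flow['transformation']['transformationType'] == 'SQL':
        print(f"  query: {flow['transformation']['sqlTransformation']['query']}")

    # The target mirrors the source shape, keyed by targetType.
    target = flow['target']
    if target['targetType'] == 'DATASET':
        print(f"  target dataset: {target['datasetTarget']['datasetIdentifier']}")
    else:  # 'S3'
        s3 = target['s3Target']
        print(f"  target: s3://{s3['bucketName']}/{s3['prefix']}")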

Exceptions