SupplyChain / Client / get_data_integration_flow

get_data_integration_flow

SupplyChain.Client.get_data_integration_flow(**kwargs)

Enables you to programmatically view a specific data pipeline for the provided Amazon Web Services Supply Chain instance and DataIntegrationFlow name.

See also: AWS API Documentation

Request Syntax

response = client.get_data_integration_flow(
    instanceId='string',
    name='string'
)
Parameters:
  • instanceId (string) –

    [REQUIRED]

    The Amazon Web Services Supply Chain instance identifier.

  • name (string) –

    [REQUIRED]

    The name of the DataIntegrationFlow to retrieve.

Return type:

dict

Returns:

Response Syntax

{
    'flow': {
        'instanceId': 'string',
        'name': 'string',
        'sources': [
            {
                'sourceType': 'S3'|'DATASET',
                'sourceName': 'string',
                's3Source': {
                    'bucketName': 'string',
                    'prefix': 'string',
                    'options': {
                        'fileType': 'CSV'|'PARQUET'|'JSON'
                    }
                },
                'datasetSource': {
                    'datasetIdentifier': 'string',
                    'options': {
                        'loadType': 'INCREMENTAL'|'REPLACE',
                        'dedupeRecords': True|False,
                        'dedupeStrategy': {
                            'type': 'FIELD_PRIORITY',
                            'fieldPriority': {
                                'fields': [
                                    {
                                        'name': 'string',
                                        'sortOrder': 'ASC'|'DESC'
                                    },
                                ]
                            }
                        }
                    }
                }
            },
        ],
        'transformation': {
            'transformationType': 'SQL'|'NONE',
            'sqlTransformation': {
                'query': 'string'
            }
        },
        'target': {
            'targetType': 'S3'|'DATASET',
            's3Target': {
                'bucketName': 'string',
                'prefix': 'string',
                'options': {
                    'fileType': 'CSV'|'PARQUET'|'JSON'
                }
            },
            'datasetTarget': {
                'datasetIdentifier': 'string',
                'options': {
                    'loadType': 'INCREMENTAL'|'REPLACE',
                    'dedupeRecords': True|False,
                    'dedupeStrategy': {
                        'type': 'FIELD_PRIORITY',
                        'fieldPriority': {
                            'fields': [
                                {
                                    'name': 'string',
                                    'sortOrder': 'ASC'|'DESC'
                                },
                            ]
                        }
                    }
                }
            }
        },
        'createdTime': datetime(2015, 1, 1),
        'lastModifiedTime': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) –

    The response parameters for GetDataIntegrationFlow.

    • flow (dict) –

      The details of the DataIntegrationFlow returned.

      • instanceId (string) –

        The DataIntegrationFlow instance ID.

      • name (string) –

        The DataIntegrationFlow name.

      • sources (list) –

        The DataIntegrationFlow source configurations.

        • (dict) –

          The DataIntegrationFlow source parameters.

          • sourceType (string) –

            The DataIntegrationFlow source type.

          • sourceName (string) –

            The DataIntegrationFlow source name, which can be used as a table alias in the SQL transformation query; a sketch following the response structure illustrates this.

          • s3Source (dict) –

            The S3 DataIntegrationFlow source.

            • bucketName (string) –

              The bucket name of the S3 source objects.

            • prefix (string) –

              The prefix of the S3 source objects. To trigger data ingestion, S3 files must be placed under s3://bucketName/prefix/; a sketch of such an upload follows the response structure.

            • options (dict) –

              The other options of the S3 DataIntegrationFlow source.

              • fileType (string) –

                The Amazon S3 file type in S3 options.

          • datasetSource (dict) –

            The dataset DataIntegrationFlow source.

            • datasetIdentifier (string) –

              The ARN of the dataset.

            • options (dict) –

              The dataset DataIntegrationFlow source options.

              • loadType (string) –

                The target dataset’s data load type. This setting only affects how source S3 files are selected in the S3-to-dataset flow.

                • REPLACE - The target dataset is replaced with the new file added under the source S3 prefix.

                • INCREMENTAL - The target dataset is updated with the up-to-date content under the S3 prefix, incorporating any file additions or removals there.

              • dedupeRecords (boolean) –

                The option to perform deduplication on data records that share the same primary key values. If disabled, transformed data with duplicate primary key values is ingested into the dataset as-is; for datasets within the asc namespace, such duplicates cause ingestion to fail. If enabled without a dedupeStrategy, deduplication is performed by retaining a random data record among those sharing the same primary key values. If enabled with a dedupeStrategy, deduplication follows that strategy.

                Note that the target dataset may have partitions configured. When deduplication is enabled, records are deduplicated against the primary keys only, and a single record out of each set of duplicates is retained regardless of partition status.

              • dedupeStrategy (dict) –

                The deduplication strategy used to dedupe data records that share the same primary key values in the target dataset. This strategy applies only to target datasets that have primary keys and the dedupeRecords option enabled. If the transformed data still contains duplicates after the dedupeStrategy evaluation, a random data record is chosen to be retained.

                • type (string) –

                  The type of the deduplication strategy.

                  • FIELD_PRIORITY - The field priority configuration specifies an ordered list of fields used to tie-break data records that share the same primary key values. Fields earlier in the list have higher priority for evaluation. For each field, the sort order determines whether the record with the larger or smaller field value is retained; a worked sketch follows the response structure below.

                • fieldPriority (dict) –

                  The field priority deduplication strategy.

                  • fields (list) –

                    The list of field names and their sort order for deduplication, arranged in order of priority from highest to lowest.

                    • (dict) –

                      The field used in the field priority deduplication strategy.

                      • name (string) –

                        The name of the deduplication field. Must exist in the dataset and not be a primary key.

                      • sortOrder (string) –

                        The sort order for the deduplication field.

      • transformation (dict) –

        The DataIntegrationFlow transformation configurations.

        • transformationType (string) –

          The DataIntegrationFlow transformation type.

        • sqlTransformation (dict) –

          The SQL DataIntegrationFlow transformation configuration.

          • query (string) –

            The body of the transformation SQL query, based on Spark SQL.

      • target (dict) –

        The DataIntegrationFlow target configuration.

        • targetType (string) –

          The DataIntegrationFlow target type.

        • s3Target (dict) –

          The S3 DataIntegrationFlow target.

          • bucketName (string) –

            The bucket name of the S3 target objects.

          • prefix (string) –

            The prefix of the S3 target objects.

          • options (dict) –

            The S3 DataIntegrationFlow target options.

            • fileType (string) –

              The Amazon S3 file type in S3 options.

        • datasetTarget (dict) –

          The dataset DataIntegrationFlow target. Note that an AWS Supply Chain dataset under the asc namespace has an internal connection_id field that must not be provided by the client directly; it is populated automatically.

          • datasetIdentifier (string) –

            The dataset ARN.

          • options (dict) –

            The dataset DataIntegrationFlow target options.

            • loadType (string) –

              The target dataset’s data load type. This setting only affects how source S3 files are selected in the S3-to-dataset flow.

              • REPLACE - The target dataset is replaced with the new file added under the source S3 prefix.

              • INCREMENTAL - The target dataset is updated with the up-to-date content under the S3 prefix, incorporating any file additions or removals there.

            • dedupeRecords (boolean) –

              The option to perform deduplication on data records that share the same primary key values. If disabled, transformed data with duplicate primary key values is ingested into the dataset as-is; for datasets within the asc namespace, such duplicates cause ingestion to fail. If enabled without a dedupeStrategy, deduplication is performed by retaining a random data record among those sharing the same primary key values. If enabled with a dedupeStrategy, deduplication follows that strategy.

              Note that the target dataset may have partitions configured. When deduplication is enabled, records are deduplicated against the primary keys only, and a single record out of each set of duplicates is retained regardless of partition status.

            • dedupeStrategy (dict) –

              The deduplication strategy used to dedupe data records that share the same primary key values in the target dataset. This strategy applies only to target datasets that have primary keys and the dedupeRecords option enabled. If the transformed data still contains duplicates after the dedupeStrategy evaluation, a random data record is chosen to be retained.

              • type (string) –

                The type of the deduplication strategy.

                • FIELD_PRIORITY - The field priority configuration specifies an ordered list of fields used to tie-break data records that share the same primary key values. Fields earlier in the list have higher priority for evaluation. For each field, the sort order determines whether the record with the larger or smaller field value is retained.

              • fieldPriority (dict) –

                The field priority deduplication strategy.

                • fields (list) –

                  The list of field names and their sort order for deduplication, arranged in order of priority from highest to lowest.

                  • (dict) –

                    The field used in the field priority deduplication strategy.

                    • name (string) –

                      The name of the deduplication field. Must exist in the dataset and not be a primary key.

                    • sortOrder (string) –

                      The sort order for the deduplication field.

      • createdTime (datetime) –

        The DataIntegrationFlow creation timestamp.

      • lastModifiedTime (datetime) –

        The DataIntegrationFlow last modified timestamp.
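
The relationship between sourceName and the SQL transformation above is easiest to see in a concrete fragment. The following sketch is invented for illustration only; the source name and query are not real values returned by the service:

# Invented fragment showing how a sourceName doubles as a table alias
# in the flow's Spark SQL transformation query.
flow_fragment = {
    'sources': [
        {'sourceType': 'S3', 'sourceName': 'order_source'},
    ],
    'transformation': {
        'transformationType': 'SQL',
        'sqlTransformation': {
            # 'order_source' below refers to the sourceName above.
            'query': 'SELECT id, quantity FROM order_source WHERE quantity > 0'
        }
    }
}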
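
To make the FIELD_PRIORITY tie-breaking described above concrete, here is a worked, purely local sketch. It emulates the documented selection rule in plain Python and is not a service API; the field names and record values are invented:

# Records that share the same primary key values; only these fields differ.
records = [
    {'eff_start_date': '2024-01-01', 'status': 'B'},
    {'eff_start_date': '2024-03-01', 'status': 'B'},
    {'eff_start_date': '2024-03-01', 'status': 'A'},
]

# Ordered from highest to lowest priority, as in the fields list above.
fields = [
    {'name': 'eff_start_date', 'sortOrder': 'DESC'},
    {'name': 'status', 'sortOrder': 'ASC'},
]

# Apply stable sorts from the lowest-priority field up, so the
# highest-priority field decides first and later fields break ties.
retained = list(records)
for field in reversed(fields):
    retained.sort(
        key=lambda record: record[field['name']],
        reverse=(field['sortOrder'] == 'DESC'),
    )

# The latest eff_start_date wins; status 'A' breaks the remaining tie.
print(retained[0])  # {'eff_start_date': '2024-03-01', 'status': 'A'}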
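
As noted for the s3Source prefix, ingestion for an S3-sourced flow is triggered by files arriving under s3://bucketName/prefix/. A minimal sketch of such an upload with the standard boto3 S3 client; the bucket, prefix, and file name are placeholders:

import boto3

s3 = boto3.client('s3')

# Placeholder values; the flow's s3Source defines the real bucket and prefix.
bucket = 'my-supply-chain-bucket'
prefix = 'inbound/outboundorder'

# Putting a file under s3://bucket/prefix/ triggers data ingestion.
s3.upload_file('outbound_order.csv', bucket, f'{prefix}/outbound_order.csv')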

Exceptions

  • SupplyChain.Client.exceptions.ServiceQuotaExceededException

  • SupplyChain.Client.exceptions.ResourceNotFoundException

  • SupplyChain.Client.exceptions.ThrottlingException

  • SupplyChain.Client.exceptions.AccessDeniedException

  • SupplyChain.Client.exceptions.ValidationException

  • SupplyChain.Client.exceptions.InternalServerException

  • SupplyChain.Client.exceptions.ConflictException
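
Finally, a minimal sketch of calling this operation and inspecting the returned flow. The instance ID and flow name are placeholders, and the error handling only assumes the ResourceNotFoundException listed above:

import boto3
from botocore.exceptions import ClientError

client = boto3.client('supplychain')

try:
    response = client.get_data_integration_flow(
        instanceId='00000000-0000-0000-0000-000000000000',  # placeholder
        name='my-integration-flow'                          # placeholder
    )
except ClientError as error:
    if error.response['Error']['Code'] == 'ResourceNotFoundException':
        raise SystemExit('No flow with that name exists on this instance.')
    raise

flow = response['flow']
print(flow['name'], 'targets', flow['target']['targetType'])

# Each sourceName is also the table alias usable in the SQL transformation.
for source in flow['sources']:
    print('source:', source['sourceType'], 'alias:', source['sourceName'])

if flow['transformation']['transformationType'] == 'SQL':
    print(flow['transformation']['sqlTransformation']['query'])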