create_data_set_import_task

MainframeModernization.Client.create_data_set_import_task(**kwargs)

Starts a data set import task for a specific application.

See also: AWS API Documentation

Request Syntax

response = client.create_data_set_import_task(
    applicationId='string',
    clientToken='string',
    importConfig={
        'dataSets': [
            {
                'dataSet': {
                    'datasetName': 'string',
                    'datasetOrg': {
                        'gdg': {
                            'limit': 123,
                            'rollDisposition': 'string'
                        },
                        'vsam': {
                            'alternateKeys': [
                                {
                                    'allowDuplicates': True|False,
                                    'length': 123,
                                    'name': 'string',
                                    'offset': 123
                                },
                            ],
                            'compressed': True|False,
                            'encoding': 'string',
                            'format': 'string',
                            'primaryKey': {
                                'length': 123,
                                'name': 'string',
                                'offset': 123
                            }
                        }
                    },
                    'recordLength': {
                        'max': 123,
                        'min': 123
                    },
                    'relativePath': 'string',
                    'storageType': 'string'
                },
                'externalLocation': {
                    's3Location': 'string'
                }
            },
        ],
        's3Location': 'string'
    }
)
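
For illustration, a concrete call using the dataSets variant of importConfig might look like the following sketch. The application identifier, data set attributes, and Amazon S3 paths are placeholders, and the encoding and format values are illustrative rather than taken from this page.

import uuid

import boto3

# The Mainframe Modernization service is exposed in boto3 under the name 'm2'.
client = boto3.client('m2')

response = client.create_data_set_import_task(
    applicationId='example-application-id',  # placeholder application ID
    # An explicit client token makes retries idempotent: repeating the call
    # with the same token within one hour returns the same response.
    clientToken=str(uuid.uuid4()),
    importConfig={
        'dataSets': [
            {
                'dataSet': {
                    'datasetName': 'MY.EXAMPLE.KSDS',
                    'datasetOrg': {
                        'vsam': {
                            'encoding': 'EBCDIC',  # illustrative value
                            'format': 'KS',        # illustrative value
                            'primaryKey': {
                                'length': 8,
                                'offset': 0
                            }
                        }
                    },
                    'recordLength': {
                        'max': 100,
                        'min': 100  # fixed-length records: min equals max
                    }
                },
                'externalLocation': {
                    's3Location': 's3://example-bucket/datasets/MY.EXAMPLE.KSDS'
                }
            },
        ]
    }
)
print(response['taskId'])
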
Parameters:
  • applicationId (string) –

    [REQUIRED]

    The unique identifier of the application for which you want to import data sets.

  • clientToken (string) –

    Unique, case-sensitive identifier that you provide to ensure the idempotency of the request to create a data set import. The service generates the clientToken when the API call is triggered. The token expires after one hour, so if you retry the API within this timeframe with the same clientToken, you receive the same response. The service also deletes the clientToken after it expires.

    This field is autopopulated if not provided.

  • importConfig (dict) –

    [REQUIRED]

    The data set import task configuration.

    Note

    This is a Tagged Union structure. Only one of the following top level keys can be set: dataSets, s3Location. A short sketch contrasting the two shapes follows this parameter list.

    • dataSets (list) –

      The data sets.

      • (dict) –

        Identifies a specific data set to import from an external location.

        • dataSet (dict) – [REQUIRED]

          The data set.

          • datasetName (string) – [REQUIRED]

            The logical identifier for a specific data set (in mainframe format).

          • datasetOrg (dict) – [REQUIRED]

            The type of dataset. The only supported value is VSAM.

            Note

            This is a Tagged Union structure. Only one of the following top level keys can be set: gdg, vsam.

            • gdg (dict) –

              The generation data group of the data set.

              • limit (integer) –

                The maximum number of generation data sets, up to 255, in a GDG.

              • rollDisposition (string) –

                The disposition of the data set in the catalog.

            • vsam (dict) –

              The details of a VSAM data set.

              • alternateKeys (list) –

                The alternate key definitions, if any. A legacy data set might not have any alternate keys defined, but if alternate key definitions exist, provide them, as some applications make use of them.

                • (dict) –

                  Defines an alternate key. This value is optional. A legacy data set might not have any alternate keys defined, but if alternate key definitions exist, provide them, as some applications make use of them.

                  • allowDuplicates (boolean) –

                    Indicates whether duplicate values are allowed for this alternate key in the given data set.

                  • length (integer) – [REQUIRED]

                    A strictly positive integer value representing the length of the alternate key.

                  • name (string) –

                    The name of the alternate key.

                  • offset (integer) – [REQUIRED]

                    A positive integer value representing the offset to mark the start of the alternate key part in the record byte array.

              • compressed (boolean) –

                Indicates whether indexes for this data set are stored as compressed values. If you have a large data set (typically > 100 MB), consider setting this flag to True.

              • encoding (string) –

                The character set used by the data set. Can be ASCII, EBCDIC, or unknown.

              • format (string) – [REQUIRED]

                The record format of the data set.

              • primaryKey (dict) –

                The primary key of the data set.

                • length (integer) – [REQUIRED]

                  A strictly positive integer value representing the length of the primary key.

                • name (string) –

                  A name for the primary key.

                • offset (integer) – [REQUIRED]

                  A positive integer value representing the offset to mark the start of the primary key in the record byte array.

          • recordLength (dict) – [REQUIRED]

            The length of a record.

            • max (integer) – [REQUIRED]

              The maximum record length. For fixed-length records, the minimum and maximum are the same.

            • min (integer) – [REQUIRED]

              The minimum record length.

          • relativePath (string) –

            The relative location of the data set in the database or file system.

          • storageType (string) –

            The storage type of the data set: database or file system. For Micro Focus, database corresponds to datastore and file system corresponds to EFS/FSx. For Blu Age, file system is not supported and database corresponds to Blusam.

        • externalLocation (dict) – [REQUIRED]

          The location of the data set.

          Note

          This is a Tagged Union structure. Only one of the following top level keys can be set: s3Location.

          • s3Location (string) –

            The URI of the Amazon S3 bucket.

    • s3Location (string) –

      The Amazon S3 location of the data sets.
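
As a sketch of the importConfig tagged union, the two shapes are mutually exclusive: a request supplies either an explicit dataSets list (as in the earlier example) or a single top-level s3Location, never both. The identifiers and bucket path below are placeholders.

import boto3

client = boto3.client('m2')

# Alternative importConfig shape: one top-level s3Location instead of an
# explicit dataSets list. Only one of the two keys may be present.
response = client.create_data_set_import_task(
    applicationId='example-application-id',  # placeholder
    importConfig={
        's3Location': 's3://example-bucket/import/'  # placeholder path
    }
)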

Return type:

dict

Returns:

Response Syntax

{
    'taskId': 'string'
}

Response Structure

  • (dict) –

    • taskId (string) –

      The task identifier. This operation is asynchronous. Use this identifier with the GetDataSetImportTask operation to obtain the status of this task.
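
Because the operation is asynchronous, the returned taskId is typically passed to get_data_set_import_task to track progress. A minimal polling sketch follows; the identifiers are placeholders and the status values checked are assumptions based on the GetDataSetImportTask documentation.

import time

import boto3

client = boto3.client('m2')

task = client.create_data_set_import_task(
    applicationId='example-application-id',  # placeholder
    importConfig={'s3Location': 's3://example-bucket/import/'}
)

# Poll until the task leaves its assumed in-progress states ('Creating',
# 'Running'); see GetDataSetImportTask for the authoritative lifecycle values.
while True:
    result = client.get_data_set_import_task(
        applicationId='example-application-id',
        taskId=task['taskId']
    )
    status = result.get('status')
    if status not in ('Creating', 'Running'):
        break
    time.sleep(30)

print(status)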

Exceptions

  • MainframeModernization.Client.exceptions.ValidationException

  • MainframeModernization.Client.exceptions.ServiceQuotaExceededException

  • MainframeModernization.Client.exceptions.ConflictException

  • MainframeModernization.Client.exceptions.InternalServerException

  • MainframeModernization.Client.exceptions.AccessDeniedException

  • MainframeModernization.Client.exceptions.ThrottlingException

  • MainframeModernization.Client.exceptions.ResourceNotFoundException
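
A sketch of catching these exceptions through the client's exceptions factory; the recovery actions in the comments are illustrative, not prescribed by the service, and the identifiers are placeholders.

import boto3
from botocore.exceptions import ClientError

client = boto3.client('m2')

try:
    response = client.create_data_set_import_task(
        applicationId='example-application-id',  # placeholder
        importConfig={'s3Location': 's3://example-bucket/import/'}
    )
except client.exceptions.ResourceNotFoundException:
    # The application ID does not refer to an existing application.
    raise
except client.exceptions.ThrottlingException:
    # The request was throttled; retry later with backoff.
    raise
except ClientError as error:
    # The remaining modeled exceptions (ValidationException, ConflictException,
    # ServiceQuotaExceededException, InternalServerException,
    # AccessDeniedException) also surface as ClientError subclasses.
    print(error.response['Error']['Code'])
    raise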