get_data_source

MachineLearning.Client.get_data_source(**kwargs)

Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource.

GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.

See also: AWS API Documentation

Request Syntax

response = client.get_data_source(
    DataSourceId='string',
    Verbose=True|False
)
Parameters
  • DataSourceId (string) --

    [REQUIRED]

    The ID assigned to the DataSource at creation.

  • Verbose (boolean) --

    Specifies whether the GetDataSource operation should return DataSourceSchema.

    If true, DataSourceSchema is returned.

    If false, DataSourceSchema is not returned.
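
A minimal usage sketch (not part of the API reference): create a MachineLearning client and call get_data_source. The region name and the DataSource ID ('ds-exampleid') are placeholders for illustration, not values from this documentation.

import boto3

# The region below is an assumption for illustration; use your own.
client = boto3.client('machinelearning', region_name='us-east-1')

response = client.get_data_source(
    DataSourceId='ds-exampleid',  # hypothetical DataSource ID
    Verbose=True                  # include DataSourceSchema in the response
)

print(response['Status'])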

Return type

dict

Returns

Response Syntax

{
    'DataSourceId': 'string',
    'DataLocationS3': 'string',
    'DataRearrangement': 'string',
    'CreatedByIamUser': 'string',
    'CreatedAt': datetime(2015, 1, 1),
    'LastUpdatedAt': datetime(2015, 1, 1),
    'DataSizeInBytes': 123,
    'NumberOfFiles': 123,
    'Name': 'string',
    'Status': 'PENDING'|'INPROGRESS'|'FAILED'|'COMPLETED'|'DELETED',
    'LogUri': 'string',
    'Message': 'string',
    'RedshiftMetadata': {
        'RedshiftDatabase': {
            'DatabaseName': 'string',
            'ClusterIdentifier': 'string'
        },
        'DatabaseUserName': 'string',
        'SelectSqlQuery': 'string'
    },
    'RDSMetadata': {
        'Database': {
            'InstanceIdentifier': 'string',
            'DatabaseName': 'string'
        },
        'DatabaseUserName': 'string',
        'SelectSqlQuery': 'string',
        'ResourceRole': 'string',
        'ServiceRole': 'string',
        'DataPipelineId': 'string'
    },
    'RoleARN': 'string',
    'ComputeStatistics': True|False,
    'ComputeTime': 123,
    'FinishedAt': datetime(2015, 1, 1),
    'StartedAt': datetime(2015, 1, 1),
    'DataSourceSchema': 'string'
}

Response Structure

  • (dict) --

    Represents the output of a GetDataSource operation and describes a DataSource.

    • DataSourceId (string) --

      The ID assigned to the DataSource at creation. This value should be identical to the value of the DataSourceId in the request.

    • DataLocationS3 (string) --

      The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

    • DataRearrangement (string) --

      A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.

    • CreatedByIamUser (string) --

      The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

    • CreatedAt (datetime) --

      The time that the DataSource was created. The time is expressed in epoch time.

    • LastUpdatedAt (datetime) --

      The time of the most recent edit to the DataSource. The time is expressed in epoch time.

    • DataSizeInBytes (integer) --

      The total size of observations in the data files.

    • NumberOfFiles (integer) --

      The number of data files referenced by the DataSource.

    • Name (string) --

      A user-supplied name or description of the DataSource.

    • Status (string) --

      The current status of the DataSource. This element can have one of the following values:

      • PENDING - Amazon ML submitted a request to create a DataSource.
      • INPROGRESS - The creation process is underway.
      • FAILED - The request to create a DataSource did not run to completion. It is not usable.
      • COMPLETED - The creation process completed successfully.
      • DELETED - The DataSource is marked as deleted. It is not usable.
    • LogUri (string) --

      A link to the file containing logs of CreateDataSourceFrom* operations.

    • Message (string) --

      The user-supplied description of the most recent details about creating the DataSource.

    • RedshiftMetadata (dict) --

      Describes the DataSource details specific to Amazon Redshift.

      • RedshiftDatabase (dict) --

        Describes the database details required to connect to an Amazon Redshift database.

        • DatabaseName (string) --

          The name of a database hosted on an Amazon Redshift cluster.

        • ClusterIdentifier (string) --

          The ID of an Amazon Redshift cluster.

      • DatabaseUserName (string) --

        A username to be used by Amazon Machine Learning (Amazon ML) to connect to a database on an Amazon Redshift cluster. The username should have sufficient permissions to execute the RedshiftSelectSqlQuery query and should be valid for an Amazon Redshift USER.

      • SelectSqlQuery (string) --

        The SQL query that is specified during CreateDataSourceFromRedshift. Returns only if Verbose is true in GetDataSourceInput.

    • RDSMetadata (dict) --

      The DataSource details that are specific to Amazon RDS.

      • Database (dict) --

        The database details required to connect to an Amazon RDS database.

        • InstanceIdentifier (string) --

          The ID of an RDS DB instance.

        • DatabaseName (string) --

          The name of a database hosted on an RDS DB instance.

      • DatabaseUserName (string) --

        The username to be used by Amazon ML to connect to a database on an Amazon RDS instance. The username should have sufficient permissions to execute an RDSSelectSqlQuery query.

      • SelectSqlQuery (string) --

        The SQL query that is supplied during CreateDataSourceFromRDS. Returns only if Verbose is true in GetDataSourceInput.

      • ResourceRole (string) --

        The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

      • ServiceRole (string) --

        The role (DataPipelineDefaultRole) assumed by the Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

      • DataPipelineId (string) --

        The ID of the Data Pipeline instance that is used to copy data from Amazon RDS to Amazon S3. You can use the ID to find details about the instance in the Data Pipeline console.

    • RoleARN (string) --

      The Amazon Resource Name (ARN) of an AWS IAM Role, such as the following: arn:aws:iam::account:role/rolename.

    • ComputeStatistics (boolean) --

      The parameter is true if statistics need to be generated from the observation data.

    • ComputeTime (integer) --

      The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the DataSource, normalized and scaled on computation resources. ComputeTime is only available if the DataSource is in the COMPLETED state and ComputeStatistics is set to true.

    • FinishedAt (datetime) --

      The epoch time when Amazon Machine Learning marked the DataSource as COMPLETED or FAILED. FinishedAt is only available when the DataSource is in the COMPLETED or FAILED state.

    • StartedAt (datetime) --

      The epoch time when Amazon Machine Learning marked the DataSource as INPROGRESS. StartedAt isn't available if the DataSource is in the PENDING state.

    • DataSourceSchema (string) --

      The schema used by all of the data files of this DataSource .

      Note: This parameter is provided as part of the verbose format.
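
As an illustration only (not part of the API reference), a short sketch of consuming the verbose response: Status gates further work, and DataSourceSchema is returned as a JSON string that must be parsed before use. The region name and DataSource ID are placeholders, and the schema contents depend entirely on your own DataSource.

import json

import boto3

client = boto3.client('machinelearning', region_name='us-east-1')  # placeholder region

response = client.get_data_source(
    DataSourceId='ds-exampleid',  # hypothetical DataSource ID
    Verbose=True                  # required for DataSourceSchema to be returned
)

if response['Status'] == 'COMPLETED':
    # FinishedAt is only present once creation has finished; ComputeTime also
    # requires ComputeStatistics to have been true, so use .get() for both.
    print('Finished at:', response.get('FinishedAt'))
    print('Compute time (ms):', response.get('ComputeTime'))

    # DataSourceSchema is a JSON string; parse it to inspect the schema.
    schema = json.loads(response['DataSourceSchema'])
    print('Schema keys:', sorted(schema))
else:
    # PENDING, INPROGRESS, FAILED, or DELETED
    print('Status:', response['Status'], '-', response.get('Message', ''))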

Exceptions

  • MachineLearning.Client.exceptions.InvalidInputException
  • MachineLearning.Client.exceptions.ResourceNotFoundException
  • MachineLearning.Client.exceptions.InternalServerException
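
A hedged sketch of catching the exceptions listed above via the client's exceptions attribute; the region name and the deliberately invalid DataSource ID are placeholders.

import boto3

client = boto3.client('machinelearning', region_name='us-east-1')  # placeholder region

try:
    response = client.get_data_source(DataSourceId='ds-doesnotexist')  # hypothetical ID
except client.exceptions.ResourceNotFoundException:
    # No DataSource with the given ID exists.
    print('DataSource not found')
except client.exceptions.InvalidInputException as err:
    # The request parameters were not valid.
    print('Invalid input:', err)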