GlueDataBrew

Table of Contents

Client

class GlueDataBrew.Client

A low-level client representing AWS Glue DataBrew

Glue DataBrew is a visual, cloud-scale data-preparation service. DataBrew simplifies data preparation tasks, targeting data issues that are hard to spot and time-consuming to fix. DataBrew empowers users of all technical levels to visualize the data and perform one-click data transformations, with no coding required.

import boto3

client = boto3.client('databrew')

These are the available methods:

batch_delete_recipe_version(**kwargs)

Deletes one or more versions of a recipe at a time.

The entire request will be rejected if:

  • The recipe does not exist.
  • There is an invalid version identifier in the list of versions.
  • The version list is empty.
  • The version list size exceeds 50.
  • The version list contains duplicate entries.

The request will complete successfully, but with partial failures, if:

  • A version does not exist.
  • A version is being used by a job.
  • You specify LATEST_WORKING, but it's being used by a project.
  • The version fails to be deleted.

The LATEST_WORKING version will only be deleted if the recipe has no other versions. If you try to delete LATEST_WORKING while other versions exist (or if they can't be deleted), then LATEST_WORKING will be listed as a partial failure in the response.

See also: AWS API Documentation

Request Syntax

response = client.batch_delete_recipe_version(
    Name='string',
    RecipeVersions=[
        'string',
    ]
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the recipe whose versions are to be deleted.

  • RecipeVersions (list) --

    [REQUIRED]

    An array of version identifiers for the recipe versions to be deleted. You can specify numeric versions (X.Y) or LATEST_WORKING. LATEST_PUBLISHED is not supported.

    • (string) --
Return type

dict

Returns

Response Syntax

{
    'Name': 'string',
    'Errors': [
        {
            'ErrorCode': 'string',
            'ErrorMessage': 'string',
            'RecipeVersion': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the recipe that was modified.

    • Errors (list) --

      Errors, if any, that occurred while attempting to delete the recipe versions.

      • (dict) --

        Represents any errors encountered when attempting to delete multiple recipe versions.

        • ErrorCode (string) --

          The HTTP status code for the error.

        • ErrorMessage (string) --

          The text of the error message.

        • RecipeVersion (string) --

          The identifier for the recipe version associated with this error.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
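
Example

A minimal sketch of a call and of reading per-version failures from the response; the recipe name and version identifiers below are hypothetical. Versions that can't be deleted are reported in the Errors list rather than raised as exceptions.

response = client.batch_delete_recipe_version(
    Name='my-recipe',
    RecipeVersions=[
        '1.0',
        '2.0',
        'LATEST_WORKING',
    ]
)
for error in response.get('Errors', []):
    # Each entry identifies the version that failed and why.
    print(error['RecipeVersion'], error['ErrorCode'], error['ErrorMessage'])
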
can_paginate(operation_name)

Check if an operation can be paginated.

Parameters
operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo and you'd normally invoke the operation as client.create_foo(**kwargs), then, if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns
True if the operation can be paginated, False otherwise.
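
Example

For instance, to check whether a list operation supports pagination before requesting a paginator (list_datasets is used here for illustration):

if client.can_paginate('list_datasets'):
    paginator = client.get_paginator('list_datasets')
    for page in paginator.paginate():
        # Each page is a dict shaped like the list_datasets response.
        print(page['Datasets'])
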
create_dataset(**kwargs)

Creates a new DataBrew dataset.

See also: AWS API Documentation

Request Syntax

response = client.create_dataset(
    Name='string',
    Format='CSV'|'JSON'|'PARQUET'|'EXCEL',
    FormatOptions={
        'Json': {
            'MultiLine': True|False
        },
        'Excel': {
            'SheetNames': [
                'string',
            ],
            'SheetIndexes': [
                123,
            ],
            'HeaderRow': True|False
        },
        'Csv': {
            'Delimiter': 'string',
            'HeaderRow': True|False
        }
    },
    Input={
        'S3InputDefinition': {
            'Bucket': 'string',
            'Key': 'string'
        },
        'DataCatalogInputDefinition': {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'TempDirectory': {
                'Bucket': 'string',
                'Key': 'string'
            }
        },
        'DatabaseInputDefinition': {
            'GlueConnectionName': 'string',
            'DatabaseTableName': 'string',
            'TempDirectory': {
                'Bucket': 'string',
                'Key': 'string'
            }
        }
    },
    PathOptions={
        'LastModifiedDateCondition': {
            'Expression': 'string',
            'ValuesMap': {
                'string': 'string'
            }
        },
        'FilesLimit': {
            'MaxFiles': 123,
            'OrderedBy': 'LAST_MODIFIED_DATE',
            'Order': 'DESCENDING'|'ASCENDING'
        },
        'Parameters': {
            'string': {
                'Name': 'string',
                'Type': 'Datetime'|'Number'|'String',
                'DatetimeOptions': {
                    'Format': 'string',
                    'TimezoneOffset': 'string',
                    'LocaleCode': 'string'
                },
                'CreateColumn': True|False,
                'Filter': {
                    'Expression': 'string',
                    'ValuesMap': {
                        'string': 'string'
                    }
                }
            }
        }
    },
    Tags={
        'string': 'string'
    }
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the dataset to be created. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • Format (string) -- The file format of a dataset that is created from an Amazon S3 file or folder.
  • FormatOptions (dict) --

    Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.

    • Json (dict) --

      Options that define how JSON input is to be interpreted by DataBrew.

      • MultiLine (boolean) --

        A value that specifies whether JSON input contains embedded new line characters.

    • Excel (dict) --

      Options that define how Excel input is to be interpreted by DataBrew.

      • SheetNames (list) --

        One or more named sheets in the Excel file that will be included in the dataset.

        • (string) --
      • SheetIndexes (list) --

        One or more sheet numbers in the Excel file that will be included in the dataset.

        • (integer) --
      • HeaderRow (boolean) --

        A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

    • Csv (dict) --

      Options that define how CSV input is to be interpreted by DataBrew.

      • Delimiter (string) --

        A single character that specifies the delimiter being used in the CSV file.

      • HeaderRow (boolean) --

        A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

  • Input (dict) --

    [REQUIRED]

    Represents information on how DataBrew can find data, in either the Glue Data Catalog or Amazon S3.

    • S3InputDefinition (dict) --

      The Amazon S3 location where the data is stored.

      • Bucket (string) -- [REQUIRED]

        The Amazon S3 bucket name.

      • Key (string) --

        The unique name of the object in the bucket.

    • DataCatalogInputDefinition (dict) --

      The Glue Data Catalog parameters for the data.

      • CatalogId (string) --

        The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

      • DatabaseName (string) -- [REQUIRED]

        The name of a database in the Data Catalog.

      • TableName (string) -- [REQUIRED]

        The name of a database table in the Data Catalog. This table corresponds to a DataBrew dataset.

      • TempDirectory (dict) --

        Represents an Amazon S3 location where DataBrew can store intermediate results.

        • Bucket (string) -- [REQUIRED]

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

    • DatabaseInputDefinition (dict) --

      Connection information for dataset input files stored in a database.

      • GlueConnectionName (string) -- [REQUIRED]

        The Glue Connection that stores the connection information for the target database.

      • DatabaseTableName (string) -- [REQUIRED]

        The table within the target database.

      • TempDirectory (dict) --

        Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

        • Bucket (string) -- [REQUIRED]

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

  • PathOptions (dict) --

    A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

    • LastModifiedDateCondition (dict) --

      If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.

      • Expression (string) -- [REQUIRED]

        The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

      • ValuesMap (dict) -- [REQUIRED]

        The map of substitution variable names to their values used in this filter expression.

        • (string) --
          • (string) --
    • FilesLimit (dict) --

      If provided, this structure imposes a limit on the number of files that should be selected.

      • MaxFiles (integer) -- [REQUIRED]

        The number of Amazon S3 files to select.

      • OrderedBy (string) --

        The criterion to use for sorting Amazon S3 files before they are selected. By default, LAST_MODIFIED_DATE is used as the sorting criterion. Currently it's the only allowed value.

      • Order (string) --

        The sort order to apply to Amazon S3 files before they are selected. By default, DESCENDING order is used; that is, the most recent files are selected first. The other possible value is ASCENDING.

    • Parameters (dict) --

      A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.

      • (string) --
        • (dict) --

          Represents a dataset parameter that defines the type and conditions for a parameter in the Amazon S3 path of the dataset.

          • Name (string) -- [REQUIRED]

            The name of the parameter that is used in the dataset's Amazon S3 path.

          • Type (string) -- [REQUIRED]

            The type of the dataset parameter, which can be one of 'String', 'Number', or 'Datetime'.

          • DatetimeOptions (dict) --

            Additional parameter options such as a format and a timezone. Required for datetime parameters.

            • Format (string) -- [REQUIRED]

              A required option that defines the datetime format used for a date parameter in the Amazon S3 path. It should use only supported datetime specifiers and separator characters; all literal a-z or A-Z characters should be escaped with single quotes, for example "MM.dd.yyyy-'at'-HH:mm".

            • TimezoneOffset (string) --

              An optional timezone offset for the datetime parameter value in the Amazon S3 path. It shouldn't be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.

            • LocaleCode (string) --

              Optional value for a non-US locale code, needed for correct interpretation of some date formats.

          • CreateColumn (boolean) --

            Optional boolean value that defines whether the captured value of this parameter should be used to create a new column in a dataset.

          • Filter (dict) --

            The optional filter expression structure to apply additional matching criteria to the parameter.

            • Expression (string) -- [REQUIRED]

              The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

            • ValuesMap (dict) -- [REQUIRED]

              The map of substitution variable names to their values used in this filter expression.

              • (string) --
                • (string) --
  • Tags (dict) --

    Metadata tags to apply to this dataset.

    • (string) --
      • (string) --
Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the dataset that you created.

Exceptions

  • GlueDataBrew.Client.exceptions.AccessDeniedException
  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
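
Example

A minimal sketch of creating a dataset from a CSV object in Amazon S3; the dataset name, bucket, and key below are hypothetical.

response = client.create_dataset(
    Name='sales-data',
    Format='CSV',
    FormatOptions={
        'Csv': {
            'Delimiter': ',',
            'HeaderRow': True
        }
    },
    Input={
        'S3InputDefinition': {
            'Bucket': 'my-databrew-bucket',   # hypothetical bucket
            'Key': 'raw/sales.csv'            # hypothetical object key
        }
    }
)
print(response['Name'])
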
create_profile_job(**kwargs)

Creates a new job to analyze a dataset and create its data profile.

See also: AWS API Documentation

Request Syntax

response = client.create_profile_job(
    DatasetName='string',
    EncryptionKeyArn='string',
    EncryptionMode='SSE-KMS'|'SSE-S3',
    Name='string',
    LogSubscription='ENABLE'|'DISABLE',
    MaxCapacity=123,
    MaxRetries=123,
    OutputLocation={
        'Bucket': 'string',
        'Key': 'string'
    },
    Configuration={
        'DatasetStatisticsConfiguration': {
            'IncludedStatistics': [
                'string',
            ],
            'Overrides': [
                {
                    'Statistic': 'string',
                    'Parameters': {
                        'string': 'string'
                    }
                },
            ]
        },
        'ProfileColumns': [
            {
                'Regex': 'string',
                'Name': 'string'
            },
        ],
        'ColumnStatisticsConfigurations': [
            {
                'Selectors': [
                    {
                        'Regex': 'string',
                        'Name': 'string'
                    },
                ],
                'Statistics': {
                    'IncludedStatistics': [
                        'string',
                    ],
                    'Overrides': [
                        {
                            'Statistic': 'string',
                            'Parameters': {
                                'string': 'string'
                            }
                        },
                    ]
                }
            },
        ]
    },
    RoleArn='string',
    Tags={
        'string': 'string'
    },
    Timeout=123,
    JobSample={
        'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
        'Size': 123
    }
)
Parameters
  • DatasetName (string) --

    [REQUIRED]

    The name of the dataset that this job is to act upon.

  • EncryptionKeyArn (string) -- The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.
  • EncryptionMode (string) --

    The encryption mode for the job, which can be one of the following:

    • SSE-KMS - Server-side encryption with keys managed by KMS.
    • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
  • Name (string) --

    [REQUIRED]

    The name of the job to be created. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • LogSubscription (string) -- Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.
  • MaxCapacity (integer) -- The maximum number of nodes that DataBrew can use when the job processes data.
  • MaxRetries (integer) -- The maximum number of times to retry the job after a job run fails.
  • OutputLocation (dict) --

    [REQUIRED]

    Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

    • Bucket (string) -- [REQUIRED]

      The Amazon S3 bucket name.

    • Key (string) --

      The unique name of the object in the bucket.

  • Configuration (dict) --

    Configuration for profile jobs. Used to select columns, do evaluations, and override default parameters of evaluations. When configuration is null, the profile job will run with default settings.

    • DatasetStatisticsConfiguration (dict) --

      Configuration for inter-column evaluations. Configuration can be used to select evaluations and override parameters of evaluations. When configuration is undefined, the profile job will run all supported inter-column evaluations.

      • IncludedStatistics (list) --

        List of included evaluations. When the list is undefined, all supported evaluations will be included.

        • (string) --
      • Overrides (list) --

        List of overrides for evaluations.

        • (dict) --

          Override of a particular evaluation for a profile job.

          • Statistic (string) -- [REQUIRED]

            The name of an evaluation.

          • Parameters (dict) -- [REQUIRED]

            A map that includes overrides of an evaluation’s parameters.

            • (string) --
              • (string) --
    • ProfileColumns (list) --

      List of column selectors. ProfileColumns can be used to select columns from the dataset. When ProfileColumns is undefined, the profile job will profile all supported columns.

      • (dict) --

        Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

        • Regex (string) --

          A regular expression for selecting a column from a dataset.

        • Name (string) --

          The name of a column from a dataset.

    • ColumnStatisticsConfigurations (list) --

      List of configurations for column evaluations. ColumnStatisticsConfigurations are used to select evaluations and override parameters of evaluations for particular columns. When ColumnStatisticsConfigurations is undefined, the profile job will profile all supported columns and run all supported evaluations.

      • (dict) --

        Configuration for column evaluations for a profile job. ColumnStatisticsConfiguration can be used to select evaluations and override parameters of evaluations for particular columns.

        • Selectors (list) --

          List of column selectors. Selectors can be used to select columns from the dataset. When selectors are undefined, configuration will be applied to all supported columns.

          • (dict) --

            Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

            • Regex (string) --

              A regular expression for selecting a column from a dataset.

            • Name (string) --

              The name of a column from a dataset.

        • Statistics (dict) -- [REQUIRED]

          Configuration for evaluations. Statistics can be used to select evaluations and override parameters of evaluations.

          • IncludedStatistics (list) --

            List of included evaluations. When the list is undefined, all supported evaluations will be included.

            • (string) --
          • Overrides (list) --

            List of overrides for evaluations.

            • (dict) --

              Override of a particular evaluation for a profile job.

              • Statistic (string) -- [REQUIRED]

                The name of an evaluation.

              • Parameters (dict) -- [REQUIRED]

                A map that includes overrides of an evaluation’s parameters.

                • (string) --
                  • (string) --
  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • Tags (dict) --

    Metadata tags to apply to this job.

    • (string) --
      • (string) --
  • Timeout (integer) -- The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.
  • JobSample (dict) --

    Sample configuration for profile jobs only. Determines the number of rows on which the profile job will be executed. If a JobSample value is not provided, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.

    • Mode (string) --

      A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

      • FULL_DATASET - The profile job is run on the entire dataset.
      • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
    • Size (integer) --

      The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

      Long.MAX_VALUE = 9223372036854775807

Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the job that was created.

Exceptions

  • GlueDataBrew.Client.exceptions.AccessDeniedException
  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
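
Example

A minimal sketch of creating a profile job for an existing dataset; the job name, dataset name, bucket, and IAM role ARN are hypothetical.

response = client.create_profile_job(
    Name='sales-data-profile',
    DatasetName='sales-data',
    RoleArn='arn:aws:iam::111122223333:role/DataBrewServiceRole',  # hypothetical role
    OutputLocation={
        'Bucket': 'my-databrew-bucket',
        'Key': 'profile-output/'
    },
    JobSample={
        'Mode': 'CUSTOM_ROWS',
        'Size': 20000
    }
)
print(response['Name'])
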
create_project(**kwargs)

Creates a new DataBrew project.

See also: AWS API Documentation

Request Syntax

response = client.create_project(
    DatasetName='string',
    Name='string',
    RecipeName='string',
    Sample={
        'Size': 123,
        'Type': 'FIRST_N'|'LAST_N'|'RANDOM'
    },
    RoleArn='string',
    Tags={
        'string': 'string'
    }
)
Parameters
  • DatasetName (string) --

    [REQUIRED]

    The name of an existing dataset to associate this project with.

  • Name (string) --

    [REQUIRED]

    A unique name for the new project. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • RecipeName (string) --

    [REQUIRED]

    The name of an existing recipe to associate with the project.

  • Sample (dict) --

    Represents the sample size and sampling type for DataBrew to use for interactive data analysis.

    • Size (integer) --

      The number of rows in the sample.

    • Type (string) -- [REQUIRED]

      The way in which DataBrew obtains rows from a dataset.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed for this request.

  • Tags (dict) --

    Metadata tags to apply to this project.

    • (string) --
      • (string) --
Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the project that you created.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.InternalServerException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
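
Example

A minimal sketch of creating a project that ties an existing dataset to an existing recipe; all names and the role ARN are hypothetical.

response = client.create_project(
    Name='sales-cleanup',
    DatasetName='sales-data',
    RecipeName='sales-cleanup-recipe',
    Sample={
        'Type': 'FIRST_N',
        'Size': 500
    },
    RoleArn='arn:aws:iam::111122223333:role/DataBrewServiceRole'  # hypothetical role
)
print(response['Name'])
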
create_recipe(**kwargs)

Creates a new DataBrew recipe.

See also: AWS API Documentation

Request Syntax

response = client.create_recipe(
    Description='string',
    Name='string',
    Steps=[
        {
            'Action': {
                'Operation': 'string',
                'Parameters': {
                    'string': 'string'
                }
            },
            'ConditionExpressions': [
                {
                    'Condition': 'string',
                    'Value': 'string',
                    'TargetColumn': 'string'
                },
            ]
        },
    ],
    Tags={
        'string': 'string'
    }
)
Parameters
  • Description (string) -- A description for the recipe.
  • Name (string) --

    [REQUIRED]

    A unique name for the recipe. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • Steps (list) --

    [REQUIRED]

    An array containing the steps to be performed by the recipe. Each recipe step consists of one recipe action and (optionally) an array of condition expressions.

    • (dict) --

      Represents a single step from a DataBrew recipe to be performed.

      • Action (dict) -- [REQUIRED]

        The particular action to be performed in the recipe step.

        • Operation (string) -- [REQUIRED]

          The name of a valid DataBrew transformation to be performed on the data.

        • Parameters (dict) --

          Contextual parameters for the transformation.

          • (string) --
            • (string) --
      • ConditionExpressions (list) --

        One or more conditions that must be met for the recipe step to succeed.

        Note

        All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

        • (dict) --

          Represents an individual condition that evaluates to true or false.

          Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

          If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

          • Condition (string) -- [REQUIRED]

            A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide.

          • Value (string) --

            A value that the condition must evaluate to for the condition to succeed.

          • TargetColumn (string) -- [REQUIRED]

            A column to apply this condition to.

  • Tags (dict) --

    Metadata tags to apply to this recipe.

    • (string) --
      • (string) --
Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the recipe that you created.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
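
Example

A minimal sketch of creating a recipe with a single step; the recipe name, the Operation value, and its Parameters are illustrative only. See Recipe actions in the Glue DataBrew Developer Guide for the valid operations and their parameters.

response = client.create_recipe(
    Name='sales-cleanup-recipe',
    Description='Uppercases the state column',
    Steps=[
        {
            'Action': {
                'Operation': 'UPPER_CASE',      # illustrative transformation name
                'Parameters': {
                    'sourceColumn': 'state'     # illustrative parameter
                }
            }
        }
    ]
)
print(response['Name'])
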
create_recipe_job(**kwargs)

Creates a new job to transform input data, using steps defined in an existing Glue DataBrew recipe.

See also: AWS API Documentation

Request Syntax

response = client.create_recipe_job(
    DatasetName='string',
    EncryptionKeyArn='string',
    EncryptionMode='SSE-KMS'|'SSE-S3',
    Name='string',
    LogSubscription='ENABLE'|'DISABLE',
    MaxCapacity=123,
    MaxRetries=123,
    Outputs=[
        {
            'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
            'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
            'PartitionColumns': [
                'string',
            ],
            'Location': {
                'Bucket': 'string',
                'Key': 'string'
            },
            'Overwrite': True|False,
            'FormatOptions': {
                'Csv': {
                    'Delimiter': 'string'
                }
            }
        },
    ],
    DataCatalogOutputs=[
        {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'S3Options': {
                'Location': {
                    'Bucket': 'string',
                    'Key': 'string'
                }
            },
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'Overwrite': True|False
        },
    ],
    DatabaseOutputs=[
        {
            'GlueConnectionName': 'string',
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'DatabaseOutputMode': 'NEW_TABLE'
        },
    ],
    ProjectName='string',
    RecipeReference={
        'Name': 'string',
        'RecipeVersion': 'string'
    },
    RoleArn='string',
    Tags={
        'string': 'string'
    },
    Timeout=123
)
Parameters
  • DatasetName (string) -- The name of the dataset that this job processes.
  • EncryptionKeyArn (string) -- The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.
  • EncryptionMode (string) --

    The encryption mode for the job, which can be one of the following:

    • SSE-KMS - Server-side encryption with keys managed by KMS.
    • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
  • Name (string) --

    [REQUIRED]

    A unique name for the job. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

  • LogSubscription (string) -- Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.
  • MaxCapacity (integer) -- The maximum number of nodes that DataBrew can consume when the job processes data.
  • MaxRetries (integer) -- The maximum number of times to retry the job after a job run fails.
  • Outputs (list) --

    One or more artifacts that represent the output from running the job.

    • (dict) --

      Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

      • CompressionFormat (string) --

        The compression algorithm used to compress the output text of the job.

      • Format (string) --

        The data format of the output of the job.

      • PartitionColumns (list) --

        The names of one or more partition columns for the output of the job.

        • (string) --
      • Location (dict) -- [REQUIRED]

        The location in Amazon S3 where the job writes its output.

        • Bucket (string) -- [REQUIRED]

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

      • Overwrite (boolean) --

        A value that, if true, means that any data in the location specified for output is overwritten with new output.

      • FormatOptions (dict) --

        Represents options that define how DataBrew formats job output files.

        • Csv (dict) --

          Represents a set of options that define the structure of comma-separated value (CSV) job output.

          • Delimiter (string) --

            A single character that specifies the delimiter used to create CSV job output.

  • DataCatalogOutputs (list) --

    One or more artifacts that represent the Glue Data Catalog output from running the job.

    • (dict) --

      Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

      • CatalogId (string) --

        The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

      • DatabaseName (string) -- [REQUIRED]

        The name of a database in the Data Catalog.

      • TableName (string) -- [REQUIRED]

        The name of a table in the Data Catalog.

      • S3Options (dict) --

        Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

        • Location (dict) -- [REQUIRED]

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

          • Bucket (string) -- [REQUIRED]

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

      • DatabaseOptions (dict) --

        Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

        • TempDirectory (dict) --

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

          • Bucket (string) -- [REQUIRED]

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

        • TableName (string) -- [REQUIRED]

          A prefix for the name of a table DataBrew will create in the database.

      • Overwrite (boolean) --

        A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

  • DatabaseOutputs (list) --

    Represents a list of JDBC database output objects that define the output destination for a DataBrew recipe job to write to.

    • (dict) --

      Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

      • GlueConnectionName (string) -- [REQUIRED]

        The Glue connection that stores the connection information for the target database.

      • DatabaseOptions (dict) -- [REQUIRED]

        Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

        • TempDirectory (dict) --

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

          • Bucket (string) -- [REQUIRED]

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

        • TableName (string) -- [REQUIRED]

          A prefix for the name of a table DataBrew will create in the database.

      • DatabaseOutputMode (string) --

        The output mode to write into the database. Currently supported option: NEW_TABLE.

  • ProjectName (string) -- Either the name of an existing project, or a combination of a recipe and a dataset to associate with the recipe.
  • RecipeReference (dict) --

    Represents the name and version of a DataBrew recipe.

    • Name (string) -- [REQUIRED]

      The name of the recipe.

    • RecipeVersion (string) --

      The identifier for the version for the recipe.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • Tags (dict) --

    Metadata tags to apply to this job.

    • (string) --
      • (string) --
  • Timeout (integer) -- The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.
Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the job that you created.

Exceptions

  • GlueDataBrew.Client.exceptions.AccessDeniedException
  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
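
Example

A minimal sketch of creating a recipe job that writes Parquet output to Amazon S3; the job, dataset, and recipe names, the bucket, and the role ARN are hypothetical.

response = client.create_recipe_job(
    Name='sales-cleanup-job',
    DatasetName='sales-data',
    RecipeReference={
        'Name': 'sales-cleanup-recipe',
        'RecipeVersion': '1.0'
    },
    Outputs=[
        {
            'Format': 'PARQUET',
            'Location': {
                'Bucket': 'my-databrew-bucket',
                'Key': 'cleaned/sales/'
            },
            'Overwrite': True
        }
    ],
    RoleArn='arn:aws:iam::111122223333:role/DataBrewServiceRole'  # hypothetical role
)
print(response['Name'])
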
create_schedule(**kwargs)

Creates a new schedule for one or more DataBrew jobs. Jobs can be run at a specific date and time, or at regular intervals.

See also: AWS API Documentation

Request Syntax

response = client.create_schedule(
    JobNames=[
        'string',
    ],
    CronExpression='string',
    Tags={
        'string': 'string'
    },
    Name='string'
)
Parameters
  • JobNames (list) --

    The name or names of one or more jobs to be run.

    • (string) --
  • CronExpression (string) --

    [REQUIRED]

    The date or dates and time or times when the jobs are to be run. For more information, see Cron expressions in the Glue DataBrew Developer Guide.

  • Tags (dict) --

    Metadata tags to apply to this schedule.

    • (string) --
      • (string) --
  • Name (string) --

    [REQUIRED]

    A unique name for the schedule. Valid characters are alphanumeric (A-Z, a-z, 0-9), hyphen (-), period (.), and space.

Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the schedule that was created.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
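
Example

A minimal sketch of scheduling an existing job; the schedule and job names are hypothetical, and the cron expression is intended to run daily at 03:00 UTC, assuming the six-field format described in Cron expressions in the Glue DataBrew Developer Guide.

response = client.create_schedule(
    Name='nightly-sales-refresh',
    JobNames=[
        'sales-cleanup-job',
    ],
    CronExpression='Cron(0 3 * * ? *)'
)
print(response['Name'])
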
delete_dataset(**kwargs)

Deletes a dataset from DataBrew.

See also: AWS API Documentation

Request Syntax

response = client.delete_dataset(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the dataset to be deleted.

Return type
dict
Returns
Response Syntax
{
    'Name': 'string'
}

Response Structure

  • (dict) --
    • Name (string) --

      The name of the dataset that you deleted.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
delete_job(**kwargs)

Deletes the specified DataBrew job.

See also: AWS API Documentation

Request Syntax

response = client.delete_job(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the job to be deleted.

Return type
dict
Returns
Response Syntax
{
    'Name': 'string'
}

Response Structure

  • (dict) --
    • Name (string) --

      The name of the job that you deleted.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
delete_project(**kwargs)

Deletes an existing DataBrew project.

See also: AWS API Documentation

Request Syntax

response = client.delete_project(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the project to be deleted.

Return type
dict
Returns
Response Syntax
{
    'Name': 'string'
}

Response Structure

  • (dict) --
    • Name (string) --

      The name of the project that you deleted.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
delete_recipe_version(**kwargs)

Deletes a single version of a DataBrew recipe.

See also: AWS API Documentation

Request Syntax

response = client.delete_recipe_version(
    Name='string',
    RecipeVersion='string'
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the recipe.

  • RecipeVersion (string) --

    [REQUIRED]

    The version of the recipe to be deleted. You can specify a numeric version (X.Y) or LATEST_WORKING. LATEST_PUBLISHED is not supported.

Return type

dict

Returns

Response Syntax

{
    'Name': 'string',
    'RecipeVersion': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the recipe that was deleted.

    • RecipeVersion (string) --

      The version of the recipe that was deleted.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
delete_schedule(**kwargs)

Deletes the specified DataBrew schedule.

See also: AWS API Documentation

Request Syntax

response = client.delete_schedule(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the schedule to be deleted.

Return type
dict
Returns
Response Syntax
{
    'Name': 'string'
}

Response Structure

  • (dict) --
    • Name (string) --

      The name of the schedule that was deleted.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
describe_dataset(**kwargs)

Returns the definition of a specific DataBrew dataset.

See also: AWS API Documentation

Request Syntax

response = client.describe_dataset(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the dataset to be described.

Return type
dict
Returns
Response Syntax
{
    'CreatedBy': 'string',
    'CreateDate': datetime(2015, 1, 1),
    'Name': 'string',
    'Format': 'CSV'|'JSON'|'PARQUET'|'EXCEL',
    'FormatOptions': {
        'Json': {
            'MultiLine': True|False
        },
        'Excel': {
            'SheetNames': [
                'string',
            ],
            'SheetIndexes': [
                123,
            ],
            'HeaderRow': True|False
        },
        'Csv': {
            'Delimiter': 'string',
            'HeaderRow': True|False
        }
    },
    'Input': {
        'S3InputDefinition': {
            'Bucket': 'string',
            'Key': 'string'
        },
        'DataCatalogInputDefinition': {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'TempDirectory': {
                'Bucket': 'string',
                'Key': 'string'
            }
        },
        'DatabaseInputDefinition': {
            'GlueConnectionName': 'string',
            'DatabaseTableName': 'string',
            'TempDirectory': {
                'Bucket': 'string',
                'Key': 'string'
            }
        }
    },
    'LastModifiedDate': datetime(2015, 1, 1),
    'LastModifiedBy': 'string',
    'Source': 'S3'|'DATA-CATALOG'|'DATABASE',
    'PathOptions': {
        'LastModifiedDateCondition': {
            'Expression': 'string',
            'ValuesMap': {
                'string': 'string'
            }
        },
        'FilesLimit': {
            'MaxFiles': 123,
            'OrderedBy': 'LAST_MODIFIED_DATE',
            'Order': 'DESCENDING'|'ASCENDING'
        },
        'Parameters': {
            'string': {
                'Name': 'string',
                'Type': 'Datetime'|'Number'|'String',
                'DatetimeOptions': {
                    'Format': 'string',
                    'TimezoneOffset': 'string',
                    'LocaleCode': 'string'
                },
                'CreateColumn': True|False,
                'Filter': {
                    'Expression': 'string',
                    'ValuesMap': {
                        'string': 'string'
                    }
                }
            }
        }
    },
    'Tags': {
        'string': 'string'
    },
    'ResourceArn': 'string'
}

Response Structure

  • (dict) --
    • CreatedBy (string) --

      The identifier (user name) of the user who created the dataset.

    • CreateDate (datetime) --

      The date and time that the dataset was created.

    • Name (string) --

      The name of the dataset.

    • Format (string) --

      The file format of a dataset that is created from an Amazon S3 file or folder.

    • FormatOptions (dict) --

      Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.

      • Json (dict) --

        Options that define how JSON input is to be interpreted by DataBrew.

        • MultiLine (boolean) --

          A value that specifies whether JSON input contains embedded new line characters.

      • Excel (dict) --

        Options that define how Excel input is to be interpreted by DataBrew.

        • SheetNames (list) --

          One or more named sheets in the Excel file that will be included in the dataset.

          • (string) --
        • SheetIndexes (list) --

          One or more sheet numbers in the Excel file that will be included in the dataset.

          • (integer) --
        • HeaderRow (boolean) --

          A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

      • Csv (dict) --

        Options that define how CSV input is to be interpreted by DataBrew.

        • Delimiter (string) --

          A single character that specifies the delimiter being used in the CSV file.

        • HeaderRow (boolean) --

          A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

    • Input (dict) --

      Represents information on how DataBrew can find data, in either the Glue Data Catalog or Amazon S3.

      • S3InputDefinition (dict) --

        The Amazon S3 location where the data is stored.

        • Bucket (string) --

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

      • DataCatalogInputDefinition (dict) --

        The Glue Data Catalog parameters for the data.

        • CatalogId (string) --

          The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

        • DatabaseName (string) --

          The name of a database in the Data Catalog.

        • TableName (string) --

          The name of a database table in the Data Catalog. This table corresponds to a DataBrew dataset.

        • TempDirectory (dict) --

          Represents an Amazon S3 location where DataBrew can store intermediate results.

          • Bucket (string) --

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

      • DatabaseInputDefinition (dict) --

        Connection information for dataset input files stored in a database.

        • GlueConnectionName (string) --

          The Glue Connection that stores the connection information for the target database.

        • DatabaseTableName (string) --

          The table within the target database.

        • TempDirectory (dict) --

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

          • Bucket (string) --

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

    • LastModifiedDate (datetime) --

      The date and time that the dataset was last modified.

    • LastModifiedBy (string) --

      The identifier (user name) of the user who last modified the dataset.

    • Source (string) --

      The location of the data for this dataset, Amazon S3 or the Glue Data Catalog.

    • PathOptions (dict) --

      A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

      • LastModifiedDateCondition (dict) --

        If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.

        • Expression (string) --

          The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

        • ValuesMap (dict) --

          The map of substitution variable names to their values used in this filter expression.

          • (string) --
            • (string) --
      • FilesLimit (dict) --

        If provided, this structure imposes a limit on the number of files that should be selected.

        • MaxFiles (integer) --

          The number of Amazon S3 files to select.

        • OrderedBy (string) --

          The criterion to use for sorting Amazon S3 files before they are selected. By default, LAST_MODIFIED_DATE is used as the sorting criterion. Currently it's the only allowed value.

        • Order (string) --

          The sort order to apply to Amazon S3 files before they are selected. By default, DESCENDING order is used; that is, the most recent files are selected first. The other possible value is ASCENDING.

      • Parameters (dict) --

        A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.

        • (string) --
          • (dict) --

            Represents a dataset parameter that defines the type and conditions for a parameter in the Amazon S3 path of the dataset.

            • Name (string) --

              The name of the parameter that is used in the dataset's Amazon S3 path.

            • Type (string) --

              The type of the dataset parameter, which can be one of 'String', 'Number', or 'Datetime'.

            • DatetimeOptions (dict) --

              Additional parameter options such as a format and a timezone. Required for datetime parameters.

              • Format (string) --

                A required option that defines the datetime format used for a date parameter in the Amazon S3 path. It should use only supported datetime specifiers and separator characters; all literal a-z or A-Z characters should be escaped with single quotes, for example "MM.dd.yyyy-'at'-HH:mm".

              • TimezoneOffset (string) --

                An optional timezone offset for the datetime parameter value in the Amazon S3 path. It shouldn't be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.

              • LocaleCode (string) --

                Optional value for a non-US locale code, needed for correct interpretation of some date formats.

            • CreateColumn (boolean) --

              Optional boolean value that defines whether the captured value of this parameter should be used to create a new column in a dataset.

            • Filter (dict) --

              The optional filter expression structure to apply additional matching criteria to the parameter.

              • Expression (string) --

                The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

              • ValuesMap (dict) --

                The map of substitution variable names to their values used in this filter expression.

                • (string) --
                  • (string) --
    • Tags (dict) --

      Metadata tags associated with this dataset.

      • (string) --
        • (string) --
    • ResourceArn (string) --

      The Amazon Resource Name (ARN) of the dataset.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
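
Example

A minimal sketch of retrieving a dataset definition and inspecting where its data lives; the dataset name is hypothetical.

response = client.describe_dataset(
    Name='sales-data'
)
print(response['Name'], response.get('Source'))
# For an S3-backed dataset, the input definition carries the bucket and key.
s3_input = response['Input'].get('S3InputDefinition')
if s3_input:
    print(s3_input['Bucket'], s3_input.get('Key'))
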
describe_job(**kwargs)

Returns the definition of a specific DataBrew job.

See also: AWS API Documentation

Request Syntax

response = client.describe_job(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the job to be described.

Return type
dict
Returns
Response Syntax
{
    'CreateDate': datetime(2015, 1, 1),
    'CreatedBy': 'string',
    'DatasetName': 'string',
    'EncryptionKeyArn': 'string',
    'EncryptionMode': 'SSE-KMS'|'SSE-S3',
    'Name': 'string',
    'Type': 'PROFILE'|'RECIPE',
    'LastModifiedBy': 'string',
    'LastModifiedDate': datetime(2015, 1, 1),
    'LogSubscription': 'ENABLE'|'DISABLE',
    'MaxCapacity': 123,
    'MaxRetries': 123,
    'Outputs': [
        {
            'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
            'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
            'PartitionColumns': [
                'string',
            ],
            'Location': {
                'Bucket': 'string',
                'Key': 'string'
            },
            'Overwrite': True|False,
            'FormatOptions': {
                'Csv': {
                    'Delimiter': 'string'
                }
            }
        },
    ],
    'DataCatalogOutputs': [
        {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'S3Options': {
                'Location': {
                    'Bucket': 'string',
                    'Key': 'string'
                }
            },
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'Overwrite': True|False
        },
    ],
    'DatabaseOutputs': [
        {
            'GlueConnectionName': 'string',
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'DatabaseOutputMode': 'NEW_TABLE'
        },
    ],
    'ProjectName': 'string',
    'ProfileConfiguration': {
        'DatasetStatisticsConfiguration': {
            'IncludedStatistics': [
                'string',
            ],
            'Overrides': [
                {
                    'Statistic': 'string',
                    'Parameters': {
                        'string': 'string'
                    }
                },
            ]
        },
        'ProfileColumns': [
            {
                'Regex': 'string',
                'Name': 'string'
            },
        ],
        'ColumnStatisticsConfigurations': [
            {
                'Selectors': [
                    {
                        'Regex': 'string',
                        'Name': 'string'
                    },
                ],
                'Statistics': {
                    'IncludedStatistics': [
                        'string',
                    ],
                    'Overrides': [
                        {
                            'Statistic': 'string',
                            'Parameters': {
                                'string': 'string'
                            }
                        },
                    ]
                }
            },
        ]
    },
    'RecipeReference': {
        'Name': 'string',
        'RecipeVersion': 'string'
    },
    'ResourceArn': 'string',
    'RoleArn': 'string',
    'Tags': {
        'string': 'string'
    },
    'Timeout': 123,
    'JobSample': {
        'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
        'Size': 123
    }
}

Response Structure

  • (dict) --
    • CreateDate (datetime) --

      The date and time that the job was created.

    • CreatedBy (string) --

      The identifier (user name) of the user associated with the creation of the job.

    • DatasetName (string) --

      The dataset that the job acts upon.

    • EncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

    • EncryptionMode (string) --

      The encryption mode for the job, which can be one of the following:

      • SSE-KMS - Server-side encryption with keys managed by KMS.
      • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
    • Name (string) --

      The name of the job.

    • Type (string) --

      The job type, which must be one of the following:

      • PROFILE - The job analyzes the dataset to determine its size, data types, data distribution, and more.
      • RECIPE - The job applies one or more transformations to a dataset.
    • LastModifiedBy (string) --

      The identifier (user name) of the user who last modified the job.

    • LastModifiedDate (datetime) --

      The date and time that the job was last modified.

    • LogSubscription (string) --

      Indicates whether Amazon CloudWatch logging is enabled for this job.

    • MaxCapacity (integer) --

      The maximum number of compute nodes that DataBrew can consume when the job processes data.

    • MaxRetries (integer) --

      The maximum number of times to retry the job after a job run fails.

    • Outputs (list) --

      One or more artifacts that represent the output from running the job.

      • (dict) --

        Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

        • CompressionFormat (string) --

          The compression algorithm used to compress the output text of the job.

        • Format (string) --

          The data format of the output of the job.

        • PartitionColumns (list) --

          The names of one or more partition columns for the output of the job.

          • (string) --
        • Location (dict) --

          The location in Amazon S3 where the job writes its output.

          • Bucket (string) --

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

        • Overwrite (boolean) --

          A value that, if true, means that any data in the location specified for output is overwritten with new output.

        • FormatOptions (dict) --

          Represents options that define how DataBrew formats job output files.

          • Csv (dict) --

            Represents a set of options that define the structure of comma-separated value (CSV) job output.

            • Delimiter (string) --

              A single character that specifies the delimiter used to create CSV job output.

    • DataCatalogOutputs (list) --

      One or more artifacts that represent the Glue Data Catalog output from running the job.

      • (dict) --

        Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

        • CatalogId (string) --

          The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

        • DatabaseName (string) --

          The name of a database in the Data Catalog.

        • TableName (string) --

          The name of a table in the Data Catalog.

        • S3Options (dict) --

          Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

          • Location (dict) --

            Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

        • DatabaseOptions (dict) --

          Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

          • TempDirectory (dict) --

            Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

          • TableName (string) --

            A prefix for the name of a table DataBrew will create in the database.

        • Overwrite (boolean) --

          A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

    • DatabaseOutputs (list) --

      Represents a list of JDBC database output objects which defines the output destination for a DataBrew recipe job to write into.

      • (dict) --

        Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

        • GlueConnectionName (string) --

          The Glue connection that stores the connection information for the target database.

        • DatabaseOptions (dict) --

          Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

          • TempDirectory (dict) --

            Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

          • TableName (string) --

            A prefix for the name of a table DataBrew will create in the database.

        • DatabaseOutputMode (string) --

          The output mode to write into the database. Currently supported option: NEW_TABLE.

    • ProjectName (string) --

      The DataBrew project associated with this job.

    • ProfileConfiguration (dict) --

      Configuration for profile jobs. Used to select columns, do evaluations, and override default parameters of evaluations. When configuration is null, the profile job will run with default settings.

      • DatasetStatisticsConfiguration (dict) --

        Configuration for inter-column evaluations. Configuration can be used to select evaluations and override parameters of evaluations. When configuration is undefined, the profile job will run all supported inter-column evaluations.

        • IncludedStatistics (list) --

          List of included evaluations. When the list is undefined, all supported evaluations will be included.

          • (string) --
        • Overrides (list) --

          List of overrides for evaluations.

          • (dict) --

            Override of a particular evaluation for a profile job.

            • Statistic (string) --

              The name of an evaluation.

            • Parameters (dict) --

              A map that includes overrides of an evaluation’s parameters.

              • (string) --
                • (string) --
      • ProfileColumns (list) --

        List of column selectors. ProfileColumns can be used to select columns from the dataset. When ProfileColumns is undefined, the profile job will profile all supported columns.

        • (dict) --

          Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

          • Regex (string) --

            A regular expression for selecting a column from a dataset.

          • Name (string) --

            The name of a column from a dataset.

      • ColumnStatisticsConfigurations (list) --

        List of configurations for column evaluations. ColumnStatisticsConfigurations are used to select evaluations and override parameters of evaluations for particular columns. When ColumnStatisticsConfigurations is undefined, the profile job will profile all supported columns and run all supported evaluations.

        • (dict) --

          Configuration for column evaluations for a profile job. ColumnStatisticsConfiguration can be used to select evaluations and override parameters of evaluations for particular columns.

          • Selectors (list) --

            List of column selectors. Selectors can be used to select columns from the dataset. When selectors are undefined, configuration will be applied to all supported columns.

            • (dict) --

              Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

              • Regex (string) --

                A regular expression for selecting a column from a dataset.

              • Name (string) --

                The name of a column from a dataset.

          • Statistics (dict) --

            Configuration for evaluations. Statistics can be used to select evaluations and override parameters of evaluations.

            • IncludedStatistics (list) --

              List of included evaluations. When the list is undefined, all supported evaluations will be included.

              • (string) --
            • Overrides (list) --

              List of overrides for evaluations.

              • (dict) --

                Override of a particular evaluation for a profile job.

                • Statistic (string) --

                  The name of an evaluation.

                • Parameters (dict) --

                  A map that includes overrides of an evaluation’s parameters.

                  • (string) --
                    • (string) --
    • RecipeReference (dict) --

      Represents the name and version of a DataBrew recipe.

      • Name (string) --

        The name of the recipe.

      • RecipeVersion (string) --

        The identifier for the version for the recipe.

    • ResourceArn (string) --

      The Amazon Resource Name (ARN) of the job.

    • RoleArn (string) --

      The ARN of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

    • Tags (dict) --

      Metadata tags associated with this job.

      • (string) --
        • (string) --
    • Timeout (integer) --

      The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT .

    • JobSample (dict) --

      Sample configuration for profile jobs only. Determines the number of rows on which the profile job will be executed.

      • Mode (string) --

        A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

        • FULL_DATASET - The profile job is run on the entire dataset.
        • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
      • Size (integer) --

        The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

        Long.MAX_VALUE = 9223372036854775807

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
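
For example, a minimal sketch of reading this response with the databrew client created at the top of this page, assuming the operation (describe_job) takes the job Name like the other describe_* calls; 'my-recipe-job' is a placeholder for an existing job.

# Placeholder job name -- replace with an existing DataBrew job.
response = client.describe_job(Name='my-recipe-job')

print(response['Type'])      # 'PROFILE' or 'RECIPE'
print(response['RoleArn'])   # IAM role assumed when the job runs
for output in response.get('Outputs', []):
    loc = output['Location']
    print(f"s3://{loc['Bucket']}/{loc['Key']}")
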
describe_job_run(**kwargs)

Represents one run of a DataBrew job.

See also: AWS API Documentation

Request Syntax

response = client.describe_job_run(
    Name='string',
    RunId='string'
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the job being processed during this run.

  • RunId (string) --

    [REQUIRED]

    The unique identifier of the job run.

Return type

dict

Returns

Response Syntax

{
    'Attempt': 123,
    'CompletedOn': datetime(2015, 1, 1),
    'DatasetName': 'string',
    'ErrorMessage': 'string',
    'ExecutionTime': 123,
    'JobName': 'string',
    'ProfileConfiguration': {
        'DatasetStatisticsConfiguration': {
            'IncludedStatistics': [
                'string',
            ],
            'Overrides': [
                {
                    'Statistic': 'string',
                    'Parameters': {
                        'string': 'string'
                    }
                },
            ]
        },
        'ProfileColumns': [
            {
                'Regex': 'string',
                'Name': 'string'
            },
        ],
        'ColumnStatisticsConfigurations': [
            {
                'Selectors': [
                    {
                        'Regex': 'string',
                        'Name': 'string'
                    },
                ],
                'Statistics': {
                    'IncludedStatistics': [
                        'string',
                    ],
                    'Overrides': [
                        {
                            'Statistic': 'string',
                            'Parameters': {
                                'string': 'string'
                            }
                        },
                    ]
                }
            },
        ]
    },
    'RunId': 'string',
    'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT',
    'LogSubscription': 'ENABLE'|'DISABLE',
    'LogGroupName': 'string',
    'Outputs': [
        {
            'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
            'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
            'PartitionColumns': [
                'string',
            ],
            'Location': {
                'Bucket': 'string',
                'Key': 'string'
            },
            'Overwrite': True|False,
            'FormatOptions': {
                'Csv': {
                    'Delimiter': 'string'
                }
            }
        },
    ],
    'DataCatalogOutputs': [
        {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'S3Options': {
                'Location': {
                    'Bucket': 'string',
                    'Key': 'string'
                }
            },
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'Overwrite': True|False
        },
    ],
    'DatabaseOutputs': [
        {
            'GlueConnectionName': 'string',
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'DatabaseOutputMode': 'NEW_TABLE'
        },
    ],
    'RecipeReference': {
        'Name': 'string',
        'RecipeVersion': 'string'
    },
    'StartedBy': 'string',
    'StartedOn': datetime(2015, 1, 1),
    'JobSample': {
        'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
        'Size': 123
    }
}

Response Structure

  • (dict) --

    • Attempt (integer) --

      The number of times that DataBrew has attempted to run the job.

    • CompletedOn (datetime) --

      The date and time when the job completed processing.

    • DatasetName (string) --

      The name of the dataset for the job to process.

    • ErrorMessage (string) --

      A message indicating an error (if any) that was encountered when the job ran.

    • ExecutionTime (integer) --

      The amount of time, in seconds, during which the job run consumed resources.

    • JobName (string) --

      The name of the job being processed during this run.

    • ProfileConfiguration (dict) --

      Configuration for profile jobs. Used to select columns, do evaluations, and override default parameters of evaluations. When configuration is null, the profile job will run with default settings.

      • DatasetStatisticsConfiguration (dict) --

        Configuration for inter-column evaluations. Configuration can be used to select evaluations and override parameters of evaluations. When configuration is undefined, the profile job will run all supported inter-column evaluations.

        • IncludedStatistics (list) --

          List of included evaluations. When the list is undefined, all supported evaluations will be included.

          • (string) --
        • Overrides (list) --

          List of overrides for evaluations.

          • (dict) --

            Override of a particular evaluation for a profile job.

            • Statistic (string) --

              The name of an evaluation.

            • Parameters (dict) --

              A map that includes overrides of an evaluation’s parameters.

              • (string) --
                • (string) --
      • ProfileColumns (list) --

        List of column selectors. ProfileColumns can be used to select columns from the dataset. When ProfileColumns is undefined, the profile job will profile all supported columns.

        • (dict) --

          Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

          • Regex (string) --

            A regular expression for selecting a column from a dataset.

          • Name (string) --

            The name of a column from a dataset.

      • ColumnStatisticsConfigurations (list) --

        List of configurations for column evaluations. ColumnStatisticsConfigurations are used to select evaluations and override parameters of evaluations for particular columns. When ColumnStatisticsConfigurations is undefined, the profile job will profile all supported columns and run all supported evaluations.

        • (dict) --

          Configuration for column evaluations for a profile job. ColumnStatisticsConfiguration can be used to select evaluations and override parameters of evaluations for particular columns.

          • Selectors (list) --

            List of column selectors. Selectors can be used to select columns from the dataset. When selectors are undefined, configuration will be applied to all supported columns.

            • (dict) --

              Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

              • Regex (string) --

                A regular expression for selecting a column from a dataset.

              • Name (string) --

                The name of a column from a dataset.

          • Statistics (dict) --

            Configuration for evaluations. Statistics can be used to select evaluations and override parameters of evaluations.

            • IncludedStatistics (list) --

              List of included evaluations. When the list is undefined, all supported evaluations will be included.

              • (string) --
            • Overrides (list) --

              List of overrides for evaluations.

              • (dict) --

                Override of a particular evaluation for a profile job.

                • Statistic (string) --

                  The name of an evaluation.

                • Parameters (dict) --

                  A map that includes overrides of an evaluation’s parameters.

                  • (string) --
                    • (string) --
    • RunId (string) --

      The unique identifier of the job run.

    • State (string) --

      The current state of the job run entity itself.

    • LogSubscription (string) --

      The current status of Amazon CloudWatch logging for the job run.

    • LogGroupName (string) --

      The name of an Amazon CloudWatch log group, where the job writes diagnostic messages when it runs.

    • Outputs (list) --

      One or more output artifacts from a job run.

      • (dict) --

        Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

        • CompressionFormat (string) --

          The compression algorithm used to compress the output text of the job.

        • Format (string) --

          The data format of the output of the job.

        • PartitionColumns (list) --

          The names of one or more partition columns for the output of the job.

          • (string) --
        • Location (dict) --

          The location in Amazon S3 where the job writes its output.

          • Bucket (string) --

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

        • Overwrite (boolean) --

          A value that, if true, means that any data in the location specified for output is overwritten with new output.

        • FormatOptions (dict) --

          Represents options that define how DataBrew formats job output files.

          • Csv (dict) --

            Represents a set of options that define the structure of comma-separated value (CSV) job output.

            • Delimiter (string) --

              A single character that specifies the delimiter used to create CSV job output.

    • DataCatalogOutputs (list) --

      One or more artifacts that represent the Glue Data Catalog output from running the job.

      • (dict) --

        Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

        • CatalogId (string) --

          The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

        • DatabaseName (string) --

          The name of a database in the Data Catalog.

        • TableName (string) --

          The name of a table in the Data Catalog.

        • S3Options (dict) --

          Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

          • Location (dict) --

            Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

        • DatabaseOptions (dict) --

          Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

          • TempDirectory (dict) --

            Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

          • TableName (string) --

            A prefix for the name of a table DataBrew will create in the database.

        • Overwrite (boolean) --

          A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

    • DatabaseOutputs (list) --

      Represents a list of JDBC database output objects which defines the output destination for a DataBrew recipe job to write into.

      • (dict) --

        Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

        • GlueConnectionName (string) --

          The Glue connection that stores the connection information for the target database.

        • DatabaseOptions (dict) --

          Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

          • TempDirectory (dict) --

            Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

          • TableName (string) --

            A prefix for the name of a table DataBrew will create in the database.

        • DatabaseOutputMode (string) --

          The output mode to write into the database. Currently supported option: NEW_TABLE.

    • RecipeReference (dict) --

      Represents the name and version of a DataBrew recipe.

      • Name (string) --

        The name of the recipe.

      • RecipeVersion (string) --

        The identifier for the version for the recipe.

    • StartedBy (string) --

      The Amazon Resource Name (ARN) of the user who started the job run.

    • StartedOn (datetime) --

      The date and time when the job run began.

    • JobSample (dict) --

      Sample configuration for profile jobs only. Determines the number of rows on which the profile job will be executed. If a JobSample value is not provided, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.

      • Mode (string) --

        A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

        • FULL_DATASET - The profile job is run on the entire dataset.
        • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
      • Size (integer) --

        The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

        Long.MAX_VALUE = 9223372036854775807

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
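
For example, a minimal sketch of checking a run's status with the databrew client created at the top of this page; the job name and RunId are placeholders (a RunId typically comes from start_job_run or list_job_runs).

# Placeholders -- replace with a real job name and run identifier.
response = client.describe_job_run(
    Name='my-profile-job',
    RunId='db_1a2b3c4d5e6f7890'
)
print(response['State'])                  # e.g. RUNNING, SUCCEEDED, FAILED
if response['State'] == 'FAILED':
    print(response.get('ErrorMessage'))   # reason for the failure, if any
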
describe_project(**kwargs)

Returns the definition of a specific DataBrew project.

See also: AWS API Documentation

Request Syntax

response = client.describe_project(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the project to be described.

Return type
dict
Returns
Response Syntax
{
    'CreateDate': datetime(2015, 1, 1),
    'CreatedBy': 'string',
    'DatasetName': 'string',
    'LastModifiedDate': datetime(2015, 1, 1),
    'LastModifiedBy': 'string',
    'Name': 'string',
    'RecipeName': 'string',
    'ResourceArn': 'string',
    'Sample': {
        'Size': 123,
        'Type': 'FIRST_N'|'LAST_N'|'RANDOM'
    },
    'RoleArn': 'string',
    'Tags': {
        'string': 'string'
    },
    'SessionStatus': 'ASSIGNED'|'FAILED'|'INITIALIZING'|'PROVISIONING'|'READY'|'RECYCLING'|'ROTATING'|'TERMINATED'|'TERMINATING'|'UPDATING',
    'OpenedBy': 'string',
    'OpenDate': datetime(2015, 1, 1)
}

Response Structure

  • (dict) --
    • CreateDate (datetime) --

      The date and time that the project was created.

    • CreatedBy (string) --

      The identifier (user name) of the user who created the project.

    • DatasetName (string) --

      The dataset associated with the project.

    • LastModifiedDate (datetime) --

      The date and time that the project was last modified.

    • LastModifiedBy (string) --

      The identifier (user name) of the user who last modified the project.

    • Name (string) --

      The name of the project.

    • RecipeName (string) --

      The recipe associated with this project.

    • ResourceArn (string) --

      The Amazon Resource Name (ARN) of the project.

    • Sample (dict) --

      Represents the sample size and sampling type for DataBrew to use for interactive data analysis.

      • Size (integer) --

        The number of rows in the sample.

      • Type (string) --

        The way in which DataBrew obtains rows from a dataset.

    • RoleArn (string) --

      The ARN of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

    • Tags (dict) --

      Metadata tags associated with this project.

      • (string) --
        • (string) --
    • SessionStatus (string) --

      Describes the current state of the session:

      • PROVISIONING - allocating resources for the session.
      • INITIALIZING - getting the session ready for first use.
      • ASSIGNED - the session is ready for use.
    • OpenedBy (string) --

      The identifier (user name) of the user that opened the project for use.

    • OpenDate (datetime) --

      The date and time when the project was opened.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
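
For example, a minimal sketch using the databrew client created at the top of this page; 'my-project' is a placeholder project name.

# Placeholder project name -- replace with an existing DataBrew project.
response = client.describe_project(Name='my-project')

print(response['DatasetName'], response['RecipeName'])
print(response.get('SessionStatus'))   # e.g. READY once an interactive session is provisioned
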
describe_recipe(**kwargs)

Returns the definition of a specific DataBrew recipe corresponding to a particular version.

See also: AWS API Documentation

Request Syntax

response = client.describe_recipe(
    Name='string',
    RecipeVersion='string'
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the recipe to be described.

  • RecipeVersion (string) -- The recipe version identifier. If this parameter isn't specified, then the latest published version is returned.
Return type

dict

Returns

Response Syntax

{
    'CreatedBy': 'string',
    'CreateDate': datetime(2015, 1, 1),
    'LastModifiedBy': 'string',
    'LastModifiedDate': datetime(2015, 1, 1),
    'ProjectName': 'string',
    'PublishedBy': 'string',
    'PublishedDate': datetime(2015, 1, 1),
    'Description': 'string',
    'Name': 'string',
    'Steps': [
        {
            'Action': {
                'Operation': 'string',
                'Parameters': {
                    'string': 'string'
                }
            },
            'ConditionExpressions': [
                {
                    'Condition': 'string',
                    'Value': 'string',
                    'TargetColumn': 'string'
                },
            ]
        },
    ],
    'Tags': {
        'string': 'string'
    },
    'ResourceArn': 'string',
    'RecipeVersion': 'string'
}

Response Structure

  • (dict) --

    • CreatedBy (string) --

      The identifier (user name) of the user who created the recipe.

    • CreateDate (datetime) --

      The date and time that the recipe was created.

    • LastModifiedBy (string) --

      The identifier (user name) of the user who last modified the recipe.

    • LastModifiedDate (datetime) --

      The date and time that the recipe was last modified.

    • ProjectName (string) --

      The name of the project associated with this recipe.

    • PublishedBy (string) --

      The identifier (user name) of the user who last published the recipe.

    • PublishedDate (datetime) --

      The date and time when the recipe was last published.

    • Description (string) --

      The description of the recipe.

    • Name (string) --

      The name of the recipe.

    • Steps (list) --

      One or more steps to be performed by the recipe. Each step consists of an action, and the conditions under which the action should succeed.

      • (dict) --

        Represents a single step from a DataBrew recipe to be performed.

        • Action (dict) --

          The particular action to be performed in the recipe step.

          • Operation (string) --

            The name of a valid DataBrew transformation to be performed on the data.

          • Parameters (dict) --

            Contextual parameters for the transformation.

            • (string) --
              • (string) --
        • ConditionExpressions (list) --

          One or more conditions that must be met for the recipe step to succeed.

          Note

          All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

          • (dict) --

            Represents an individual condition that evaluates to true or false.

            Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

            If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

            • Condition (string) --

              A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide .

            • Value (string) --

              A value that the condition must evaluate to for the condition to succeed.

            • TargetColumn (string) --

              A column to apply this condition to.

    • Tags (dict) --

      Metadata tags associated with this recipe.

      • (string) --
        • (string) --
    • ResourceArn (string) --

      The ARN of the recipe.

    • RecipeVersion (string) --

      The recipe version identifier.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
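
For example, a minimal sketch that walks the recipe steps using the databrew client created at the top of this page; the recipe name and version are placeholders.

# Placeholders -- replace with an existing recipe; omit RecipeVersion for the latest published version.
response = client.describe_recipe(
    Name='my-recipe',
    RecipeVersion='1.1'
)
for step in response['Steps']:
    action = step['Action']
    print(action['Operation'], action.get('Parameters', {}))
    for cond in step.get('ConditionExpressions', []):
        print('  condition:', cond['Condition'], cond.get('Value'), 'on', cond.get('TargetColumn'))
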
describe_schedule(**kwargs)

Returns the definition of a specific DataBrew schedule.

See also: AWS API Documentation

Request Syntax

response = client.describe_schedule(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the schedule to be described.

Return type
dict
Returns
Response Syntax
{
    'CreateDate': datetime(2015, 1, 1),
    'CreatedBy': 'string',
    'JobNames': [
        'string',
    ],
    'LastModifiedBy': 'string',
    'LastModifiedDate': datetime(2015, 1, 1),
    'ResourceArn': 'string',
    'CronExpression': 'string',
    'Tags': {
        'string': 'string'
    },
    'Name': 'string'
}

Response Structure

  • (dict) --
    • CreateDate (datetime) --

      The date and time that the schedule was created.

    • CreatedBy (string) --

      The identifier (user name) of the user who created the schedule.

    • JobNames (list) --

      The names of one or more jobs to be run by the schedule.

      • (string) --
    • LastModifiedBy (string) --

      The identifier (user name) of the user who last modified the schedule.

    • LastModifiedDate (datetime) --

      The date and time that the schedule was last modified.

    • ResourceArn (string) --

      The Amazon Resource Name (ARN) of the schedule.

    • CronExpression (string) --

      The date or dates and time or times when the jobs are to be run for the schedule. For more information, see Cron expressions in the Glue DataBrew Developer Guide .

    • Tags (dict) --

      Metadata tags associated with this schedule.

      • (string) --
        • (string) --
    • Name (string) --

      The name of the schedule.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
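
For example, a minimal sketch using the databrew client created at the top of this page; 'my-schedule' is a placeholder schedule name.

# Placeholder schedule name -- replace with an existing DataBrew schedule.
response = client.describe_schedule(Name='my-schedule')

print(response['CronExpression'])   # e.g. 'cron(0 12 * * ? *)'
for job_name in response['JobNames']:
    print(job_name)
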
generate_presigned_url(ClientMethod, Params=None, ExpiresIn=3600, HttpMethod=None)

Generate a presigned url given a client, its method, and arguments

Parameters
  • ClientMethod (string) -- The client method to presign for
  • Params (dict) -- The parameters normally passed to ClientMethod.
  • ExpiresIn (int) -- The number of seconds the presigned url is valid for. By default it expires in an hour (3600 seconds)
  • HttpMethod (string) -- The http method to use on the generated url. By default, the http method is whatever is used in the method's model.
Returns

The presigned url
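
For example, a purely illustrative sketch using the databrew client created at the top of this page; the choice of list_jobs and its parameters is arbitrary.

url = client.generate_presigned_url(
    ClientMethod='list_jobs',     # any client method name
    Params={'MaxResults': 10},    # the parameters normally passed to that method
    ExpiresIn=900                 # URL valid for 15 minutes instead of the default hour
)
print(url)
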

get_paginator(operation_name)

Create a paginator for an operation.

Parameters
operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Raises OperationNotPageableError
Raised if the operation is not pageable. You can use the client.can_paginate method to check if an operation is pageable.
Return type
L{botocore.paginate.Paginator}
Returns
A paginator object.
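
For example, a minimal sketch using the databrew client created at the top of this page, assuming the list_datasets operation is pageable (which can be confirmed with can_paginate).

# Assumption: list_datasets supports pagination; verify with client.can_paginate('list_datasets').
paginator = client.get_paginator('list_datasets')
for page in paginator.paginate():
    for dataset in page['Datasets']:
        print(dataset['Name'], dataset.get('Source'))
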
get_waiter(waiter_name)

Returns an object that can wait for some condition.

Parameters
waiter_name (str) -- The name of the waiter to get. See the waiters section of the service docs for a list of available waiters.
Returns
The specified waiter object.
Return type
botocore.waiter.Waiter
list_datasets(**kwargs)

Lists all of the DataBrew datasets.

See also: AWS API Documentation

Request Syntax

response = client.list_datasets(
    MaxResults=123,
    NextToken='string'
)
Parameters
  • MaxResults (integer) -- The maximum number of results to return in this request.
  • NextToken (string) -- The token returned by a previous call to retrieve the next set of results.
Return type

dict

Returns

Response Syntax

{
    'Datasets': [
        {
            'AccountId': 'string',
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'Name': 'string',
            'Format': 'CSV'|'JSON'|'PARQUET'|'EXCEL',
            'FormatOptions': {
                'Json': {
                    'MultiLine': True|False
                },
                'Excel': {
                    'SheetNames': [
                        'string',
                    ],
                    'SheetIndexes': [
                        123,
                    ],
                    'HeaderRow': True|False
                },
                'Csv': {
                    'Delimiter': 'string',
                    'HeaderRow': True|False
                }
            },
            'Input': {
                'S3InputDefinition': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'DataCatalogInputDefinition': {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'TempDirectory': {
                        'Bucket': 'string',
                        'Key': 'string'
                    }
                },
                'DatabaseInputDefinition': {
                    'GlueConnectionName': 'string',
                    'DatabaseTableName': 'string',
                    'TempDirectory': {
                        'Bucket': 'string',
                        'Key': 'string'
                    }
                }
            },
            'LastModifiedDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'Source': 'S3'|'DATA-CATALOG'|'DATABASE',
            'PathOptions': {
                'LastModifiedDateCondition': {
                    'Expression': 'string',
                    'ValuesMap': {
                        'string': 'string'
                    }
                },
                'FilesLimit': {
                    'MaxFiles': 123,
                    'OrderedBy': 'LAST_MODIFIED_DATE',
                    'Order': 'DESCENDING'|'ASCENDING'
                },
                'Parameters': {
                    'string': {
                        'Name': 'string',
                        'Type': 'Datetime'|'Number'|'String',
                        'DatetimeOptions': {
                            'Format': 'string',
                            'TimezoneOffset': 'string',
                            'LocaleCode': 'string'
                        },
                        'CreateColumn': True|False,
                        'Filter': {
                            'Expression': 'string',
                            'ValuesMap': {
                                'string': 'string'
                            }
                        }
                    }
                }
            },
            'Tags': {
                'string': 'string'
            },
            'ResourceArn': 'string'
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • Datasets (list) --

      A list of datasets that are defined.

      • (dict) --

        Represents a dataset that can be processed by DataBrew.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the dataset.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the dataset.

        • CreateDate (datetime) --

          The date and time that the dataset was created.

        • Name (string) --

          The unique name of the dataset.

        • Format (string) --

          The file format of a dataset that is created from an Amazon S3 file or folder.

        • FormatOptions (dict) --

          A set of options that define how DataBrew interprets the data in the dataset.

          • Json (dict) --

            Options that define how JSON input is to be interpreted by DataBrew.

            • MultiLine (boolean) --

              A value that specifies whether JSON input contains embedded new line characters.

          • Excel (dict) --

            Options that define how Excel input is to be interpreted by DataBrew.

            • SheetNames (list) --

              One or more named sheets in the Excel file that will be included in the dataset.

              • (string) --
            • SheetIndexes (list) --

              One or more sheet numbers in the Excel file that will be included in the dataset.

              • (integer) --
            • HeaderRow (boolean) --

              A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

          • Csv (dict) --

            Options that define how CSV input is to be interpreted by DataBrew.

            • Delimiter (string) --

              A single character that specifies the delimiter being used in the CSV file.

            • HeaderRow (boolean) --

              A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

        • Input (dict) --

          Information on how DataBrew can find the dataset, in either the Glue Data Catalog or Amazon S3.

          • S3InputDefinition (dict) --

            The Amazon S3 location where the data is stored.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

          • DataCatalogInputDefinition (dict) --

            The Glue Data Catalog parameters for the data.

            • CatalogId (string) --

              The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

            • DatabaseName (string) --

              The name of a database in the Data Catalog.

            • TableName (string) --

              The name of a database table in the Data Catalog. This table corresponds to a DataBrew dataset.

            • TempDirectory (dict) --

              Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

          • DatabaseInputDefinition (dict) --

            Connection information for dataset input files stored in a database.

            • GlueConnectionName (string) --

              The Glue Connection that stores the connection information for the target database.

            • DatabaseTableName (string) --

              The table within the target database.

            • TempDirectory (dict) --

              Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

        • LastModifiedDate (datetime) --

          The last modification date and time of the dataset.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the dataset.

        • Source (string) --

          The location of the data for the dataset, either Amazon S3 or the Glue Data Catalog.

        • PathOptions (dict) --

          A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

          • LastModifiedDateCondition (dict) --

            If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.

            • Expression (string) --

              The expression, which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

            • ValuesMap (dict) --

              The map of substitution variable names to their values used in this filter expression.

              • (string) --
                • (string) --
          • FilesLimit (dict) --

            If provided, this structure imposes a limit on the number of files to be selected.

            • MaxFiles (integer) --

              The number of Amazon S3 files to select.

            • OrderedBy (string) --

              The criterion to use for sorting Amazon S3 files before selecting them. The default, and currently the only allowed value, is LAST_MODIFIED_DATE.

            • Order (string) --

              The sort order to apply to Amazon S3 files before selecting them. The default is DESCENDING, meaning that the most recent files are selected first. The other possible value is ASCENDING.

          • Parameters (dict) --

            A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.

            • (string) --

              • (dict) --

                Represents a dataset parameter that defines the type and conditions for a parameter in the Amazon S3 path of the dataset.

                • Name (string) --

                  The name of the parameter that is used in the dataset's Amazon S3 path.

                • Type (string) --

                  The type of the dataset parameter, which can be one of 'String', 'Number', or 'Datetime'.

                • DatetimeOptions (dict) --

                  Additional parameter options such as a format and a timezone. Required for datetime parameters.

                  • Format (string) --

                    A required option that defines the datetime format used for a date parameter in the Amazon S3 path. Only supported datetime specifiers and separation characters should be used; all literal a-z or A-Z characters should be escaped with single quotes, for example "MM.dd.yyyy-'at'-HH:mm".

                  • TimezoneOffset (string) --

                    An optional timezone offset for the datetime parameter value in the Amazon S3 path. It shouldn't be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.

                  • LocaleCode (string) --

                    Optional value for a non-US locale code, needed for correct interpretation of some date formats.

                • CreateColumn (boolean) --

                  Optional boolean value that defines whether the captured value of this parameter should be used to create a new column in a dataset.

                • Filter (dict) --

                  The optional filter expression structure to apply additional matching criteria to the parameter.

                  • Expression (string) --

                    The expression, which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

                  • ValuesMap (dict) --

                    The map of substitution variable names to their values used in this filter expression.

                    • (string) --
                      • (string) --
        • Tags (dict) --

          Metadata tags that have been applied to the dataset.

          • (string) --
            • (string) --
        • ResourceArn (string) --

          The unique Amazon Resource Name (ARN) for the dataset.

    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
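
For example, a minimal sketch that pages through every dataset manually by passing the NextToken from each response back into the next call, using the databrew client created at the top of this page.

kwargs = {'MaxResults': 25}
while True:
    page = client.list_datasets(**kwargs)
    for dataset in page['Datasets']:
        print(dataset['Name'], dataset.get('Format'), dataset.get('Source'))
    if 'NextToken' not in page:
        break                                  # no more pages
    kwargs['NextToken'] = page['NextToken']    # continue from where the last call stopped
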
list_job_runs(**kwargs)

Lists all of the previous runs of a particular DataBrew job.

See also: AWS API Documentation

Request Syntax

response = client.list_job_runs(
    Name='string',
    MaxResults=123,
    NextToken='string'
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the job.

  • MaxResults (integer) -- The maximum number of results to return in this request.
  • NextToken (string) -- The token returned by a previous call to retrieve the next set of results.
Return type

dict

Returns

Response Syntax

{
    'JobRuns': [
        {
            'Attempt': 123,
            'CompletedOn': datetime(2015, 1, 1),
            'DatasetName': 'string',
            'ErrorMessage': 'string',
            'ExecutionTime': 123,
            'JobName': 'string',
            'RunId': 'string',
            'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT',
            'LogSubscription': 'ENABLE'|'DISABLE',
            'LogGroupName': 'string',
            'Outputs': [
                {
                    'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
                    'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
                    'PartitionColumns': [
                        'string',
                    ],
                    'Location': {
                        'Bucket': 'string',
                        'Key': 'string'
                    },
                    'Overwrite': True|False,
                    'FormatOptions': {
                        'Csv': {
                            'Delimiter': 'string'
                        }
                    }
                },
            ],
            'DataCatalogOutputs': [
                {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'S3Options': {
                        'Location': {
                            'Bucket': 'string',
                            'Key': 'string'
                        }
                    },
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'Overwrite': True|False
                },
            ],
            'DatabaseOutputs': [
                {
                    'GlueConnectionName': 'string',
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'DatabaseOutputMode': 'NEW_TABLE'
                },
            ],
            'RecipeReference': {
                'Name': 'string',
                'RecipeVersion': 'string'
            },
            'StartedBy': 'string',
            'StartedOn': datetime(2015, 1, 1),
            'JobSample': {
                'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
                'Size': 123
            }
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • JobRuns (list) --

      A list of job runs that have occurred for the specified job.

      • (dict) --

        Represents one run of a DataBrew job.

        • Attempt (integer) --

          The number of times that DataBrew has attempted to run the job.

        • CompletedOn (datetime) --

          The date and time when the job completed processing.

        • DatasetName (string) --

          The name of the dataset for the job to process.

        • ErrorMessage (string) --

          A message indicating an error (if any) that was encountered when the job ran.

        • ExecutionTime (integer) --

          The amount of time, in seconds, during which a job run consumed resources.

        • JobName (string) --

          The name of the job being processed during this run.

        • RunId (string) --

          The unique identifier of the job run.

        • State (string) --

          The current state of the job run entity itself.

        • LogSubscription (string) --

          The current status of Amazon CloudWatch logging for the job run.

        • LogGroupName (string) --

          The name of an Amazon CloudWatch log group, where the job writes diagnostic messages when it runs.

        • Outputs (list) --

          One or more output artifacts from a job run.

          • (dict) --

            Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

            • CompressionFormat (string) --

              The compression algorithm used to compress the output text of the job.

            • Format (string) --

              The data format of the output of the job.

            • PartitionColumns (list) --

              The names of one or more partition columns for the output of the job.

              • (string) --
            • Location (dict) --

              The location in Amazon S3 where the job writes its output.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output.

            • FormatOptions (dict) --

              Represents options that define how DataBrew formats job output files.

              • Csv (dict) --

                Represents a set of options that define the structure of comma-separated value (CSV) job output.

                • Delimiter (string) --

                  A single character that specifies the delimiter used to create CSV job output.

        • DataCatalogOutputs (list) --

          One or more artifacts that represent the Glue Data Catalog output from running the job.

          • (dict) --

            Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

            • CatalogId (string) --

              The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

            • DatabaseName (string) --

              The name of a database in the Data Catalog.

            • TableName (string) --

              The name of a table in the Data Catalog.

            • S3Options (dict) --

              Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

              • Location (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

        • DatabaseOutputs (list) --

          Represents a list of JDBC database output objects which defines the output destination for a DataBrew recipe job to write into.

          • (dict) --

            Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

            • GlueConnectionName (string) --

              The Glue connection that stores the connection information for the target database.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • DatabaseOutputMode (string) --

              The output mode to write into the database. Currently supported option: NEW_TABLE.

        • RecipeReference (dict) --

          The set of steps processed by the job.

          • Name (string) --

            The name of the recipe.

          • RecipeVersion (string) --

            The identifier for the version for the recipe.

        • StartedBy (string) --

          The Amazon Resource Name (ARN) of the user who initiated the job run.

        • StartedOn (datetime) --

          The date and time when the job run began.

        • JobSample (dict) --

          A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample value isn't provided, the default is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.

          • Mode (string) --

            A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

            • FULL_DATASET - The profile job is run on the entire dataset.
            • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
          • Size (integer) --

            The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

            Long.MAX_VALUE = 9223372036854775807

    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
list_jobs(**kwargs)

Lists all of the DataBrew jobs that are defined.

See also: AWS API Documentation

Request Syntax

response = client.list_jobs(
    DatasetName='string',
    MaxResults=123,
    NextToken='string',
    ProjectName='string'
)
Parameters
  • DatasetName (string) -- The name of a dataset. Using this parameter indicates to return only those jobs that act on the specified dataset.
  • MaxResults (integer) -- The maximum number of results to return in this request.
  • NextToken (string) -- A token generated by DataBrew that specifies where to continue pagination if a previous request was truncated. To get the next set of pages, pass in the NextToken value from the response object of the previous page call.
  • ProjectName (string) -- The name of a project. Using this parameter indicates to return only those jobs that are associated with the specified project.
Return type

dict

Returns

Response Syntax

{
    'Jobs': [
        {
            'AccountId': 'string',
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'DatasetName': 'string',
            'EncryptionKeyArn': 'string',
            'EncryptionMode': 'SSE-KMS'|'SSE-S3',
            'Name': 'string',
            'Type': 'PROFILE'|'RECIPE',
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'LogSubscription': 'ENABLE'|'DISABLE',
            'MaxCapacity': 123,
            'MaxRetries': 123,
            'Outputs': [
                {
                    'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
                    'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
                    'PartitionColumns': [
                        'string',
                    ],
                    'Location': {
                        'Bucket': 'string',
                        'Key': 'string'
                    },
                    'Overwrite': True|False,
                    'FormatOptions': {
                        'Csv': {
                            'Delimiter': 'string'
                        }
                    }
                },
            ],
            'DataCatalogOutputs': [
                {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'S3Options': {
                        'Location': {
                            'Bucket': 'string',
                            'Key': 'string'
                        }
                    },
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'Overwrite': True|False
                },
            ],
            'DatabaseOutputs': [
                {
                    'GlueConnectionName': 'string',
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'DatabaseOutputMode': 'NEW_TABLE'
                },
            ],
            'ProjectName': 'string',
            'RecipeReference': {
                'Name': 'string',
                'RecipeVersion': 'string'
            },
            'ResourceArn': 'string',
            'RoleArn': 'string',
            'Timeout': 123,
            'Tags': {
                'string': 'string'
            },
            'JobSample': {
                'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
                'Size': 123
            }
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • Jobs (list) --

      A list of jobs that are defined.

      • (dict) --

        Represents all of the attributes of a DataBrew job.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the job.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the job.

        • CreateDate (datetime) --

          The date and time that the job was created.

        • DatasetName (string) --

          A dataset that the job is to process.

        • EncryptionKeyArn (string) --

          The Amazon Resource Name (ARN) of an encryption key that is used to protect the job output. For more information, see Encrypting data written by DataBrew jobs.

        • EncryptionMode (string) --

          The encryption mode for the job, which can be one of the following:

          • SSE-KMS - Server-side encryption with keys managed by KMS.
          • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
        • Name (string) --

          The unique name of the job.

        • Type (string) --

          The job type of the job, which must be one of the following:

          • PROFILE - A job to analyze a dataset, to determine its size, data types, data distribution, and more.
          • RECIPE - A job to apply one or more transformations to a dataset.
        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the job.

        • LastModifiedDate (datetime) --

          The modification date and time of the job.

        • LogSubscription (string) --

          The current status of Amazon CloudWatch logging for the job.

        • MaxCapacity (integer) --

          The maximum number of nodes that can be consumed when the job processes data.

        • MaxRetries (integer) --

          The maximum number of times to retry the job after a job run fails.

        • Outputs (list) --

          One or more artifacts that represent output from running the job.

          • (dict) --

            Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

            • CompressionFormat (string) --

              The compression algorithm used to compress the output text of the job.

            • Format (string) --

              The data format of the output of the job.

            • PartitionColumns (list) --

              The names of one or more partition columns for the output of the job.

              • (string) --
            • Location (dict) --

              The location in Amazon S3 where the job writes its output.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output.

            • FormatOptions (dict) --

              Represents options that define how DataBrew formats job output files.

              • Csv (dict) --

                Represents a set of options that define the structure of comma-separated value (CSV) job output.

                • Delimiter (string) --

                  A single character that specifies the delimiter used to create CSV job output.

        • DataCatalogOutputs (list) --

          One or more artifacts that represent the Glue Data Catalog output from running the job.

          • (dict) --

            Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

            • CatalogId (string) --

              The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

            • DatabaseName (string) --

              The name of a database in the Data Catalog.

            • TableName (string) --

              The name of a table in the Data Catalog.

            • S3Options (dict) --

              Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

              • Location (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

        • DatabaseOutputs (list) --

          Represents a list of JDBC database output objects that define the output destination for a DataBrew recipe job to write into.

          • (dict) --

            Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

            • GlueConnectionName (string) --

              The Glue connection that stores the connection information for the target database.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • DatabaseOutputMode (string) --

              The output mode to write into the database. Currently supported option: NEW_TABLE.

        • ProjectName (string) --

          The name of the project that the job is associated with.

        • RecipeReference (dict) --

          A set of steps that the job runs.

          • Name (string) --

            The name of the recipe.

          • RecipeVersion (string) --

            The identifier for the version for the recipe.

        • ResourceArn (string) --

          The unique Amazon Resource Name (ARN) for the job.

        • RoleArn (string) --

          The Amazon Resource Name (ARN) of the role to be assumed for this job.

        • Timeout (integer) --

          The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT .

        • Tags (dict) --

          Metadata tags that have been applied to the job.

          • (string) --
            • (string) --
        • JobSample (dict) --

          A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample value isn't provided, the default value is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.

          • Mode (string) --

            A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

            • FULL_DATASET - The profile job is run on the entire dataset.
            • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
          • Size (integer) --

            The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

            Long.MAX_VALUE = 9223372036854775807

    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
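
Example

A minimal sketch of paginating this operation by hand; the dataset name 'sales-data' is a placeholder.

import boto3

client = boto3.client('databrew')

# Collect every job that acts on a hypothetical dataset, following
# NextToken until DataBrew stops returning a pagination token.
jobs = []
kwargs = {'DatasetName': 'sales-data', 'MaxResults': 100}
while True:
    response = client.list_jobs(**kwargs)
    jobs.extend(response['Jobs'])
    token = response.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token

for job in jobs:
    print(job['Name'], job['Type'])
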
list_projects(**kwargs)

Lists all of the DataBrew projects that are defined.

See also: AWS API Documentation

Request Syntax

response = client.list_projects(
    NextToken='string',
    MaxResults=123
)
Parameters
  • NextToken (string) -- The token returned by a previous call to retrieve the next set of results.
  • MaxResults (integer) -- The maximum number of results to return in this request.
Return type

dict

Returns

Response Syntax

{
    'Projects': [
        {
            'AccountId': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'CreatedBy': 'string',
            'DatasetName': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'Name': 'string',
            'RecipeName': 'string',
            'ResourceArn': 'string',
            'Sample': {
                'Size': 123,
                'Type': 'FIRST_N'|'LAST_N'|'RANDOM'
            },
            'Tags': {
                'string': 'string'
            },
            'RoleArn': 'string',
            'OpenedBy': 'string',
            'OpenDate': datetime(2015, 1, 1)
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • Projects (list) --

      A list of projects that are defined.

      • (dict) --

        Represents all of the attributes of a DataBrew project.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the project.

        • CreateDate (datetime) --

          The date and time that the project was created.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the project.

        • DatasetName (string) --

          The dataset that the project is to act upon.

        • LastModifiedDate (datetime) --

          The last modification date and time for the project.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the project.

        • Name (string) --

          The unique name of a project.

        • RecipeName (string) --

          The name of a recipe that will be developed during a project session.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) for the project.

        • Sample (dict) --

          The sample size and sampling type to apply to the data. If this parameter isn't specified, then the sample consists of the first 500 rows from the dataset.

          • Size (integer) --

            The number of rows in the sample.

          • Type (string) --

            The way in which DataBrew obtains rows from a dataset.

        • Tags (dict) --

          Metadata tags that have been applied to the project.

          • (string) --
            • (string) --
        • RoleArn (string) --

          The Amazon Resource Name (ARN) of the role that will be assumed for this project.

        • OpenedBy (string) --

          The Amazon Resource Name (ARN) of the user that opened the project for use.

        • OpenDate (datetime) --

          The date and time when the project was opened.

    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
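
Example

A sketch that retrieves all projects, using a paginator when one is available for this operation and falling back to a single call otherwise; the account contents shown are hypothetical.

import boto3

client = boto3.client('databrew')

# Use a paginator if ListProjects is paginatable in this SDK version;
# otherwise take only the first page of results.
if client.can_paginate('list_projects'):
    pages = client.get_paginator('list_projects').paginate()
    projects = [p for page in pages for p in page['Projects']]
else:
    projects = client.list_projects()['Projects']

for project in projects:
    print(project['Name'], project['DatasetName'], project['RecipeName'])
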
list_recipe_versions(**kwargs)

Lists the versions of a particular DataBrew recipe, except for LATEST_WORKING .

See also: AWS API Documentation

Request Syntax

response = client.list_recipe_versions(
    MaxResults=123,
    NextToken='string',
    Name='string'
)
Parameters
  • MaxResults (integer) -- The maximum number of results to return in this request.
  • NextToken (string) -- The token returned by a previous call to retrieve the next set of results.
  • Name (string) --

    [REQUIRED]

    The name of the recipe for which to return version information.

Return type

dict

Returns

Response Syntax

{
    'NextToken': 'string',
    'Recipes': [
        {
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'ProjectName': 'string',
            'PublishedBy': 'string',
            'PublishedDate': datetime(2015, 1, 1),
            'Description': 'string',
            'Name': 'string',
            'ResourceArn': 'string',
            'Steps': [
                {
                    'Action': {
                        'Operation': 'string',
                        'Parameters': {
                            'string': 'string'
                        }
                    },
                    'ConditionExpressions': [
                        {
                            'Condition': 'string',
                            'Value': 'string',
                            'TargetColumn': 'string'
                        },
                    ]
                },
            ],
            'Tags': {
                'string': 'string'
            },
            'RecipeVersion': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

    • Recipes (list) --

      A list of versions for the specified recipe.

      • (dict) --

        Represents one or more actions to be performed on a DataBrew dataset.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the recipe.

        • CreateDate (datetime) --

          The date and time that the recipe was created.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the recipe.

        • LastModifiedDate (datetime) --

          The last modification date and time of the recipe.

        • ProjectName (string) --

          The name of the project that the recipe is associated with.

        • PublishedBy (string) --

          The Amazon Resource Name (ARN) of the user who published the recipe.

        • PublishedDate (datetime) --

          The date and time when the recipe was published.

        • Description (string) --

          The description of the recipe.

        • Name (string) --

          The unique name for the recipe.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) for the recipe.

        • Steps (list) --

          A list of steps that are defined by the recipe.

          • (dict) --

            Represents a single step from a DataBrew recipe to be performed.

            • Action (dict) --

              The particular action to be performed in the recipe step.

              • Operation (string) --

                The name of a valid DataBrew transformation to be performed on the data.

              • Parameters (dict) --

                Contextual parameters for the transformation.

                • (string) --
                  • (string) --
            • ConditionExpressions (list) --

              One or more conditions that must be met for the recipe step to succeed.

              Note

              All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

              • (dict) --

                Represents an individual condition that evaluates to true or false.

                Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

                If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

                • Condition (string) --

                  A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide .

                • Value (string) --

                  A value that the condition must evaluate to for the condition to succeed.

                • TargetColumn (string) --

                  A column to apply this condition to.

        • Tags (dict) --

          Metadata tags that have been applied to the recipe.

          • (string) --
            • (string) --
        • RecipeVersion (string) --

          The identifier for the version for the recipe. Must be one of the following:

          • Numeric version (X.Y ) - X and Y stand for major and minor version numbers. The maximum length of each is 6 digits, and neither can be negative. Both X and Y are required, and "0.0" isn't a valid version.
          • LATEST_WORKING - the most recent valid version being developed in a DataBrew project.
          • LATEST_PUBLISHED - the most recent published version.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
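
Example

A minimal sketch; the recipe name 'clean-orders-recipe' is a placeholder.

import boto3

client = boto3.client('databrew')

# Print the published versions of a hypothetical recipe, along with the
# date each version was published (if recorded).
response = client.list_recipe_versions(Name='clean-orders-recipe')
for recipe in response['Recipes']:
    print(recipe['RecipeVersion'], recipe.get('PublishedDate'))
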
list_recipes(**kwargs)

Lists all of the DataBrew recipes that are defined.

See also: AWS API Documentation

Request Syntax

response = client.list_recipes(
    MaxResults=123,
    NextToken='string',
    RecipeVersion='string'
)
Parameters
  • MaxResults (integer) -- The maximum number of results to return in this request.
  • NextToken (string) -- The token returned by a previous call to retrieve the next set of results.
  • RecipeVersion (string) --

    Return only those recipes with a version identifier of LATEST_WORKING or LATEST_PUBLISHED . If RecipeVersion is omitted, ListRecipes returns all of the LATEST_PUBLISHED recipe versions.

    Valid values: LATEST_WORKING | LATEST_PUBLISHED

Return type

dict

Returns

Response Syntax

{
    'Recipes': [
        {
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'ProjectName': 'string',
            'PublishedBy': 'string',
            'PublishedDate': datetime(2015, 1, 1),
            'Description': 'string',
            'Name': 'string',
            'ResourceArn': 'string',
            'Steps': [
                {
                    'Action': {
                        'Operation': 'string',
                        'Parameters': {
                            'string': 'string'
                        }
                    },
                    'ConditionExpressions': [
                        {
                            'Condition': 'string',
                            'Value': 'string',
                            'TargetColumn': 'string'
                        },
                    ]
                },
            ],
            'Tags': {
                'string': 'string'
            },
            'RecipeVersion': 'string'
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • Recipes (list) --

      A list of recipes that are defined.

      • (dict) --

        Represents one or more actions to be performed on a DataBrew dataset.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the recipe.

        • CreateDate (datetime) --

          The date and time that the recipe was created.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the recipe.

        • LastModifiedDate (datetime) --

          The last modification date and time of the recipe.

        • ProjectName (string) --

          The name of the project that the recipe is associated with.

        • PublishedBy (string) --

          The Amazon Resource Name (ARN) of the user who published the recipe.

        • PublishedDate (datetime) --

          The date and time when the recipe was published.

        • Description (string) --

          The description of the recipe.

        • Name (string) --

          The unique name for the recipe.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) for the recipe.

        • Steps (list) --

          A list of steps that are defined by the recipe.

          • (dict) --

            Represents a single step from a DataBrew recipe to be performed.

            • Action (dict) --

              The particular action to be performed in the recipe step.

              • Operation (string) --

                The name of a valid DataBrew transformation to be performed on the data.

              • Parameters (dict) --

                Contextual parameters for the transformation.

                • (string) --
                  • (string) --
            • ConditionExpressions (list) --

              One or more conditions that must be met for the recipe step to succeed.

              Note

              All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

              • (dict) --

                Represents an individual condition that evaluates to true or false.

                Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

                If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

                • Condition (string) --

                  A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide .

                • Value (string) --

                  A value that the condition must evaluate to for the condition to succeed.

                • TargetColumn (string) --

                  A column to apply this condition to.

        • Tags (dict) --

          Metadata tags that have been applied to the recipe.

          • (string) --
            • (string) --
        • RecipeVersion (string) --

          The identifier for the version for the recipe. Must be one of the following:

          • Numeric version (X.Y ) - X and Y stand for major and minor version numbers. The maximum length of each is 6 digits, and neither can be negative. Both X and Y are required, and "0.0" isn't a valid version.
          • LATEST_WORKING - the most recent valid version being developed in a DataBrew project.
          • LATEST_PUBLISHED - the most recent published version.
    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
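
Example

A minimal sketch that asks for the LATEST_WORKING version of each recipe instead of the default LATEST_PUBLISHED listing.

import boto3

client = boto3.client('databrew')

# List the most recent working version of every recipe and report how
# many steps each one currently contains.
response = client.list_recipes(RecipeVersion='LATEST_WORKING')
for recipe in response['Recipes']:
    print(recipe['Name'], len(recipe.get('Steps', [])), 'steps')
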
list_schedules(**kwargs)

Lists the DataBrew schedules that are defined.

See also: AWS API Documentation

Request Syntax

response = client.list_schedules(
    JobName='string',
    MaxResults=123,
    NextToken='string'
)
Parameters
  • JobName (string) -- The name of the job that these schedules apply to.
  • MaxResults (integer) -- The maximum number of results to return in this request.
  • NextToken (string) -- The token returned by a previous call to retrieve the next set of results.
Return type

dict

Returns

Response Syntax

{
    'Schedules': [
        {
            'AccountId': 'string',
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'JobNames': [
                'string',
            ],
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'ResourceArn': 'string',
            'CronExpression': 'string',
            'Tags': {
                'string': 'string'
            },
            'Name': 'string'
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • Schedules (list) --

      A list of schedules that are defined.

      • (dict) --

        Represents one or more dates and times when a job is to run.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the schedule.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the schedule.

        • CreateDate (datetime) --

          The date and time that the schedule was created.

        • JobNames (list) --

          A list of jobs to be run, according to the schedule.

          • (string) --
        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the schedule.

        • LastModifiedDate (datetime) --

          The date and time when the schedule was last modified.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) of the schedule.

        • CronExpression (string) --

          The dates and times when the job is to run. For more information, see Cron expressions in the Glue DataBrew Developer Guide .

        • Tags (dict) --

          Metadata tags that have been applied to the schedule.

          • (string) --
            • (string) --
        • Name (string) --

          The name of the schedule.

    • NextToken (string) --

      A token that you can use in a subsequent call to retrieve the next set of results.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
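
Example

A minimal sketch; the job name 'nightly-profile-job' is a placeholder.

import boto3

client = boto3.client('databrew')

# Print the name and cron expression of every schedule attached to a
# hypothetical job.
response = client.list_schedules(JobName='nightly-profile-job')
for schedule in response['Schedules']:
    print(schedule['Name'], schedule['CronExpression'])
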
list_tags_for_resource(**kwargs)

Lists all the tags for a DataBrew resource.

See also: AWS API Documentation

Request Syntax

response = client.list_tags_for_resource(
    ResourceArn='string'
)
Parameters
ResourceArn (string) --

[REQUIRED]

The Amazon Resource Name (ARN) string that uniquely identifies the DataBrew resource.

Return type
dict
Returns
Response Syntax
{
    'Tags': {
        'string': 'string'
    }
}

Response Structure

  • (dict) --
    • Tags (dict) --

      A list of tags associated with the DataBrew resource.

      • (string) --
        • (string) --

Exceptions

  • GlueDataBrew.Client.exceptions.InternalServerException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
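
Example

A minimal sketch; the ARN below (account ID, Region, and dataset name) is a placeholder.

import boto3

client = boto3.client('databrew')

# Print every tag attached to a hypothetical DataBrew dataset.
arn = 'arn:aws:databrew:us-east-1:123456789012:dataset/sales-data'
tags = client.list_tags_for_resource(ResourceArn=arn)['Tags']
for key, value in tags.items():
    print(f'{key}={value}')
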
publish_recipe(**kwargs)

Publishes a new version of a DataBrew recipe.

See also: AWS API Documentation

Request Syntax

response = client.publish_recipe(
    Description='string',
    Name='string'
)
Parameters
  • Description (string) -- A description of the recipe to be published, for this version of the recipe.
  • Name (string) --

    [REQUIRED]

    The name of the recipe to be published.

Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the recipe that you published.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
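
Example

A minimal sketch; the recipe name and description are placeholders.

import boto3

client = boto3.client('databrew')

# Publish the current LATEST_WORKING version of a hypothetical recipe,
# attaching a description to the newly published version.
response = client.publish_recipe(
    Name='clean-orders-recipe',
    Description='Adds order-date normalization steps'
)
print('Published recipe:', response['Name'])
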
send_project_session_action(**kwargs)

Performs a recipe step within an interactive DataBrew session that's currently open.

See also: AWS API Documentation

Request Syntax

response = client.send_project_session_action(
    Preview=True|False,
    Name='string',
    RecipeStep={
        'Action': {
            'Operation': 'string',
            'Parameters': {
                'string': 'string'
            }
        },
        'ConditionExpressions': [
            {
                'Condition': 'string',
                'Value': 'string',
                'TargetColumn': 'string'
            },
        ]
    },
    StepIndex=123,
    ClientSessionId='string',
    ViewFrame={
        'StartColumnIndex': 123,
        'ColumnRange': 123,
        'HiddenColumns': [
            'string',
        ]
    }
)
Parameters
  • Preview (boolean) -- If true, the result of the recipe step will be returned, but not applied.
  • Name (string) --

    [REQUIRED]

    The name of the project to apply the action to.

  • RecipeStep (dict) --

    Represents a single step from a DataBrew recipe to be performed.

    • Action (dict) -- [REQUIRED]

      The particular action to be performed in the recipe step.

      • Operation (string) -- [REQUIRED]

        The name of a valid DataBrew transformation to be performed on the data.

      • Parameters (dict) --

        Contextual parameters for the transformation.

        • (string) --
          • (string) --
    • ConditionExpressions (list) --

      One or more conditions that must be met for the recipe step to succeed.

      Note

      All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

      • (dict) --

        Represents an individual condition that evaluates to true or false.

        Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

        If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

        • Condition (string) -- [REQUIRED]

          A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide .

        • Value (string) --

          A value that the condition must evaluate to for the condition to succeed.

        • TargetColumn (string) -- [REQUIRED]

          A column to apply this condition to.

  • StepIndex (integer) -- The index from which to preview a step. This index is used to preview the result of steps that have already been applied, so that the resulting view frame is from earlier in the view frame stack.
  • ClientSessionId (string) -- A unique identifier for an interactive session that's currently open and ready for work. The action will be performed on this session.
  • ViewFrame (dict) --

    Represents the data being transformed during an action.

    • StartColumnIndex (integer) -- [REQUIRED]

      The starting index for the range of columns to return in the view frame.

    • ColumnRange (integer) --

      The number of columns to include in the view frame, beginning with the StartColumnIndex value and ignoring any columns in the HiddenColumns list.

    • HiddenColumns (list) --

      A list of columns to hide in the view frame.

      • (string) --
Return type

dict

Returns

Response Syntax

{
    'Result': 'string',
    'Name': 'string',
    'ActionId': 123
}

Response Structure

  • (dict) --

    • Result (string) --

      A message indicating the result of performing the action.

    • Name (string) --

      The name of the project that was affected by the action.

    • ActionId (integer) --

      A unique identifier for the action that was performed.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
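
Example

A sketch that previews (but does not apply) a single step in an open session. The project name, session identifier, operation, and column name are illustrative; the ClientSessionId would come from an earlier start_project_session call, and valid operations and parameters are described under Recipe actions in the Glue DataBrew Developer Guide.

import boto3

client = boto3.client('databrew')

# Preview one recipe step without applying it to the project's recipe.
response = client.send_project_session_action(
    Preview=True,
    Name='orders-cleanup-project',
    ClientSessionId='example-session-id',
    RecipeStep={
        'Action': {
            'Operation': 'UPPER_CASE',
            'Parameters': {'sourceColumn': 'customer_name'}
        }
    }
)
print(response['Result'], response.get('ActionId'))
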
start_job_run(**kwargs)

Runs a DataBrew job.

See also: AWS API Documentation

Request Syntax

response = client.start_job_run(
    Name='string'
)
Parameters
Name (string) --

[REQUIRED]

The name of the job to be run.

Return type
dict
Returns
Response Syntax
{
    'RunId': 'string'
}

Response Structure

  • (dict) --
    • RunId (string) --

      A system-generated identifier for this particular job run.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
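
Example

A sketch that starts a run of a hypothetical job and then polls the describe_job_run operation until the run reaches a terminal state. The job name and the 30-second polling interval are placeholders.

import boto3
import time

client = boto3.client('databrew')

# Start the job, then wait for the run to finish before inspecting it.
run_id = client.start_job_run(Name='clean-orders-job')['RunId']
while True:
    run = client.describe_job_run(Name='clean-orders-job', RunId=run_id)
    state = run['State']
    if state not in ('STARTING', 'RUNNING', 'STOPPING'):
        break
    time.sleep(30)
print('Job run', run_id, 'finished with state', state)
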
start_project_session(**kwargs)

Creates an interactive session, enabling you to manipulate data in a DataBrew project.

See also: AWS API Documentation

Request Syntax

response = client.start_project_session(
    Name='string',
    AssumeControl=True|False
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the project to act upon.

  • AssumeControl (boolean) -- A value that, if true, enables you to take control of a session, even if a different client is currently accessing the project.
Return type

dict

Returns

Response Syntax

{
    'Name': 'string',
    'ClientSessionId': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the project to be acted upon.

    • ClientSessionId (string) --

      A system-generated identifier for the session.

Exceptions

  • GlueDataBrew.Client.exceptions.ConflictException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
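
Example

A minimal sketch; the project name is a placeholder. Setting AssumeControl to True takes over the session even if another client already has it open.

import boto3

client = boto3.client('databrew')

# Open an interactive session on a hypothetical project. The returned
# ClientSessionId is the identifier that send_project_session_action expects.
response = client.start_project_session(
    Name='orders-cleanup-project',
    AssumeControl=True
)
print('Session', response['ClientSessionId'], 'opened for', response['Name'])
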
stop_job_run(**kwargs)

Stops a particular run of a job.

See also: AWS API Documentation

Request Syntax

response = client.stop_job_run(
    Name='string',
    RunId='string'
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the job to be stopped.

  • RunId (string) --

    [REQUIRED]

    The ID of the job run to be stopped.

Return type

dict

Returns

Response Syntax

{
    'RunId': 'string'
}

Response Structure

  • (dict) --

    • RunId (string) --

      The ID of the job run that you stopped.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
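
Example

A minimal sketch; the job name and run ID are placeholders (the run ID would have been returned by an earlier start_job_run call).

import boto3

client = boto3.client('databrew')

# Stop a specific run of a hypothetical job.
response = client.stop_job_run(
    Name='clean-orders-job',
    RunId='db_example_run_id'
)
print('Stopped run', response['RunId'])
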
tag_resource(**kwargs)

Adds metadata tags to a DataBrew resource, such as a dataset, project, recipe, job, or schedule.

See also: AWS API Documentation

Request Syntax

response = client.tag_resource(
    ResourceArn='string',
    Tags={
        'string': 'string'
    }
)
Parameters
  • ResourceArn (string) --

    [REQUIRED]

    The DataBrew resource to which tags should be added. The value for this parameter is an Amazon Resource Name (ARN). For DataBrew, you can tag a dataset, a job, a project, or a recipe.

  • Tags (dict) --

    [REQUIRED]

    One or more tags to be assigned to the resource.

    • (string) --
      • (string) --
Return type

dict

Returns

Response Syntax

{}

Response Structure

  • (dict) --

Exceptions

  • GlueDataBrew.Client.exceptions.InternalServerException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
untag_resource(**kwargs)

Removes metadata tags from a DataBrew resource.

See also: AWS API Documentation

Request Syntax

response = client.untag_resource(
    ResourceArn='string',
    TagKeys=[
        'string',
    ]
)
Parameters
  • ResourceArn (string) --

    [REQUIRED]

    A DataBrew resource from which you want to remove a tag or tags. The value for this parameter is an Amazon Resource Name (ARN).

  • TagKeys (list) --

    [REQUIRED]

    The tag keys (names) of one or more tags to be removed.

    • (string) --
Return type

dict

Returns

Response Syntax

{}

Response Structure

  • (dict) --

Exceptions

  • GlueDataBrew.Client.exceptions.InternalServerException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
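
Example

A sketch that tags a hypothetical recipe and then removes one of the tags again; the ARN (account ID, Region, and recipe name) and tag values are placeholders.

import boto3

client = boto3.client('databrew')

# Add two tags to a recipe, then delete one of them by key.
arn = 'arn:aws:databrew:us-east-1:123456789012:recipe/clean-orders-recipe'
client.tag_resource(ResourceArn=arn, Tags={'team': 'analytics', 'env': 'dev'})
client.untag_resource(ResourceArn=arn, TagKeys=['env'])
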
update_dataset(**kwargs)

Modifies the definition of an existing DataBrew dataset.

See also: AWS API Documentation

Request Syntax

response = client.update_dataset(
    Name='string',
    Format='CSV'|'JSON'|'PARQUET'|'EXCEL',
    FormatOptions={
        'Json': {
            'MultiLine': True|False
        },
        'Excel': {
            'SheetNames': [
                'string',
            ],
            'SheetIndexes': [
                123,
            ],
            'HeaderRow': True|False
        },
        'Csv': {
            'Delimiter': 'string',
            'HeaderRow': True|False
        }
    },
    Input={
        'S3InputDefinition': {
            'Bucket': 'string',
            'Key': 'string'
        },
        'DataCatalogInputDefinition': {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'TempDirectory': {
                'Bucket': 'string',
                'Key': 'string'
            }
        },
        'DatabaseInputDefinition': {
            'GlueConnectionName': 'string',
            'DatabaseTableName': 'string',
            'TempDirectory': {
                'Bucket': 'string',
                'Key': 'string'
            }
        }
    },
    PathOptions={
        'LastModifiedDateCondition': {
            'Expression': 'string',
            'ValuesMap': {
                'string': 'string'
            }
        },
        'FilesLimit': {
            'MaxFiles': 123,
            'OrderedBy': 'LAST_MODIFIED_DATE',
            'Order': 'DESCENDING'|'ASCENDING'
        },
        'Parameters': {
            'string': {
                'Name': 'string',
                'Type': 'Datetime'|'Number'|'String',
                'DatetimeOptions': {
                    'Format': 'string',
                    'TimezoneOffset': 'string',
                    'LocaleCode': 'string'
                },
                'CreateColumn': True|False,
                'Filter': {
                    'Expression': 'string',
                    'ValuesMap': {
                        'string': 'string'
                    }
                }
            }
        }
    }
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the dataset to be updated.

  • Format (string) -- The file format of a dataset that is created from an Amazon S3 file or folder.
  • FormatOptions (dict) --

    Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.

    • Json (dict) --

      Options that define how JSON input is to be interpreted by DataBrew.

      • MultiLine (boolean) --

        A value that specifies whether JSON input contains embedded new line characters.

    • Excel (dict) --

      Options that define how Excel input is to be interpreted by DataBrew.

      • SheetNames (list) --

        One or more named sheets in the Excel file that will be included in the dataset.

        • (string) --
      • SheetIndexes (list) --

        One or more sheet numbers in the Excel file that will be included in the dataset.

        • (integer) --
      • HeaderRow (boolean) --

        A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

    • Csv (dict) --

      Options that define how CSV input is to be interpreted by DataBrew.

      • Delimiter (string) --

        A single character that specifies the delimiter being used in the CSV file.

      • HeaderRow (boolean) --

        A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

  • Input (dict) --

    [REQUIRED]

    Represents information on how DataBrew can find data, in either the Glue Data Catalog or Amazon S3.

    • S3InputDefinition (dict) --

      The Amazon S3 location where the data is stored.

      • Bucket (string) -- [REQUIRED]

        The Amazon S3 bucket name.

      • Key (string) --

        The unique name of the object in the bucket.

    • DataCatalogInputDefinition (dict) --

      The Glue Data Catalog parameters for the data.

      • CatalogId (string) --

        The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

      • DatabaseName (string) -- [REQUIRED]

        The name of a database in the Data Catalog.

      • TableName (string) -- [REQUIRED]

        The name of a database table in the Data Catalog. This table corresponds to a DataBrew dataset.

      • TempDirectory (dict) --

        Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

        • Bucket (string) -- [REQUIRED]

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

    • DatabaseInputDefinition (dict) --

      Connection information for dataset input files stored in a database.

      • GlueConnectionName (string) -- [REQUIRED]

        The Glue Connection that stores the connection information for the target database.

      • DatabaseTableName (string) -- [REQUIRED]

        The table within the target database.

      • TempDirectory (dict) --

        Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

        • Bucket (string) -- [REQUIRED]

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

  • PathOptions (dict) --

    A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

    • LastModifiedDateCondition (dict) --

      If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.

      • Expression (string) -- [REQUIRED]

        The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

      • ValuesMap (dict) -- [REQUIRED]

        The map of substitution variable names to their values used in this filter expression.

        • (string) --
          • (string) --
    • FilesLimit (dict) --

      If provided, this structure imposes a limit on the number of files that are selected.

      • MaxFiles (integer) -- [REQUIRED]

        The number of Amazon S3 files to select.

      • OrderedBy (string) --

        The criterion used to sort the Amazon S3 files before they are selected. By default, LAST_MODIFIED_DATE is used as the sorting criterion. Currently it's the only allowed value.

      • Order (string) --

        The sort order to apply to the Amazon S3 files before they are selected. By default, DESCENDING order is used; that is, the most recent files are selected first. The other possible value is ASCENDING.

    • Parameters (dict) --

      A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.

      • (string) --
        • (dict) --

          Represents a dataset parameter that defines the type and conditions for a parameter in the Amazon S3 path of the dataset.

          • Name (string) -- [REQUIRED]

            The name of the parameter that is used in the dataset's Amazon S3 path.

          • Type (string) -- [REQUIRED]

            The type of the dataset parameter, which can be 'String', 'Number', or 'Datetime'.

          • DatetimeOptions (dict) --

            Additional parameter options such as a format and a timezone. Required for datetime parameters.

            • Format (string) -- [REQUIRED]

              Required option that defines the datetime format used for a date parameter in the Amazon S3 path. It should use only supported datetime specifiers and separation characters; all literal a-z or A-Z characters should be escaped with single quotes. For example, "MM.dd.yyyy-'at'-HH:mm".

            • TimezoneOffset (string) --

              Optional value for a timezone offset of the datetime parameter value in the Amazon S3 path. It shouldn't be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.

            • LocaleCode (string) --

              Optional value for a non-US locale code, needed for correct interpretation of some date formats.

          • CreateColumn (boolean) --

            Optional boolean value that defines whether the captured value of this parameter should be used to create a new column in a dataset.

          • Filter (dict) --

            The optional filter expression structure to apply additional matching criteria to the parameter.

            • Expression (string) -- [REQUIRED]

              The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

            • ValuesMap (dict) -- [REQUIRED]

              The map of substitution variable names to their values used in this filter expression.

              • (string) --
                • (string) --
Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the dataset that you updated.

Exceptions

  • GlueDataBrew.Client.exceptions.AccessDeniedException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
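
Example

A minimal sketch that repoints a hypothetical dataset at a new CSV object in Amazon S3 and declares that the first row is a header. The dataset name, bucket, and key are placeholders.

import boto3

client = boto3.client('databrew')

# Update the dataset's input location and CSV parsing options.
response = client.update_dataset(
    Name='sales-data',
    Format='CSV',
    FormatOptions={'Csv': {'Delimiter': ',', 'HeaderRow': True}},
    Input={
        'S3InputDefinition': {
            'Bucket': 'example-databrew-input',
            'Key': 'sales/2021/orders.csv'
        }
    }
)
print('Updated dataset:', response['Name'])
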
update_profile_job(**kwargs)

Modifies the definition of an existing profile job.

See also: AWS API Documentation

Request Syntax

response = client.update_profile_job(
    Configuration={
        'DatasetStatisticsConfiguration': {
            'IncludedStatistics': [
                'string',
            ],
            'Overrides': [
                {
                    'Statistic': 'string',
                    'Parameters': {
                        'string': 'string'
                    }
                },
            ]
        },
        'ProfileColumns': [
            {
                'Regex': 'string',
                'Name': 'string'
            },
        ],
        'ColumnStatisticsConfigurations': [
            {
                'Selectors': [
                    {
                        'Regex': 'string',
                        'Name': 'string'
                    },
                ],
                'Statistics': {
                    'IncludedStatistics': [
                        'string',
                    ],
                    'Overrides': [
                        {
                            'Statistic': 'string',
                            'Parameters': {
                                'string': 'string'
                            }
                        },
                    ]
                }
            },
        ]
    },
    EncryptionKeyArn='string',
    EncryptionMode='SSE-KMS'|'SSE-S3',
    Name='string',
    LogSubscription='ENABLE'|'DISABLE',
    MaxCapacity=123,
    MaxRetries=123,
    OutputLocation={
        'Bucket': 'string',
        'Key': 'string'
    },
    RoleArn='string',
    Timeout=123,
    JobSample={
        'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
        'Size': 123
    }
)
Parameters
  • Configuration (dict) --

    Configuration for profile jobs. Used to select columns, do evaluations, and override default parameters of evaluations. When configuration is null, the profile job will run with default settings.

    • DatasetStatisticsConfiguration (dict) --

      Configuration for inter-column evaluations. Configuration can be used to select evaluations and override parameters of evaluations. When configuration is undefined, the profile job will run all supported inter-column evaluations.

      • IncludedStatistics (list) --

        List of included evaluations. When the list is undefined, all supported evaluations will be included.

        • (string) --
      • Overrides (list) --

        List of overrides for evaluations.

        • (dict) --

          Override of a particular evaluation for a profile job.

          • Statistic (string) -- [REQUIRED]

            The name of an evaluation.

          • Parameters (dict) -- [REQUIRED]

            A map that includes overrides of an evaluation’s parameters.

            • (string) --
              • (string) --
    • ProfileColumns (list) --

      List of column selectors. ProfileColumns can be used to select columns from the dataset. When ProfileColumns is undefined, the profile job will profile all supported columns.

      • (dict) --

        Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

        • Regex (string) --

          A regular expression for selecting a column from a dataset.

        • Name (string) --

          The name of a column from a dataset.

    • ColumnStatisticsConfigurations (list) --

      List of configurations for column evaluations. ColumnStatisticsConfigurations are used to select evaluations and override parameters of evaluations for particular columns. When ColumnStatisticsConfigurations is undefined, the profile job will profile all supported columns and run all supported evaluations.

      • (dict) --

        Configuration for column evaluations for a profile job. ColumnStatisticsConfiguration can be used to select evaluations and override parameters of evaluations for particular columns.

        • Selectors (list) --

          List of column selectors. Selectors can be used to select columns from the dataset. When selectors are undefined, configuration will be applied to all supported columns.

          • (dict) --

            Selector of a column from a dataset for profile job configuration. One selector includes either a column name or a regular expression.

            • Regex (string) --

              A regular expression for selecting a column from a dataset.

            • Name (string) --

              The name of a column from a dataset.

        • Statistics (dict) -- [REQUIRED]

          Configuration for evaluations. Statistics can be used to select evaluations and override parameters of evaluations.

          • IncludedStatistics (list) --

            List of included evaluations. When the list is undefined, all supported evaluations will be included.

            • (string) --
          • Overrides (list) --

            List of overrides for evaluations.

            • (dict) --

              Override of a particular evaluation for a profile job.

              • Statistic (string) -- [REQUIRED]

                The name of an evaluation.

              • Parameters (dict) -- [REQUIRED]

                A map that includes overrides of an evaluation’s parameters.

                • (string) --
                  • (string) --
  • EncryptionKeyArn (string) -- The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.
  • EncryptionMode (string) --

    The encryption mode for the job, which can be one of the following:

    • SSE-KMS - Server-side encryption with keys managed by KMS.
    • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
  • Name (string) --

    [REQUIRED]

    The name of the job to be updated.

  • LogSubscription (string) -- Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.
  • MaxCapacity (integer) -- The maximum number of compute nodes that DataBrew can use when the job processes data.
  • MaxRetries (integer) -- The maximum number of times to retry the job after a job run fails.
  • OutputLocation (dict) --

    [REQUIRED]

    Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

    • Bucket (string) -- [REQUIRED]

      The Amazon S3 bucket name.

    • Key (string) --

      The unique name of the object in the bucket.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • Timeout (integer) -- The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT .
  • JobSample (dict) --

    A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample value isn't provided, the default value is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.

    • Mode (string) --

      A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

      • FULL_DATASET - The profile job is run on the entire dataset.
      • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
    • Size (integer) --

      The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

      Long.MAX_VALUE = 9223372036854775807

Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the job that was updated.

Exceptions

  • GlueDataBrew.Client.exceptions.AccessDeniedException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
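
Example

A minimal sketch of calling this operation from boto3, assuming it is update_profile_job (the operation name appears earlier in the document); the job, bucket, and role names below are hypothetical, and most optional parameters are omitted:

import boto3

client = boto3.client('databrew')

# Hypothetical job, bucket, and role names -- substitute your own resources.
response = client.update_profile_job(
    Name='my-profile-job',
    OutputLocation={
        'Bucket': 'my-databrew-output-bucket',
        'Key': 'profiles/'
    },
    RoleArn='arn:aws:iam::111122223333:role/DataBrewJobRole',
    JobSample={
        'Mode': 'CUSTOM_ROWS',
        'Size': 20000
    }
)
print(response['Name'])
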
update_project(**kwargs)

Modifies the definition of an existing DataBrew project.

See also: AWS API Documentation

Request Syntax

response = client.update_project(
    Sample={
        'Size': 123,
        'Type': 'FIRST_N'|'LAST_N'|'RANDOM'
    },
    RoleArn='string',
    Name='string'
)
Parameters
  • Sample (dict) --

    Represents the sample size and sampling type for DataBrew to use for interactive data analysis.

    • Size (integer) --

      The number of rows in the sample.

    • Type (string) -- [REQUIRED]

      The way in which DataBrew obtains rows from a dataset.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the IAM role to be assumed for this request.

  • Name (string) --

    [REQUIRED]

    The name of the project to be updated.

Return type

dict

Returns

Response Syntax

{
    'LastModifiedDate': datetime(2015, 1, 1),
    'Name': 'string'
}

Response Structure

  • (dict) --

    • LastModifiedDate (datetime) --

      The date and time that the project was last modified.

    • Name (string) --

      The name of the project that you updated.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
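
Example

A minimal sketch of calling update_project; the project and role names are hypothetical, and the Sample values simply mirror the request syntax above:

import boto3

client = boto3.client('databrew')

# Hypothetical project and role names -- substitute your own resources.
response = client.update_project(
    Name='my-project',
    RoleArn='arn:aws:iam::111122223333:role/DataBrewProjectRole',
    Sample={
        'Size': 500,
        'Type': 'FIRST_N'
    }
)
print(response['Name'], response['LastModifiedDate'])
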
update_recipe(**kwargs)

Modifies the definition of the LATEST_WORKING version of a DataBrew recipe.

See also: AWS API Documentation

Request Syntax

response = client.update_recipe(
    Description='string',
    Name='string',
    Steps=[
        {
            'Action': {
                'Operation': 'string',
                'Parameters': {
                    'string': 'string'
                }
            },
            'ConditionExpressions': [
                {
                    'Condition': 'string',
                    'Value': 'string',
                    'TargetColumn': 'string'
                },
            ]
        },
    ]
)
Parameters
  • Description (string) -- A description of the recipe.
  • Name (string) --

    [REQUIRED]

    The name of the recipe to be updated.

  • Steps (list) --

    One or more steps to be performed by the recipe. Each step consists of an action, and the conditions under which the action should succeed.

    • (dict) --

      Represents a single step from a DataBrew recipe to be performed.

      • Action (dict) -- [REQUIRED]

        The particular action to be performed in the recipe step.

        • Operation (string) -- [REQUIRED]

          The name of a valid DataBrew transformation to be performed on the data.

        • Parameters (dict) --

          Contextual parameters for the transformation.

          • (string) --
            • (string) --
      • ConditionExpressions (list) --

        One or more conditions that must be met for the recipe step to succeed.

        Note

        All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

        • (dict) --

          Represents an individual condition that evaluates to true or false.

          Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

          If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

          • Condition (string) -- [REQUIRED]

            A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide.

          • Value (string) --

            A value that the condition must evaluate to for the condition to succeed.

          • TargetColumn (string) -- [REQUIRED]

            A column to apply this condition to.

Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the recipe that was updated.

Exceptions

  • GlueDataBrew.Client.exceptions.ValidationException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
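
Example

A minimal sketch of calling update_recipe to replace the steps of the LATEST_WORKING version; the recipe name is hypothetical, and the UPPER_CASE step with its sourceColumn parameter is shown only as an illustration of a DataBrew transformation:

import boto3

client = boto3.client('databrew')

# Hypothetical recipe; the single step uppercases a 'state' column.
response = client.update_recipe(
    Name='my-recipe',
    Description='Uppercase the state column',
    Steps=[
        {
            'Action': {
                'Operation': 'UPPER_CASE',
                'Parameters': {
                    'sourceColumn': 'state'
                }
            }
        }
    ]
)
print(response['Name'])
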
update_recipe_job(**kwargs)

Modifies the definition of an existing DataBrew recipe job.

See also: AWS API Documentation

Request Syntax

response = client.update_recipe_job(
    EncryptionKeyArn='string',
    EncryptionMode='SSE-KMS'|'SSE-S3',
    Name='string',
    LogSubscription='ENABLE'|'DISABLE',
    MaxCapacity=123,
    MaxRetries=123,
    Outputs=[
        {
            'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
            'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
            'PartitionColumns': [
                'string',
            ],
            'Location': {
                'Bucket': 'string',
                'Key': 'string'
            },
            'Overwrite': True|False,
            'FormatOptions': {
                'Csv': {
                    'Delimiter': 'string'
                }
            }
        },
    ],
    DataCatalogOutputs=[
        {
            'CatalogId': 'string',
            'DatabaseName': 'string',
            'TableName': 'string',
            'S3Options': {
                'Location': {
                    'Bucket': 'string',
                    'Key': 'string'
                }
            },
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'Overwrite': True|False
        },
    ],
    DatabaseOutputs=[
        {
            'GlueConnectionName': 'string',
            'DatabaseOptions': {
                'TempDirectory': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'TableName': 'string'
            },
            'DatabaseOutputMode': 'NEW_TABLE'
        },
    ],
    RoleArn='string',
    Timeout=123
)
Parameters
  • EncryptionKeyArn (string) -- The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.
  • EncryptionMode (string) --

    The encryption mode for the job, which can be one of the following:

    • SSE-KMS - Server-side encryption with keys managed by KMS.
    • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
  • Name (string) --

    [REQUIRED]

    The name of the job to update.

  • LogSubscription (string) -- Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.
  • MaxCapacity (integer) -- The maximum number of nodes that DataBrew can consume when the job processes data.
  • MaxRetries (integer) -- The maximum number of times to retry the job after a job run fails.
  • Outputs (list) --

    One or more artifacts that represent the output from running the job.

    • (dict) --

      Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

      • CompressionFormat (string) --

        The compression algorithm used to compress the output text of the job.

      • Format (string) --

        The data format of the output of the job.

      • PartitionColumns (list) --

        The names of one or more partition columns for the output of the job.

        • (string) --
      • Location (dict) -- [REQUIRED]

        The location in Amazon S3 where the job writes its output.

        • Bucket (string) -- [REQUIRED]

          The Amazon S3 bucket name.

        • Key (string) --

          The unique name of the object in the bucket.

      • Overwrite (boolean) --

        A value that, if true, means that any data in the location specified for output is overwritten with new output.

      • FormatOptions (dict) --

        Represents options that define how DataBrew formats job output files.

        • Csv (dict) --

          Represents a set of options that define the structure of comma-separated value (CSV) job output.

          • Delimiter (string) --

            A single character that specifies the delimiter used to create CSV job output.

  • DataCatalogOutputs (list) --

    One or more artifacts that represent the Glue Data Catalog output from running the job.

    • (dict) --

      Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

      • CatalogId (string) --

        The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

      • DatabaseName (string) -- [REQUIRED]

        The name of a database in the Data Catalog.

      • TableName (string) -- [REQUIRED]

        The name of a table in the Data Catalog.

      • S3Options (dict) --

        Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

        • Location (dict) -- [REQUIRED]

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

          • Bucket (string) -- [REQUIRED]

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

      • DatabaseOptions (dict) --

        Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

        • TempDirectory (dict) --

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

          • Bucket (string) -- [REQUIRED]

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

        • TableName (string) -- [REQUIRED]

          A prefix for the name of a table DataBrew will create in the database.

      • Overwrite (boolean) --

        A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

  • DatabaseOutputs (list) --

    Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write into.

    • (dict) --

      Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

      • GlueConnectionName (string) -- [REQUIRED]

        The Glue connection that stores the connection information for the target database.

      • DatabaseOptions (dict) -- [REQUIRED]

        Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

        • TempDirectory (dict) --

          Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

          • Bucket (string) -- [REQUIRED]

            The Amazon S3 bucket name.

          • Key (string) --

            The unique name of the object in the bucket.

        • TableName (string) -- [REQUIRED]

          A prefix for the name of a table DataBrew will create in the database.

      • DatabaseOutputMode (string) --

        The output mode to write into the database. Currently supported option: NEW_TABLE.

  • RoleArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

  • Timeout (integer) -- The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.
Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the job that you updated.

Exceptions

  • GlueDataBrew.Client.exceptions.AccessDeniedException
  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ValidationException
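
Example

A minimal sketch of calling update_recipe_job so that the job writes Parquet output to Amazon S3; the job, bucket, and role names are hypothetical:

import boto3

client = boto3.client('databrew')

# Hypothetical job, bucket, and role names -- substitute your own resources.
response = client.update_recipe_job(
    Name='my-recipe-job',
    RoleArn='arn:aws:iam::111122223333:role/DataBrewJobRole',
    Outputs=[
        {
            'Format': 'PARQUET',
            'Location': {
                'Bucket': 'my-databrew-output-bucket',
                'Key': 'cleaned/'
            },
            'Overwrite': True
        }
    ],
    MaxRetries=1,
    Timeout=2880
)
print(response['Name'])
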
update_schedule(**kwargs)

Modifies the definition of an existing DataBrew schedule.

See also: AWS API Documentation

Request Syntax

response = client.update_schedule(
    JobNames=[
        'string',
    ],
    CronExpression='string',
    Name='string'
)
Parameters
  • JobNames (list) --

    The name or names of one or more jobs to be run for this schedule.

    • (string) --
  • CronExpression (string) --

    [REQUIRED]

    The dates and times when the jobs are to be run. For more information, see Cron expressions in the Glue DataBrew Developer Guide.

  • Name (string) --

    [REQUIRED]

    The name of the schedule to update.

Return type

dict

Returns

Response Syntax

{
    'Name': 'string'
}

Response Structure

  • (dict) --

    • Name (string) --

      The name of the schedule that was updated.

Exceptions

  • GlueDataBrew.Client.exceptions.ResourceNotFoundException
  • GlueDataBrew.Client.exceptions.ServiceQuotaExceededException
  • GlueDataBrew.Client.exceptions.ValidationException
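
Example

A minimal sketch of calling update_schedule; the schedule and job names are hypothetical, and the cron expression is intended to run the job daily at 23:00 UTC:

import boto3

client = boto3.client('databrew')

# Hypothetical schedule and job names -- substitute your own resources.
response = client.update_schedule(
    Name='my-nightly-schedule',
    JobNames=[
        'my-recipe-job',
    ],
    CronExpression='cron(0 23 * * ? *)'
)
print(response['Name'])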

Paginators

The available paginators are:

class GlueDataBrew.Paginator.ListDatasets
paginator = client.get_paginator('list_datasets')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_datasets().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
PaginationConfig (dict) --

A dictionary that provides parameters to control pagination.

  • MaxItems (integer) --

    The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.

  • PageSize (integer) --

    The size of each page.

  • StartingToken (string) --

    A token to specify where to start paginating. This is the NextToken from a previous response.

Return type
dict
Returns
Response Syntax
{
    'Datasets': [
        {
            'AccountId': 'string',
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'Name': 'string',
            'Format': 'CSV'|'JSON'|'PARQUET'|'EXCEL',
            'FormatOptions': {
                'Json': {
                    'MultiLine': True|False
                },
                'Excel': {
                    'SheetNames': [
                        'string',
                    ],
                    'SheetIndexes': [
                        123,
                    ],
                    'HeaderRow': True|False
                },
                'Csv': {
                    'Delimiter': 'string',
                    'HeaderRow': True|False
                }
            },
            'Input': {
                'S3InputDefinition': {
                    'Bucket': 'string',
                    'Key': 'string'
                },
                'DataCatalogInputDefinition': {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'TempDirectory': {
                        'Bucket': 'string',
                        'Key': 'string'
                    }
                },
                'DatabaseInputDefinition': {
                    'GlueConnectionName': 'string',
                    'DatabaseTableName': 'string',
                    'TempDirectory': {
                        'Bucket': 'string',
                        'Key': 'string'
                    }
                }
            },
            'LastModifiedDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'Source': 'S3'|'DATA-CATALOG'|'DATABASE',
            'PathOptions': {
                'LastModifiedDateCondition': {
                    'Expression': 'string',
                    'ValuesMap': {
                        'string': 'string'
                    }
                },
                'FilesLimit': {
                    'MaxFiles': 123,
                    'OrderedBy': 'LAST_MODIFIED_DATE',
                    'Order': 'DESCENDING'|'ASCENDING'
                },
                'Parameters': {
                    'string': {
                        'Name': 'string',
                        'Type': 'Datetime'|'Number'|'String',
                        'DatetimeOptions': {
                            'Format': 'string',
                            'TimezoneOffset': 'string',
                            'LocaleCode': 'string'
                        },
                        'CreateColumn': True|False,
                        'Filter': {
                            'Expression': 'string',
                            'ValuesMap': {
                                'string': 'string'
                            }
                        }
                    }
                }
            },
            'Tags': {
                'string': 'string'
            },
            'ResourceArn': 'string'
        },
    ],

}

Response Structure

  • (dict) --
    • Datasets (list) --

      A list of datasets that are defined.

      • (dict) --

        Represents a dataset that can be processed by DataBrew.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the dataset.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the dataset.

        • CreateDate (datetime) --

          The date and time that the dataset was created.

        • Name (string) --

          The unique name of the dataset.

        • Format (string) --

          The file format of a dataset that is created from an Amazon S3 file or folder.

        • FormatOptions (dict) --

          A set of options that define how DataBrew interprets the data in the dataset.

          • Json (dict) --

            Options that define how JSON input is to be interpreted by DataBrew.

            • MultiLine (boolean) --

              A value that specifies whether JSON input contains embedded new line characters.

          • Excel (dict) --

            Options that define how Excel input is to be interpreted by DataBrew.

            • SheetNames (list) --

              One or more named sheets in the Excel file that will be included in the dataset.

              • (string) --
            • SheetIndexes (list) --

              One or more sheet numbers in the Excel file that will be included in the dataset.

              • (integer) --
            • HeaderRow (boolean) --

              A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

          • Csv (dict) --

            Options that define how CSV input is to be interpreted by DataBrew.

            • Delimiter (string) --

              A single character that specifies the delimiter being used in the CSV file.

            • HeaderRow (boolean) --

              A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.

        • Input (dict) --

          Information on how DataBrew can find the dataset, in either the Glue Data Catalog or Amazon S3.

          • S3InputDefinition (dict) --

            The Amazon S3 location where the data is stored.

            • Bucket (string) --

              The Amazon S3 bucket name.

            • Key (string) --

              The unique name of the object in the bucket.

          • DataCatalogInputDefinition (dict) --

            The Glue Data Catalog parameters for the data.

            • CatalogId (string) --

              The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

            • DatabaseName (string) --

              The name of a database in the Data Catalog.

            • TableName (string) --

              The name of a database table in the Data Catalog. This table corresponds to a DataBrew dataset.

            • TempDirectory (dict) --

              Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

          • DatabaseInputDefinition (dict) --

            Connection information for dataset input files stored in a database.

            • GlueConnectionName (string) --

              The Glue Connection that stores the connection information for the target database.

            • DatabaseTableName (string) --

              The table within the target database.

            • TempDirectory (dict) --

              Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

        • LastModifiedDate (datetime) --

          The last modification date and time of the dataset.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the dataset.

        • Source (string) --

          The location of the data for the dataset, either Amazon S3 or the Glue Data Catalog.

        • PathOptions (dict) --

          A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.

          • LastModifiedDateCondition (dict) --

            If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.

            • Expression (string) --

              The expression, which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

            • ValuesMap (dict) --

              The map of substitution variable names to their values used in this filter expression.

              • (string) --
                • (string) --
          • FilesLimit (dict) --

            If provided, this structure imposes a limit on the number of files that should be selected.

            • MaxFiles (integer) --

              The number of Amazon S3 files to select.

            • OrderedBy (string) --

              The criterion to use for sorting the Amazon S3 files before they are selected. The default, and currently the only allowed value, is LAST_MODIFIED_DATE.

            • Order (string) --

              The sort order to apply to the Amazon S3 files before they are selected. The default is DESCENDING, meaning that the most recent files are selected first. The other possible value is ASCENDING.

          • Parameters (dict) --

            A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.

            • (string) --
              • (dict) --

                Represents a dataset parameter that defines the type and conditions for a parameter in the Amazon S3 path of the dataset.

                • Name (string) --

                  The name of the parameter that is used in the dataset's Amazon S3 path.

                • Type (string) --

                  The type of the dataset parameter, which can be one of 'String', 'Number', or 'Datetime'.

                • DatetimeOptions (dict) --

                  Additional parameter options such as a format and a timezone. Required for datetime parameters.

                  • Format (string) --

                    A required option that defines the datetime format used for a date parameter in the Amazon S3 path. It should use only supported datetime specifiers and separation characters; all literal a-z or A-Z characters should be escaped with single quotes, for example "MM.dd.yyyy-'at'-HH:mm".

                  • TimezoneOffset (string) --

                    An optional timezone offset for the datetime parameter value in the Amazon S3 path. It shouldn't be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.

                  • LocaleCode (string) --

                    An optional non-US locale code, needed for correct interpretation of some date formats.

                • CreateColumn (boolean) --

                  Optional boolean value that defines whether the captured value of this parameter should be used to create a new column in a dataset.

                • Filter (dict) --

                  The optional filter expression structure to apply additional matching criteria to the parameter.

                  • Expression (string) --

                    The expression, which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, "(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)". Substitution variables should start with the ':' symbol.

                  • ValuesMap (dict) --

                    The map of substitution variable names to their values used in this filter expression.

                    • (string) --
                      • (string) --
        • Tags (dict) --

          Metadata tags that have been applied to the dataset.

          • (string) --
            • (string) --
        • ResourceArn (string) --

          The unique Amazon Resource Name (ARN) for the dataset.
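
Example

A short sketch of iterating over every dataset with this paginator; the page size is arbitrary:

import boto3

client = boto3.client('databrew')

paginator = client.get_paginator('list_datasets')

# Walk every page of results; PageSize is arbitrary.
for page in paginator.paginate(PaginationConfig={'PageSize': 25}):
    for dataset in page['Datasets']:
        print(dataset['Name'], dataset.get('Source'))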

class GlueDataBrew.Paginator.ListJobRuns
paginator = client.get_paginator('list_job_runs')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_job_runs().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    Name='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the job.

  • PaginationConfig (dict) --

    A dictionary that provides parameters to control pagination.

    • MaxItems (integer) --

      The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.

    • PageSize (integer) --

      The size of each page.

    • StartingToken (string) --

      A token to specify where to start paginating. This is the NextToken from a previous response.

Return type

dict

Returns

Response Syntax

{
    'JobRuns': [
        {
            'Attempt': 123,
            'CompletedOn': datetime(2015, 1, 1),
            'DatasetName': 'string',
            'ErrorMessage': 'string',
            'ExecutionTime': 123,
            'JobName': 'string',
            'RunId': 'string',
            'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT',
            'LogSubscription': 'ENABLE'|'DISABLE',
            'LogGroupName': 'string',
            'Outputs': [
                {
                    'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
                    'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
                    'PartitionColumns': [
                        'string',
                    ],
                    'Location': {
                        'Bucket': 'string',
                        'Key': 'string'
                    },
                    'Overwrite': True|False,
                    'FormatOptions': {
                        'Csv': {
                            'Delimiter': 'string'
                        }
                    }
                },
            ],
            'DataCatalogOutputs': [
                {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'S3Options': {
                        'Location': {
                            'Bucket': 'string',
                            'Key': 'string'
                        }
                    },
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'Overwrite': True|False
                },
            ],
            'DatabaseOutputs': [
                {
                    'GlueConnectionName': 'string',
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'DatabaseOutputMode': 'NEW_TABLE'
                },
            ],
            'RecipeReference': {
                'Name': 'string',
                'RecipeVersion': 'string'
            },
            'StartedBy': 'string',
            'StartedOn': datetime(2015, 1, 1),
            'JobSample': {
                'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
                'Size': 123
            }
        },
    ],

}

Response Structure

  • (dict) --

    • JobRuns (list) --

      A list of job runs that have occurred for the specified job.

      • (dict) --

        Represents one run of a DataBrew job.

        • Attempt (integer) --

          The number of times that DataBrew has attempted to run the job.

        • CompletedOn (datetime) --

          The date and time when the job completed processing.

        • DatasetName (string) --

          The name of the dataset for the job to process.

        • ErrorMessage (string) --

          A message indicating an error (if any) that was encountered when the job ran.

        • ExecutionTime (integer) --

          The amount of time, in seconds, during which a job run consumed resources.

        • JobName (string) --

          The name of the job being processed during this run.

        • RunId (string) --

          The unique identifier of the job run.

        • State (string) --

          The current state of the job run entity itself.

        • LogSubscription (string) --

          The current status of Amazon CloudWatch logging for the job run.

        • LogGroupName (string) --

          The name of an Amazon CloudWatch log group, where the job writes diagnostic messages when it runs.

        • Outputs (list) --

          One or more output artifacts from a job run.

          • (dict) --

            Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

            • CompressionFormat (string) --

              The compression algorithm used to compress the output text of the job.

            • Format (string) --

              The data format of the output of the job.

            • PartitionColumns (list) --

              The names of one or more partition columns for the output of the job.

              • (string) --
            • Location (dict) --

              The location in Amazon S3 where the job writes its output.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output.

            • FormatOptions (dict) --

              Represents options that define how DataBrew formats job output files.

              • Csv (dict) --

                Represents a set of options that define the structure of comma-separated value (CSV) job output.

                • Delimiter (string) --

                  A single character that specifies the delimiter used to create CSV job output.

        • DataCatalogOutputs (list) --

          One or more artifacts that represent the Glue Data Catalog output from running the job.

          • (dict) --

            Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

            • CatalogId (string) --

              The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

            • DatabaseName (string) --

              The name of a database in the Data Catalog.

            • TableName (string) --

              The name of a table in the Data Catalog.

            • S3Options (dict) --

              Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

              • Location (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

        • DatabaseOutputs (list) --

          Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write into.

          • (dict) --

            Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

            • GlueConnectionName (string) --

              The Glue connection that stores the connection information for the target database.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • DatabaseOutputMode (string) --

              The output mode to write into the database. Currently supported option: NEW_TABLE.

        • RecipeReference (dict) --

          The set of steps processed by the job.

          • Name (string) --

            The name of the recipe.

          • RecipeVersion (string) --

            The identifier for the version for the recipe.

        • StartedBy (string) --

          The Amazon Resource Name (ARN) of the user who initiated the job run.

        • StartedOn (datetime) --

          The date and time when the job run began.

        • JobSample (dict) --

          A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample value isn't provided, the default is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.

          • Mode (string) --

            A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

            • FULL_DATASET - The profile job is run on the entire dataset.
            • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
          • Size (integer) --

            The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

            Long.MAX_VALUE = 9223372036854775807
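
Example

A short sketch of listing every recorded run of a job with this paginator; the job name is hypothetical:

import boto3

client = boto3.client('databrew')

paginator = client.get_paginator('list_job_runs')

# Hypothetical job name -- substitute one of your own jobs.
for page in paginator.paginate(Name='my-recipe-job'):
    for run in page['JobRuns']:
        print(run['RunId'], run['State'])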

class GlueDataBrew.Paginator.ListJobs
paginator = client.get_paginator('list_jobs')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_jobs().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    DatasetName='string',
    ProjectName='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
  • DatasetName (string) -- The name of a dataset. Using this parameter indicates to return only those jobs that act on the specified dataset.
  • ProjectName (string) -- The name of a project. Using this parameter indicates to return only those jobs that are associated with the specified project.
  • PaginationConfig (dict) --

    A dictionary that provides parameters to control pagination.

    • MaxItems (integer) --

      The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.

    • PageSize (integer) --

      The size of each page.

    • StartingToken (string) --

      A token to specify where to start paginating. This is the NextToken from a previous response.

Return type

dict

Returns

Response Syntax

{
    'Jobs': [
        {
            'AccountId': 'string',
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'DatasetName': 'string',
            'EncryptionKeyArn': 'string',
            'EncryptionMode': 'SSE-KMS'|'SSE-S3',
            'Name': 'string',
            'Type': 'PROFILE'|'RECIPE',
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'LogSubscription': 'ENABLE'|'DISABLE',
            'MaxCapacity': 123,
            'MaxRetries': 123,
            'Outputs': [
                {
                    'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
                    'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
                    'PartitionColumns': [
                        'string',
                    ],
                    'Location': {
                        'Bucket': 'string',
                        'Key': 'string'
                    },
                    'Overwrite': True|False,
                    'FormatOptions': {
                        'Csv': {
                            'Delimiter': 'string'
                        }
                    }
                },
            ],
            'DataCatalogOutputs': [
                {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'S3Options': {
                        'Location': {
                            'Bucket': 'string',
                            'Key': 'string'
                        }
                    },
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'Overwrite': True|False
                },
            ],
            'DatabaseOutputs': [
                {
                    'GlueConnectionName': 'string',
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string'
                        },
                        'TableName': 'string'
                    },
                    'DatabaseOutputMode': 'NEW_TABLE'
                },
            ],
            'ProjectName': 'string',
            'RecipeReference': {
                'Name': 'string',
                'RecipeVersion': 'string'
            },
            'ResourceArn': 'string',
            'RoleArn': 'string',
            'Timeout': 123,
            'Tags': {
                'string': 'string'
            },
            'JobSample': {
                'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
                'Size': 123
            }
        },
    ],

}

Response Structure

  • (dict) --

    • Jobs (list) --

      A list of jobs that are defined.

      • (dict) --

        Represents all of the attributes of a DataBrew job.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the job.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the job.

        • CreateDate (datetime) --

          The date and time that the job was created.

        • DatasetName (string) --

          A dataset that the job is to process.

        • EncryptionKeyArn (string) --

          The Amazon Resource Name (ARN) of an encryption key that is used to protect the job output. For more information, see Encrypting data written by DataBrew jobs.

        • EncryptionMode (string) --

          The encryption mode for the job, which can be one of the following:

          • SSE-KMS - Server-side encryption with keys managed by KMS.
          • SSE-S3 - Server-side encryption with keys managed by Amazon S3.
        • Name (string) --

          The unique name of the job.

        • Type (string) --

          The job type of the job, which must be one of the following:

          • PROFILE - A job to analyze a dataset, to determine its size, data types, data distribution, and more.
          • RECIPE - A job to apply one or more transformations to a dataset.
        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the job.

        • LastModifiedDate (datetime) --

          The modification date and time of the job.

        • LogSubscription (string) --

          The current status of Amazon CloudWatch logging for the job.

        • MaxCapacity (integer) --

          The maximum number of nodes that can be consumed when the job processes data.

        • MaxRetries (integer) --

          The maximum number of times to retry the job after a job run fails.

        • Outputs (list) --

          One or more artifacts that represent output from running the job.

          • (dict) --

            Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.

            • CompressionFormat (string) --

              The compression algorithm used to compress the output text of the job.

            • Format (string) --

              The data format of the output of the job.

            • PartitionColumns (list) --

              The names of one or more partition columns for the output of the job.

              • (string) --
            • Location (dict) --

              The location in Amazon S3 where the job writes its output.

              • Bucket (string) --

                The Amazon S3 bucket name.

              • Key (string) --

                The unique name of the object in the bucket.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output.

            • FormatOptions (dict) --

              Represents options that define how DataBrew formats job output files.

              • Csv (dict) --

                Represents a set of options that define the structure of comma-separated value (CSV) job output.

                • Delimiter (string) --

                  A single character that specifies the delimiter used to create CSV job output.

        • DataCatalogOutputs (list) --

          One or more artifacts that represent the Glue Data Catalog output from running the job.

          • (dict) --

            Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.

            • CatalogId (string) --

              The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.

            • DatabaseName (string) --

              The name of a database in the Data Catalog.

            • TableName (string) --

              The name of a table in the Data Catalog.

            • S3Options (dict) --

              Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.

              • Location (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • Overwrite (boolean) --

              A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.

        • DatabaseOutputs (list) --

          Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write into.

          • (dict) --

            Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.

            • GlueConnectionName (string) --

              The Glue connection that stores the connection information for the target database.

            • DatabaseOptions (dict) --

              Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.

              • TempDirectory (dict) --

                Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.

                • Bucket (string) --

                  The Amazon S3 bucket name.

                • Key (string) --

                  The unique name of the object in the bucket.

              • TableName (string) --

                A prefix for the name of a table DataBrew will create in the database.

            • DatabaseOutputMode (string) --

              The output mode to write into the database. Currently supported option: NEW_TABLE.

        • ProjectName (string) --

          The name of the project that the job is associated with.

        • RecipeReference (dict) --

          A set of steps that the job runs.

          • Name (string) --

            The name of the recipe.

          • RecipeVersion (string) --

            The identifier for the version for the recipe.

        • ResourceArn (string) --

          The unique Amazon Resource Name (ARN) for the job.

        • RoleArn (string) --

          The Amazon Resource Name (ARN) of the role to be assumed for this job.

        • Timeout (integer) --

          The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.

        • Tags (dict) --

          Metadata tags that have been applied to the job.

          • (string) --
            • (string) --
        • JobSample (dict) --

          A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample value isn't provided, the default value is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.

          • Mode (string) --

            A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:

            • FULL_DATASET - The profile job is run on the entire dataset.
            • CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
          • Size (integer) --

            The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.

            Long.MAX_VALUE = 9223372036854775807
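
Example

A short sketch of listing the jobs associated with a single project; the project name is hypothetical, and omitting ProjectName lists all jobs:

import boto3

client = boto3.client('databrew')

paginator = client.get_paginator('list_jobs')

# Hypothetical project name -- omit ProjectName to list every job.
for page in paginator.paginate(ProjectName='my-project'):
    for job in page['Jobs']:
        print(job['Name'], job['Type'])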

class GlueDataBrew.Paginator.ListProjects
paginator = client.get_paginator('list_projects')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_projects().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
PaginationConfig (dict) --

A dictionary that provides parameters to control pagination.

  • MaxItems (integer) --

    The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.

  • PageSize (integer) --

    The size of each page.

  • StartingToken (string) --

    A token to specify where to start paginating. This is the NextToken from a previous response.

Return type
dict
Returns
Response Syntax
{
    'Projects': [
        {
            'AccountId': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'CreatedBy': 'string',
            'DatasetName': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'Name': 'string',
            'RecipeName': 'string',
            'ResourceArn': 'string',
            'Sample': {
                'Size': 123,
                'Type': 'FIRST_N'|'LAST_N'|'RANDOM'
            },
            'Tags': {
                'string': 'string'
            },
            'RoleArn': 'string',
            'OpenedBy': 'string',
            'OpenDate': datetime(2015, 1, 1)
        },
    ],

}

Response Structure

  • (dict) --
    • Projects (list) --

      A list of projects that are defined.

      • (dict) --

        Represents all of the attributes of a DataBrew project.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the project.

        • CreateDate (datetime) --

          The date and time that the project was created.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the project.

        • DatasetName (string) --

          The dataset that the project is to act upon.

        • LastModifiedDate (datetime) --

          The last modification date and time for the project.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the project.

        • Name (string) --

          The unique name of a project.

        • RecipeName (string) --

          The name of a recipe that will be developed during a project session.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) for the project.

        • Sample (dict) --

          The sample size and sampling type to apply to the data. If this parameter isn't specified, then the sample consists of the first 500 rows from the dataset.

          • Size (integer) --

            The number of rows in the sample.

          • Type (string) --

            The way in which DataBrew obtains rows from a dataset.

        • Tags (dict) --

          Metadata tags that have been applied to the project.

          • (string) --
            • (string) --
        • RoleArn (string) --

          The Amazon Resource Name (ARN) of the role that will be assumed for this project.

        • OpenedBy (string) --

          The Amazon Resource Name (ARN) of the user that opened the project for use.

        • OpenDate (datetime) --

          The date and time when the project was opened.
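
Example

A short sketch of iterating over every project with this paginator:

import boto3

client = boto3.client('databrew')

paginator = client.get_paginator('list_projects')

# Walk every page of results and print basic project details.
for page in paginator.paginate():
    for project in page['Projects']:
        print(project['Name'], project.get('DatasetName'))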

class GlueDataBrew.Paginator.ListRecipeVersions
paginator = client.get_paginator('list_recipe_versions')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_recipe_versions().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    Name='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
  • Name (string) --

    [REQUIRED]

    The name of the recipe for which to return version information.

  • PaginationConfig (dict) --

    A dictionary that provides parameters to control pagination.

    • MaxItems (integer) --

      The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.

    • PageSize (integer) --

      The size of each page.

    • StartingToken (string) --

      A token to specify where to start paginating. This is the NextToken from a previous response.

Return type

dict

Returns

Response Syntax

{
    'Recipes': [
        {
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'ProjectName': 'string',
            'PublishedBy': 'string',
            'PublishedDate': datetime(2015, 1, 1),
            'Description': 'string',
            'Name': 'string',
            'ResourceArn': 'string',
            'Steps': [
                {
                    'Action': {
                        'Operation': 'string',
                        'Parameters': {
                            'string': 'string'
                        }
                    },
                    'ConditionExpressions': [
                        {
                            'Condition': 'string',
                            'Value': 'string',
                            'TargetColumn': 'string'
                        },
                    ]
                },
            ],
            'Tags': {
                'string': 'string'
            },
            'RecipeVersion': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • Recipes (list) --

      A list of versions for the specified recipe.

      • (dict) --

        Represents one or more actions to be performed on a DataBrew dataset.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the recipe.

        • CreateDate (datetime) --

          The date and time that the recipe was created.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the recipe.

        • LastModifiedDate (datetime) --

          The last modification date and time of the recipe.

        • ProjectName (string) --

          The name of the project that the recipe is associated with.

        • PublishedBy (string) --

          The Amazon Resource Name (ARN) of the user who published the recipe.

        • PublishedDate (datetime) --

          The date and time when the recipe was published.

        • Description (string) --

          The description of the recipe.

        • Name (string) --

          The unique name for the recipe.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) for the recipe.

        • Steps (list) --

          A list of steps that are defined by the recipe.

          • (dict) --

            Represents a single step from a DataBrew recipe to be performed.

            • Action (dict) --

              The particular action to be performed in the recipe step.

              • Operation (string) --

                The name of a valid DataBrew transformation to be performed on the data.

              • Parameters (dict) --

                Contextual parameters for the transformation.

                • (string) --
                  • (string) --
            • ConditionExpressions (list) --

              One or more conditions that must be met for the recipe step to succeed.

              Note

              All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

              • (dict) --

                Represents an individual condition that evaluates to true or false.

                Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

                If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

                • Condition (string) --

                  A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide.

                • Value (string) --

                  A value that the condition must evaluate to for the condition to succeed.

                • TargetColumn (string) --

                  A column to apply this condition to.

        • Tags (dict) --

          Metadata tags that have been applied to the recipe.

          • (string) --
            • (string) --
        • RecipeVersion (string) --

          The identifier for the version of the recipe. Must be one of the following:

          • Numeric version (X.Y) - X and Y stand for the major and minor version numbers. The maximum length of each is 6 digits, and neither can be negative. Both X and Y are required, and "0.0" isn't a valid version.
          • LATEST_WORKING - the most recent valid version being developed in a DataBrew project.
          • LATEST_PUBLISHED - the most recent published version.
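
A minimal usage sketch for this paginator; the recipe name 'my-recipe' is a placeholder, not a value taken from this documentation:

import boto3

client = boto3.client('databrew')
paginator = client.get_paginator('list_recipe_versions')

for page in paginator.paginate(Name='my-recipe'):
    for recipe in page['Recipes']:
        # Each entry describes one stored version of the named recipe.
        print(recipe['RecipeVersion'], recipe.get('PublishedDate'))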

class GlueDataBrew.Paginator.ListRecipes
paginator = client.get_paginator('list_recipes')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_recipes().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    RecipeVersion='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
  • RecipeVersion (string) --

    Return only those recipes with a version identifier of LATEST_WORKING or LATEST_PUBLISHED. If RecipeVersion is omitted, ListRecipes returns all of the LATEST_PUBLISHED recipe versions.

    Valid values: LATEST_WORKING | LATEST_PUBLISHED

  • PaginationConfig (dict) --

    A dictionary that provides parameters to control pagination.

    • MaxItems (integer) --

      The total number of items to return. If the total number of items available is more than the value specified in MaxItems, then a NextToken will be provided in the output that you can use to resume pagination.

    • PageSize (integer) --

      The size of each page.

    • StartingToken (string) --

      A token to specify where to start paginating. This is the NextToken from a previous response.

Return type

dict

Returns

Response Syntax

{
    'Recipes': [
        {
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'ProjectName': 'string',
            'PublishedBy': 'string',
            'PublishedDate': datetime(2015, 1, 1),
            'Description': 'string',
            'Name': 'string',
            'ResourceArn': 'string',
            'Steps': [
                {
                    'Action': {
                        'Operation': 'string',
                        'Parameters': {
                            'string': 'string'
                        }
                    },
                    'ConditionExpressions': [
                        {
                            'Condition': 'string',
                            'Value': 'string',
                            'TargetColumn': 'string'
                        },
                    ]
                },
            ],
            'Tags': {
                'string': 'string'
            },
            'RecipeVersion': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • Recipes (list) --

      A list of recipes that are defined.

      • (dict) --

        Represents one or more actions to be performed on a DataBrew dataset.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the recipe.

        • CreateDate (datetime) --

          The date and time that the recipe was created.

        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the recipe.

        • LastModifiedDate (datetime) --

          The last modification date and time of the recipe.

        • ProjectName (string) --

          The name of the project that the recipe is associated with.

        • PublishedBy (string) --

          The Amazon Resource Name (ARN) of the user who published the recipe.

        • PublishedDate (datetime) --

          The date and time when the recipe was published.

        • Description (string) --

          The description of the recipe.

        • Name (string) --

          The unique name for the recipe.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) for the recipe.

        • Steps (list) --

          A list of steps that are defined by the recipe.

          • (dict) --

            Represents a single step from a DataBrew recipe to be performed.

            • Action (dict) --

              The particular action to be performed in the recipe step.

              • Operation (string) --

                The name of a valid DataBrew transformation to be performed on the data.

              • Parameters (dict) --

                Contextual parameters for the transformation.

                • (string) --
                  • (string) --
            • ConditionExpressions (list) --

              One or more conditions that must be met for the recipe step to succeed.

              Note

              All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.

              • (dict) --

                Represents an individual condition that evaluates to true or false.

                Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.

                If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.

                • Condition (string) --

                  A specific condition to apply to a recipe action. For more information, see Recipe structure in the Glue DataBrew Developer Guide.

                • Value (string) --

                  A value that the condition must evaluate to for the condition to succeed.

                • TargetColumn (string) --

                  A column to apply this condition to.

        • Tags (dict) --

          Metadata tags that have been applied to the recipe.

          • (string) --
            • (string) --
        • RecipeVersion (string) --

          The identifier for the version of the recipe. Must be one of the following:

          • Numeric version (X.Y) - X and Y stand for the major and minor version numbers. The maximum length of each is 6 digits, and neither can be negative. Both X and Y are required, and "0.0" isn't a valid version.
          • LATEST_WORKING - the most recent valid version being developed in a DataBrew project.
          • LATEST_PUBLISHED - the most recent published version.
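
A minimal usage sketch that combines the RecipeVersion filter with PaginationConfig; the MaxItems and PageSize values are arbitrary examples:

import boto3

client = boto3.client('databrew')
paginator = client.get_paginator('list_recipes')

pages = paginator.paginate(
    RecipeVersion='LATEST_WORKING',  # working versions instead of the default LATEST_PUBLISHED
    PaginationConfig={'MaxItems': 100, 'PageSize': 25}
)
for page in pages:
    for recipe in page['Recipes']:
        # Count the steps defined in each working recipe.
        print(recipe['Name'], len(recipe.get('Steps', [])))

# If MaxItems cut the iteration short, pages.resume_token holds the NextToken
# value to pass as StartingToken in a later paginate() call.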

class GlueDataBrew.Paginator.ListSchedules
paginator = client.get_paginator('list_schedules')
paginate(**kwargs)

Creates an iterator that will paginate through responses from GlueDataBrew.Client.list_schedules().

See also: AWS API Documentation

Request Syntax

response_iterator = paginator.paginate(
    JobName='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
Parameters
  • JobName (string) -- The name of the job that these schedules apply to.
  • PaginationConfig (dict) --

    A dictionary that provides parameters to control pagination.

    • MaxItems (integer) --

      The total number of items to return. If the total number of items available is more than the value specified in MaxItems, then a NextToken will be provided in the output that you can use to resume pagination.

    • PageSize (integer) --

      The size of each page.

    • StartingToken (string) --

      A token to specify where to start paginating. This is the NextToken from a previous response.

Return type

dict

Returns

Response Syntax

{
    'Schedules': [
        {
            'AccountId': 'string',
            'CreatedBy': 'string',
            'CreateDate': datetime(2015, 1, 1),
            'JobNames': [
                'string',
            ],
            'LastModifiedBy': 'string',
            'LastModifiedDate': datetime(2015, 1, 1),
            'ResourceArn': 'string',
            'CronExpression': 'string',
            'Tags': {
                'string': 'string'
            },
            'Name': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • Schedules (list) --

      A list of schedules that are defined.

      • (dict) --

        Represents one or more dates and times when a job is to run.

        • AccountId (string) --

          The ID of the Amazon Web Services account that owns the schedule.

        • CreatedBy (string) --

          The Amazon Resource Name (ARN) of the user who created the schedule.

        • CreateDate (datetime) --

          The date and time that the schedule was created.

        • JobNames (list) --

          A list of jobs to be run, according to the schedule.

          • (string) --
        • LastModifiedBy (string) --

          The Amazon Resource Name (ARN) of the user who last modified the schedule.

        • LastModifiedDate (datetime) --

          The date and time when the schedule was last modified.

        • ResourceArn (string) --

          The Amazon Resource Name (ARN) of the schedule.

        • CronExpression (string) --

          The dates and times when the job is to run. For more information, see Cron expressions in the Glue DataBrew Developer Guide.

        • Tags (dict) --

          Metadata tags that have been applied to the schedule.

          • (string) --
            • (string) --
        • Name (string) --

          The name of the schedule.
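
A minimal usage sketch for this paginator; omitting JobName lists every schedule, while passing a job name (the name below is a placeholder) restricts the results to schedules for that job:

import boto3

client = boto3.client('databrew')
paginator = client.get_paginator('list_schedules')

for page in paginator.paginate():  # or paginator.paginate(JobName='my-job')
    for schedule in page['Schedules']:
        print(schedule['Name'], schedule.get('CronExpression'), schedule['JobNames'])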