list_job_runs
- GlueDataBrew.Client.list_job_runs(**kwargs)
Lists all of the previous runs of a particular DataBrew job.
See also: AWS API Documentation
Request Syntax
response = client.list_job_runs(
    Name='string',
    MaxResults=123,
    NextToken='string'
)
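For reference, a minimal sketch of collecting every run of a job by feeding each response’s NextToken back into the next call; the job name and page size are hypothetical placeholders:

    import boto3

    client = boto3.client('databrew')

    runs = []
    kwargs = {'Name': 'databrew-example-job', 'MaxResults': 100}  # placeholder values
    while True:
        response = client.list_job_runs(**kwargs)
        runs.extend(response.get('JobRuns', []))
        token = response.get('NextToken')
        if not token:
            # No NextToken means there are no further pages of results.
            break
        kwargs['NextToken'] = token

    print(f'Retrieved {len(runs)} runs')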
- Parameters:
Name (string) –
[REQUIRED]
The name of the job.
MaxResults (integer) – The maximum number of results to return in this request.
NextToken (string) – The token returned by a previous call to retrieve the next set of results.
- Return type:
dict
- Returns:
Response Syntax
{
    'JobRuns': [
        {
            'Attempt': 123,
            'CompletedOn': datetime(2015, 1, 1),
            'DatasetName': 'string',
            'ErrorMessage': 'string',
            'ExecutionTime': 123,
            'JobName': 'string',
            'RunId': 'string',
            'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT',
            'LogSubscription': 'ENABLE'|'DISABLE',
            'LogGroupName': 'string',
            'Outputs': [
                {
                    'CompressionFormat': 'GZIP'|'LZ4'|'SNAPPY'|'BZIP2'|'DEFLATE'|'LZO'|'BROTLI'|'ZSTD'|'ZLIB',
                    'Format': 'CSV'|'JSON'|'PARQUET'|'GLUEPARQUET'|'AVRO'|'ORC'|'XML'|'TABLEAUHYPER',
                    'PartitionColumns': [
                        'string',
                    ],
                    'Location': {
                        'Bucket': 'string',
                        'Key': 'string',
                        'BucketOwner': 'string'
                    },
                    'Overwrite': True|False,
                    'FormatOptions': {
                        'Csv': {
                            'Delimiter': 'string'
                        }
                    },
                    'MaxOutputFiles': 123
                },
            ],
            'DataCatalogOutputs': [
                {
                    'CatalogId': 'string',
                    'DatabaseName': 'string',
                    'TableName': 'string',
                    'S3Options': {
                        'Location': {
                            'Bucket': 'string',
                            'Key': 'string',
                            'BucketOwner': 'string'
                        }
                    },
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string',
                            'BucketOwner': 'string'
                        },
                        'TableName': 'string'
                    },
                    'Overwrite': True|False
                },
            ],
            'DatabaseOutputs': [
                {
                    'GlueConnectionName': 'string',
                    'DatabaseOptions': {
                        'TempDirectory': {
                            'Bucket': 'string',
                            'Key': 'string',
                            'BucketOwner': 'string'
                        },
                        'TableName': 'string'
                    },
                    'DatabaseOutputMode': 'NEW_TABLE'
                },
            ],
            'RecipeReference': {
                'Name': 'string',
                'RecipeVersion': 'string'
            },
            'StartedBy': 'string',
            'StartedOn': datetime(2015, 1, 1),
            'JobSample': {
                'Mode': 'FULL_DATASET'|'CUSTOM_ROWS',
                'Size': 123
            },
            'ValidationConfigurations': [
                {
                    'RulesetArn': 'string',
                    'ValidationMode': 'CHECK_ALL'
                },
            ]
        },
    ],
    'NextToken': 'string'
}
Response Structure
(dict) –
JobRuns (list) –
A list of job runs that have occurred for the specified job.
(dict) –
Represents one run of a DataBrew job.
Attempt (integer) –
The number of times that DataBrew has attempted to run the job.
CompletedOn (datetime) –
The date and time when the job completed processing.
DatasetName (string) –
The name of the dataset for the job to process.
ErrorMessage (string) –
A message indicating an error (if any) that was encountered when the job ran.
ExecutionTime (integer) –
The amount of time, in seconds, during which a job run consumed resources.
JobName (string) –
The name of the job being processed during this run.
RunId (string) –
The unique identifier of the job run.
State (string) –
The current state of the job run entity itself.
LogSubscription (string) –
The current status of Amazon CloudWatch logging for the job run.
LogGroupName (string) –
The name of an Amazon CloudWatch log group, where the job writes diagnostic messages when it runs.
Outputs (list) –
One or more output artifacts from a job run.
(dict) –
Represents options that specify how and where in Amazon S3 DataBrew writes the output generated by recipe jobs or profile jobs.
CompressionFormat (string) –
The compression algorithm used to compress the output text of the job.
Format (string) –
The data format of the output of the job.
PartitionColumns (list) –
The names of one or more partition columns for the output of the job.
(string) –
Location (dict) –
The location in Amazon S3 where the job writes its output.
Bucket (string) –
The Amazon S3 bucket name.
Key (string) –
The unique name of the object in the bucket.
BucketOwner (string) –
The Amazon Web Services account ID of the bucket owner.
Overwrite (boolean) –
A value that, if true, means that any data in the location specified for output is overwritten with new output.
FormatOptions (dict) –
Represents options that define how DataBrew formats job output files.
Csv (dict) –
Represents a set of options that define the structure of comma-separated value (CSV) job output.
Delimiter (string) –
A single character that specifies the delimiter used to create CSV job output.
MaxOutputFiles (integer) –
Maximum number of files to be generated by the job and written to the output folder. For output partitioned by column(s), the MaxOutputFiles value is the maximum number of files per partition.
DataCatalogOutputs (list) –
One or more artifacts that represent the Glue Data Catalog output from running the job.
(dict) –
Represents options that specify how and where in the Glue Data Catalog DataBrew writes the output generated by recipe jobs.
CatalogId (string) –
The unique identifier of the Amazon Web Services account that holds the Data Catalog that stores the data.
DatabaseName (string) –
The name of a database in the Data Catalog.
TableName (string) –
The name of a table in the Data Catalog.
S3Options (dict) –
Represents options that specify how and where DataBrew writes the Amazon S3 output generated by recipe jobs.
Location (dict) –
Represents an Amazon S3 location (bucket name and object key) where DataBrew can write output from a job.
Bucket (string) –
The Amazon S3 bucket name.
Key (string) –
The unique name of the object in the bucket.
BucketOwner (string) –
The Amazon Web Services account ID of the bucket owner.
DatabaseOptions (dict) –
Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.
TempDirectory (dict) –
Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.
Bucket (string) –
The Amazon S3 bucket name.
Key (string) –
The unique name of the object in the bucket.
BucketOwner (string) –
The Amazon Web Services account ID of the bucket owner.
TableName (string) –
A prefix for the name of a table DataBrew will create in the database.
Overwrite (boolean) –
A value that, if true, means that any data in the location specified for output is overwritten with new output. Not supported with DatabaseOptions.
DatabaseOutputs (list) –
Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write into.
(dict) –
Represents a JDBC database output object which defines the output destination for a DataBrew recipe job to write into.
GlueConnectionName (string) –
The Glue connection that stores the connection information for the target database.
DatabaseOptions (dict) –
Represents options that specify how and where DataBrew writes the database output generated by recipe jobs.
TempDirectory (dict) –
Represents an Amazon S3 location (bucket name and object key) where DataBrew can store intermediate results.
Bucket (string) –
The Amazon S3 bucket name.
Key (string) –
The unique name of the object in the bucket.
BucketOwner (string) –
The Amazon Web Services account ID of the bucket owner.
TableName (string) –
A prefix for the name of a table DataBrew will create in the database.
DatabaseOutputMode (string) –
The output mode to write into the database. Currently supported option: NEW_TABLE.
RecipeReference (dict) –
The set of steps processed by the job.
Name (string) –
The name of the recipe.
RecipeVersion (string) –
The identifier for the version of the recipe.
StartedBy (string) –
The Amazon Resource Name (ARN) of the user who initiated the job run.
StartedOn (datetime) –
The date and time when the job run began.
JobSample (dict) –
A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample value isn’t provided, the default is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter. (A short sketch that interprets these fields follows the response structure below.)
Mode (string) –
A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:
FULL_DATASET - The profile job is run on the entire dataset.
CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size parameter.
Size (integer) –
The Size parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.
Long.MAX_VALUE = 9223372036854775807
ValidationConfigurations (list) –
List of validation configurations that are applied to the profile job run.
(dict) –
Configuration for data quality validation. Used to select the Rulesets and Validation Mode to be used in the profile job. When ValidationConfiguration is null, the profile job will run without data quality validation.
RulesetArn (string) –
The Amazon Resource Name (ARN) for the ruleset to be validated in the profile job. The TargetArn of the selected ruleset should be the same as the Amazon Resource Name (ARN) of the dataset that is associated with the profile job.
ValidationMode (string) –
Mode of data quality validation. The default mode is “CHECK_ALL”, which verifies all rules defined in the selected ruleset.
NextToken (string) –
A token that you can use in a subsequent call to retrieve the next set of results.
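For profile job runs, a minimal sketch of interpreting the JobSample and RecipeReference fields described above; the job name is a hypothetical placeholder, and the documented defaults are applied when JobSample is absent:

    import boto3

    client = boto3.client('databrew')
    response = client.list_job_runs(Name='databrew-example-profile-job')  # placeholder job name

    for run in response['JobRuns']:
        sample = run.get('JobSample') or {}
        mode = sample.get('Mode', 'CUSTOM_ROWS')  # documented default mode
        size = sample.get('Size', 20000)          # documented default size
        if mode == 'FULL_DATASET':
            print(f"Run {run['RunId']}: profiled the entire dataset")
        else:
            print(f"Run {run['RunId']}: profiled {size} rows")
        recipe = run.get('RecipeReference')
        if recipe:
            # Present only for recipe jobs; identifies the recipe and version that ran.
            print(f"  recipe: {recipe['Name']} (version {recipe.get('RecipeVersion')})")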
Exceptions
GlueDataBrew.Client.exceptions.ResourceNotFoundException
GlueDataBrew.Client.exceptions.ValidationException
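As a companion to the response structure and exceptions above, a minimal sketch that prints each run’s state and any Amazon S3 output locations and handles the documented exceptions; the job name is a hypothetical placeholder:

    import boto3

    client = boto3.client('databrew')

    try:
        response = client.list_job_runs(Name='databrew-example-job')  # placeholder job name
    except client.exceptions.ResourceNotFoundException:
        print('No DataBrew job with that name was found.')
    except client.exceptions.ValidationException as err:
        print(f'The request was not valid: {err}')
    else:
        for run in response['JobRuns']:
            print(f"Run {run['RunId']}: state={run['State']}, attempt={run.get('Attempt')}")
            for output in run.get('Outputs', []):
                loc = output.get('Location', {})
                print(f"  {output.get('Format')} output at s3://{loc.get('Bucket')}/{loc.get('Key')}")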