EMRContainers / Client / list_job_runs



Lists job runs based on a set of parameters. A job run is a unit of work, such as a Spark jar, PySpark script, or SparkSQL query, that you submit to Amazon EMR on EKS.

See also: AWS API Documentation

Request Syntax

response = client.list_job_runs(
    virtualClusterId='string',
    createdBefore=datetime(2015, 1, 1),
    createdAfter=datetime(2015, 1, 1),
    name='string',
    states=[
        'PENDING'|'SUBMITTED'|'RUNNING'|'FAILED'|'CANCELLED'|'CANCEL_PENDING'|'COMPLETED',
    ],
    maxResults=123,
    nextToken='string'
)

Parameters:

  • virtualClusterId (string) –

    [REQUIRED]

    The ID of the virtual cluster for which to list the job runs.

  • createdBefore (datetime) – The date and time before which the job runs were submitted.

  • createdAfter (datetime) – The date and time after which the job runs were submitted.

  • name (string) – The name of the job run.

  • states (list) –

    The states of the job run.

    • (string) –

  • maxResults (integer) – The maximum number of job runs that can be listed.

  • nextToken (string) – The token for the next set of job runs to return.
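Since all parameters other than virtualClusterId are optional, and passing None for an unset boto3 parameter is invalid, one common pattern is to assemble only the filters the caller actually set. A minimal sketch (the helper name, cluster ID, and filter values are assumptions, not part of this reference):

```python
from datetime import datetime, timezone

def build_list_job_runs_kwargs(virtual_cluster_id, created_after=None,
                               created_before=None, name=None, states=None,
                               max_results=None, next_token=None):
    """Assemble keyword arguments for list_job_runs, omitting unset filters."""
    kwargs = {'virtualClusterId': virtual_cluster_id}
    if created_after is not None:
        kwargs['createdAfter'] = created_after
    if created_before is not None:
        kwargs['createdBefore'] = created_before
    if name is not None:
        kwargs['name'] = name
    if states is not None:
        kwargs['states'] = states
    if max_results is not None:
        kwargs['maxResults'] = max_results
    if next_token is not None:
        kwargs['nextToken'] = next_token
    return kwargs

# Hypothetical values; the real call would then be:
#   response = client.list_job_runs(**kwargs)
kwargs = build_list_job_runs_kwargs(
    'abc123',
    created_after=datetime(2015, 1, 1, tzinfo=timezone.utc),
    states=['RUNNING', 'COMPLETED'],
)
```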

Return type:

  dict

Returns:
Response Syntax

{
    'jobRuns': [
        {
            'id': 'string',
            'name': 'string',
            'virtualClusterId': 'string',
            'arn': 'string',
            'state': 'PENDING'|'SUBMITTED'|'RUNNING'|'FAILED'|'CANCELLED'|'CANCEL_PENDING'|'COMPLETED',
            'clientToken': 'string',
            'executionRoleArn': 'string',
            'releaseLabel': 'string',
            'configurationOverrides': {
                'applicationConfiguration': [
                    {
                        'classification': 'string',
                        'properties': {
                            'string': 'string'
                        },
                        'configurations': {'... recursive ...'}
                    },
                ],
                'monitoringConfiguration': {
                    'persistentAppUI': 'ENABLED'|'DISABLED',
                    'cloudWatchMonitoringConfiguration': {
                        'logGroupName': 'string',
                        'logStreamNamePrefix': 'string'
                    },
                    's3MonitoringConfiguration': {
                        'logUri': 'string'
                    },
                    'containerLogRotationConfiguration': {
                        'rotationSize': 'string',
                        'maxFilesToKeep': 123
                    }
                }
            },
            'jobDriver': {
                'sparkSubmitJobDriver': {
                    'entryPoint': 'string',
                    'entryPointArguments': [
                        'string',
                    ],
                    'sparkSubmitParameters': 'string'
                },
                'sparkSqlJobDriver': {
                    'entryPoint': 'string',
                    'sparkSqlParameters': 'string'
                }
            },
            'createdAt': datetime(2015, 1, 1),
            'createdBy': 'string',
            'finishedAt': datetime(2015, 1, 1),
            'stateDetails': 'string',
            'failureReason': 'INTERNAL_ERROR'|'USER_ERROR'|'VALIDATION_ERROR'|'CLUSTER_UNAVAILABLE',
            'tags': {
                'string': 'string'
            },
            'retryPolicyConfiguration': {
                'maxAttempts': 123
            },
            'retryPolicyExecution': {
                'currentAttemptCount': 123
            }
        },
    ],
    'nextToken': 'string'
}
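A response shaped like the Response Syntax above can be consumed directly from the returned dict. This sketch (the sample job runs are invented for illustration) counts runs by state and collects the IDs of failed runs:

```python
from collections import Counter
from datetime import datetime

# Invented, abbreviated sample shaped like the Response Syntax above.
response = {
    'jobRuns': [
        {'id': '00000001', 'name': 'etl-daily', 'state': 'COMPLETED',
         'createdAt': datetime(2015, 1, 1)},
        {'id': '00000002', 'name': 'etl-daily', 'state': 'FAILED',
         'stateDetails': 'driver pod terminated',
         'createdAt': datetime(2015, 1, 2)},
    ],
    'nextToken': 'string',
}

# Tally job runs by state and pick out the failed ones.
state_counts = Counter(run['state'] for run in response['jobRuns'])
failed_ids = [run['id'] for run in response['jobRuns']
              if run['state'] == 'FAILED']
```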

Response Structure

  • (dict) –

    • jobRuns (list) –

      This output lists information about the specified job runs.

      • (dict) –

        This entity describes a job run. A job run is a unit of work, such as a Spark jar, PySpark script, or SparkSQL query, that you submit to Amazon EMR on EKS.

        • id (string) –

          The ID of the job run.

        • name (string) –

          The name of the job run.

        • virtualClusterId (string) –

          The ID of the job run’s virtual cluster.

        • arn (string) –

The ARN of the job run.

        • state (string) –

          The state of the job run.

        • clientToken (string) –

          The client token used to start a job run.

        • executionRoleArn (string) –

          The execution role ARN of the job run.

        • releaseLabel (string) –

          The release version of Amazon EMR.

        • configurationOverrides (dict) –

The configuration settings used to override the default configurations.

          • applicationConfiguration (list) –

The configurations for the application run by the job run.

            • (dict) –

              A configuration specification to be used when provisioning virtual clusters, which can include configurations for applications and software bundled with Amazon EMR on EKS. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file.

              • classification (string) –

                The classification within a configuration.

              • properties (dict) –

                A set of properties specified within a configuration classification.

                • (string) –

                  • (string) –

              • configurations (list) –

                A list of additional configurations to apply within a configuration object.

          • monitoringConfiguration (dict) –

            The configurations for monitoring.

            • persistentAppUI (string) –

              Monitoring configurations for the persistent application UI.

            • cloudWatchMonitoringConfiguration (dict) –

              Monitoring configurations for CloudWatch.

              • logGroupName (string) –

                The name of the log group for log publishing.

              • logStreamNamePrefix (string) –

                The specified name prefix for log streams.

            • s3MonitoringConfiguration (dict) –

              Amazon S3 configuration for monitoring log publishing.

              • logUri (string) –

                Amazon S3 destination URI for log publishing.

            • containerLogRotationConfiguration (dict) –

The settings for container log rotation.

              • rotationSize (string) –

The file size at which to rotate logs. Minimum of 2 KB, maximum of 2 GB.

              • maxFilesToKeep (integer) –

The number of files to keep in the container after rotation.

        • jobDriver (dict) –

The parameters of the job driver for the job run.

          • sparkSubmitJobDriver (dict) –

The job driver parameters specified for Spark submit.

            • entryPoint (string) –

The entry point of the job application.

            • entryPointArguments (list) –

The arguments for the job application.

              • (string) –

            • sparkSubmitParameters (string) –

              The Spark submit parameters that are used for job runs.

          • sparkSqlJobDriver (dict) –

The job driver parameters specified for Spark SQL.

            • entryPoint (string) –

              The SQL file to be executed.

            • sparkSqlParameters (string) –

              The Spark parameters to be included in the Spark SQL command.

        • createdAt (datetime) –

          The date and time when the job run was created.

        • createdBy (string) –

          The user who created the job run.

        • finishedAt (datetime) –

The date and time when the job run finished.

        • stateDetails (string) –

          Additional details of the job run state.

        • failureReason (string) –

The reason why the job run failed.

        • tags (dict) –

          The assigned tags of the job run.

          • (string) –

            • (string) –

        • retryPolicyConfiguration (dict) –

The retry policy configuration applied to the job run.

          • maxAttempts (integer) –

            The maximum number of attempts on the job’s driver.

        • retryPolicyExecution (dict) –

          The current status of the retry policy executed on the job.

          • currentAttemptCount (integer) –

            The current number of attempts made on the driver of the job.

    • nextToken (string) –

      This output displays the token for the next set of job runs.
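Because results are paginated, callers typically follow nextToken until it is absent. A minimal sketch (the iter_job_runs helper is an assumption, not part of the API; boto3 also exposes a built-in paginator via client.get_paginator('list_job_runs')):

```python
def iter_job_runs(client, virtual_cluster_id, **filters):
    """Yield every job run for the cluster, following nextToken pages."""
    kwargs = {'virtualClusterId': virtual_cluster_id, **filters}
    while True:
        page = client.list_job_runs(**kwargs)
        yield from page.get('jobRuns', [])
        token = page.get('nextToken')
        if not token:
            return  # no more pages
        kwargs['nextToken'] = token

# Demonstration with a fake two-page client (no AWS call is made).
class _FakeClient:
    def list_job_runs(self, **kwargs):
        if 'nextToken' not in kwargs:
            return {'jobRuns': [{'id': 'run-1'}], 'nextToken': 'page-2'}
        return {'jobRuns': [{'id': 'run-2'}]}

runs = list(iter_job_runs(_FakeClient(), 'vc-example'))
```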


Exceptions

  • EMRContainers.Client.exceptions.ValidationException

  • EMRContainers.Client.exceptions.InternalServerException