describe_job_template

EMRContainers.Client.describe_job_template(**kwargs)

Displays detailed information about a specified job template. A job template stores the values of a StartJobRun API request in a template and can be used to start a job run. Job templates support two use cases: avoiding repetition of recurring StartJobRun API request values, and enforcing certain values in a StartJobRun API request.

See also: AWS API Documentation

Request Syntax

response = client.describe_job_template(
    id='string'
)
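
For example, a minimal call might look like the sketch below. The Region, credentials setup, and job template ID are placeholders and will differ in practice.

import boto3

# Create an EMR Containers client (Region is a placeholder).
client = boto3.client('emr-containers', region_name='us-east-1')

# Describe a job template by its ID (placeholder value).
response = client.describe_job_template(id='abcdef0123456789')

template = response['jobTemplate']
print(template['name'])
print(template['jobTemplateData']['releaseLabel'])
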
Parameters:

id (string) –

[REQUIRED]

The ID of the job template that will be described.

Return type:

dict

Returns:

Response Syntax

{
    'jobTemplate': {
        'name': 'string',
        'id': 'string',
        'arn': 'string',
        'createdAt': datetime(2015, 1, 1),
        'createdBy': 'string',
        'tags': {
            'string': 'string'
        },
        'jobTemplateData': {
            'executionRoleArn': 'string',
            'releaseLabel': 'string',
            'configurationOverrides': {
                'applicationConfiguration': [
                    {
                        'classification': 'string',
                        'properties': {
                            'string': 'string'
                        },
                        'configurations': {'... recursive ...'}
                    },
                ],
                'monitoringConfiguration': {
                    'persistentAppUI': 'string',
                    'cloudWatchMonitoringConfiguration': {
                        'logGroupName': 'string',
                        'logStreamNamePrefix': 'string'
                    },
                    's3MonitoringConfiguration': {
                        'logUri': 'string'
                    }
                }
            },
            'jobDriver': {
                'sparkSubmitJobDriver': {
                    'entryPoint': 'string',
                    'entryPointArguments': [
                        'string',
                    ],
                    'sparkSubmitParameters': 'string'
                },
                'sparkSqlJobDriver': {
                    'entryPoint': 'string',
                    'sparkSqlParameters': 'string'
                }
            },
            'parameterConfiguration': {
                'string': {
                    'type': 'NUMBER'|'STRING',
                    'defaultValue': 'string'
                }
            },
            'jobTags': {
                'string': 'string'
            }
        },
        'kmsKeyArn': 'string',
        'decryptionError': 'string'
    }
}

Response Structure

  • (dict) –

    • jobTemplate (dict) –

      This output displays information about the specified job template.

      • name (string) –

        The name of the job template.

      • id (string) –

        The ID of the job template.

      • arn (string) –

        The ARN of the job template.

      • createdAt (datetime) –

        The date and time when the job template was created.

      • createdBy (string) –

        The user who created the job template.

      • tags (dict) –

        The tags assigned to the job template.

        • (string) –

          • (string) –

      • jobTemplateData (dict) –

        The job template data, which holds the values of a StartJobRun API request.

        • executionRoleArn (string) –

          The execution role ARN of the job run.

        • releaseLabel (string) –

          The release version of Amazon EMR.

        • configurationOverrides (dict) –

          The configuration settings that are used to override the default configurations.

          • applicationConfiguration (list) –

            The configurations for the application run by the job run.

            • (dict) –

              A configuration specification to be used when provisioning virtual clusters, which can include configurations for applications and software bundled with Amazon EMR on EKS. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file.

              • classification (string) –

                The classification within a configuration.

              • properties (dict) –

                A set of properties specified within a configuration classification.

                • (string) –

                  • (string) –

              • configurations (list) –

                A list of additional configurations to apply within a configuration object.

          • monitoringConfiguration (dict) –

            The configurations for monitoring.

            • persistentAppUI (string) –

              Monitoring configurations for the persistent application UI.

            • cloudWatchMonitoringConfiguration (dict) –

              Monitoring configurations for CloudWatch.

              • logGroupName (string) –

                The name of the log group for log publishing.

              • logStreamNamePrefix (string) –

                The specified name prefix for log streams.

            • s3MonitoringConfiguration (dict) –

              Amazon S3 configuration for monitoring log publishing.

              • logUri (string) –

                Amazon S3 destination URI for log publishing.

        • jobDriver (dict) –

          The driver that the job runs on. Exactly one of the two available job drivers is required: either sparkSqlJobDriver or sparkSubmitJobDriver (see the sketch following this response structure).

          • sparkSubmitJobDriver (dict) –

            The job driver parameters specified for spark submit.

            • entryPoint (string) –

              The entry point of the job application.

            • entryPointArguments (list) –

              The arguments for the job application.

              • (string) –

            • sparkSubmitParameters (string) –

              The Spark submit parameters that are used for job runs.

          • sparkSqlJobDriver (dict) –

            The job driver for the Spark SQL job type.

            • entryPoint (string) –

              The SQL file to be executed.

            • sparkSqlParameters (string) –

              The Spark parameters to be included in the Spark SQL command.

        • parameterConfiguration (dict) –

          The configuration of the parameters defined in the job template.

          • (string) –

            • (dict) –

              The configuration of a job template parameter.

              • type (string) –

                The type of the job template parameter. Allowed values are: 'STRING', 'NUMBER'.

              • defaultValue (string) –

                The default value for the job template parameter.

        • jobTags (dict) –

          The tags assigned to jobs started using the job template.

          • (string) –

            • (string) –

      • kmsKeyArn (string) –

        The KMS key ARN used to encrypt the job template.

      • decryptionError (string) –

        The error message returned in case the decryption of the job template fails.
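
As a rough sketch of how the nested jobTemplateData can be read once a response is in hand, the following reuses the client and response from the earlier example; the exact keys present depend on how the template was created.

# 'response' is the dict returned by describe_job_template in the earlier sketch.
data = response['jobTemplate']['jobTemplateData']

# Exactly one job driver is present: spark-submit or Spark SQL.
driver = data['jobDriver']
if 'sparkSubmitJobDriver' in driver:
    print('spark-submit entry point:', driver['sparkSubmitJobDriver']['entryPoint'])
elif 'sparkSqlJobDriver' in driver:
    print('Spark SQL entry point:', driver['sparkSqlJobDriver']['entryPoint'])

# Template parameters, if any, are keyed by name with a type and optional default value.
for name, cfg in data.get('parameterConfiguration', {}).items():
    print(name, cfg['type'], cfg.get('defaultValue'))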

Exceptions

  • EMRContainers.Client.exceptions.ValidationException

  • EMRContainers.Client.exceptions.ResourceNotFoundException

  • EMRContainers.Client.exceptions.InternalServerException
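
These exceptions are exposed as attributes of the client object. A minimal handling sketch, reusing the client and placeholder ID from above:

try:
    response = client.describe_job_template(id='abcdef0123456789')
except client.exceptions.ResourceNotFoundException:
    print('No job template with that ID was found.')
except client.exceptions.ValidationException as err:
    print('The request was rejected as invalid:', err)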