submit_job(**kwargs)
Submits a Batch job from a job definition. Parameters that are specified during SubmitJob override parameters defined in the job definition. vCPU and memory requirements that are specified in the resourceRequirements objects in the job definition are the exception. They can't be overridden this way using the memory and vcpus parameters. Rather, you must specify updates to job definition parameters in a resourceRequirements object that's included in the containerOverrides parameter.
Note
Job queues with a scheduling policy are limited to 500 active fair share identifiers at a time.
Warning
Jobs that run on Fargate resources can't be guaranteed to run for more than 14 days. This is because, after 14 days, Fargate resources might become unavailable and the job might be terminated.
See also: AWS API Documentation
Request Syntax
response = client.submit_job(
    jobName='string',
    jobQueue='string',
    shareIdentifier='string',
    schedulingPriorityOverride=123,
    arrayProperties={
        'size': 123
    },
    dependsOn=[
        {
            'jobId': 'string',
            'type': 'N_TO_N'|'SEQUENTIAL'
        },
    ],
    jobDefinition='string',
    parameters={
        'string': 'string'
    },
    containerOverrides={
        'vcpus': 123,
        'memory': 123,
        'command': [
            'string',
        ],
        'instanceType': 'string',
        'environment': [
            {
                'name': 'string',
                'value': 'string'
            },
        ],
        'resourceRequirements': [
            {
                'value': 'string',
                'type': 'GPU'|'VCPU'|'MEMORY'
            },
        ]
    },
    nodeOverrides={
        'numNodes': 123,
        'nodePropertyOverrides': [
            {
                'targetNodes': 'string',
                'containerOverrides': {
                    'vcpus': 123,
                    'memory': 123,
                    'command': [
                        'string',
                    ],
                    'instanceType': 'string',
                    'environment': [
                        {
                            'name': 'string',
                            'value': 'string'
                        },
                    ],
                    'resourceRequirements': [
                        {
                            'value': 'string',
                            'type': 'GPU'|'VCPU'|'MEMORY'
                        },
                    ]
                }
            },
        ]
    },
    retryStrategy={
        'attempts': 123,
        'evaluateOnExit': [
            {
                'onStatusReason': 'string',
                'onReason': 'string',
                'onExitCode': 'string',
                'action': 'RETRY'|'EXIT'
            },
        ]
    },
    propagateTags=True|False,
    timeout={
        'attemptDurationSeconds': 123
    },
    tags={
        'string': 'string'
    },
    eksPropertiesOverride={
        'podProperties': {
            'containers': [
                {
                    'image': 'string',
                    'command': [
                        'string',
                    ],
                    'args': [
                        'string',
                    ],
                    'env': [
                        {
                            'name': 'string',
                            'value': 'string'
                        },
                    ],
                    'resources': {
                        'limits': {
                            'string': 'string'
                        },
                        'requests': {
                            'string': 'string'
                        }
                    }
                },
            ]
        }
    }
)
[REQUIRED]
The name of the job. It can be up to 128 letters long. The first character must be alphanumeric, and the name can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
[REQUIRED]
The job queue where the job is submitted. You can specify either the name or the Amazon Resource Name (ARN) of the queue.
The scheduling priority for the job. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority. This overrides any scheduling priority in the job definition.
The minimum supported value is 0 and the maximum supported value is 9999.
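For queues that use fair share scheduling, shareIdentifier and schedulingPriorityOverride can be supplied together. A minimal sketch, assuming a queue that has a scheduling policy attached; the job, queue, definition, and share identifier names are hypothetical:
import boto3

batch = boto3.client('batch')

# The queue is assumed to use a fair share scheduling policy; otherwise
# shareIdentifier is rejected.
response = batch.submit_job(
    jobName='analytics-job',
    jobQueue='fairshare-queue',
    jobDefinition='analytics-def:3',
    shareIdentifier='teamA',          # fair share identifier for this job
    schedulingPriorityOverride=500,   # 0-9999; overrides the definition's priority
)
print(response['jobId'])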
The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. For more information, see Array Jobs in the Batch User Guide.
The size of the array job.
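Supplying arrayProperties with a size between 2 and 10,000 turns the submission into an array job. A minimal sketch with hypothetical names:
import boto3

batch = boto3.client('batch')

# Submits an array job with 100 child jobs (indexes 0 through 99).
response = batch.submit_job(
    jobName='render-frames',
    jobQueue='HighPriority',
    jobDefinition='render-def',
    arrayProperties={'size': 100},
)
print(response['jobId'])  # parent array job ID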
A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.
An object that represents an Batch job dependency.
The job ID of the Batch job that's associated with this dependency.
The type of the job dependency.
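For example, a follow-up array job can wait for each child of an earlier array job with an N_TO_N dependency. A sketch, assuming both jobs are array jobs of the same size; the job ID and names are hypothetical:
import boto3

batch = boto3.client('batch')

# Assumed to be the jobId returned by an earlier submit_job call that
# created a 100-child array job.
parent_array_job_id = '876da822-4198-45f2-a252-6cea32512ea8'

response = batch.submit_job(
    jobName='postprocess',
    jobQueue='HighPriority',
    jobDefinition='postprocess-def',
    arrayProperties={'size': 100},
    dependsOn=[
        # Each child index of this job waits for the matching child index
        # of the dependency to complete before it starts.
        {'jobId': parent_array_job_id, 'type': 'N_TO_N'},
    ],
)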
[REQUIRED]
The job definition used by this job. This value can be one of name, name:revision, or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision, then the latest active revision is used.
Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters in a SubmitJob request override any corresponding parameter defaults from the job definition.
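A sketch of parameter substitution, assuming the job definition's command contains placeholders such as Ref::inputfile and Ref::outputfile; the placeholder names, bucket, and job names are hypothetical:
import boto3

batch = boto3.client('batch')

# Fills the Ref::inputfile and Ref::outputfile placeholders assumed to be
# defined in the job definition's command.
response = batch.submit_job(
    jobName='transcode',
    jobQueue='HighPriority',
    jobDefinition='transcode-def',
    parameters={
        'inputfile': 's3://example-bucket/input.mov',
        'outputfile': 's3://example-bucket/output.mp4',
    },
)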
An object with various properties that override the defaults for the job definition that specify the name of a container in the specified job definition and the overrides it should receive. You can override the default command for a container, which is specified in the job definition or the Docker image, with a command override. You can also override existing environment variables on a container or add new environment variables to it with an environment override.
This parameter is deprecated; use resourceRequirements to override the vcpus parameter that's set in the job definition. It's not supported for jobs running on Fargate resources. For jobs that run on EC2 resources, it overrides the vcpus parameter set in the job definition, but doesn't override any vCPU requirement specified in the resourceRequirements structure in the job definition. To override vCPU requirements that are specified in the resourceRequirements structure in the job definition, resourceRequirements must be specified in the SubmitJob request, with type set to VCPU and value set to the new value. For more information, see Can't override job definition resource requirements in the Batch User Guide.
This parameter is deprecated; use resourceRequirements to override the memory requirements specified in the job definition. It's not supported for jobs running on Fargate resources. For jobs that run on EC2 resources, it overrides the memory parameter set in the job definition, but doesn't override any memory requirement that's specified in the resourceRequirements structure in the job definition. To override memory requirements that are specified in the resourceRequirements structure in the job definition, resourceRequirements must be specified in the SubmitJob request, with type set to MEMORY and value set to the new value. For more information, see Can't override job definition resource requirements in the Batch User Guide.
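A sketch of overriding vCPU and memory through containerOverrides.resourceRequirements rather than the deprecated vcpus and memory fields; the names are hypothetical and the values must still be valid for the job's platform:
import boto3

batch = boto3.client('batch')

response = batch.submit_job(
    jobName='big-memory-run',
    jobQueue='HighPriority',
    jobDefinition='etl-def',
    containerOverrides={
        # Overrides the vCPU and memory requirements from the job definition.
        'resourceRequirements': [
            {'type': 'VCPU', 'value': '4'},
            {'type': 'MEMORY', 'value': '8192'},  # MiB
        ],
        # Environment overrides can be combined with resource overrides.
        'environment': [
            {'name': 'STAGE', 'value': 'production'},
        ],
    },
)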
The command to send to the container that overrides the default command from the Docker image or the job definition.
The instance type to use for a multi-node parallel job.
Note
This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided.
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the job definition.
Note
Environment variables cannot start with "AWS_BATCH". This naming convention is reserved for variables that Batch sets.
A key-value pair object.
The name of the key-value pair. For environment variables, this is the name of the environment variable.
The value of the key-value pair. For environment variables, this is the value of the environment variable.
The type and amount of resources to assign to a container. This overrides the settings in the job definition. The supported resources include GPU, MEMORY, and VCPU.
The type and amount of a resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
The quantity of the specified resource to reserve for the container. The values vary based on the type specified.
type="GPU"
The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on.
Note
GPUs aren't available for jobs that are running on Fargate resources.
type="MEMORY"
The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run. You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once.
Note
If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory management in the Batch User Guide.
For jobs that are running on Fargate resources, then value is the hard limit (in MiB), and must match one of the supported values and the VCPU values must be one of the values supported for that memory value.
value = 512
VCPU = 0.25
value = 1024
VCPU = 0.25 or 0.5
value = 2048
VCPU = 0.25, 0.5, or 1
value = 3072
VCPU = 0.5 or 1
value = 4096
VCPU = 0.5, 1, or 2
value = 5120, 6144, or 7168
VCPU = 1 or 2
value = 8192
VCPU = 1, 2, 4, or 8
value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360
VCPU = 2 or 4
value = 16384
VCPU = 2, 4, or 8
value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720
VCPU = 4
value = 20480, 24576, or 28672
VCPU = 4 or 8
value = 36864, 45056, 53248, or 61440
VCPU = 8
value = 32768, 40960, 49152, or 57344
VCPU = 8 or 16
value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
VCPU = 16
type="VCPU"
The number of vCPUs reserved for the container. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see Fargate quotas in the Amazon Web Services General Reference.
For jobs that are running on Fargate resources, then value must match one of the supported values and the MEMORY values must be one of the values supported for that VCPU value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
value = 0.25
MEMORY = 512, 1024, or 2048
value = 0.5
MEMORY = 1024, 2048, 3072, or 4096
value = 1
MEMORY = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
value = 2
MEMORY = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
value = 4
MEMORY = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
value = 8
MEMORY = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
value = 16
MEMORY = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
The type of resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
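For Fargate jobs, the VCPU and MEMORY overrides must form one of the supported pairings listed above, for example 1 vCPU with 2048 MiB. A sketch, assuming a job definition and queue that target Fargate; the names are hypothetical:
import boto3

batch = boto3.client('batch')

response = batch.submit_job(
    jobName='fargate-task',
    jobQueue='fargate-queue',
    jobDefinition='fargate-def',
    containerOverrides={
        'resourceRequirements': [
            # Per the pairings above, 1 vCPU supports 2048-8192 MiB of memory.
            {'type': 'VCPU', 'value': '1'},
            {'type': 'MEMORY', 'value': '2048'},
        ],
    },
)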
A list of node overrides in JSON format that specify the node range to target and the container overrides for that node range.
Note
This parameter isn't applicable to jobs that are running on Fargate resources; use containerOverrides instead.
The number of nodes to use with a multi-node parallel job. This value overrides the number of nodes that are specified in the job definition. To use this override, you must meet the following conditions:
There must be at least one node range in your job definition that has an open upper boundary, such as : or n:.
The lower boundary of the node range that's specified in the job definition must be fewer than the number of nodes specified in the override.
The main node index that's specified in the job definition must be fewer than the number of nodes specified in the override.
The node property overrides for the job.
The object that represents any node overrides to a job definition that's used in a SubmitJob API operation.
The range of nodes, using node index values, that's used to override. A range of 0:3 indicates nodes with index values of 0 through 3. If the starting range value is omitted (:n), then 0 is used to start the range. If the ending range value is omitted (n:), then the highest possible node index is used to end the range.
The overrides that are sent to a node range.
This parameter is deprecated; use resourceRequirements to override the vcpus parameter that's set in the job definition. It's not supported for jobs running on Fargate resources. For jobs that run on EC2 resources, it overrides the vcpus parameter set in the job definition, but doesn't override any vCPU requirement specified in the resourceRequirements structure in the job definition. To override vCPU requirements that are specified in the resourceRequirements structure in the job definition, resourceRequirements must be specified in the SubmitJob request, with type set to VCPU and value set to the new value. For more information, see Can't override job definition resource requirements in the Batch User Guide.
This parameter is deprecated; use resourceRequirements to override the memory requirements specified in the job definition. It's not supported for jobs running on Fargate resources. For jobs that run on EC2 resources, it overrides the memory parameter set in the job definition, but doesn't override any memory requirement that's specified in the resourceRequirements structure in the job definition. To override memory requirements that are specified in the resourceRequirements structure in the job definition, resourceRequirements must be specified in the SubmitJob request, with type set to MEMORY and value set to the new value. For more information, see Can't override job definition resource requirements in the Batch User Guide.
The command to send to the container that overrides the default command from the Docker image or the job definition.
The instance type to use for a multi-node parallel job.
Note
This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided.
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the job definition.
Note
Environment variables cannot start with "AWS_BATCH". This naming convention is reserved for variables that Batch sets.
A key-value pair object.
The name of the key-value pair. For environment variables, this is the name of the environment variable.
The value of the key-value pair. For environment variables, this is the value of the environment variable.
The type and amount of resources to assign to a container. This overrides the settings in the job definition. The supported resources include GPU, MEMORY, and VCPU.
The type and amount of a resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
The quantity of the specified resource to reserve for the container. The values vary based on the type specified.
type="GPU"
The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on.
Note
GPUs aren't available for jobs that are running on Fargate resources.
type="MEMORY"
The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run. You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once.
Note
If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory management in the Batch User Guide.
For jobs that are running on Fargate resources, then value is the hard limit (in MiB), and must match one of the supported values and the VCPU values must be one of the values supported for that memory value.
value = 512
VCPU = 0.25
value = 1024
VCPU = 0.25 or 0.5
value = 2048
VCPU = 0.25, 0.5, or 1
value = 3072
VCPU = 0.5 or 1
value = 4096
VCPU = 0.5, 1, or 2
value = 5120, 6144, or 7168
VCPU = 1 or 2
value = 8192
VCPU = 1, 2, 4, or 8
value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360
VCPU = 2 or 4
value = 16384
VCPU = 2, 4, or 8
value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720
VCPU = 4
value = 20480, 24576, or 28672
VCPU = 4 or 8
value = 36864, 45056, 53248, or 61440
VCPU = 8
value = 32768, 40960, 49152, or 57344
VCPU = 8 or 16
value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
VCPU = 16
type="VCPU"
The number of vCPUs reserved for the container. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see Fargate quotas in the Amazon Web Services General Reference.
For jobs that are running on Fargate resources, then value must match one of the supported values and the MEMORY values must be one of the values supported for that VCPU value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
value = 0.25
MEMORY = 512, 1024, or 2048
value = 0.5
MEMORY = 1024, 2048, 3072, or 4096
value = 1
MEMORY = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
value = 2
MEMORY = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
value = 4
MEMORY = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
value = 8
MEMORY = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
value = 16
MEMORY = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
The type of resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
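A sketch of a multi-node parallel job submission that raises the node count and overrides resources for a range of worker nodes. All names are hypothetical, and the job definition is assumed to declare an open-ended node range (such as 1:) so that the numNodes override is allowed:
import boto3

batch = boto3.client('batch')

response = batch.submit_job(
    jobName='mnp-simulation',
    jobQueue='OnDemandQueue',
    jobDefinition='mnp-sim-def',
    nodeOverrides={
        'numNodes': 8,  # overrides the node count in the job definition
        'nodePropertyOverrides': [
            {
                # Applies only to nodes with index 1 through 7.
                'targetNodes': '1:7',
                'containerOverrides': {
                    'resourceRequirements': [
                        {'type': 'VCPU', 'value': '8'},
                        {'type': 'MEMORY', 'value': '16384'},
                    ],
                },
            },
        ],
    },
)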
The retry strategy to use for failed jobs from this SubmitJob operation. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition.
The number of times to move a job to the RUNNABLE status. You can specify between 1 and 10 attempts. If the value of attempts is greater than one, the job is retried on failure the same number of attempts as the value.
Array of up to 5 objects that specify the conditions where jobs are retried or failed. If this parameter is specified, then the attempts parameter must also be specified. If none of the listed conditions match, then the job is retried.
Specifies an array of up to 5 conditions to be met, and an action to take (RETRY or EXIT) if all conditions are met. If none of the EvaluateOnExit conditions in a RetryStrategy match, then the job is retried.
Contains a glob pattern to match against the StatusReason returned for a job. The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.), colons (:), and white spaces (including spaces or tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
Contains a glob pattern to match against the Reason returned for a job. The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.), colons (:), and white space (including spaces and tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
Contains a glob pattern to match against the decimal representation of the ExitCode returned for a job. The pattern can be up to 512 characters long. It can contain only numbers, and can end with an asterisk (*) so that only the start of the string needs to be an exact match.
The string can contain up to 512 characters.
Specifies the action to take if all of the specified conditions (onStatusReason, onReason, and onExitCode) are met. The values aren't case sensitive.
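A sketch of a retry strategy that retries on host-level failures but stops on any other failure; the queue, definition, and glob patterns are assumptions, not the only valid values:
import boto3

batch = boto3.client('batch')

response = batch.submit_job(
    jobName='spot-friendly-job',
    jobQueue='SpotQueue',
    jobDefinition='spot-def',
    retryStrategy={
        'attempts': 3,
        'evaluateOnExit': [
            # Retry when the status reason indicates the EC2 host went away.
            {'onStatusReason': 'Host EC2*', 'action': 'RETRY'},
            # Any other failure reason exits without further retries.
            {'onReason': '*', 'action': 'EXIT'},
        ],
    },
)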
Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the FAILED state. When specified, this overrides the tag propagation setting in the job definition.
The timeout configuration for this SubmitJob operation. You can specify a timeout duration after which Batch terminates your jobs if they haven't finished. If a job is terminated due to a timeout, it isn't retried. The minimum value for the timeout is 60 seconds. This configuration overrides any timeout configuration specified in the job definition. For array jobs, child jobs have the same timeout configuration as the parent job. For more information, see Job Timeouts in the Amazon Elastic Container Service Developer Guide.
The job timeout time (in seconds) that's measured from the job attempt's startedAt timestamp. After this time passes, Batch terminates your jobs if they aren't finished. The minimum value for the timeout is 60 seconds.
For array jobs, the timeout applies to the child jobs, not to the parent array job.
For multi-node parallel (MNP) jobs, the timeout applies to the whole job, not to the individual nodes.
The tags that you apply to the job request to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging Amazon Web Services Resources in the Amazon Web Services General Reference.
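A sketch that combines a two-hour timeout, tags on the job, and tag propagation to the underlying ECS task; the tag keys and names are hypothetical:
import boto3

batch = boto3.client('batch')

response = batch.submit_job(
    jobName='nightly-report',
    jobQueue='HighPriority',
    jobDefinition='report-def',
    timeout={'attemptDurationSeconds': 7200},  # terminate attempts after 2 hours
    tags={'Project': 'reporting', 'Owner': 'data-team'},
    propagateTags=True,  # copy the tags to the ECS task that runs the job
)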
An object that can only be specified for jobs that are run on Amazon EKS resources with various properties that override defaults for the job definition.
The overrides for the Kubernetes pod resources of a job.
The overrides for the container that's used on the Amazon EKS pod.
Object representing any Kubernetes overrides to a job definition that's used in a SubmitJob API operation.
The override of the Docker image that's used to start the container.
The command to send to the container that overrides the default command from the Docker image or the job definition.
The arguments to the entrypoint to send to the container that overrides the default arguments from the Docker image or the job definition. For more information, see CMD in the Dockerfile reference and Define a command and arguments for a pod in the Kubernetes documentation.
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch. Or, you can override the existing environment variables from the Docker image or the job definition.
Note
Environment variables cannot start with "AWS_BATCH". This naming convention is reserved for variables that Batch sets.
An environment variable.
The name of the environment variable.
The value of the environment variable.
The type and amount of resources to assign to a container. These override the settings in the job definition. The supported resources include memory, cpu, and nvidia.com/gpu. For more information, see Resource management for pods and containers in the Kubernetes documentation.
The type and quantity of the resources to reserve for the container. The values vary based on the name that's specified. Resources can be requested using either the limits or the requests objects.
memory
The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. memory can be specified in limits, requests, or both. If memory is specified in both places, then the value that's specified in limits must be equal to the value that's specified in requests.
Note
To maximize your resource utilization, provide your jobs with as much memory as possible for the specific instance type that you are using. To learn how, see Memory management in the Batch User Guide.
cpu
The number of CPUs that's reserved for the container. Values must be an even multiple of 0.25. cpu can be specified in limits, requests, or both. If cpu is specified in both places, then the value that's specified in limits must be at least as large as the value that's specified in requests.
nvidia.com/gpu
The number of GPUs that's reserved for the container. Values must be a whole integer. nvidia.com/gpu can be specified in limits, requests, or both. If nvidia.com/gpu is specified in both places, then the value that's specified in limits must be equal to the value that's specified in requests.
The type and quantity of the resources to request for the container. The values vary based on the name that's specified. Resources can be requested by using either the limits or the requests objects.
memory
The memory hard limit (in MiB) for the container, using whole integers, with a "Mi" suffix. If your container attempts to exceed the memory specified, the container is terminated. You must specify at least 4 MiB of memory for a job. memory can be specified in limits, requests, or both. If memory is specified in both, then the value that's specified in limits must be equal to the value that's specified in requests.
Note
If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory management in the Batch User Guide.
cpu
The number of CPUs that are reserved for the container. Values must be an even multiple of 0.25. cpu can be specified in limits, requests, or both. If cpu is specified in both, then the value that's specified in limits must be at least as large as the value that's specified in requests.
nvidia.com/gpu
The number of GPUs that are reserved for the container. Values must be a whole integer. nvidia.com/gpu can be specified in limits, requests, or both. If nvidia.com/gpu is specified in both, then the value that's specified in limits must be equal to the value that's specified in requests.
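A sketch of overriding the container image, command, and pod resources for a job that runs on an Amazon EKS compute environment; the image, command, and resource values are assumptions:
import boto3

batch = boto3.client('batch')

response = batch.submit_job(
    jobName='eks-training-job',
    jobQueue='eks-queue',
    jobDefinition='eks-train-def',
    eksPropertiesOverride={
        'podProperties': {
            'containers': [
                {
                    'image': 'public.ecr.aws/amazonlinux/amazonlinux:2023',
                    'command': ['python3', 'train.py'],
                    'resources': {
                        # For memory and nvidia.com/gpu, requests and limits
                        # must be equal when both are specified.
                        'requests': {'cpu': '2', 'memory': '4096Mi', 'nvidia.com/gpu': '1'},
                        'limits': {'cpu': '2', 'memory': '4096Mi', 'nvidia.com/gpu': '1'},
                    },
                },
            ],
        },
    },
)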
Return type
dict
Response Syntax
{
    'jobArn': 'string',
    'jobName': 'string',
    'jobId': 'string'
}
Response Structure
(dict) --
jobArn (string) --
The Amazon Resource Name (ARN) for the job.
jobName (string) --
The name of the job.
jobId (string) --
The unique identifier for the job.
Exceptions
Batch.Client.exceptions.ClientException
Batch.Client.exceptions.ServerException
Examples
This example submits a simple container job called example to the HighPriority job queue.
response = client.submit_job(
    jobDefinition='sleep60',
    jobName='example',
    jobQueue='HighPriority',
)
print(response)
Expected Output:
{
    'jobId': '876da822-4198-45f2-a252-6cea32512ea8',
    'jobName': 'example',
    'ResponseMetadata': {
        '...': '...',
    },
}