ECS.Client

A low-level client representing Amazon EC2 Container Service (ECS).
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service. It makes it easy to run, stop, and manage Docker containers. You can host your cluster on a serverless infrastructure that's managed by Amazon ECS by launching your services or tasks on Fargate. For more control, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) or External (on-premises) instances that you manage.
Amazon ECS makes it easy to launch and stop container-based applications with simple API calls. This makes it easy to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features.
You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. With Amazon ECS, you don't need to operate your own cluster management and configuration management systems. You also don't need to worry about scaling your management infrastructure.
import boto3
client = boto3.client('ecs')
These are the available methods:
can_paginate()
close()
create_capacity_provider()
create_cluster()
create_service()
create_task_set()
delete_account_setting()
delete_attributes()
delete_capacity_provider()
delete_cluster()
delete_service()
delete_task_set()
deregister_container_instance()
deregister_task_definition()
describe_capacity_providers()
describe_clusters()
describe_container_instances()
describe_services()
describe_task_definition()
describe_task_sets()
describe_tasks()
discover_poll_endpoint()
execute_command()
get_paginator()
get_task_protection()
get_waiter()
list_account_settings()
list_attributes()
list_clusters()
list_container_instances()
list_services()
list_services_by_namespace()
list_tags_for_resource()
list_task_definition_families()
list_task_definitions()
list_tasks()
put_account_setting()
put_account_setting_default()
put_attributes()
put_cluster_capacity_providers()
register_container_instance()
register_task_definition()
run_task()
start_task()
stop_task()
submit_attachment_state_changes()
submit_container_state_change()
submit_task_state_change()
tag_resource()
untag_resource()
update_capacity_provider()
update_cluster()
update_cluster_settings()
update_container_agent()
update_container_instances_state()
update_service()
update_service_primary_task_set()
update_task_protection()
update_task_set()
can_paginate(operation_name)

Check if an operation can be paginated. The operation_name is the same name as the method name on the client. For example, if the method name is create_foo and you'd normally invoke the operation as client.create_foo(**kwargs), then if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns True if the operation can be paginated, False otherwise.
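For instance (a usage sketch; list_clusters is one of this client's pageable operations):

if client.can_paginate('list_clusters'):
    paginator = client.get_paginator('list_clusters')
    for page in paginator.paginate():
        print(page['clusterArns'])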
close()

Closes underlying endpoint connections.
create_capacity_provider(**kwargs)

Creates a new capacity provider. Capacity providers are associated with an Amazon ECS cluster and are used in capacity provider strategies to facilitate cluster auto scaling.
Only capacity providers that use an Auto Scaling group can be created. Amazon ECS tasks on Fargate use the FARGATE and FARGATE_SPOT capacity providers. These providers are available to all accounts in the Amazon Web Services Regions that Fargate supports.
See also: AWS API Documentation
Request Syntax
response = client.create_capacity_provider(
name='string',
autoScalingGroupProvider={
'autoScalingGroupArn': 'string',
'managedScaling': {
'status': 'ENABLED'|'DISABLED',
'targetCapacity': 123,
'minimumScalingStepSize': 123,
'maximumScalingStepSize': 123,
'instanceWarmupPeriod': 123
},
'managedTerminationProtection': 'ENABLED'|'DISABLED'
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
name (string) -- [REQUIRED]
The name of the capacity provider. Up to 255 characters are allowed, including uppercase and lowercase letters, numbers, underscores (_), and hyphens (-). The name can't be prefixed with "aws", "ecs", or "fargate".
autoScalingGroupProvider (dict) -- [REQUIRED]
The details of the Auto Scaling group for the capacity provider.

autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group.

managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.

status (string) --
Determines whether to use managed scaling for the capacity provider.

targetCapacity (integer) --
The target capacity value for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. A value of 100 results in the Amazon EC2 instances in your Auto Scaling group being completely used.

minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale up by at least the minimum scaling step size, even if the actual demand is less than the minimum scaling step size.
If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size and the capacity demand.

maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.

instanceWarmupPeriod (integer) --
The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.

managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is disabled.

Warning
When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work.

When managed termination protection is enabled, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions enabled as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is disabled, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
tags (list) --
The metadata that you apply to the capacity provider to categorize and organize it more conveniently. Each tag consists of a key and an optional value. You define both of them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
Return type
dict

Response Syntax
{
'capacityProvider': {
'capacityProviderArn': 'string',
'name': 'string',
'status': 'ACTIVE'|'INACTIVE',
'autoScalingGroupProvider': {
'autoScalingGroupArn': 'string',
'managedScaling': {
'status': 'ENABLED'|'DISABLED',
'targetCapacity': 123,
'minimumScalingStepSize': 123,
'maximumScalingStepSize': 123,
'instanceWarmupPeriod': 123
},
'managedTerminationProtection': 'ENABLED'|'DISABLED'
},
'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED',
'updateStatusReason': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
}
Response Structure
(dict) --
capacityProvider (dict) --
The full description of the new capacity provider.
capacityProviderArn (string) --
The Amazon Resource Name (ARN) that identifies the capacity provider.
name (string) --
The name of the capacity provider.
status (string) --
The current status of the capacity provider. Only capacity providers in an ACTIVE state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE status.
autoScalingGroupProvider (dict) --
The Auto Scaling group settings for the capacity provider.
autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity value for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. A value of 100 results in the Amazon EC2 instances in your Auto Scaling group being completely used.

minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale up by at least the minimum scaling step size, even if the actual demand is less than the minimum scaling step size.
If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size and the capacity demand.

maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.

instanceWarmupPeriod (integer) --
The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.

managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is disabled.

Warning
When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work.

When managed termination protection is enabled, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions enabled as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is disabled, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
updateStatus (string) --
The update status of the capacity provider. The following are the possible states that are returned.

DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.

DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE status.

DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
updateStatusReason (string) --
The update status reason. This provides further details about the update status for the capacity provider.
tags (list) --
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.LimitExceededException
ECS.Client.exceptions.UpdateInProgressException
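Example
The following sketch creates a capacity provider backed by an existing Auto Scaling group. The capacity provider name, the Auto Scaling group ARN, and the tag values are hypothetical placeholders; substitute values from your own account.

response = client.create_capacity_provider(
    name='my-asg-capacity-provider',   # hypothetical name
    autoScalingGroupProvider={
        # Placeholder ARN of an existing Auto Scaling group
        'autoScalingGroupArn': 'arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:11111111-2222-3333-4444-555555555555:autoScalingGroupName/my-asg',
        'managedScaling': {
            'status': 'ENABLED',
            'targetCapacity': 100,
            'minimumScalingStepSize': 1,
            'maximumScalingStepSize': 100,
            'instanceWarmupPeriod': 300
        },
        # Managed termination protection also requires managed scaling (above) and
        # scale-in protection enabled on the Auto Scaling group and its instances.
        'managedTerminationProtection': 'ENABLED'
    },
    tags=[
        {
            'key': 'team',
            'value': 'platform'
        },
    ]
)
print(response['capacityProvider']['status'])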
create_cluster(**kwargs)

Creates a new Amazon ECS cluster. By default, your account receives a default cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster action.
Note
When you call the CreateCluster API operation, Amazon ECS attempts to create the Amazon ECS service-linked role for your account. This is so that it can manage required resources in other Amazon Web Services services on your behalf. However, if the IAM user that makes the call doesn't have permissions to create the service-linked role, it isn't created. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide .
See also: AWS API Documentation
Request Syntax
response = client.create_cluster(
clusterName='string',
tags=[
{
'key': 'string',
'value': 'string'
},
],
settings=[
{
'name': 'containerInsights',
'value': 'string'
},
],
configuration={
'executeCommandConfiguration': {
'kmsKeyId': 'string',
'logging': 'NONE'|'DEFAULT'|'OVERRIDE',
'logConfiguration': {
'cloudWatchLogGroupName': 'string',
'cloudWatchEncryptionEnabled': True|False,
's3BucketName': 'string',
's3EncryptionEnabled': True|False,
's3KeyPrefix': 'string'
}
}
},
capacityProviders=[
'string',
],
defaultCapacityProviderStrategy=[
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
serviceConnectDefaults={
'namespace': 'string'
}
)
clusterName (string) --
The name of your cluster. If you don't specify a name for your cluster, you create a cluster that's named default. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.

tags (list) --
The metadata that you apply to the cluster to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
settings (list) --
The setting to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster. If this value is specified, it overrides the containerInsights value set with PutAccountSetting or PutAccountSettingDefault.

(dict) --
The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster.

name (string) --
The name of the cluster setting. The only supported value is containerInsights.

value (string) --
The value to set for the cluster setting. The supported values are enabled and disabled. If enabled is specified, CloudWatch Container Insights will be enabled for the cluster; otherwise it will be disabled unless the containerInsights account setting is enabled. If a cluster value is specified, it will override the containerInsights value set with PutAccountSetting or PutAccountSettingDefault.
configuration (dict) --
The execute command configuration for the cluster.

executeCommandConfiguration (dict) --
The details of the execute command configuration.

kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the data between the local client and the container.

logging (string) --
The log setting to use for redirecting logs for your execute command results. The following log settings are available.

NONE: The execute command session is not logged.
DEFAULT: The awslogs configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no awslogs log driver is configured in the task definition, the output won't be logged.
OVERRIDE: Specify the logging details as a part of logConfiguration. If the OVERRIDE logging option is specified, the logConfiguration is required.

logConfiguration (dict) --
The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When logging=OVERRIDE is specified, a logConfiguration must be provided.

cloudWatchLogGroupName (string) --
The name of the CloudWatch log group to send logs to.

Note
The CloudWatch log group must already be created.

cloudWatchEncryptionEnabled (boolean) --
Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be disabled.

s3BucketName (string) --
The name of the S3 bucket to send logs to.

Note
The S3 bucket must already be created.

s3EncryptionEnabled (boolean) --
Determines whether to use encryption on the S3 logs. If not specified, encryption is not used.

s3KeyPrefix (string) --
An optional folder in the S3 bucket to place logs in.
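A minimal sketch of this configuration block, assuming the KMS key alias and CloudWatch log group shown (both hypothetical) already exist in your account:

configuration={
    'executeCommandConfiguration': {
        'kmsKeyId': 'alias/ecs-exec-key',                # hypothetical KMS key alias
        'logging': 'OVERRIDE',                           # OVERRIDE requires logConfiguration
        'logConfiguration': {
            'cloudWatchLogGroupName': '/ecs/exec-logs',  # log group must already exist
            'cloudWatchEncryptionEnabled': True
        }
    }
}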
capacityProviders (list) --
The short name of one or more capacity providers to associate with the cluster. A capacity provider must be associated with a cluster before it can be included as part of the default capacity provider strategy of the cluster or used in a capacity provider strategy when calling the CreateService or RunTask actions.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must be created but not associated with another cluster. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
defaultCapacityProviderStrategy (list) --
The capacity provider strategy to set as the default for the cluster. After a default capacity provider strategy is set for a cluster, when you call the RunTask or CreateService APIs with no capacity provider strategy or launch type specified, the default capacity provider strategy for the cluster is used.
If a default capacity provider strategy isn't defined for a cluster when it was created, it can be defined later with the PutClusterCapacityProviders API operation.

(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.

capacityProvider (string) --
The short name of the capacity provider.

weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers that both have a weight of 1. When the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.

base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
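For illustration, the following default strategy (the provider names are hypothetical and must already be associated with the cluster) runs at least two tasks on capacityProviderA and then splits additional tasks 1:4 between the two providers:

defaultCapacityProviderStrategy=[
    {
        'capacityProvider': 'capacityProviderA',  # hypothetical provider name
        'base': 2,                                # at least two tasks land here
        'weight': 1
    },
    {
        'capacityProvider': 'capacityProviderB',  # hypothetical provider name
        'weight': 4                               # four tasks here for every one on A
    },
]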
serviceConnectDefaults (dict) --
Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the enabled parameter to true in the ServiceConnectConfiguration. You can set the namespace of each service individually in the ServiceConnectConfiguration to override this default parameter.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.

namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace that's used when you create a service and don't specify a Service Connect configuration. The namespace name can include up to 1024 characters. The name is case-sensitive. The name can't include hyphens (-), tilde (~), greater than (>), less than (<), or slash (/).
If you enter an existing namespace name or ARN, then that namespace will be used. Any namespace type is supported. The namespace must be in this account and this Amazon Web Services Region.
If you enter a new name, a Cloud Map namespace will be created. Amazon ECS creates a Cloud Map namespace with the "API calls" method of instance discovery only. This instance discovery method is the "HTTP" namespace type in the Command Line Interface. Other types of instance discovery aren't used by Service Connect.
If you update the service with an empty string "" for the namespace name, the cluster configuration for Service Connect is removed. Note that the namespace will remain in Cloud Map and must be deleted separately.
For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
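For example, to reuse an existing Cloud Map namespace as the cluster default (the namespace name here is a hypothetical placeholder):

serviceConnectDefaults={
    'namespace': 'internal'   # existing or new Cloud Map namespace name
}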
Return type
dict

Response Syntax
{
'cluster': {
'clusterArn': 'string',
'clusterName': 'string',
'configuration': {
'executeCommandConfiguration': {
'kmsKeyId': 'string',
'logging': 'NONE'|'DEFAULT'|'OVERRIDE',
'logConfiguration': {
'cloudWatchLogGroupName': 'string',
'cloudWatchEncryptionEnabled': True|False,
's3BucketName': 'string',
's3EncryptionEnabled': True|False,
's3KeyPrefix': 'string'
}
}
},
'status': 'string',
'registeredContainerInstancesCount': 123,
'runningTasksCount': 123,
'pendingTasksCount': 123,
'activeServicesCount': 123,
'statistics': [
{
'name': 'string',
'value': 'string'
},
],
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'settings': [
{
'name': 'containerInsights',
'value': 'string'
},
],
'capacityProviders': [
'string',
],
'defaultCapacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'attachments': [
{
'id': 'string',
'type': 'string',
'status': 'string',
'details': [
{
'name': 'string',
'value': 'string'
},
]
},
],
'attachmentsStatus': 'string',
'serviceConnectDefaults': {
'namespace': 'string'
}
}
}
Response Structure
(dict) --
cluster (dict) --
The full description of your new cluster.
clusterArn (string) --
The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
clusterName (string) --
A user-generated string that you use to identify your cluster.
configuration (dict) --
The execute command configuration for the cluster.
executeCommandConfiguration (dict) --
The details of the execute command configuration.
kmsKeyId (string) --
Specify an Key Management Service key ID to encrypt the data between the local client and the container.
logging (string) --
The log setting to use for redirecting logs for your execute command results. The following log settings are available.

NONE: The execute command session is not logged.
DEFAULT: The awslogs configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no awslogs log driver is configured in the task definition, the output won't be logged.
OVERRIDE: Specify the logging details as a part of logConfiguration. If the OVERRIDE logging option is specified, the logConfiguration is required.

logConfiguration (dict) --
The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When logging=OVERRIDE is specified, a logConfiguration must be provided.
cloudWatchLogGroupName (string) --
The name of the CloudWatch log group to send logs to.
Note
The CloudWatch log group must already be created.
cloudWatchEncryptionEnabled (boolean) --
Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be disabled.
s3BucketName (string) --
The name of the S3 bucket to send logs to.
Note
The S3 bucket must already be created.
s3EncryptionEnabled (boolean) --
Determines whether to use encryption on the S3 logs. If not specified, encryption is not used.
s3KeyPrefix (string) --
An optional folder in the S3 bucket to place logs in.
status (string) --
The status of the cluster. The following are the possible states that are returned.
ACTIVE
The cluster is ready to accept tasks and if applicable you can register container instances with the cluster.
PROVISIONING
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created.
DEPROVISIONING
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted.
FAILED
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create.
INACTIVE
The cluster has been deleted. Clusters with an INACTIVE status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on INACTIVE clusters persisting.
registeredContainerInstancesCount (integer) --
The number of container instances registered into the cluster. This includes container instances in both ACTIVE and DRAINING status.

runningTasksCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.

pendingTasksCount (integer) --
The number of tasks in the cluster that are in the PENDING state.

activeServicesCount (integer) --
The number of services that are running on the cluster in an ACTIVE state. You can view these services with ListServices.
statistics (list) --
Additional information about your clusters that are separated by launch type. They include the following:
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
tags (list) --
The metadata that you apply to the cluster to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.

key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
settings (list) --
The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is enabled or disabled for a cluster.
(dict) --
The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster.
name (string) --
The name of the cluster setting. The only supported value is containerInsights.

value (string) --
The value to set for the cluster setting. The supported values are enabled and disabled. If enabled is specified, CloudWatch Container Insights will be enabled for the cluster; otherwise it will be disabled unless the containerInsights account setting is enabled. If a cluster value is specified, it will override the containerInsights value set with PutAccountSetting or PutAccountSettingDefault.
capacityProviders (list) --
The capacity providers associated with the cluster.
defaultCapacityProviderStrategy (list) --
The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.

capacityProvider (string) --
The short name of the capacity provider.

weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers that both have a weight of 1. When the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.

base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
attachments (list) --
The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface.

status (string) --
The status of the attachment. Valid values are PRECREATED, CREATED, ATTACHING, ATTACHED, DETACHING, DETACHED, DELETED, and FAILED.
details (list) --
Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
attachmentsStatus (string) --
The status of the capacity providers associated with the cluster. The following are the states that are returned.
UPDATE_IN_PROGRESS
The available capacity providers for the cluster are updating.
UPDATE_COMPLETE
The capacity providers have successfully updated.
UPDATE_FAILED
The capacity provider updates failed.
serviceConnectDefaults (dict) --
Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the enabled parameter to true in the ServiceConnectConfiguration. You can set the namespace of each service individually in the ServiceConnectConfiguration to override this default parameter.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.

namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
Examples
This example creates a cluster in your default region.
response = client.create_cluster(
clusterName='my_cluster',
)
print(response)
Expected Output:
{
'cluster': {
'activeServicesCount': 0,
'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/my_cluster',
'clusterName': 'my_cluster',
'pendingTasksCount': 0,
'registeredContainerInstancesCount': 0,
'runningTasksCount': 0,
'status': 'ACTIVE',
},
'ResponseMetadata': {
'...': '...',
},
}
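A second, illustrative sketch turns on CloudWatch Container Insights and associates the Fargate capacity providers with a default strategy. The cluster name is a hypothetical placeholder.

response = client.create_cluster(
    clusterName='fargate-cluster',   # hypothetical name
    settings=[
        {
            'name': 'containerInsights',
            'value': 'enabled'
        },
    ],
    # Fargate capacity providers don't need to be created beforehand
    capacityProviders=[
        'FARGATE',
        'FARGATE_SPOT',
    ],
    defaultCapacityProviderStrategy=[
        {
            'capacityProvider': 'FARGATE',
            'base': 1,
            'weight': 1
        },
        {
            'capacityProvider': 'FARGATE_SPOT',
            'weight': 4
        },
    ]
)
print(response['cluster']['status'])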
create_service(**kwargs)

Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide .
Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.

You can optionally specify a deployment configuration for your service. The deployment is initiated by changing properties. For example, the deployment might be initiated by the task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for minimumHealthyPercent is 100%. The default value for a daemon service for minimumHealthyPercent is 0%.
If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment. Specifically, it represents it as a percentage of your desired number of tasks (rounded up to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. For example, if you set your service to have a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING state and reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment. Specifically, it represents it as a percentage of the desired number of tasks (rounded down to the nearest integer). This happens when any of your container instances are in the DRAINING state if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
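As a quick illustration of the rounding rules described above (a sketch, not part of the API), the minimum number of running tasks and the deployment batch ceiling can be computed like this:

import math

desired_count = 4
minimum_healthy_percent = 50    # rounded up
maximum_percent = 200           # rounded down

min_running_tasks = math.ceil(desired_count * minimum_healthy_percent / 100)   # 2
max_running_or_pending = math.floor(desired_count * maximum_percent / 100)     # 8
print(min_running_tasks, max_running_or_pending)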
If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used. This is the case even if they're currently visible when describing your service.
When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide .
See also: AWS API Documentation
Request Syntax
response = client.create_service(
cluster='string',
serviceName='string',
taskDefinition='string',
loadBalancers=[
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
serviceRegistries=[
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
desiredCount=123,
clientToken='string',
launchType='EC2'|'FARGATE'|'EXTERNAL',
capacityProviderStrategy=[
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
platformVersion='string',
role='string',
deploymentConfiguration={
'deploymentCircuitBreaker': {
'enable': True|False,
'rollback': True|False
},
'maximumPercent': 123,
'minimumHealthyPercent': 123
},
placementConstraints=[
{
'type': 'distinctInstance'|'memberOf',
'expression': 'string'
},
],
placementStrategy=[
{
'type': 'random'|'spread'|'binpack',
'field': 'string'
},
],
networkConfiguration={
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
healthCheckGracePeriodSeconds=123,
schedulingStrategy='REPLICA'|'DAEMON',
deploymentController={
'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL'
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
enableECSManagedTags=True|False,
propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE',
enableExecuteCommand=True|False,
serviceConnectConfiguration={
'enabled': True|False,
'namespace': 'string',
'services': [
{
'portName': 'string',
'discoveryName': 'string',
'clientAliases': [
{
'port': 123,
'dnsName': 'string'
},
],
'ingressPortOverride': 123
},
],
'logConfiguration': {
'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens',
'options': {
'string': 'string'
},
'secretOptions': [
{
'name': 'string',
'valueFrom': 'string'
},
]
}
}
)
serviceName (string) -- [REQUIRED]
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.

taskDefinition (string) --
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision isn't specified, the latest ACTIVE revision is used.
A task definition must be specified if the service uses either the ECS or CODE_DEPLOY deployment controllers.
loadBalancers (list) --
A load balancer object representing the load balancers to use with your service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
If the service uses the rolling update (ECS) deployment controller and uses either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service uses the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating a CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, CodeDeploy determines which task set in your service has the status PRIMARY, and it associates one target group with it. Then, it also associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that you can use to perform validation tests with Lambda functions before routing production traffic to it.
If you use the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group that's specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer that's specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers aren't supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.

targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.

Warning
If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.

loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.

containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.

containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
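Putting these fields together, a single Application Load Balancer target group attachment might look like the following sketch (the target group ARN and container name are hypothetical placeholders):

loadBalancers=[
    {
        # Placeholder ARN of an existing target group with target type 'ip' for awsvpc tasks
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef',
        'containerName': 'web',   # must match the container definition name
        'containerPort': 80
    },
]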
serviceRegistries (list) --
The details of the service discovery registry to associate with this service. For more information, see Service discovery.

Note
Each service may be associated with one service registry. Multiple service registries for each service aren't supported.

(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.

registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.

port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.

containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.

containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
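As an illustration, a Cloud Map registry that uses an SRV record with a bridge-mode task might be specified like this sketch (the registry ARN, container name, and port are hypothetical):

serviceRegistries=[
    {
        # Placeholder ARN of an existing Cloud Map service
        'registryArn': 'arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef',
        'containerName': 'web',   # from the task definition (bridge/host network mode)
        'containerPort': 8080
    },
]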
desiredCount (integer) --
The number of instantiations of the specified task definition to place and keep running on your cluster.
This is required if schedulingStrategy is REPLICA or isn't specified. If schedulingStrategy is DAEMON, then this isn't required.
launchType (string) --
The infrastructure that you run your service on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
The FARGATE launch type runs your tasks on Fargate On-Demand infrastructure.

Note
Fargate Spot infrastructure is available for use, but a capacity provider strategy must be used. For more information, see Fargate capacity providers in the Amazon ECS User Guide for Fargate.

The EC2 launch type runs your tasks on Amazon EC2 instances registered to your cluster.
The EXTERNAL launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
capacityProviderStrategy (list) --
The capacity provider strategy to use for the service.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
A capacity provider strategy may contain a maximum of 6 capacity providers.

(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.

capacityProvider (string) --
The short name of the capacity provider.

weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero, and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers that both have a weight of 1. When the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.

base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion (string) --
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.

role (string) --
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition doesn't use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.

Warning
If your account has already created the Amazon ECS service-linked role, that role is used for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode or if the service is configured to use service discovery, an external deployment controller, multiple target groups, or Elastic Inference accelerators, in which case you don't specify a role here. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.

If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/, then you would specify /foo/bar as the role name. For more information, see Friendly names and paths in the IAM User Guide.
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
Note
The deployment circuit breaker can only be used for services using the rolling update ( ECS
) deployment type.
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If deployment circuit breaker is enabled, a service deployment will transition to a failed state and stop launching new tasks. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
Determines whether to use the deployment circuit breaker logic for the service.
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
If a service is using the rolling update ( ECS
) deployment type, the maximumPercent
parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desiredCount
(rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA
service scheduler and has a desiredCount
of four tasks and a maximumPercent
value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent
value for a service using the REPLICA
service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
If a service is using the rolling update ( ECS
) deployment type, the minimumHealthyPercent
represents a lower limit on the number of your service's tasks that must remain in the RUNNING
state during a deployment, as a percentage of the desiredCount
(rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount
of four tasks and a minimumHealthyPercent
of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING
state before the task is counted towards the minimum healthy percent total.
For services that do use a load balancer, the following should be noted:
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
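As a sketch, the deployment settings described above can be passed together in the deploymentConfiguration parameter (the values below are illustrative, not recommendations):
# 'client' is the boto3 ECS client created earlier with boto3.client('ecs').
# Rolling update (ECS) deployment with the circuit breaker and rollback
# enabled, a batch size of up to 200% of desiredCount, and at least 50%
# of tasks kept healthy during a deployment.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-service',
    taskDefinition='example-taskdef:1',
    desiredCount=4,
    deploymentConfiguration={
        'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
        'maximumPercent': 200,
        'minimumHealthyPercent': 50,
    },
)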
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide .
Note
If you're using the Fargate launch type, task placement constraints aren't supported.
The type of constraint. Use distinctInstance
to ensure that each task in a particular group is running on a different container instance. Use memberOf
to restrict the selection to a group of valid candidates.
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance
. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide .
The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide .
The type of placement strategy. The random
placement strategy randomly places tasks on available candidates. The spread
placement strategy spreads placement across available candidates evenly based on the field
parameter. The binpack
strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field
parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
The field to apply the placement strategy against. For the spread
placement strategy, valid values are instanceId
(or host
, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone
. For the binpack
placement strategy, valid values are cpu
and memory
. For the random
placement strategy, this field is not used.
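For an EC2 launch type service, the constraint and strategy objects described above might be combined as in the following sketch; the memberOf expression is only an illustration of the cluster query language:
# 'client' is the boto3 ECS client created earlier.
# Spread tasks across Availability Zones, then bin-pack on memory, and
# restrict placement to instances matching a hypothetical instance-type attribute.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-service',
    taskDefinition='example-taskdef:1',
    desiredCount=6,
    launchType='EC2',
    placementConstraints=[
        {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ t3.*'},
    ],
    placementStrategy=[
        {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
        {'type': 'binpack', 'field': 'memory'},
    ],
)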
The network configuration for the service. This parameter is required for task definitions that use the awsvpc
network mode to receive their own elastic network interface, and it isn't supported for other network modes. For more information, see Task networking in the Amazon Elastic Container Service Developer Guide .
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
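A minimal sketch of the awsvpc network configuration for a Fargate service (the subnet and security group IDs are placeholders):
# 'client' is the boto3 ECS client created earlier.
# Each task receives its own elastic network interface in the given
# subnets; no public IP address is assigned.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-service',
    taskDefinition='example-taskdef:1',
    desiredCount=2,
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],        # placeholder
            'securityGroups': ['sg-0123456789abcdef0'],     # placeholder
            'assignPublicIp': 'DISABLED',
        }
    },
)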
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0
is used.
If you do not use Elastic Load Balancing, we recommend that you use the startPeriod
in the task definition health check parameters. For more information, see Health check.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA
- The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service uses the CODE_DEPLOY
or EXTERNAL
deployment controller types.
DAEMON
- The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that don't meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Note
Tasks using the Fargate launch type or the CODE_DEPLOY
or EXTERNAL
deployment controller types don't support the DAEMON
scheduling strategy.
The deployment controller to use for the service. If no deployment controller is specified, the default value of ECS
is used.
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS
) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY
) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL
) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
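As a sketch, a daemon service that keeps the default rolling update (ECS) deployment controller might be created as follows; note that a DAEMON service doesn't take a desired number of tasks:
# 'client' is the boto3 ECS client created earlier.
# One task per active container instance that meets the placement constraints.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-daemon-service',
    taskDefinition='example-taskdef:1',
    schedulingStrategy='DAEMON',
    deploymentController={'type': 'ECS'},
)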
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
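For example, a couple of illustrative tags passed at service creation:
# 'client' is the boto3 ECS client created earlier.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-service',
    taskDefinition='example-taskdef:1',
    desiredCount=1,
    tags=[
        {'key': 'team', 'value': 'frontend'},        # illustrative tag
        {'key': 'environment', 'value': 'test'},     # illustrative tag
    ],
)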
Determines whether the execute command functionality is turned on for the service. If true
, this enables execute command functionality on all containers in the service tasks.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
Specifies whether to use Service Connect with this service.
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide .
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object that selects a port from the task definition, assigns a name for the Cloud Map service, and defines a list of aliases (endpoints) and ports for client applications to refer to this service.
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The portName
must match the name of one of the portMappings
from all the containers in the task definition of this Amazon ECS service.
The discoveryName
is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService
, you must provide at least one clientAlias
with one port
.
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The dnsName
is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database
, db
, or the lowercase name of a database, such as mysql
or redis
. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping
in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc
mode and Fargate, the default value is the container port number. The container port number is in the portMapping
in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
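Putting the Service Connect fields above together, the following sketch creates a service that exposes one port through a Cloud Map namespace; the namespace, port name, and DNS name are placeholders that must match your own task definition and namespace:
# 'client' is the boto3 ECS client created earlier.
# The portName must match a portMappings name in the task definition.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-api',
    taskDefinition='example-taskdef:1',
    desiredCount=2,
    serviceConnectConfiguration={
        'enabled': True,
        'namespace': 'example-namespace',            # Cloud Map namespace (placeholder)
        'services': [
            {
                'portName': 'api',                   # must exist in the task definition
                'discoveryName': 'api',
                'clientAliases': [
                    {'port': 80, 'dnsName': 'api'},  # clients connect to api:80
                ],
            },
        ],
    },
)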
The log configuration for the container. This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Understand the following when specifying a log configuration for your containers.
For tasks hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide .
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs
, splunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs
, fluentd
, gelf
, json-file
, journald
, logentries
, syslog
, splunk
, and awsfirelens
.
For more information about using the awslogs
log driver, see Using the awslogs log driver in the Amazon Elastic Container Service Developer Guide .
For more information about using the awsfirelens
log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide .
Note
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets
container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions
container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
The name of the secret.
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide .
Note
If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
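A sketch of a Service Connect proxy log configuration that uses the awslogs driver and pulls one secret from Systems Manager Parameter Store; the log group name and parameter ARN are placeholders:
service_connect_log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-group': '/ecs/example-service-connect',    # placeholder log group
        'awslogs-region': 'us-east-1',
        'awslogs-stream-prefix': 'service-connect',
    },
    'secretOptions': [
        {
            'name': 'EXAMPLE_TOKEN',                         # illustrative name
            'valueFrom': 'arn:aws:ssm:us-east-1:123456789012:parameter/example-token',
        },
    ],
}
# Pass this dict as the logConfiguration key of serviceConnectConfiguration
# in the create_service call.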
dict
Response Syntax
{
'service': {
'serviceArn': 'string',
'serviceName': 'string',
'clusterArn': 'string',
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'status': 'string',
'desiredCount': 123,
'runningCount': 123,
'pendingCount': 123,
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'taskDefinition': 'string',
'deploymentConfiguration': {
'deploymentCircuitBreaker': {
'enable': True|False,
'rollback': True|False
},
'maximumPercent': 123,
'minimumHealthyPercent': 123
},
'taskSets': [
{
'id': 'string',
'taskSetArn': 'string',
'serviceArn': 'string',
'clusterArn': 'string',
'startedBy': 'string',
'externalId': 'string',
'status': 'string',
'taskDefinition': 'string',
'computedDesiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'scale': {
'value': 123.0,
'unit': 'PERCENT'
},
'stabilityStatus': 'STEADY_STATE'|'STABILIZING',
'stabilityStatusAt': datetime(2015, 1, 1),
'tags': [
{
'key': 'string',
'value': 'string'
},
]
},
],
'deployments': [
{
'id': 'string',
'status': 'string',
'taskDefinition': 'string',
'desiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'failedTasks': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS',
'rolloutStateReason': 'string',
'serviceConnectConfiguration': {
'enabled': True|False,
'namespace': 'string',
'services': [
{
'portName': 'string',
'discoveryName': 'string',
'clientAliases': [
{
'port': 123,
'dnsName': 'string'
},
],
'ingressPortOverride': 123
},
],
'logConfiguration': {
'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens',
'options': {
'string': 'string'
},
'secretOptions': [
{
'name': 'string',
'valueFrom': 'string'
},
]
}
},
'serviceConnectResources': [
{
'discoveryName': 'string',
'discoveryArn': 'string'
},
]
},
],
'roleArn': 'string',
'events': [
{
'id': 'string',
'createdAt': datetime(2015, 1, 1),
'message': 'string'
},
],
'createdAt': datetime(2015, 1, 1),
'placementConstraints': [
{
'type': 'distinctInstance'|'memberOf',
'expression': 'string'
},
],
'placementStrategy': [
{
'type': 'random'|'spread'|'binpack',
'field': 'string'
},
],
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'healthCheckGracePeriodSeconds': 123,
'schedulingStrategy': 'REPLICA'|'DAEMON',
'deploymentController': {
'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL'
},
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'createdBy': 'string',
'enableECSManagedTags': True|False,
'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE',
'enableExecuteCommand': True|False
}
}
Response Structure
(dict) --
service (dict) --
The full description of your service following the create call.
A service will return either a capacityProviderStrategy
or launchType
parameter, but not both, depending on where one was specified when it was created.
If a service is using the ECS
deployment controller, the deploymentController
and taskSets
parameters will not be returned.
If the service uses the CODE_DEPLOY
deployment controller, the deploymentController
, taskSets
and deployments
parameters will be returned. However, the deployments
parameter will be an empty list.
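A minimal sketch of reading a few of the fields described below from the response:
# 'client' is the boto3 ECS client created earlier.
response = client.create_service(
    cluster='example-cluster',
    serviceName='example-service',
    taskDefinition='example-taskdef:1',
    desiredCount=1,
)

service = response['service']
print(service['serviceArn'], service['status'])

# With the ECS deployment controller, the current rollout is listed
# under 'deployments'.
for deployment in service.get('deployments', []):
    print(deployment['id'], deployment.get('rolloutState'))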
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE
, DRAINING
, or INACTIVE
.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING
state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING
state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST
platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily
value as the service (for example, LINUX
).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
Note
The deployment circuit breaker can only be used for services using the rolling update ( ECS
) deployment type.
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If deployment circuit breaker is enabled, a service deployment will transition to a failed state and stop launching new tasks. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS
) deployment type, the maximumPercent
parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desiredCount
(rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA
service scheduler and has a desiredCount
of four tasks and a maximumPercent
value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent
value for a service using the REPLICA
service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS
) deployment type, the minimumHealthyPercent
represents a lower limit on the number of your service's tasks that must remain in the RUNNING
state during a deployment, as a percentage of the desiredCount
(rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount
of four tasks and a minimumHealthyPercent
of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING
state before the task is counted towards the minimum healthy percent total.
For services that do use a load balancer, the following should be noted:
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy
parameter is CODE_DEPLOY
. If an external deployment created the task set, the startedBy
field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId
parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId
parameter contains the ECS_TASK_SET_EXTERNAL_ID
Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount
by the task set's scale
percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING
status during a deployment. A task in the PENDING
state is preparing to enter the RUNNING
state. A task set enters the PENDING
status when it launches for the first time or when it's restarted after being in the STOPPED
state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING
status during a deployment. A task in the RUNNING
state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
capacityProviderStrategy (list) --
The capacity provider strategy that are associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE
:
The task runningCount
is equal to the computedDesiredCount
.
The pendingCount
is 0
.
There are no tasks that are running on container instances in the DRAINING
status.
If any of those conditions aren't met, the stability status returns STABILIZING
.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS
deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but are in the process of being replaced with a new PRIMARY
deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING
status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING
status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING
state, or if it fails any of its defined health checks and is stopped.
Note
Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
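To make the base and weight interaction concrete, here is a minimal sketch of a create_service call (the cluster and capacity provider names are hypothetical, not taken from this reference) that keeps at least one task on one provider and splits the remaining tasks 1:4 between the two:
response = client.create_service(
    cluster='my-cluster',            # hypothetical cluster
    serviceName='web',               # hypothetical service
    taskDefinition='hello_world',
    desiredCount=10,
    capacityProviderStrategy=[
        {'capacityProvider': 'cp-on-demand', 'base': 1, 'weight': 1},  # at least 1 task here
        {'capacityProvider': 'cp-spot', 'base': 0, 'weight': 4},       # ~4 tasks for every 1 above the base
    ],
)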
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide .
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily
value as the service, for example, LINUX.
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc
networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
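For illustration, a networkConfiguration of this shape could be passed to create_service or run_task for a task definition that uses the awsvpc network mode; the subnet and security group IDs below are placeholders.
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],     # placeholder; up to 16 subnets, all in the same VPC
        'securityGroups': ['sg-0123456789abcdef0'],  # placeholder; up to 5 groups, all in the same VPC
        'assignPublicIp': 'DISABLED'                 # default value
    }
}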
rolloutState (string) --
Note
The rolloutState
of a service is only returned for services that use the rolling update ( ECS
) deployment type that aren't behind a Classic Load Balancer.
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS
state. When the service reaches a steady state, the deployment transitions to a COMPLETED
state. If the service fails to reach a steady state and circuit breaker is enabled, the deployment transitions to a FAILED
state. A deployment in FAILED
state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide .
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
portName (string) --
The portName
must match the name of one of the portMappings
from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName
is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService
, you must provide at least one clientAlias
with one port
.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
dnsName (string) --
The dnsName
is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database
, db
, or the lowercase name of a database, such as mysql
or redis
. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping
in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc
mode and Fargate, the default value is the container port number. The container port number is in the portMapping
in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Understand the following when specifying a log configuration for your containers.
For tasks hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide .
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs
, splunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs
, fluentd
, gelf
, json-file
, journald
, logentries
, syslog
, splunk
, and awsfirelens
.
For more information about using the awslogs
log driver, see Using the awslogs log driver in the Amazon Elastic Container Service Developer Guide .
For more information about using the awsfirelens
log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide .
Note
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
options (dict) --
The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets
container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions
container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide .
Note
If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
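Putting the Service Connect fields described above together, a configuration of roughly this shape could be supplied as the serviceConnectConfiguration argument to create_service or update_service. The namespace, port name, discovery name, alias, and log group below are illustrative placeholders only.
service_connect_configuration = {
    'enabled': True,
    'namespace': 'internal',                     # placeholder Cloud Map namespace
    'services': [
        {
            'portName': 'api',                   # must match a portMappings name in the task definition
            'discoveryName': 'orders',           # placeholder Cloud Map service name
            'clientAliases': [
                {'port': 80, 'dnsName': 'orders'}   # clients in the namespace connect to orders:80
            ],
        },
    ],
    'logConfiguration': {
        'logDriver': 'awslogs',
        'options': {
            'awslogs-group': '/ecs/service-connect-proxy',  # placeholder log group
            'awslogs-region': 'us-east-1',
            'awslogs-stream-prefix': 'proxy'
        }
    }
}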
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName
for each of the clientAliases
of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration
of that service for the list of clientAliases
that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName
is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the namespace in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide .
Note
If you're using the Fargate launch type, task placement constraints aren't supported.
type (string) --
The type of constraint. Use distinctInstance
to ensure that each task in a particular group is running on a different container instance. Use memberOf
to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance
. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide .
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide .
type (string) --
The type of placement strategy. The random
placement strategy randomly places tasks on available candidates. The spread
placement strategy spreads placement across available candidates evenly based on the field
parameter. The binpack
strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field
parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread
placement strategy, valid values are instanceId
(or host
, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone
. For the binpack
placement strategy, valid values are cpu
and memory
. For the random
placement strategy, this field is not used.
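As a sketch of how these placement fields appear in a request (the service name and attribute expression are hypothetical), a service could spread tasks across Availability Zones and then bin-pack on memory like this:
response = client.create_service(
    serviceName='placement-demo',     # hypothetical service
    taskDefinition='hello_world',
    desiredCount=4,
    placementConstraints=[
        {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ t2.*'},  # example cluster query
    ],
    placementStrategy=[
        {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},  # spread across AZs first
        {'type': 'binpack', 'field': 'memory'},                          # then pack by remaining memory
    ],
)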
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc
networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
Note
Fargate tasks don't support the DAEMON
scheduling strategy.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS
) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY
) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL
) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
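For example, a hedged sketch (not one of the official examples below) of a replica service that uses the rolling update controller with tightened deployment limits:
response = client.create_service(
    serviceName='rolling-update-demo',        # hypothetical service
    taskDefinition='hello_world',
    desiredCount=2,
    schedulingStrategy='REPLICA',
    deploymentController={'type': 'ECS'},     # rolling update deployment type
    deploymentConfiguration={
        'maximumPercent': 150,                # at most 3 tasks in flight during a deployment
        'minimumHealthyPercent': 50           # at least 1 healthy task during a deployment
    },
)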
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Do not use aws: , AWS: , or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws: , AWS: , or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
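As a small illustration of the key/value shape (the ARN and tag values are placeholders), tags can also be added after the fact with tag_resource:
response = client.tag_resource(
    resourceArn='arn:aws:ecs:us-east-1:111122223333:service/default/ecs-simple-service',  # placeholder ARN
    tags=[
        {'key': 'team', 'value': 'web'},      # key acts as the category, value as the descriptor
    ],
)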
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide .
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is enabled for the service. If true
, the execute command functionality is enabled for all containers in tasks as part of the service.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
ECS.Client.exceptions.UnsupportedFeatureException
ECS.Client.exceptions.PlatformUnknownException
ECS.Client.exceptions.PlatformTaskDefinitionIncompatibilityException
ECS.Client.exceptions.AccessDeniedException
ECS.Client.exceptions.NamespaceNotFoundException
Examples
This example creates a service in your default region called ecs-simple-service
. The service uses the hello_world
task definition and it maintains 10 copies of that task.
response = client.create_service(
desiredCount=10,
serviceName='ecs-simple-service',
taskDefinition='hello_world',
)
print(response)
Expected Output:
{
'service': {
'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/default',
'createdAt': datetime(2016, 8, 29, 16, 13, 47, 0, 242, 0),
'deploymentConfiguration': {
'maximumPercent': 200,
'minimumHealthyPercent': 100,
},
'deployments': [
{
'createdAt': datetime(2016, 8, 29, 16, 13, 47, 0, 242, 0),
'desiredCount': 10,
'id': 'ecs-svc/9223370564342348388',
'pendingCount': 0,
'runningCount': 0,
'status': 'PRIMARY',
'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6',
'updatedAt': datetime(2016, 8, 29, 16, 13, 47, 0, 242, 0),
},
{
'createdAt': datetime(2016, 8, 29, 15, 52, 44, 0, 242, 0),
'desiredCount': 0,
'id': 'ecs-svc/9223370564343611322',
'pendingCount': 0,
'runningCount': 0,
'status': 'ACTIVE',
'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6',
'updatedAt': datetime(2016, 8, 29, 16, 11, 38, 0, 242, 0),
},
],
'desiredCount': 10,
'events': [
],
'loadBalancers': [
],
'pendingCount': 0,
'runningCount': 0,
'serviceArn': 'arn:aws:ecs:us-east-1:012345678910:service/ecs-simple-service',
'serviceName': 'ecs-simple-service',
'status': 'ACTIVE',
'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:6',
},
'ResponseMetadata': {
'...': '...',
},
}
This example creates a service in your default region called ecs-simple-service-elb
. The service uses the console-sample-app-static
task definition and it maintains 10 copies of that task. You must reference an existing load balancer in the same region by its name.
response = client.create_service(
desiredCount=10,
loadBalancers=[
{
'containerName': 'simple-app',
'containerPort': 80,
'loadBalancerName': 'EC2Contai-EcsElast-15DCDAURT3ZO2',
},
],
role='ecsServiceRole',
serviceName='ecs-simple-service-elb',
taskDefinition='console-sample-app-static',
)
print(response)
Expected Output:
{
'service': {
'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/default',
'createdAt': datetime(2016, 8, 29, 16, 2, 54, 0, 242, 0),
'deploymentConfiguration': {
'maximumPercent': 200,
'minimumHealthyPercent': 100,
},
'deployments': [
{
'createdAt': datetime(2016, 8, 29, 16, 2, 54, 0, 242, 0),
'desiredCount': 10,
'id': 'ecs-svc/9223370564343000923',
'pendingCount': 0,
'runningCount': 0,
'status': 'PRIMARY',
'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/console-sample-app-static:6',
'updatedAt': datetime(2016, 8, 29, 16, 2, 54, 0, 242, 0),
},
],
'desiredCount': 10,
'events': [
],
'loadBalancers': [
{
'containerName': 'simple-app',
'containerPort': 80,
'loadBalancerName': 'EC2Contai-EcsElast-15DCDAURT3ZO2',
},
],
'pendingCount': 0,
'roleArn': 'arn:aws:iam::012345678910:role/ecsServiceRole',
'runningCount': 0,
'serviceArn': 'arn:aws:ecs:us-east-1:012345678910:service/ecs-simple-service-elb',
'serviceName': 'ecs-simple-service-elb',
'status': 'ACTIVE',
'taskDefinition': 'arn:aws:ecs:us-east-1:012345678910:task-definition/console-sample-app-static:6',
},
'ResponseMetadata': {
'...': '...',
},
}
create_task_set
(**kwargs)¶Create a task set in the specified cluster and service. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide .
See also: AWS API Documentation
Request Syntax
response = client.create_task_set(
service='string',
cluster='string',
externalId='string',
taskDefinition='string',
networkConfiguration={
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
loadBalancers=[
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
serviceRegistries=[
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
launchType='EC2'|'FARGATE'|'EXTERNAL',
capacityProviderStrategy=[
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
platformVersion='string',
scale={
'value': 123.0,
'unit': 'PERCENT'
},
clientToken='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service to create the task set in.
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to create the task set in.
An optional non-unique tag that identifies this task set in external systems. If the task set is associated with a service discovery registry, the tasks in this task set will have the ECS_TASK_SET_EXTERNAL_ID
Cloud Map attribute set to the provided value.
[REQUIRED]
The task definition for the tasks in the task set to use.
An object representing the network configuration for a task set.
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
A load balancer object representing the load balancer to use with the task set. The supported load balancer types are either an Application Load Balancer or a Network Load Balancer.
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
The name of the container (as it appears in a container definition) to associate with the load balancer.
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
The details of the service discovery registries to assign to this task set. For more information, see Service discovery.
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
The launch type that new tasks in the task set use. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
If a launchType
is specified, the capacityProviderStrategy
parameter must be omitted.
The capacity provider strategy to use for the task set.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
The short name of the capacity provider.
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
The platform version that the tasks in the task set use. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used.
A floating-point percentage of the desired number of tasks to place and keep running in the task set.
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
The unit of measure for the scale value.
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. When a service is deleted, the tags are deleted.
The following basic restrictions apply to tags:
Do not use aws: , AWS: , or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws: , AWS: , or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
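Bringing these parameters together, a minimal create_task_set call for an externally managed deployment might look like the following sketch; every identifier shown is a placeholder.
response = client.create_task_set(
    cluster='my-cluster',                             # placeholder cluster
    service='my-service',                             # placeholder service
    externalId='deployment-0001',                     # optional identifier for the external system
    taskDefinition='hello_world:6',                   # placeholder task definition
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],      # placeholder subnet
            'securityGroups': ['sg-0123456789abcdef0'],   # placeholder security group
            'assignPublicIp': 'DISABLED'
        }
    },
    scale={'value': 50.0, 'unit': 'PERCENT'},         # run half of the service's desiredCount in this task set
)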
dict
Response Syntax
{
'taskSet': {
'id': 'string',
'taskSetArn': 'string',
'serviceArn': 'string',
'clusterArn': 'string',
'startedBy': 'string',
'externalId': 'string',
'status': 'string',
'taskDefinition': 'string',
'computedDesiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'scale': {
'value': 123.0,
'unit': 'PERCENT'
},
'stabilityStatus': 'STEADY_STATE'|'STABILIZING',
'stabilityStatusAt': datetime(2015, 1, 1),
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
}
Response Structure
(dict) --
taskSet (dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. A task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy
parameter is CODE_DEPLOY
. If an external deployment created the task set, the startedBy
field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId
parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId
parameter contains the ECS_TASK_SET_EXTERNAL_ID
Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount
by the task set's scale
percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING
status during a deployment. A task in the PENDING
state is preparing to enter the RUNNING
state. A task set enters the PENDING
status when it launches for the first time or when it's restarted after being in the STOPPED
state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING
status during a deployment. A task in the RUNNING
state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE :
The task runningCount is equal to the computedDesiredCount .
The pendingCount is 0 .
There are no tasks that are running on container instances in the DRAINING status.
If any of those conditions aren't met, the stability status returns STABILIZING .
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws: , AWS: , or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws: , AWS: , or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
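Given the stabilityStatus semantics above, a short polling sketch (the cluster, service, and task set identifiers are placeholders) that waits for a newly created task set to stabilize:
import time

while True:
    resp = client.describe_task_sets(
        cluster='my-cluster',                          # placeholder cluster
        service='my-service',                          # placeholder service
        taskSets=['ecs-svc/1234567890123456789'],      # placeholder task set ID or ARN
    )
    if resp['taskSets'][0]['stabilityStatus'] == 'STEADY_STATE':
        break
    time.sleep(10)                                     # re-check every 10 seconds until STEADY_STATE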
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
ECS.Client.exceptions.UnsupportedFeatureException
ECS.Client.exceptions.PlatformUnknownException
ECS.Client.exceptions.PlatformTaskDefinitionIncompatibilityException
ECS.Client.exceptions.AccessDeniedException
ECS.Client.exceptions.ServiceNotFoundException
ECS.Client.exceptions.ServiceNotActiveException
ECS.Client.exceptions.NamespaceNotFoundException
delete_account_setting
(**kwargs)¶Disables an account setting for a specified IAM user, IAM role, or the root user for an account.
See also: AWS API Documentation
Request Syntax
response = client.delete_account_setting(
name='serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights',
principalArn='string'
)
[REQUIRED]
The resource name to disable the account setting for. If serviceLongArnFormat
is specified, the ARN for your Amazon ECS services is affected. If taskLongArnFormat
is specified, the ARN and resource ID for your Amazon ECS tasks is affected. If containerInstanceLongArnFormat
is specified, the ARN and resource ID for your Amazon ECS container instances is affected. If awsvpcTrunking
is specified, the ENI limit for your Amazon ECS container instances is affected. If containerInsights
is specified, the default setting for CloudWatch Container Insights for your clusters is affected.
dict
Response Syntax
{
'setting': {
'name': 'serviceLongArnFormat'|'taskLongArnFormat'|'containerInstanceLongArnFormat'|'awsvpcTrunking'|'containerInsights',
'value': 'string',
'principalArn': 'string'
}
}
Response Structure
(dict) --
setting (dict) --
The account setting for the specified principal ARN.
name (string) --
The Amazon ECS resource name.
value (string) --
Determines whether the account setting is enabled or disabled for the specified resource.
principalArn (string) --
The ARN of the principal. It can be an IAM user, IAM role, or the root user. If this field is omitted, the authenticated user is assumed.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
Examples
This example deletes the account setting for your user for the specified resource type.
response = client.delete_account_setting(
name='serviceLongArnFormat',
)
print(response)
Expected Output:
{
'setting': {
'name': 'serviceLongArnFormat',
'value': 'enabled',
'principalArn': 'arn:aws:iam::<aws_account_id>:user/principalName',
},
'ResponseMetadata': {
'...': '...',
},
}
This example deletes the account setting for a specific IAM user or IAM role for the specified resource type. Only the root user can view or modify the account settings for another user.
response = client.delete_account_setting(
name='containerInstanceLongArnFormat',
principalArn='arn:aws:iam::<aws_account_id>:user/principalName',
)
print(response)
Expected Output:
{
'setting': {
'name': 'containerInstanceLongArnFormat',
'value': 'enabled',
'principalArn': 'arn:aws:iam::<aws_account_id>:user/principalName',
},
'ResponseMetadata': {
'...': '...',
},
}
delete_attributes
(**kwargs)¶Deletes one or more custom attributes from an Amazon ECS resource.
See also: AWS API Documentation
Request Syntax
response = client.delete_attributes(
cluster='string',
attributes=[
{
'name': 'string',
'value': 'string',
'targetType': 'container-instance',
'targetId': 'string'
},
]
)
[REQUIRED]
The attributes to delete from your resource. You can specify up to 10 attributes for each request. For custom attributes, specify the attribute name and target ID, but don't specify the value. If you specify the target ID using the short form, you must also specify the target type.
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide .
The name of the attribute. The name
must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
The value of the attribute. The value
must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
dict
Response Syntax
{
'attributes': [
{
'name': 'string',
'value': 'string',
'targetType': 'container-instance',
'targetId': 'string'
},
]
}
Response Structure
(dict) --
attributes (list) --
A list of attribute objects that were successfully deleted from your resource.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide .
name (string) --
The name of the attribute. The name
must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value
must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
Exceptions
ECS.Client.exceptions.ClusterNotFoundException
ECS.Client.exceptions.TargetNotFoundException
ECS.Client.exceptions.InvalidParameterException
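Example
A minimal usage sketch; the cluster name, attribute name, and target ARN below are hypothetical placeholders.
import boto3

client = boto3.client('ecs')

# Delete one custom attribute from a container instance.
response = client.delete_attributes(
    cluster='my_cluster',  # hypothetical cluster name
    attributes=[
        {
            'name': 'stack',  # hypothetical custom attribute name
            'targetId': 'arn:aws:ecs:us-east-1:<aws_account_id>:container-instance/my_cluster/<instance_id>',  # hypothetical
        },
    ],
)
print(response['attributes'])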
delete_capacity_provider
(**kwargs)¶Deletes the specified capacity provider.
Note
The FARGATE
and FARGATE_SPOT
capacity providers are reserved and can't be deleted. You can disassociate them from a cluster using either the PutClusterCapacityProviders API or by deleting the cluster.
Prior to a capacity provider being deleted, the capacity provider must be removed from the capacity provider strategy from all services. The UpdateService API can be used to remove a capacity provider from a service's capacity provider strategy. When updating a service, the forceNewDeployment
option can be used to ensure that any tasks using the Amazon EC2 instance capacity provided by the capacity provider are transitioned to use the capacity from the remaining capacity providers. Only capacity providers that aren't associated with a cluster can be deleted. To remove a capacity provider from a cluster, you can either use PutClusterCapacityProviders or delete the cluster.
See also: AWS API Documentation
Request Syntax
response = client.delete_capacity_provider(
capacityProvider='string'
)
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the capacity provider to delete.
{
'capacityProvider': {
'capacityProviderArn': 'string',
'name': 'string',
'status': 'ACTIVE'|'INACTIVE',
'autoScalingGroupProvider': {
'autoScalingGroupArn': 'string',
'managedScaling': {
'status': 'ENABLED'|'DISABLED',
'targetCapacity': 123,
'minimumScalingStepSize': 123,
'maximumScalingStepSize': 123,
'instanceWarmupPeriod': 123
},
'managedTerminationProtection': 'ENABLED'|'DISABLED'
},
'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED',
'updateStatusReason': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
}
Response Structure
The details of the capacity provider.
The Amazon Resource Name (ARN) that identifies the capacity provider.
The name of the capacity provider.
The current status of the capacity provider. Only capacity providers in an ACTIVE
state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE
status.
The Auto Scaling group settings for the capacity provider.
The Amazon Resource Name (ARN) that identifies the Auto Scaling group.
The managed scaling settings for the Auto Scaling group capacity provider.
Determines whether to use managed scaling for the capacity provider.
The target capacity value for the capacity provider. The specified value must be greater than 0
and less than or equal to 100
. A value of 100
results in the Amazon EC2 instances in your Auto Scaling group being completely used.
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of 1
is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand.
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of 1
is used.
The period of time, in seconds, after which a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300
seconds is used.
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is disabled.
Warning
When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work.
When managed termination protection is enabled, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions enabled as well. For more information, see Instance Protection in the Auto Scaling User Guide .
When managed termination protection is disabled, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
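As an illustration of these settings, a hedged sketch of adjusting managed scaling and termination protection on an existing Auto Scaling group capacity provider with UpdateCapacityProvider; the provider name and values are hypothetical.
import boto3

client = boto3.client('ecs')

# Sketch: enable managed scaling and termination protection on an existing provider.
response = client.update_capacity_provider(
    name='MyCapacityProvider',  # hypothetical capacity provider name
    autoScalingGroupProvider={
        'managedScaling': {
            'status': 'ENABLED',
            'targetCapacity': 100,        # use the Auto Scaling group fully
            'minimumScalingStepSize': 1,
            'maximumScalingStepSize': 10,
            'instanceWarmupPeriod': 300,  # seconds
        },
        'managedTerminationProtection': 'ENABLED',
    },
)
print(response['capacityProvider']['autoScalingGroupProvider'])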
The update status of the capacity provider. The following are the possible states that are returned.
DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.
DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE
status.
DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
The update status reason. This provides further details about the update status for the capacity provider.
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
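Example
A minimal usage sketch; the capacity provider name is a hypothetical placeholder. The returned updateStatus indicates whether the delete is in progress, complete, or failed.
import boto3

client = boto3.client('ecs')

# Delete a capacity provider that is no longer associated with any cluster.
response = client.delete_capacity_provider(
    capacityProvider='MyCapacityProvider',  # hypothetical name or full ARN
)
print(response['capacityProvider']['updateStatus'])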
delete_cluster
(**kwargs)¶Deletes the specified cluster. The cluster transitions to the INACTIVE
state. Clusters with an INACTIVE
status might remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on INACTIVE
clusters persisting.
You must deregister all container instances from this cluster before you may delete it. You can list the container instances in a cluster with ListContainerInstances and deregister them with DeregisterContainerInstance.
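A hedged sketch of that sequence, assuming the remaining container instances carry no tasks you still need; the cluster name is hypothetical.
import boto3

client = boto3.client('ecs')
cluster_name = 'my_cluster'  # hypothetical

# List and deregister every container instance, then delete the cluster.
instance_arns = client.list_container_instances(cluster=cluster_name)['containerInstanceArns']
for arn in instance_arns:
    client.deregister_container_instance(
        cluster=cluster_name,
        containerInstance=arn,
        force=True,  # deregister even if the instance still has running tasks
    )
response = client.delete_cluster(cluster=cluster_name)
print(response['cluster']['status'])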
See also: AWS API Documentation
Request Syntax
response = client.delete_cluster(
cluster='string'
)
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster to delete.
{
'cluster': {
'clusterArn': 'string',
'clusterName': 'string',
'configuration': {
'executeCommandConfiguration': {
'kmsKeyId': 'string',
'logging': 'NONE'|'DEFAULT'|'OVERRIDE',
'logConfiguration': {
'cloudWatchLogGroupName': 'string',
'cloudWatchEncryptionEnabled': True|False,
's3BucketName': 'string',
's3EncryptionEnabled': True|False,
's3KeyPrefix': 'string'
}
}
},
'status': 'string',
'registeredContainerInstancesCount': 123,
'runningTasksCount': 123,
'pendingTasksCount': 123,
'activeServicesCount': 123,
'statistics': [
{
'name': 'string',
'value': 'string'
},
],
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'settings': [
{
'name': 'containerInsights',
'value': 'string'
},
],
'capacityProviders': [
'string',
],
'defaultCapacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'attachments': [
{
'id': 'string',
'type': 'string',
'status': 'string',
'details': [
{
'name': 'string',
'value': 'string'
},
]
},
],
'attachmentsStatus': 'string',
'serviceConnectDefaults': {
'namespace': 'string'
}
}
}
Response Structure
The full description of the deleted cluster.
The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
A user-generated string that you use to identify your cluster.
The execute command configuration for the cluster.
The details of the execute command configuration.
Specify a Key Management Service key ID to encrypt the data between the local client and the container.
The log setting to use for redirecting logs for your execute command results. The following log settings are available.
NONE: The execute command session is not logged.
DEFAULT: The awslogs configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no awslogs log driver is configured in the task definition, the output won't be logged.
OVERRIDE: Specify the logging details as a part of logConfiguration. If the OVERRIDE logging option is specified, the logConfiguration is required.
The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When logging=OVERRIDE is specified, a logConfiguration must be provided.
The name of the CloudWatch log group to send logs to.
Note
The CloudWatch log group must already be created.
Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be disabled.
The name of the S3 bucket to send logs to.
Note
The S3 bucket must already be created.
Determines whether to use encryption on the S3 logs. If not specified, encryption is not used.
An optional folder in the S3 bucket to place logs in.
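A hedged sketch of configuring this with UpdateCluster, sending execute command output to an existing CloudWatch log group; the cluster and log group names are hypothetical.
import boto3

client = boto3.client('ecs')

# Sketch: override execute command logging for a cluster.
response = client.update_cluster(
    cluster='my_cluster',  # hypothetical
    configuration={
        'executeCommandConfiguration': {
            'logging': 'OVERRIDE',
            'logConfiguration': {
                'cloudWatchLogGroupName': '/ecs/execute-command',  # hypothetical; must already exist
                'cloudWatchEncryptionEnabled': True,
            },
        },
    },
)
print(response['cluster']['configuration'])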
The status of the cluster. The following are the possible states that are returned.
ACTIVE
The cluster is ready to accept tasks and if applicable you can register container instances with the cluster.
PROVISIONING
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created.
DEPROVISIONING
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted.
FAILED
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create.
INACTIVE
The cluster has been deleted. Clusters with an INACTIVE
status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on INACTIVE
clusters persisting.
The number of container instances registered into the cluster. This includes container instances in both ACTIVE
and DRAINING
status.
The number of tasks in the cluster that are in the RUNNING
state.
The number of tasks in the cluster that are in the PENDING
state.
The number of services that are running on the cluster in an ACTIVE
state. You can view these services with ListServices.
Additional information about your clusters that are separated by launch type. They include the following:
A key-value pair object.
The name of the key-value pair. For environment variables, this is the name of the environment variable.
The value of the key-value pair. For environment variables, this is the value of the environment variable.
The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is enabled or disabled for a cluster.
The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster.
The name of the cluster setting. The only supported value is containerInsights
.
The value to set for the cluster setting. The supported values are enabled
and disabled
. If enabled
is specified, CloudWatch Container Insights will be enabled for the cluster, otherwise it will be disabled unless the containerInsights
account setting is enabled. If a cluster value is specified, it will override the containerInsights
value set with PutAccountSetting or PutAccountSettingDefault.
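A hedged sketch of turning Container Insights on for a single cluster with UpdateClusterSettings; the cluster name is hypothetical.
import boto3

client = boto3.client('ecs')

# Sketch: enable CloudWatch Container Insights for one cluster.
response = client.update_cluster_settings(
    cluster='my_cluster',  # hypothetical
    settings=[
        {'name': 'containerInsights', 'value': 'enabled'},
    ],
)
print(response['cluster']['settings'])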
The capacity providers associated with the cluster.
The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used.
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
The short name of the capacity provider.
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments.
An object representing a container instance or task attachment.
The unique identifier for the attachment.
The type of the attachment, such as ElasticNetworkInterface
.
The status of the attachment. Valid values are PRECREATED
, CREATED
, ATTACHING
, ATTACHED
, DETACHING
, DETACHED
, DELETED
, and FAILED
.
Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
A key-value pair object.
The name of the key-value pair. For environment variables, this is the name of the environment variable.
The value of the key-value pair. For environment variables, this is the value of the environment variable.
The status of the capacity providers associated with the cluster. The following are the states that are returned.
UPDATE_IN_PROGRESS
The available capacity providers for the cluster are updating.
UPDATE_COMPLETE
The capacity providers have successfully updated.
UPDATE_FAILED
The capacity provider updates failed.
Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the enabled
parameter to true
in the ServiceConnectConfiguration
. You can set the namespace of each service individually in the ServiceConnectConfiguration
to override this default parameter.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used.
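A hedged sketch of setting a default Service Connect namespace on a cluster with UpdateCluster; the cluster and namespace names are hypothetical.
import boto3

client = boto3.client('ecs')

# Sketch: set the default Cloud Map namespace used by Service Connect.
response = client.update_cluster(
    cluster='my_cluster',  # hypothetical
    serviceConnectDefaults={
        'namespace': 'my-namespace',  # hypothetical Cloud Map namespace name or ARN
    },
)
print(response['cluster']['serviceConnectDefaults'])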
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
ECS.Client.exceptions.ClusterContainsContainerInstancesException
ECS.Client.exceptions.ClusterContainsServicesException
ECS.Client.exceptions.ClusterContainsTasksException
ECS.Client.exceptions.UpdateInProgressException
Examples
This example deletes an empty cluster in your default region.
response = client.delete_cluster(
cluster='my_cluster',
)
print(response)
Expected Output:
{
'cluster': {
'activeServicesCount': 0,
'clusterArn': 'arn:aws:ecs:us-east-1:012345678910:cluster/my_cluster',
'clusterName': 'my_cluster',
'pendingTasksCount': 0,
'registeredContainerInstancesCount': 0,
'runningTasksCount': 0,
'status': 'INACTIVE',
},
'ResponseMetadata': {
'...': '...',
},
}
delete_service
(**kwargs)¶Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you can't delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService.
Note
When you delete a service, if there are still running tasks that require cleanup, the service status moves from ACTIVE
to DRAINING
, and the service is no longer visible in the console or in the ListServices API operation. After all tasks have transitioned to either STOPPING
or STOPPED
status, the service status moves from DRAINING
to INACTIVE
. Services in the DRAINING
or INACTIVE
status can still be viewed with the DescribeServices API operation. However, in the future, INACTIVE
services may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices calls on those services return a ServiceNotFoundException
error.
Warning
If you attempt to create a new service with the same name as an existing service in either ACTIVE
or DRAINING
status, you receive an error.
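A hedged sketch of the usual sequence, scaling the service to zero with UpdateService before deleting it; the cluster and service names are hypothetical.
import boto3

client = boto3.client('ecs')

# Scale the service down to zero tasks, then delete it.
client.update_service(
    cluster='my_cluster',  # hypothetical
    service='my_service',  # hypothetical
    desiredCount=0,
)
response = client.delete_service(
    cluster='my_cluster',
    service='my_service',
)
# Alternatively, pass force=True to delete_service to skip the scale-down
# (only for services that use the REPLICA scheduling strategy).
print(response['service']['status'])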
See also: AWS API Documentation
Request Syntax
response = client.delete_service(
cluster='string',
service='string',
force=True|False
)
[REQUIRED]
The name of the service to delete.
If true, allows you to delete a service even if it wasn't scaled down to zero tasks. It's only necessary to use this if the service uses the REPLICA scheduling strategy.
dict
Response Syntax
{
'service': {
'serviceArn': 'string',
'serviceName': 'string',
'clusterArn': 'string',
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'status': 'string',
'desiredCount': 123,
'runningCount': 123,
'pendingCount': 123,
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'taskDefinition': 'string',
'deploymentConfiguration': {
'deploymentCircuitBreaker': {
'enable': True|False,
'rollback': True|False
},
'maximumPercent': 123,
'minimumHealthyPercent': 123
},
'taskSets': [
{
'id': 'string',
'taskSetArn': 'string',
'serviceArn': 'string',
'clusterArn': 'string',
'startedBy': 'string',
'externalId': 'string',
'status': 'string',
'taskDefinition': 'string',
'computedDesiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'scale': {
'value': 123.0,
'unit': 'PERCENT'
},
'stabilityStatus': 'STEADY_STATE'|'STABILIZING',
'stabilityStatusAt': datetime(2015, 1, 1),
'tags': [
{
'key': 'string',
'value': 'string'
},
]
},
],
'deployments': [
{
'id': 'string',
'status': 'string',
'taskDefinition': 'string',
'desiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'failedTasks': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS',
'rolloutStateReason': 'string',
'serviceConnectConfiguration': {
'enabled': True|False,
'namespace': 'string',
'services': [
{
'portName': 'string',
'discoveryName': 'string',
'clientAliases': [
{
'port': 123,
'dnsName': 'string'
},
],
'ingressPortOverride': 123
},
],
'logConfiguration': {
'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens',
'options': {
'string': 'string'
},
'secretOptions': [
{
'name': 'string',
'valueFrom': 'string'
},
]
}
},
'serviceConnectResources': [
{
'discoveryName': 'string',
'discoveryArn': 'string'
},
]
},
],
'roleArn': 'string',
'events': [
{
'id': 'string',
'createdAt': datetime(2015, 1, 1),
'message': 'string'
},
],
'createdAt': datetime(2015, 1, 1),
'placementConstraints': [
{
'type': 'distinctInstance'|'memberOf',
'expression': 'string'
},
],
'placementStrategy': [
{
'type': 'random'|'spread'|'binpack',
'field': 'string'
},
],
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'healthCheckGracePeriodSeconds': 123,
'schedulingStrategy': 'REPLICA'|'DAEMON',
'deploymentController': {
'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL'
},
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'createdBy': 'string',
'enableECSManagedTags': True|False,
'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE',
'enableExecuteCommand': True|False
}
}
Response Structure
(dict) --
service (dict) --
The full description of the deleted service.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE
, DRAINING
, or INACTIVE
.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING
state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING
state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST
platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily
value as the service (for example, LINUX
).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
Note
The deployment circuit breaker can only be used for services using the rolling update ( ECS
) deployment type.
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If deployment circuit breaker is enabled, a service deployment will transition to a failed state and stop launching new tasks. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS
) deployment type, the maximumPercent
parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desiredCount
(rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA
service scheduler and has a desiredCount
of four tasks and a maximumPercent
value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent
value for a service using the REPLICA
service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS
) deployment type, the minimumHealthyPercent
represents a lower limit on the number of your service's tasks that must remain in the RUNNING
state during a deployment, as a percentage of the desiredCount
(rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount
of four tasks and a minimumHealthyPercent
of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A task must reach the RUNNING state before it is counted towards the minimum healthy percent total.
For services that do use a load balancer, the following should be noted:
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
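For illustration, a hedged sketch of setting these deployment parameters on an existing service with UpdateService; the names and values are hypothetical.
import boto3

client = boto3.client('ecs')

# Sketch: a four-task service may run up to 8 tasks (200%) and must keep
# at least 2 tasks (50%) RUNNING during a rolling (ECS) deployment.
response = client.update_service(
    cluster='my_cluster',  # hypothetical
    service='my_service',  # hypothetical
    deploymentConfiguration={
        'maximumPercent': 200,
        'minimumHealthyPercent': 50,
        'deploymentCircuitBreaker': {
            'enable': True,    # fail the deployment if it can't reach a steady state
            'rollback': True,  # roll back to the last successful deployment on failure
        },
    },
)
print(response['service']['deploymentConfiguration'])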
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy
parameter is CODE_DEPLOY
. If an external deployment created the task set, the startedBy
field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId
parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId
parameter contains the ECS_TASK_SET_EXTERNAL_ID
Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount
by the task set's scale
percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING
status during a deployment. A task in the PENDING
state is preparing to enter the RUNNING
state. A task set enters the PENDING
status when it launches for the first time or when it's restarted after being in the STOPPED
state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING
status during a deployment. A task in the RUNNING
state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
capacityProviderStrategy (list) --
The capacity provider strategy that are associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS
deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY
deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING
status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING
status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING
state, or if it fails any of its defined health checks and is stopped.
Note
Once a service deployment has one or more successfully running tasks, the failed task count resets to zero and stops being evaluated.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
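As a rough illustration of how base and weight interact, the following sketch passes a two-provider strategy to run_task; the cluster, task definition, and capacity provider names are placeholders, not values from this documentation.
import boto3

ecs = boto3.client('ecs')
# Hypothetical provider names; both must already be associated with the
# cluster and be in an ACTIVE or UPDATING status.
response = ecs.run_task(
    cluster='my-cluster',
    taskDefinition='my-task:1',
    count=5,
    capacityProviderStrategy=[
        {'capacityProvider': 'on-demand-provider', 'base': 2, 'weight': 1},
        {'capacityProvider': 'spot-provider', 'weight': 4},
    ],
)
# The first 2 tasks satisfy the base on on-demand-provider; additional
# tasks are distributed roughly 1:4 between the two providers.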
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide .
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST
platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily
value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc
networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
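For reference, a minimal networkConfiguration in the shape described above might look like the following when passed to run_task; the subnet and security group IDs are placeholders and must all belong to the same VPC.
import boto3

ecs = boto3.client('ecs')
response = ecs.run_task(
    cluster='my-cluster',          # placeholder cluster name
    taskDefinition='my-task:1',    # placeholder task definition
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-aaaa1111', 'subnet-bbbb2222'],
            'securityGroups': ['sg-0123456789abcdef0'],
            'assignPublicIp': 'DISABLED',
        }
    },
)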
rolloutState (string) --
Note
The rolloutState
of a service is only returned for services that use the rolling update ( ECS
) deployment type that aren't behind a Classic Load Balancer.
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS
state. When the service reaches a steady state, the deployment transitions to a COMPLETED
state. If the service fails to reach a steady state and circuit breaker is enabled, the deployment transitions to a FAILED
state. A deployment in FAILED
state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide .
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
portName (string) --
The portName
must match the name of one of the portMappings
from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName
is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService
, you must provide at least one clientAlias
with one port
.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
dnsName (string) --
The dnsName
is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database
, db
, or the lowercase name of a database, such as mysql
or redis
. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping
in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc
mode and Fargate, the default value is the container port number. The container port number is in the portMapping
in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Understand the following when specifying a log configuration for your containers.
For tasks hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide .
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs
, splunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs
, fluentd
, gelf
, json-file
, journald
, logentries
, syslog
, splunk
, and awsfirelens
.
For more information about using the awslogs
log driver, see Using the awslogs log driver in the Amazon Elastic Container Service Developer Guide .
For more information about using the awsfirelens
log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide .
Note
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
options (dict) --
The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets
container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions
container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide .
Note
If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
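To tie these fields together, here is a minimal sketch of a Service Connect configuration in the shape described above, as it could be passed to create_service or update_service; the namespace, port name, alias, and log group are placeholders.
# portName must match a name in the task definition's portMappings.
service_connect_configuration = {
    'enabled': True,
    'namespace': 'internal.example',
    'services': [
        {
            'portName': 'api',
            'discoveryName': 'api',
            'clientAliases': [
                {'port': 80, 'dnsName': 'api.internal.example'},
            ],
        },
    ],
    'logConfiguration': {
        'logDriver': 'awslogs',
        'options': {
            'awslogs-group': '/ecs/service-connect-proxy',
            'awslogs-region': 'us-east-1',
            'awslogs-stream-prefix': 'proxy',
        },
    },
}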
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName
for each of the clientAliases
of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration
of that service for the list of clientAliases
that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName
is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace
is used. If the discoveryName
isn't specified, the port mapping name from the task definition is used in portName.namespace
.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the namespace in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide .
Note
If you're using the Fargate launch type, task placement constraints aren't supported.
type (string) --
The type of constraint. Use distinctInstance
to ensure that each task in a particular group is running on a different container instance. Use memberOf
to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance
. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide .
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide .
type (string) --
The type of placement strategy. The random
placement strategy randomly places tasks on available candidates. The spread
placement strategy spreads placement across available candidates evenly based on the field
parameter. The binpack
strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field
parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread
placement strategy, valid values are instanceId
(or host
, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone
. For the binpack
placement strategy, valid values are cpu
and memory
. For the random
placement strategy, this field is not used.
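As a small sketch of the shapes described above, the following placement settings spread tasks across Availability Zones, then binpack on memory, and keep each task in the group on a separate container instance; these are illustrative values, not a recommendation.
# Applies only to the EC2 launch type; Fargate doesn't support placement constraints.
placement_strategy = [
    {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
    {'type': 'binpack', 'field': 'memory'},
]
placement_constraints = [
    {'type': 'distinctInstance'},
]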
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc
networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA
The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON
The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
Note
Fargate tasks don't support the DAEMON
scheduling strategy.
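A hedged sketch of selecting the daemon strategy at service creation follows; the cluster, service, and task definition names are placeholders, and desiredCount isn't set for a DAEMON service.
import boto3

ecs = boto3.client('ecs')
response = ecs.create_service(
    cluster='my-cluster',
    serviceName='log-collector',
    taskDefinition='log-collector:3',
    schedulingStrategy='DAEMON',   # one task per active container instance
)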
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS
) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY
) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL
) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
tags (list) --
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide .
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is enabled for the service. If true
, the execute command functionality is enabled for all containers in tasks as part of the service.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
ECS.Client.exceptions.ServiceNotFoundException
Examples
This example deletes the my-http-service service. The service must have a desired count and running count of 0 before you can delete it.
response = client.delete_service(
service='my-http-service',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_task_set
(**kwargs)¶Deletes a specified task set within a service. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide .
See also: AWS API Documentation
Request Syntax
response = client.delete_task_set(
cluster='string',
service='string',
taskSet='string',
force=True|False
)
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that contains the task set to delete.
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service that hosts the task set to delete.
[REQUIRED]
The task set ID or full Amazon Resource Name (ARN) of the task set to delete.
If true
, you can delete a task set even if it hasn't been scaled down to zero.
dict
Response Syntax
{
'taskSet': {
'id': 'string',
'taskSetArn': 'string',
'serviceArn': 'string',
'clusterArn': 'string',
'startedBy': 'string',
'externalId': 'string',
'status': 'string',
'taskDefinition': 'string',
'computedDesiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'scale': {
'value': 123.0,
'unit': 'PERCENT'
},
'stabilityStatus': 'STEADY_STATE'|'STABILIZING',
'stabilityStatusAt': datetime(2015, 1, 1),
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
}
Response Structure
(dict) --
taskSet (dict) --
Details about the task set.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy
parameter is CODE_DEPLOY
. If an external deployment created the task set, the startedBy
field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId
parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId
parameter contains the ECS_TASK_SET_EXTERNAL_ID
Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount
by the task set's scale
percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING
status during a deployment. A task in the PENDING
state is preparing to enter the RUNNING
state. A task set enters the PENDING
status when it launches for the first time or when it's restarted after being in the STOPPED
state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING
status during a deployment. A task in the RUNNING
state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
loadBalancers (list) --
Details on a load balancer that is used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
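For illustration only, a loadBalancers entry for an Application Load Balancer target group in the shape described above could look like the following; the target group ARN, container name, and port are placeholders.
# Target groups used with tasks in awsvpc network mode must use the 'ip' target type.
load_balancers = [
    {
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:012345678910:targetgroup/my-tg/0123456789abcdef',
        'containerName': 'web',
        'containerPort': 8080,
    },
]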
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
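A hedged sketch of the two serviceRegistries shapes described above; the registry ARN, container name, and ports are placeholders.
# With awsvpc network mode and an SRV record, specify either 'port' or a
# containerName/containerPort combination, but not both.
service_registries_awsvpc = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:012345678910:service/srv-placeholder',
        'port': 8080,
    },
]
# With bridge or host network mode, a containerName and containerPort
# combination from the task definition is required.
service_registries_bridge = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:012345678910:service/srv-placeholder',
        'containerName': 'web',
        'containerPort': 8080,
    },
]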
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task set's runningCount is equal to the computedDesiredCount.
The task set's pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
ECS.Client.exceptions.UnsupportedFeatureException
ECS.Client.exceptions.AccessDeniedException
ECS.Client.exceptions.ServiceNotFoundException
ECS.Client.exceptions.ServiceNotActiveException
ECS.Client.exceptions.TaskSetNotFoundException
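A hedged sketch of a delete_task_set call; the cluster, service, and task set identifiers are placeholders, and force=True deletes the task set even if it hasn't been scaled down to zero.
import boto3

ecs = boto3.client('ecs')
response = ecs.delete_task_set(
    cluster='my-cluster',
    service='my-service',
    taskSet='ecs-svc/1234567890123456789',
    force=True,
)
print(response['taskSet']['status'])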
deregister_container_instance
(**kwargs)¶Deregisters an Amazon ECS container instance from the specified cluster. This instance is no longer available to run tasks.
If you intend to use the container instance for some other purpose after deregistration, we recommend that you stop all of the tasks running on the container instance before deregistration. That prevents any orphaned tasks from consuming resources.
Deregistering a container instance removes the instance from a cluster, but it doesn't terminate the EC2 instance. If you are finished using the instance, be sure to terminate it in the Amazon EC2 console to stop billing.
Note
If you terminate a running container instance, Amazon ECS automatically deregisters the instance from your cluster (stopped container instances or instances with disconnected agents aren't automatically deregistered when terminated).
See also: AWS API Documentation
Request Syntax
response = client.deregister_container_instance(
cluster='string',
containerInstance='string',
force=True|False
)
[REQUIRED]
The container instance ID or full ARN of the container instance to deregister. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
Forces the container instance to be deregistered. If you have tasks running on the container instance when you deregister it with the force
option, these tasks remain running until you terminate the instance or the tasks stop through some other means, but they're orphaned (no longer monitored or accounted for by Amazon ECS). If an orphaned task on your container instance is part of an Amazon ECS service, then the service scheduler starts another copy of that task, on a different container instance if possible.
Any containers in orphaned service tasks that are registered with a Classic Load Balancer or an Application Load Balancer target group are deregistered. They begin connection draining according to the settings on the load balancer or target group.
dict
Response Syntax
{
'containerInstance': {
'containerInstanceArn': 'string',
'ec2InstanceId': 'string',
'capacityProviderName': 'string',
'version': 123,
'versionInfo': {
'agentVersion': 'string',
'agentHash': 'string',
'dockerVersion': 'string'
},
'remainingResources': [
{
'name': 'string',
'type': 'string',
'doubleValue': 123.0,
'longValue': 123,
'integerValue': 123,
'stringSetValue': [
'string',
]
},
],
'registeredResources': [
{
'name': 'string',
'type': 'string',
'doubleValue': 123.0,
'longValue': 123,
'integerValue': 123,
'stringSetValue': [
'string',
]
},
],
'status': 'string',
'statusReason': 'string',
'agentConnected': True|False,
'runningTasksCount': 123,
'pendingTasksCount': 123,
'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED',
'attributes': [
{
'name': 'string',
'value': 'string',
'targetType': 'container-instance',
'targetId': 'string'
},
],
'registeredAt': datetime(2015, 1, 1),
'attachments': [
{
'id': 'string',
'type': 'string',
'status': 'string',
'details': [
{
'name': 'string',
'value': 'string'
},
]
},
],
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'healthStatus': {
'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING',
'details': [
{
'type': 'CONTAINER_RUNTIME',
'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING',
'lastUpdated': datetime(2015, 1, 1),
'lastStatusChange': datetime(2015, 1, 1)
},
]
}
}
}
Response Structure
(dict) --
containerInstance (dict) --
The container instance that was deregistered.
containerInstanceArn (string) --
The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
ec2InstanceId (string) --
The ID of the container instance. For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID.
capacityProviderName (string) --
The capacity provider that's associated with the container instance.
version (integer) --
The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the detail
object) to verify that the version in your event stream is current.
versionInfo (dict) --
The version information for the Amazon ECS container agent and Docker daemon running on the container instance.
agentVersion (string) --
The version number of the Amazon ECS container agent.
agentHash (string) --
The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository.
dockerVersion (string) --
The Docker version that's running on the container instance.
remainingResources (list) --
For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the host
or bridge
network mode). Any port that's not specified here is available for new tasks.
(dict) --
Describes the resources available for a container instance.
name (string) --
The name of the resource, such as CPU
, MEMORY
, PORTS
, PORTS_UDP
, or a user-defined resource.
type (string) --
The type of the resource. Valid values: INTEGER
, DOUBLE
, LONG
, or STRINGSET
.
doubleValue (float) --
When the doubleValue
type is set, the value of the resource must be a double precision floating-point type.
longValue (integer) --
When the longValue
type is set, the value of the resource must be an extended precision floating-point type.
integerValue (integer) --
When the integerValue
type is set, the value of the resource must be an integer.
stringSetValue (list) --
When the stringSetValue
type is set, the value of the resource must be a string type.
registeredResources (list) --
For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS.
(dict) --
Describes the resources available for a container instance.
name (string) --
The name of the resource, such as CPU
, MEMORY
, PORTS
, PORTS_UDP
, or a user-defined resource.
type (string) --
The type of the resource. Valid values: INTEGER
, DOUBLE
, LONG
, or STRINGSET
.
doubleValue (float) --
When the doubleValue
type is set, the value of the resource must be a double precision floating-point type.
longValue (integer) --
When the longValue
type is set, the value of the resource must be an extended precision floating-point type.
integerValue (integer) --
When the integerValue
type is set, the value of the resource must be an integer.
stringSetValue (list) --
When the stringSetValue
type is set, the value of the resource must be a string type.
status (string) --
The status of the container instance. The valid values are REGISTERING
, REGISTRATION_FAILED
, ACTIVE
, INACTIVE
, DEREGISTERING
, or DRAINING
.
If your account has opted in to the awsvpcTrunking
account setting, then any newly registered container instance will transition to a REGISTERING
status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a REGISTRATION_FAILED
status. You can describe the container instance and see the reason for failure in the statusReason
parameter. Once the container instance is terminated, the instance transitions to a DEREGISTERING
status while the trunk elastic network interface is deprovisioned. The instance then transitions to an INACTIVE
status.
The ACTIVE
status indicates that the container instance can accept tasks. The DRAINING
indicates that new tasks aren't placed on the container instance and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the Amazon Elastic Container Service Developer Guide .
statusReason (string) --
The reason that the container instance reached its current status.
agentConnected (boolean) --
This parameter returns true
if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns false
. Only instances connected to an agent can accept task placement requests.
runningTasksCount (integer) --
The number of tasks on the container instance that are in the RUNNING
status.
pendingTasksCount (integer) --
The number of tasks on the container instance that are in the PENDING
status.
agentUpdateStatus (string) --
The status of the most recent agent update. If an update wasn't ever requested, this value is NULL
.
attributes (list) --
The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide .
name (string) --
The name of the attribute. The name
must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value
must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
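As a hedged illustration of applying an attribute manually with the PutAttributes operation mentioned above, the following sketch uses placeholder cluster and container instance identifiers.
import boto3

ecs = boto3.client('ecs')
response = ecs.put_attributes(
    cluster='my-cluster',
    attributes=[
        {
            'name': 'stack',
            'value': 'production',
            'targetType': 'container-instance',
            'targetId': 'container_instance_UUID',
        },
    ],
)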
registeredAt (datetime) --
The Unix timestamp for the time when the container instance was registered.
attachments (list) --
The resources attached to a container instance, such as an elastic network interface.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface
.
status (string) --
The status of the attachment. Valid values are PRECREATED
, CREATED
, ATTACHING
, ATTACHED
, DETACHING
, DETACHED
, DELETED
, and FAILED
.
details (list) --
Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
tags (list) --
The metadata that you apply to the container instance to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor within a tag category (key).
healthStatus (dict) --
An object representing the health status of the container instance.
overallStatus (string) --
The overall health status of the container instance. This is an aggregate status of all container instance health checks.
details (list) --
An array of objects representing the details of the container instance health status.
(dict) --
An object representing the result of a container instance health status check.
type (string) --
The type of container instance health status that was verified.
status (string) --
The container instance health status.
lastUpdated (datetime) --
The Unix timestamp for when the container instance health status was last updated.
lastStatusChange (datetime) --
The Unix timestamp for when the container instance health status last changed.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
Examples
This example deregisters a container instance from the specified cluster in your default region. If there are still tasks running on the container instance, you must either stop those tasks before deregistering, or use the force option.
response = client.deregister_container_instance(
cluster='default',
containerInstance='container_instance_UUID',
force=True,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
deregister_task_definition
(**kwargs)¶Deregisters the specified task definition by family and revision. Upon deregistration, the task definition is marked as INACTIVE
. Existing tasks and services that reference an INACTIVE
task definition continue to run without disruption. Existing services that reference an INACTIVE
task definition can still scale up or down by modifying the service's desired count.
You can't use an INACTIVE
task definition to run new tasks or create new services, and you can't update an existing service to reference an INACTIVE
task definition. However, there may be up to a 10-minute window following deregistration where these restrictions have not yet taken effect.
Note
At this time, INACTIVE
task definitions remain discoverable in your account indefinitely. However, this behavior is subject to change in the future. We don't recommend that you rely on INACTIVE
task definitions persisting beyond the lifecycle of any associated tasks and services.
See also: AWS API Documentation
Request Syntax
response = client.deregister_task_definition(
taskDefinition='string'
)
[REQUIRED]
The family
and revision
( family:revision
) or full Amazon Resource Name (ARN) of the task definition to deregister. You must specify a revision
.
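A hedged sketch of the two accepted identifier forms; the family, revision, Region, and account ID are placeholders.
import boto3

ecs = boto3.client('ecs')
# Using family:revision (a bare family name without a revision isn't accepted).
response = ecs.deregister_task_definition(taskDefinition='hello_world:8')
# Or using the full task definition ARN:
# response = ecs.deregister_task_definition(
#     taskDefinition='arn:aws:ecs:us-east-1:012345678910:task-definition/hello_world:8'
# )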
{
'taskDefinition': {
'taskDefinitionArn': 'string',
'containerDefinitions': [
{
'name': 'string',
'image': 'string',
'repositoryCredentials': {
'credentialsParameter': 'string'
},
'cpu': 123,
'memory': 123,
'memoryReservation': 123,
'links': [
'string',
],
'portMappings': [
{
'containerPort': 123,
'hostPort': 123,
'protocol': 'tcp'|'udp',
'name': 'string',
'appProtocol': 'http'|'http2'|'grpc',
'containerPortRange': 'string'
},
],
'essential': True|False,
'entryPoint': [
'string',
],
'command': [
'string',
],
'environment': [
{
'name': 'string',
'value': 'string'
},
],
'environmentFiles': [
{
'value': 'string',
'type': 's3'
},
],
'mountPoints': [
{
'sourceVolume': 'string',
'containerPath': 'string',
'readOnly': True|False
},
],
'volumesFrom': [
{
'sourceContainer': 'string',
'readOnly': True|False
},
],
'linuxParameters': {
'capabilities': {
'add': [
'string',
],
'drop': [
'string',
]
},
'devices': [
{
'hostPath': 'string',
'containerPath': 'string',
'permissions': [
'read'|'write'|'mknod',
]
},
],
'initProcessEnabled': True|False,
'sharedMemorySize': 123,
'tmpfs': [
{
'containerPath': 'string',
'size': 123,
'mountOptions': [
'string',
]
},
],
'maxSwap': 123,
'swappiness': 123
},
'secrets': [
{
'name': 'string',
'valueFrom': 'string'
},
],
'dependsOn': [
{
'containerName': 'string',
'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY'
},
],
'startTimeout': 123,
'stopTimeout': 123,
'hostname': 'string',
'user': 'string',
'workingDirectory': 'string',
'disableNetworking': True|False,
'privileged': True|False,
'readonlyRootFilesystem': True|False,
'dnsServers': [
'string',
],
'dnsSearchDomains': [
'string',
],
'extraHosts': [
{
'hostname': 'string',
'ipAddress': 'string'
},
],
'dockerSecurityOptions': [
'string',
],
'interactive': True|False,
'pseudoTerminal': True|False,
'dockerLabels': {
'string': 'string'
},
'ulimits': [
{
'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack',
'softLimit': 123,
'hardLimit': 123
},
],
'logConfiguration': {
'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens',
'options': {
'string': 'string'
},
'secretOptions': [
{
'name': 'string',
'valueFrom': 'string'
},
]
},
'healthCheck': {
'command': [
'string',
],
'interval': 123,
'timeout': 123,
'retries': 123,
'startPeriod': 123
},
'systemControls': [
{
'namespace': 'string',
'value': 'string'
},
],
'resourceRequirements': [
{
'value': 'string',
'type': 'GPU'|'InferenceAccelerator'
},
],
'firelensConfiguration': {
'type': 'fluentd'|'fluentbit',
'options': {
'string': 'string'
}
}
},
],
'family': 'string',
'taskRoleArn': 'string',
'executionRoleArn': 'string',
'networkMode': 'bridge'|'host'|'awsvpc'|'none',
'revision': 123,
'volumes': [
{
'name': 'string',
'host': {
'sourcePath': 'string'
},
'dockerVolumeConfiguration': {
'scope': 'task'|'shared',
'autoprovision': True|False,
'driver': 'string',
'driverOpts': {
'string': 'string'
},
'labels': {
'string': 'string'
}
},
'efsVolumeConfiguration': {
'fileSystemId': 'string',
'rootDirectory': 'string',
'transitEncryption': 'ENABLED'|'DISABLED',
'transitEncryptionPort': 123,
'authorizationConfig': {
'accessPointId': 'string',
'iam': 'ENABLED'|'DISABLED'
}
},
'fsxWindowsFileServerVolumeConfiguration': {
'fileSystemId': 'string',
'rootDirectory': 'string',
'authorizationConfig': {
'credentialsParameter': 'string',
'domain': 'string'
}
}
},
],
'status': 'ACTIVE'|'INACTIVE',
'requiresAttributes': [
{
'name': 'string',
'value': 'string',
'targetType': 'container-instance',
'targetId': 'string'
},
],
'placementConstraints': [
{
'type': 'memberOf',
'expression': 'string'
},
],
'compatibilities': [
'EC2'|'FARGATE'|'EXTERNAL',
],
'runtimePlatform': {
'cpuArchitecture': 'X86_64'|'ARM64',
'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX'
},
'requiresCompatibilities': [
'EC2'|'FARGATE'|'EXTERNAL',
],
'cpu': 'string',
'memory': 'string',
'inferenceAccelerators': [
{
'deviceName': 'string',
'deviceType': 'string'
},
],
'pidMode': 'host'|'task',
'ipcMode': 'host'|'task'|'none',
'proxyConfiguration': {
'type': 'APPMESH',
'containerName': 'string',
'properties': [
{
'name': 'string',
'value': 'string'
},
]
},
'registeredAt': datetime(2015, 1, 1),
'deregisteredAt': datetime(2015, 1, 1),
'registeredBy': 'string',
'ephemeralStorage': {
'sizeInGiB': 123
}
}
}
Response Structure
The full description of the deregistered task definition.
The full Amazon Resource Name (ARN) of the task definition.
A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon Elastic Container Service Developer Guide .
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
The name of a container. If you're linking multiple containers together in a task definition, the name
of one container can be entered in the links
of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name
in the Create a container section of the Docker Remote API and the --name
option to docker run.
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either repository-url/image:tag
or repository-url/image@digest
. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image
in the Create a container section of the Docker Remote API and the IMAGE
parameter of docker run.
Images in Amazon ECR repositories can be specified by either using the full registry/repository:tag or registry/repository@digest . For example, 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest or 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE .
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo ).
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent ).
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu ).
The private repository authentication credentials to use.
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
Note
When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret.
The number of cpu
units reserved for the container. This parameter maps to CpuShares
in the Create a container section of the Docker Remote API and the --cpu-shares
option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu
value.
Note
You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see CPU share constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as 0
, which Windows interprets as 1% of one CPU.
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory
value, if one is specified. This parameter maps to Memory
in the Create a container section of the Docker Remote API and the --memory
option to docker run.
If using the Fargate launch type, this parameter is optional.
If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory
and memoryReservation
value, memory
must be greater than memoryReservation
. If you specify memoryReservation
, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory
is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory
parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation
in the Create a container section of the Docker Remote API and the --memory-reservation
option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory
or memoryReservation
in a container definition. If you specify both, memory
must be greater than memoryReservation
. If you specify memoryReservation
, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory
is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation
of 128 MiB, and a memory
hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
The links
parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is bridge
. The name:internalName
construct is analogous to name:alias
in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. For more information about linking Docker containers, go to Legacy container links in the Docker documentation. This parameter maps to Links
in the Create a container section of the Docker Remote API and the --link
option to docker run.
Note
This parameter is not supported for Windows containers.
Warning
Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.
The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc
network mode, only specify the containerPort
. The hostPort
can be left blank or it must be the same value as the containerPort
.
Port mappings on Windows use the NetNAT
gateway address rather than localhost
. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to PortBindings
in the Create a container section of the Docker Remote API and the --publish
option to docker run. If the network mode of a task definition is set to none
, then you can't specify port mappings. If the network mode of a task definition is set to host
, then host ports must either be undefined or they must match the container port in the port mapping.
Note
After a task reaches the RUNNING
status, manual and automatic host and container port assignments are visible in the Network Bindings section of a container description for a selected task in the Amazon ECS console. The assignments are also visible in the networkBindings
section of DescribeTasks responses.
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
If you use containers in a task with the awsvpc
or host
network mode, specify the exposed ports using containerPort
. The hostPort
can be left blank or it must be the same value as the containerPort
.
Note
You can't expose the same container port for multiple protocols. If you attempt this, an error is returned.
After a task reaches the RUNNING
status, manual and automatic host and container port assignments are visible in the networkBindings
section of DescribeTasks API responses.
The port number on the container that's bound to the user-specified or automatically assigned host port.
If you use containers in a task with the awsvpc
or host
network mode, specify the exposed ports using containerPort
.
If you use containers in a task with the bridge
network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort
. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.
The port number on the container instance to reserve for your container.
If you specify a containerPortRange
, leave this field empty and the value of the hostPort
is set as follows:
For tasks that use the awsvpc network mode, the hostPort is set to the same value as the containerPort . This is a static mapping strategy.
For tasks that use the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.
If you use containers in a task with the awsvpc
or host
network mode, the hostPort
can either be left blank or set to the same value as the containerPort
.
If you use containers in a task with the bridge
network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort
(or set it to 0
) while specifying a containerPort
and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range
. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources
of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
The protocol used for the port mapping. Valid values are tcp
and udp
. The default is tcp
.
The name that's used for the port mapping. This parameter only applies to Service Connect. This parameter is the name that you use in the serviceConnectConfiguration
of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange :
You must use either the bridge network mode or the awsvpc network mode.
Your container instance must run a version of the container agent and the ecs-init package that supports this feature.
You don't specify a hostPortRange . The value of the hostPortRange is set as follows:
For tasks that use the awsvpc network mode, the hostPort is set to the same value as the containerPort . This is a static mapping strategy.
For tasks that use the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
You can call DescribeTasks to view the hostPortRange , which are the host ports that are bound to the container ports.
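As an illustration, a single portMappings entry in a container definition might look like the following sketch (the port numbers are placeholders; setting hostPort to 0 under the bridge network mode requests a dynamic host port):
port_mappings = [
    {
        'containerPort': 80,   # port the application listens on inside the container
        'hostPort': 0,         # 0 asks the agent to pick a host port from the ephemeral range (bridge mode)
        'protocol': 'tcp'
    }
]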
If the essential
parameter of a container is marked as true
, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential
parameter of a container is marked as false
, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the Amazon Elastic Container Service Developer Guide .
Warning
Early versions of the Amazon ECS container agent don't properly handle entryPoint
parameters. If you have problems using entryPoint
, update your container agent or enter your commands and arguments as command
array items instead.
The entry point that's passed to the container. This parameter maps to Entrypoint
in the Create a container section of the Docker Remote API and the --entrypoint
option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.
The command that's passed to the container. This parameter maps to Cmd
in the Create a container section of the Docker Remote API and the COMMAND
parameter to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, each argument is a separated string in the array.
The environment variables to pass to a container. This parameter maps to Env
in the Create a container section of the Docker Remote API and the --env
option to docker run.
Warning
We don't recommend that you use plaintext environment variables for sensitive information, such as credential data.
A key-value pair object.
The name of the key-value pair. For environment variables, this is the name of the environment variable.
The value of the key-value pair. For environment variables, this is the value of the environment variable.
A list of files containing the environment variables to pass to a container. This parameter maps to the --env-file
option to docker run.
You can specify up to ten environment files. The file must have a .env
file extension. Each line in an environment file contains an environment variable in VARIABLE=VALUE
format. Lines beginning with #
are treated as comments and are ignored. For more information about the environment variable file syntax, see Declare default environment variables in file.
If there are environment variables specified using the environment
parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the Amazon Elastic Container Service Developer Guide .
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env
file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE
format. Lines beginning with #
are treated as comments and are ignored. For more information about the environment variable file syntax, see Declare default environment variables in file.
If there are environment variables specified using the environment
parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying environment variables in the Amazon Elastic Container Service Developer Guide .
This parameter is only supported for tasks hosted on Fargate using the following platform versions:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
The file type to use. The only supported value is s3
.
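For illustration, a sketch of the environment and environmentFiles parameters of a container definition (the variable names and the S3 object ARN are placeholders):
environment_settings = {
    'environment': [
        {'name': 'APP_ENV', 'value': 'production'}   # plain environment variable; avoid secrets here
    ],
    'environmentFiles': [
        {
            'value': 'arn:aws:s3:::my-config-bucket/app.env',  # placeholder ARN; the object must use a .env extension
            'type': 's3'
        }
    ]
}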
The mount points for data volumes in your container.
This parameter maps to Volumes
in the Create a container section of the Docker Remote API and the --volume
option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData
. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
Details for a volume mount point that's used in a container definition.
The name of the volume to mount. Must be a volume name referenced in the name
parameter of task definition volume
.
The path on the container to mount the host volume at.
If this value is true
, the container has read-only access to the volume. If this value is false
, then the container can write to the volume. The default value is false
.
Data volumes to mount from another container. This parameter maps to VolumesFrom
in the Create a container section of the Docker Remote API and the --volumes-from
option to docker run.
Details on a data volume from another container in the same task definition.
The name of another container within the same task definition to mount volumes from.
If this value is true
, the container has read-only access to the volume. If this value is false
, then the container can write to the volume. The default value is false
.
Linux-specific modifications that are applied to the container, such as Linux kernel capabilities. For more information see KernelCapabilities.
Note
This parameter is not supported for Windows containers.
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
Note
For tasks that use the Fargate launch type, capabilities
is supported for all platform versions but the add
parameter is only supported if using platform version 1.4.0 or later.
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd
in the Create a container section of the Docker Remote API and the --cap-add
option to docker run.
Note
Tasks launched on Fargate only support adding the SYS_PTRACE
kernel capability.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop
in the Create a container section of the Docker Remote API and the --cap-drop
option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
Any host devices to expose to the container. This parameter maps to Devices
in the Create a container section of the Docker Remote API and the --device
option to docker run.
Note
If you're using tasks that use the Fargate launch type, the devices
parameter isn't supported.
An object representing a container instance host device.
The path for the device on the host container instance.
The path inside the container at which to expose the host device.
The explicit permissions to provide to the container for the device. By default, the container has permissions for read
, write
, and mknod
for the device.
Run an init
process inside the container that forwards signals and reaps processes. This parameter maps to the --init
option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The value for the size (in MiB) of the /dev/shm
volume. This parameter maps to the --shm-size
option to docker run.
Note
If you are using tasks that use the Fargate launch type, the sharedMemorySize
parameter is not supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs
option to docker run.
Note
If you're using tasks that use the Fargate launch type, the tmpfs
parameter isn't supported.
The container path, mount options, and size of the tmpfs mount.
The absolute file path where the tmpfs volume is to be mounted.
The maximum size (in MiB) of the tmpfs volume.
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the --memory-swap
option to docker run where the value would be the sum of the container memory plus the maxSwap
value.
If a maxSwap
value of 0
is specified, the container will not use swap. Accepted values are 0
or any positive integer. If the maxSwap
parameter is omitted, the container will use the swap configuration for the container instance it is running on. A maxSwap
value must be set for the swappiness
parameter to be used.
Note
If you're using tasks that use the Fargate launch type, the maxSwap
parameter isn't supported.
This allows you to tune a container's memory swappiness behavior. A swappiness
value of 0
will cause swapping to not happen unless absolutely necessary. A swappiness
value of 100
will cause pages to be swapped very aggressively. Accepted values are whole numbers between 0
and 100
. If the swappiness
parameter is not specified, a default value of 60
is used. If a value is not specified for maxSwap
then this parameter is ignored. This parameter maps to the --memory-swappiness
option to docker run.
Note
If you're using tasks that use the Fargate launch type, the swappiness
parameter isn't supported.
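For EC2-hosted tasks, a linuxParameters sketch might look like the following (paths and sizes are placeholders; several of these settings, such as sharedMemorySize, tmpfs, and maxSwap, aren't supported on Fargate):
linux_parameters = {
    'initProcessEnabled': True,        # run an init process to forward signals and reap processes
    'sharedMemorySize': 128,           # size of /dev/shm in MiB
    'tmpfs': [
        {
            'containerPath': '/tmp/scratch',   # placeholder mount path
            'size': 64,                        # MiB
            'mountOptions': ['rw', 'noexec']
        }
    ],
    'maxSwap': 0                       # 0 disables swap for the container
}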
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide .
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
The name of the secret.
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide .
Note
If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
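As a sketch, injecting a database password from Secrets Manager as an environment variable (the variable name and secret ARN are placeholders):
container_secrets = [
    {
        'name': 'DB_PASSWORD',   # environment variable name exposed to the container
        'valueFrom': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-password'  # placeholder ARN
    }
]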
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide . If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide .
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide . If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide .
Note
For tasks that use the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
The name of a container.
The dependency condition of the container. The following are the available conditions and their behavior:
START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
SUCCESS - This condition is the same as COMPLETE , but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE
, SUCCESS
, or HEALTHY
status. If a startTimeout
value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED
state.
Note
When the ECS_CONTAINER_START_TIMEOUT
container agent configuration variable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at least version 1.26.0
of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide . If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1
of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide .
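As an illustration, a container that should wait for a hypothetical migration container to finish successfully might declare its dependency like this sketch (the container name and timeout are placeholders):
depends_on = [
    {
        'containerName': 'db-migration',   # placeholder name of the dependency container
        'condition': 'SUCCESS'             # wait until it exits with a zero status
    }
]
# The dependency container could also set 'startTimeout': 120 to cap how long other containers wait for it.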
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
The max stop timeout value is 120 seconds. If the parameter isn't specified, the default value of 30 seconds is used.
For tasks that use the EC2 launch type, if the stopTimeout
parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT
is used. If neither the stopTimeout
parameter nor the ECS_CONTAINER_STOP_TIMEOUT
agent configuration variable is set, then the default values of 30 seconds for Linux containers and 30 seconds on Windows containers are used. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide . If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide .
The hostname to use for your container. This parameter maps to Hostname
in the Create a container section of the Docker Remote API and the --hostname
option to docker run.
Note
The hostname
parameter is not supported if you're using the awsvpc
network mode.
The user to use inside the container. This parameter maps to User
in the Create a container section of the Docker Remote API and the --user
option to docker run.
Warning
When running tasks using the host
network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security.
You can specify the user
using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
user
user:group
uid
uid:gid
user:gid
uid:group
Note
This parameter is not supported for Windows containers.
The working directory to run commands inside the container in. This parameter maps to WorkingDir
in the Create a container section of the Docker Remote API and the --workdir
option to docker run.
When this parameter is true, networking is disabled within the container. This parameter maps to NetworkDisabled
in the Create a container section of the Docker Remote API.
Note
This parameter is not supported for Windows containers.
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root
user). This parameter maps to Privileged
in the Create a container section of the Docker Remote API and the --privileged
option to docker run.
Note
This parameter is not supported for Windows containers or tasks run on Fargate.
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs
in the Create a container section of the Docker Remote API and the --read-only
option to docker run.
Note
This parameter is not supported for Windows containers.
A list of DNS servers that are presented to the container. This parameter maps to Dns
in the Create a container section of the Docker Remote API and the --dns
option to docker run.
Note
This parameter is not supported for Windows containers.
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch
in the Create a container section of the Docker Remote API and the --dns-search
option to docker run.
Note
This parameter is not supported for Windows containers.
A list of hostnames and IP address mappings to append to the /etc/hosts
file on the container. This parameter maps to ExtraHosts
in the Create a container section of the Docker Remote API and the --add-host
option to docker run.
Note
This parameter isn't supported for Windows containers or tasks that use the awsvpc
network mode.
Hostnames and IP address entries that are added to the /etc/hosts
file of a container via the extraHosts
parameter of its ContainerDefinition.
The hostname to use in the /etc/hosts
entry.
The IP address to use in the /etc/hosts
entry.
A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This field isn't valid for containers in tasks using the Fargate launch type.
With Windows containers, this parameter can be used to reference a credential spec file when configuring a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers in the Amazon Elastic Container Service Developer Guide .
This parameter maps to SecurityOpt
in the Create a container section of the Docker Remote API and the --security-opt
option to docker run.
Note
The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true
or ECS_APPARMOR_CAPABLE=true
environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide .
For more information about valid values, see Docker Run Security Configuration.
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath"
When this parameter is true
, you can deploy containerized applications that require stdin
or a tty
to be allocated. This parameter maps to OpenStdin
in the Create a container section of the Docker Remote API and the --interactive
option to docker run.
When this parameter is true
, a TTY is allocated. This parameter maps to Tty
in the Create a container section of the Docker Remote API and the --tty
option to docker run.
A key/value map of labels to add to the container. This parameter maps to Labels
in the Create a container section of the Docker Remote API and the --label
option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
A list of ulimits
to set in the container. If a ulimit
value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to Ulimits
in the Create a container section of the Docker Remote API and the --ulimit
option to docker run. Valid naming values are displayed in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile
resource limit parameter which Fargate overrides. The nofile
resource limit sets a restriction on the number of open files that a container can use. The default nofile
soft limit is 1024
and hard limit is 4096
.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
Note
This parameter is not supported for Windows containers.
The ulimit
settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile
resource limit parameter which Fargate overrides. The nofile
resource limit sets a restriction on the number of open files that a container can use. The default nofile
soft limit is 1024
and hard limit is 4096
.
The type
of the ulimit
.
The soft limit for the ulimit
type.
The hard limit for the ulimit
type.
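For illustration, a ulimits sketch that raises the open-files limit for a container (the limit values are placeholders):
ulimits = [
    {
        'name': 'nofile',
        'softLimit': 65536,
        'hardLimit': 65536
    }
]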
The log configuration specification for the container.
This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information about the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Note
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
Note
The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide .
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs
, splunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs
, fluentd
, gelf
, json-file
, journald
, logentries
, syslog
, splunk
, and awsfirelens
.
For more information about using the awslogs
log driver, see Using the awslogs log driver in the Amazon Elastic Container Service Developer Guide .
For more information about using the awsfirelens
log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide .
Note
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
The name of the secret.
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide .
Note
If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
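As a sketch, a logConfiguration that sends container logs to CloudWatch Logs with the awslogs driver (the log group name and Region are placeholders, and the log group is assumed to already exist):
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-group': '/ecs/my-app',       # placeholder CloudWatch Logs log group
        'awslogs-region': 'us-east-1',        # placeholder Region
        'awslogs-stream-prefix': 'web'        # prefix for the per-container log streams
    }
}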
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck
in the Create a container section of the Docker Remote API and the HEALTHCHECK
parameter of docker run.
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD
to run the command arguments directly, or CMD-SHELL
to run the command with the container's default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don't need to include the brackets when you use the Amazon Web Services Management Console.
"CMD-SHELL", "curl -f http://localhost/ || exit 1"
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck
in the Create a container section of the Docker Remote API.
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds.
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5.
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3.
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod
is disabled.
Note
If a health check succeeds within the startPeriod
, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
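For illustration, a healthCheck sketch for a web container (the command and thresholds are placeholders chosen within the documented ranges):
health_check = {
    'command': ['CMD-SHELL', 'curl -f http://localhost/ || exit 1'],  # exit 0 means healthy
    'interval': 30,      # seconds between checks
    'timeout': 5,        # seconds to wait for each check
    'retries': 3,        # consecutive failures before the container is marked unhealthy
    'startPeriod': 60    # grace period before failures count toward retries
}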
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls
in the Create a container section of the Docker Remote API and the --sysctl
option to docker run.
Note
We don't recommend that you specify network-related systemControls
parameters for multiple containers in a single task that also uses either the awsvpc
or host
network modes. For tasks that use the awsvpc
network mode, the container that's started last determines which systemControls
parameters take effect. For tasks that use the host
network mode, it changes the container instance's namespaced kernel parameters as well as the containers.
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls
in the Create a container section of the Docker Remote API and the --sysctl
option to docker run.
We don't recommend that you specify network-related systemControls
parameters for multiple containers in a single task that also uses either the awsvpc or host network mode, for the following reasons:
For tasks that use the awsvpc network mode, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
For tasks that use the host network mode, the systemControls parameter applies to the container instance's kernel parameters as well as those of all containers of any tasks running on that container instance.
The namespaced kernel parameter to set a value
for.
The value for the namespaced kernel parameter that's specified in namespace
.
The type and amount of a resource to assign to a container. The only supported resource is a GPU.
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
The value for the specified resource type.
If the GPU
type is used, the value is the number of physical GPUs
the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
If the InferenceAccelerator
type is used, the value
matches the deviceName
for an InferenceAccelerator specified in a task definition.
The type of resource to assign to a container. The supported values are GPU
or InferenceAccelerator
.
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide .
The log router to use. The valid values are fluentd
or fluentbit
.
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}
. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide .
Note
Tasks hosted on Fargate only support the file
configuration file type.
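As a sketch, a firelensConfiguration that routes logs through Fluent Bit with ECS log metadata enabled (no custom configuration file is used here; the option key shown is the one documented above):
firelens_configuration = {
    'type': 'fluentbit',
    'options': {
        'enable-ecs-log-metadata': 'true'   # add task, cluster, and container instance details to log events
    }
}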
The name of a family that this task definition is registered to. Up to 255 characters are allowed. Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For more information, see Amazon ECS Task Role in the Amazon Elastic Container Service Developer Guide .
IAM roles for tasks on Windows require that the -EnableTaskIAMRole
option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code to use the feature. For more information, see Windows IAM roles for tasks in the Amazon Elastic Container Service Developer Guide .
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required depending on the requirements of your task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide .
The Docker networking mode to use for the containers in the task. The valid values are none
, bridge
, awsvpc
, and host
. If no network mode is specified, the default is bridge
.
For Amazon ECS tasks on Fargate, the awsvpc
network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default>
or awsvpc
can be used. If the network mode is set to none
, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host
and awsvpc
network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge
mode.
With the host
and awsvpc
network modes, exposed container ports are mapped directly to the corresponding host port (for the host
network mode) or the attached elastic network interface port (for the awsvpc
network mode), so you cannot take advantage of dynamic host port mappings.
Warning
When using the host
network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user.
If the network mode is awsvpc
, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide .
If the network mode is host
, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.
For more information, see Network settings in the Docker run reference .
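As an illustration, registering a hypothetical Fargate task definition pairs the awsvpc network mode with the FARGATE compatibility (the family name, image, and sizes are placeholders):
response = client.register_task_definition(
    family='my-fargate-app',                  # placeholder family name
    networkMode='awsvpc',                     # required network mode for Fargate tasks
    requiresCompatibilities=['FARGATE'],
    cpu='256',
    memory='512',
    containerDefinitions=[
        {
            'name': 'web',
            'image': 'public.ecr.aws/docker/library/nginx:latest',   # placeholder image
            'essential': True
        }
    ]
)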
The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is 1
. Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is true even if you deregistered previous revisions in this family.
The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the Amazon Elastic Container Service Developer Guide .
Note
The host
and sourcePath
parameters aren't supported for tasks run on Fargate.
A data volume that's used in a task definition. For tasks that use the Amazon Elastic File System (Amazon EFS), specify an efsVolumeConfiguration
. For Windows tasks that use Amazon FSx for Windows File Server file system, specify a fsxWindowsFileServerVolumeConfiguration
. For tasks that use a Docker volume, specify a DockerVolumeConfiguration
. For tasks that use a bind mount host volume, specify a host
and optional sourcePath
. For more information, see Using Data Volumes in Tasks.
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This name is referenced in the sourceVolume
parameter of container definition mountPoints
.
This parameter is specified when you use bind mount host volumes. The contents of the host
parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host
parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData
. Windows containers can't mount directories on a different drive, and mount points can't span drives. For example, you can mount C:\my\path:C:\my\path
and D:\:D:\
, but not D:\my\path:C:\my\path
or D:\:C:\my\path
.
When the host
parameter is used, specify a sourcePath
to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host
parameter contains a sourcePath
file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath
value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you're using the Fargate launch type, the sourcePath
parameter is not supported.
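As a sketch, a bind mount host volume with a sourcePath might be declared as follows in a register_task_definition call; the volume name, host path, and container path are illustrative.
# Illustrative volume and mount point for a bind mount host volume (EC2 launch type only).
volumes = [
    {
        'name': 'webdata',                    # referenced by sourceVolume below
        'host': {
            'sourcePath': '/ecs/webdata'      # persists on the container instance until deleted
        }
    }
]

container_definition = {
    'name': 'web',                            # placeholder container name
    'image': 'nginx',                         # placeholder image
    'essential': True,
    'mountPoints': [
        {
            'sourceVolume': 'webdata',
            'containerPath': '/usr/share/nginx/html',
            'readOnly': True
        }
    ]
}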
This parameter is specified when you use Docker volumes.
Windows containers only support the use of the local
driver. To use bind mounts, specify the host
parameter instead.
Note
Docker volumes aren't supported by tasks run on Fargate.
The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a task
are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as shared
persist after the task stops.
If this value is true
, the Docker volume is created if it doesn't already exist.
Note
This field is only used if the scope
is shared
.
The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls
to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. For more information, see Docker plugin discovery. This parameter maps to Driver
in the Create a volume section of the Docker Remote API and the --driver
option to docker volume create.
A map of Docker driver-specific options passed through. This parameter maps to DriverOpts
in the Create a volume section of the Docker Remote API and the --opt
option to docker volume create.
Custom metadata to add to your Docker volume. This parameter maps to Labels
in the Create a volume section of the Docker Remote API and the --label
option to docker volume create.
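The following sketch shows a possible dockerVolumeConfiguration for a shared, auto-provisioned volume using the local driver; the volume name and label are illustrative.
# Illustrative Docker volume definition (not supported for tasks run on Fargate).
volumes = [
    {
        'name': 'shared-cache',
        'dockerVolumeConfiguration': {
            'scope': 'shared',        # the volume persists after the task stops
            'autoprovision': True,    # only valid when scope is 'shared'
            'driver': 'local',        # Windows containers only support the local driver
            'labels': {
                'project': 'example'  # illustrative label
            }
        }
    }
]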
This parameter is specified when you use an Amazon Elastic File System file system for task storage.
The Amazon EFS file system ID to use.
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying /
will have the same effect as omitting this parameter.
Warning
If an EFS access point is specified in the authorizationConfig
, the root directory parameter must either be omitted or set to /
which will enforce the path set on the EFS access point.
Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED
is used. For more information, see Encrypting data in transit in the Amazon Elastic File System User Guide .
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the Amazon Elastic File System User Guide .
The authorization configuration details for the Amazon EFS file system.
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration
must either be omitted or set to /
which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the EFSVolumeConfiguration
. For more information, see Working with Amazon EFS access points in the Amazon Elastic File System User Guide .
Determines whether to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the EFSVolumeConfiguration
. If this parameter is omitted, the default value of DISABLED
is used. For more information, see Using Amazon EFS access points in the Amazon Elastic Container Service Developer Guide .
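A possible efsVolumeConfiguration that uses an access point with IAM authorization is sketched below; the file system ID and access point ID are placeholders.
# Illustrative EFS volume. With an access point, rootDirectory must be omitted or '/',
# and transit encryption must be enabled.
volumes = [
    {
        'name': 'efs-storage',
        'efsVolumeConfiguration': {
            'fileSystemId': 'fs-12345678',                  # placeholder file system ID
            'transitEncryption': 'ENABLED',
            'authorizationConfig': {
                'accessPointId': 'fsap-1234567890abcdef0',  # placeholder access point ID
                'iam': 'ENABLED'
            }
        }
    }
]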
This parameter is specified when you use Amazon FSx for Windows File Server file system for task storage.
The Amazon FSx for Windows File Server file system ID to use.
The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.
The authorization configuration details for the Amazon FSx for Windows File Server file system.
The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or an SSM Parameter Store parameter. The ARN refers to the stored credentials.
A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or a self-hosted AD on Amazon EC2.
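A sketch of an fsxWindowsFileServerVolumeConfiguration follows; the file system ID, credentials ARN, share directory, and domain are all placeholders.
# Illustrative FSx for Windows File Server volume (Windows tasks only).
volumes = [
    {
        'name': 'fsx-share',
        'fsxWindowsFileServerVolumeConfiguration': {
            'fileSystemId': 'fs-0123456789abcdef0',   # placeholder file system ID
            'rootDirectory': 'share',                 # placeholder share directory
            'authorizationConfig': {
                'credentialsParameter': 'arn:aws:secretsmanager:us-east-1:111122223333:secret:fsx-credentials',  # placeholder ARN
                'domain': 'corp.example.com'          # placeholder domain
            }
        }
    }
]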
The status of the task definition.
The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide .
Note
This parameter isn't supported for tasks run on Fargate.
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide .
The name of the attribute. The name
must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
The value of the attribute. The value
must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
An array of placement constraint objects to use for tasks.
Note
This parameter isn't supported for tasks run on Fargate.
An object representing a constraint on task placement in the task definition. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide .
Note
Task placement constraints aren't supported for tasks run on Fargate.
The type of constraint. The MemberOf
constraint restricts selection to be from a group of valid candidates.
A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide .
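For example, a memberOf placement constraint with a cluster query language expression might look like the following sketch; the instance-type expression is illustrative.
# Illustrative task placement constraint (not supported for tasks run on Fargate).
placement_constraints = [
    {
        'type': 'memberOf',
        'expression': 'attribute:ecs.instance-type =~ t2.*'  # restrict placement to t2 instances
    }
]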
The task launch types the task definition was validated against during task definition registration. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type.
When you specify a task in a service, this value must match the runtimePlatform
value of the service.
The CPU architecture.
You can run your Linux tasks on an ARM-based platform by setting the value to ARM64
. This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate.
The operating system.
The task launch types the task definition was validated against. To determine which task launch types the task definition is validated for, see the TaskDefinition$compatibilities parameter.
The number of cpu
units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the memory
parameter.
The CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate.
256 (.25 vCPU) - Available memory values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory values: between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory values: between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
8192 (8 vCPU) - Available memory values: between 16 GB and 60 GB in 4 GB increments. This option requires Linux platform 1.4.0 or later.
16384 (16 vCPU) - Available memory values: between 32 GB and 120 GB in 8 GB increments. This option requires Linux platform 1.4.0 or later.
The amount (in MiB) of memory used by the task.
If your tasks run on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition.
If your tasks run on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the cpu
parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU). This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU). This option requires Linux platform 1.4.0 or later.
The Elastic Inference accelerator that's associated with the task.
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide .
The Elastic Inference accelerator device name. The deviceName
must also be referenced in a container definition as a ResourceRequirement.
The Elastic Inference accelerator type to use.
The process namespace to use for the containers in the task. The valid values are host
or task
. If host
is specified, then all containers within the tasks that specified the host
PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If task
is specified, all containers within the specified task share the same process namespace. If no value is specified, the default is a private namespace. For more information, see PID settings in the Docker run reference .
If the host
PID mode is used, be aware that there is a heightened risk of undesired process namespace exposure. For more information, see Docker security.
Note
This parameter is not supported for Windows containers or tasks run on Fargate.
The IPC resource namespace to use for the containers in the task. The valid values are host
, task
, or none
. If host
is specified, then all containers within the tasks that specified the host
IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task
is specified, all containers within the specified task share the same IPC resources. If none
is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. For more information, see IPC settings in the Docker run reference .
If the host
IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure. For more information, see Docker security.
If you are setting namespaced kernel parameters using systemControls
for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the Amazon Elastic Container Service Developer Guide .
For tasks that use the host IPC mode, IPC namespace related systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related systemControls will apply to all containers within a task.
Note
This parameter is not supported for Windows containers or tasks run on Fargate.
The configuration details for the App Mesh proxy.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the ecs-init
package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version 20190301
or later, they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide .
The proxy type. The only supported value is APPMESH
.
The name of the container that will serve as the App Mesh proxy.
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.
IgnoredUID - (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
IgnoredGID - (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
AppPorts - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
ProxyIngressPort - (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
ProxyEgressPort - (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
EgressIgnoredPorts - (Required) The egress traffic going to the specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
EgressIgnoredIPs - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
A key-value pair object.
The name of the key-value pair. For environment variables, this is the name of the environment variable.
The value of the key-value pair. For environment variables, this is the value of the environment variable.
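A sketch of a proxyConfiguration for App Mesh is shown below; the container name and port values follow common Envoy defaults and are illustrative, not prescribed by this documentation.
# Illustrative App Mesh proxy configuration for register_task_definition.
proxy_configuration = {
    'type': 'APPMESH',
    'containerName': 'envoy',                                     # placeholder proxy container name
    'properties': [
        {'name': 'IgnoredUID', 'value': '1337'},                  # UID the proxy container runs as
        {'name': 'AppPorts', 'value': '8080'},                    # placeholder application port
        {'name': 'ProxyIngressPort', 'value': '15000'},
        {'name': 'ProxyEgressPort', 'value': '15001'},
        {'name': 'EgressIgnoredPorts', 'value': ''},              # may be an empty list
        {'name': 'EgressIgnoredIPs', 'value': '169.254.170.2,169.254.169.254'}
    ]
}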
The Unix timestamp for the time when the task definition was registered.
The Unix timestamp for the time when the task definition was deregistered.
The principal that registered the task definition.
The ephemeral storage settings to use for tasks run with the task definition.
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21
GiB and the maximum supported value is 200
GiB.
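As a sketch, expanded ephemeral storage can be requested at the task level as follows; 100 GiB is an arbitrary value within the supported 21-200 GiB range.
# Illustrative task-level ephemeral storage setting for a Fargate task definition.
ephemeral_storage = {'sizeInGiB': 100}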
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
describe_capacity_providers
(**kwargs)¶Describes one or more of your capacity providers.
See also: AWS API Documentation
Request Syntax
response = client.describe_capacity_providers(
capacityProviders=[
'string',
],
include=[
'TAGS',
],
maxResults=123,
nextToken='string'
)
The short name or full Amazon Resource Name (ARN) of one or more capacity providers. Up to 100
capacity providers can be described in an action.
Specifies whether or not you want to see the resource tags for the capacity provider. If TAGS
is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
The maximum number of results returned by DescribeCapacityProviders in paginated output. When this parameter is used, DescribeCapacityProviders only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeCapacityProviders request with the returned nextToken value. This value can be between 1 and 10. If this parameter is not used, then DescribeCapacityProviders returns up to 10 results and a nextToken value if applicable.
The nextToken value returned from a previous paginated DescribeCapacityProviders request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value.
Note
This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes.
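A minimal sketch of retrieving all capacity providers by following the nextToken value from page to page:
import boto3

client = boto3.client('ecs')

# Collect every capacity provider, up to 10 per page, following nextToken until it is absent.
capacity_providers = []
kwargs = {'maxResults': 10, 'include': ['TAGS']}
while True:
    page = client.describe_capacity_providers(**kwargs)
    capacity_providers.extend(page['capacityProviders'])
    next_token = page.get('nextToken')
    if not next_token:
        break
    kwargs['nextToken'] = next_token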
dict
Response Syntax
{
'capacityProviders': [
{
'capacityProviderArn': 'string',
'name': 'string',
'status': 'ACTIVE'|'INACTIVE',
'autoScalingGroupProvider': {
'autoScalingGroupArn': 'string',
'managedScaling': {
'status': 'ENABLED'|'DISABLED',
'targetCapacity': 123,
'minimumScalingStepSize': 123,
'maximumScalingStepSize': 123,
'instanceWarmupPeriod': 123
},
'managedTerminationProtection': 'ENABLED'|'DISABLED'
},
'updateStatus': 'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED',
'updateStatusReason': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
]
},
],
'failures': [
{
'arn': 'string',
'reason': 'string',
'detail': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
capacityProviders (list) --
The list of capacity providers.
(dict) --
The details for a capacity provider.
capacityProviderArn (string) --
The Amazon Resource Name (ARN) that identifies the capacity provider.
name (string) --
The name of the capacity provider.
status (string) --
The current status of the capacity provider. Only capacity providers in an ACTIVE
state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE
status.
autoScalingGroupProvider (dict) --
The Auto Scaling group settings for the capacity provider.
autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity value for the capacity provider. The specified value must be greater than 0
and less than or equal to 100
. A value of 100
results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of 1
is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
If you use a capacity provider with an Auto Scaling group configured with more than one Amazon EC2 instance type or Availability Zone, Amazon ECS will scale up by the exact minimum scaling step size value and will ignore both the maximum scaling step size as well as the capacity demand.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of 1
is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, to wait after a newly launched Amazon EC2 instance before it can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300
seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is disabled.
Warning
When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection doesn't work.
When managed termination protection is enabled, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions enabled as well. For more information, see Instance Protection in the Auto Scaling User Guide .
When managed termination protection is disabled, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
updateStatus (string) --
The update status of the capacity provider. The following are the possible states that are returned.
DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.
DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE
status.
DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
updateStatusReason (string) --
The update status reason. This provides further details about the update status for the capacity provider.
tags (list) --
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide .
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
nextToken (string) --
The nextToken
value to include in a future DescribeCapacityProviders
request. When the results of a DescribeCapacityProviders
request exceed maxResults
, this value can be used to retrieve the next page of results. This value is null
when there are no more results to return.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
describe_clusters
(**kwargs)¶Describes one or more of your clusters.
See also: AWS API Documentation
Request Syntax
response = client.describe_clusters(
clusters=[
'string',
],
include=[
'ATTACHMENTS'|'CONFIGURATIONS'|'SETTINGS'|'STATISTICS'|'TAGS',
]
)
A list of up to 100 cluster names or full cluster Amazon Resource Name (ARN) entries. If you do not specify a cluster, the default cluster is assumed.
Determines whether to include additional information about the clusters in the response. If this field is omitted, this information isn't included.
If ATTACHMENTS
is specified, the attachments for the container instances or tasks within the cluster are included, for example the capacity providers.
If SETTINGS
is specified, the settings for the cluster are included.
If CONFIGURATIONS
is specified, the configuration for the cluster is included.
If STATISTICS
is specified, the task and service count is included, separated by launch type.
If TAGS
is specified, the metadata tags associated with the cluster are included.
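For example, the following sketch requests cluster settings and statistics for the default cluster; the printed fields are a small subset of the response.
import boto3

client = boto3.client('ecs')

# Describe the default cluster and include optional settings and statistics.
response = client.describe_clusters(
    clusters=['default'],
    include=['SETTINGS', 'STATISTICS']
)
for cluster in response['clusters']:
    print(cluster['clusterName'], cluster['status'],
          'running tasks:', cluster['runningTasksCount'])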
dict
Response Syntax
{
'clusters': [
{
'clusterArn': 'string',
'clusterName': 'string',
'configuration': {
'executeCommandConfiguration': {
'kmsKeyId': 'string',
'logging': 'NONE'|'DEFAULT'|'OVERRIDE',
'logConfiguration': {
'cloudWatchLogGroupName': 'string',
'cloudWatchEncryptionEnabled': True|False,
's3BucketName': 'string',
's3EncryptionEnabled': True|False,
's3KeyPrefix': 'string'
}
}
},
'status': 'string',
'registeredContainerInstancesCount': 123,
'runningTasksCount': 123,
'pendingTasksCount': 123,
'activeServicesCount': 123,
'statistics': [
{
'name': 'string',
'value': 'string'
},
],
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'settings': [
{
'name': 'containerInsights',
'value': 'string'
},
],
'capacityProviders': [
'string',
],
'defaultCapacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'attachments': [
{
'id': 'string',
'type': 'string',
'status': 'string',
'details': [
{
'name': 'string',
'value': 'string'
},
]
},
],
'attachmentsStatus': 'string',
'serviceConnectDefaults': {
'namespace': 'string'
}
},
],
'failures': [
{
'arn': 'string',
'reason': 'string',
'detail': 'string'
},
]
}
Response Structure
(dict) --
clusters (list) --
The list of clusters.
(dict) --
A regional grouping of one or more container instances where you can run task requests. Each account receives a default cluster the first time you use the Amazon ECS service, but you may also create other clusters. Clusters may contain more than one instance type simultaneously.
clusterArn (string) --
The Amazon Resource Name (ARN) that identifies the cluster. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
clusterName (string) --
A user-generated string that you use to identify your cluster.
configuration (dict) --
The execute command configuration for the cluster.
executeCommandConfiguration (dict) --
The details of the execute command configuration.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the data between the local client and the container.
logging (string) --
The log setting to use for redirecting logs for your execute command results. The following log settings are available.
NONE: The execute command session is not logged.
DEFAULT: The awslogs configuration in the task definition is used. If no logging parameter is specified, it defaults to this value. If no awslogs log driver is configured in the task definition, the output won't be logged.
OVERRIDE: Specify the logging details as a part of logConfiguration. If the OVERRIDE logging option is specified, the logConfiguration is required.
logConfiguration (dict) --
The log configuration for the results of the execute command actions. The logs can be sent to CloudWatch Logs or an Amazon S3 bucket. When logging=OVERRIDE
is specified, a logConfiguration
must be provided.
cloudWatchLogGroupName (string) --
The name of the CloudWatch log group to send logs to.
Note
The CloudWatch log group must already be created.
cloudWatchEncryptionEnabled (boolean) --
Determines whether to use encryption on the CloudWatch logs. If not specified, encryption will be disabled.
s3BucketName (string) --
The name of the S3 bucket to send logs to.
Note
The S3 bucket must already be created.
s3EncryptionEnabled (boolean) --
Determines whether to use encryption on the S3 logs. If not specified, encryption is not used.
s3KeyPrefix (string) --
An optional folder in the S3 bucket to place logs in.
status (string) --
The status of the cluster. The following are the possible states that are returned.
ACTIVE
The cluster is ready to accept tasks and if applicable you can register container instances with the cluster.
PROVISIONING
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being created.
DEPROVISIONING
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider are being deleted.
FAILED
The cluster has capacity providers that are associated with it and the resources needed for the capacity provider have failed to create.
INACTIVE
The cluster has been deleted. Clusters with an INACTIVE
status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future. We don't recommend that you rely on INACTIVE
clusters persisting.
registeredContainerInstancesCount (integer) --
The number of container instances registered into the cluster. This includes container instances in both ACTIVE
and DRAINING
status.
runningTasksCount (integer) --
The number of tasks in the cluster that are in the RUNNING
state.
pendingTasksCount (integer) --
The number of tasks in the cluster that are in the PENDING
state.
activeServicesCount (integer) --
The number of services that are running on the cluster in an ACTIVE
state. You can view these services with ListServices.
statistics (list) --
Additional information about your clusters that are separated by launch type. They include the following:
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
tags (list) --
The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
settings (list) --
The settings for the cluster. This parameter indicates whether CloudWatch Container Insights is enabled or disabled for a cluster.
(dict) --
The settings to use when creating a cluster. This parameter is used to turn on CloudWatch Container Insights for a cluster.
name (string) --
The name of the cluster setting. The only supported value is containerInsights
.
value (string) --
The value to set for the cluster setting. The supported values are enabled
and disabled
. If enabled
is specified, CloudWatch Container Insights will be enabled for the cluster, otherwise it will be disabled unless the containerInsights
account setting is enabled. If a cluster value is specified, it will override the containerInsights
value set with PutAccountSetting or PutAccountSettingDefault.
capacityProviders (list) --
The capacity providers associated with the cluster.
defaultCapacityProviderStrategy (list) --
The default capacity provider strategy for the cluster. When services or tasks are run in the cluster with no launch type or capacity provider strategy specified, the default capacity provider strategy is used.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers that both have a weight of 1
. Then, when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
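A sketch of a capacity provider strategy that mirrors the weighting described above (a base of 2 tasks, then a 1:4 split) is shown below; the cluster, service, and task definition names are placeholders.
import boto3

client = boto3.client('ecs')

# Illustrative use of base and weight in a capacity provider strategy.
response = client.create_service(
    cluster='default',                       # placeholder cluster name
    serviceName='web',                       # placeholder service name
    taskDefinition='webapp',                 # placeholder task definition family
    desiredCount=10,
    capacityProviderStrategy=[
        {'capacityProvider': 'FARGATE', 'base': 2, 'weight': 1},
        {'capacityProvider': 'FARGATE_SPOT', 'base': 0, 'weight': 4}
    ]
)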
attachments (list) --
The resources attached to a cluster. When using a capacity provider with a cluster, the capacity provider and associated resources are returned as cluster attachments.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface
.
status (string) --
The status of the attachment. Valid values are PRECREATED
, CREATED
, ATTACHING
, ATTACHED
, DETACHING
, DETACHED
, DELETED
, and FAILED
.
details (list) --
Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
attachmentsStatus (string) --
The status of the capacity providers associated with the cluster. The following are the states that are returned.
UPDATE_IN_PROGRESS
The available capacity providers for the cluster are updating.
UPDATE_COMPLETE
The capacity providers have successfully updated.
UPDATE_FAILED
The capacity provider updates failed.
serviceConnectDefaults (dict) --
Use this parameter to set a default Service Connect namespace. After you set a default Service Connect namespace, any new services with Service Connect turned on that are created in the cluster are added as client services in the namespace. This setting only applies to new services that set the enabled
parameter to true
in the ServiceConnectConfiguration
. You can set the namespace of each service individually in the ServiceConnectConfiguration
to override this default parameter.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide .
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace. When you create a service and don't specify a Service Connect configuration, this namespace is used.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide .
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
Examples
This example provides a description of the specified cluster in your default region.
response = client.describe_clusters(
clusters=[
'default',
],
)
print(response)
Expected Output:
{
'clusters': [
{
'clusterArn': 'arn:aws:ecs:us-east-1:aws_account_id:cluster/default',
'clusterName': 'default',
'status': 'ACTIVE',
},
],
'failures': [
],
'ResponseMetadata': {
'...': '...',
},
}
describe_container_instances
(**kwargs)¶Describes one or more container instances. Returns metadata about each container instance requested.
See also: AWS API Documentation
Request Syntax
response = client.describe_container_instances(
cluster='string',
containerInstances=[
'string',
],
include=[
'TAGS'|'CONTAINER_INSTANCE_HEALTH',
]
)
[REQUIRED]
A list of up to 100 container instance IDs or full Amazon Resource Name (ARN) entries.
Specifies whether you want to see the resource tags for the container instance. If TAGS
is specified, the tags are included in the response. If CONTAINER_INSTANCE_HEALTH
is specified, the container instance health is included in the response. If this field is omitted, tags and container instance health status aren't included in the response.
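A sketch that lists the container instances in a cluster and then describes them with health information; the cluster name is a placeholder.
import boto3

client = boto3.client('ecs')

# List the container instance ARNs in the cluster, then describe up to 100 of them.
arns = client.list_container_instances(cluster='default')['containerInstanceArns']
if arns:
    response = client.describe_container_instances(
        cluster='default',
        containerInstances=arns,
        include=['CONTAINER_INSTANCE_HEALTH']
    )
    for instance in response['containerInstances']:
        print(instance['ec2InstanceId'], instance['status'],
              instance['healthStatus']['overallStatus'])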
dict
Response Syntax
{
'containerInstances': [
{
'containerInstanceArn': 'string',
'ec2InstanceId': 'string',
'capacityProviderName': 'string',
'version': 123,
'versionInfo': {
'agentVersion': 'string',
'agentHash': 'string',
'dockerVersion': 'string'
},
'remainingResources': [
{
'name': 'string',
'type': 'string',
'doubleValue': 123.0,
'longValue': 123,
'integerValue': 123,
'stringSetValue': [
'string',
]
},
],
'registeredResources': [
{
'name': 'string',
'type': 'string',
'doubleValue': 123.0,
'longValue': 123,
'integerValue': 123,
'stringSetValue': [
'string',
]
},
],
'status': 'string',
'statusReason': 'string',
'agentConnected': True|False,
'runningTasksCount': 123,
'pendingTasksCount': 123,
'agentUpdateStatus': 'PENDING'|'STAGING'|'STAGED'|'UPDATING'|'UPDATED'|'FAILED',
'attributes': [
{
'name': 'string',
'value': 'string',
'targetType': 'container-instance',
'targetId': 'string'
},
],
'registeredAt': datetime(2015, 1, 1),
'attachments': [
{
'id': 'string',
'type': 'string',
'status': 'string',
'details': [
{
'name': 'string',
'value': 'string'
},
]
},
],
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'healthStatus': {
'overallStatus': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING',
'details': [
{
'type': 'CONTAINER_RUNTIME',
'status': 'OK'|'IMPAIRED'|'INSUFFICIENT_DATA'|'INITIALIZING',
'lastUpdated': datetime(2015, 1, 1),
'lastStatusChange': datetime(2015, 1, 1)
},
]
}
},
],
'failures': [
{
'arn': 'string',
'reason': 'string',
'detail': 'string'
},
]
}
Response Structure
(dict) --
containerInstances (list) --
The list of container instances.
(dict) --
An Amazon EC2 or External instance that's running the Amazon ECS agent and has been registered with a cluster.
containerInstanceArn (string) --
The Amazon Resource Name (ARN) of the container instance. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
ec2InstanceId (string) --
The ID of the container instance. For Amazon EC2 instances, this value is the Amazon EC2 instance ID. For external instances, this value is the Amazon Web Services Systems Manager managed instance ID.
capacityProviderName (string) --
The capacity provider that's associated with the container instance.
version (integer) --
The version counter for the container instance. Every time a container instance experiences a change that triggers a CloudWatch event, the version counter is incremented. If you're replicating your Amazon ECS container instance state with CloudWatch Events, you can compare the version of a container instance reported by the Amazon ECS APIs with the version reported in CloudWatch Events for the container instance (inside the detail
object) to verify that the version in your event stream is current.
versionInfo (dict) --
The version information for the Amazon ECS container agent and Docker daemon running on the container instance.
agentVersion (string) --
The version number of the Amazon ECS container agent.
agentHash (string) --
The Git commit hash for the Amazon ECS container agent build on the amazon-ecs-agent GitHub repository.
dockerVersion (string) --
The Docker version that's running on the container instance.
remainingResources (list) --
For CPU and memory resource types, this parameter describes the remaining CPU and memory that wasn't already allocated to tasks and is therefore available for new tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent (at instance registration time) and any task containers that have reserved port mappings on the host (with the host
or bridge
network mode). Any port that's not specified here is available for new tasks.
(dict) --
Describes the resources available for a container instance.
name (string) --
The name of the resource, such as CPU
, MEMORY
, PORTS
, PORTS_UDP
, or a user-defined resource.
type (string) --
The type of the resource. Valid values: INTEGER
, DOUBLE
, LONG
, or STRINGSET
.
doubleValue (float) --
When the doubleValue
type is set, the value of the resource must be a double precision floating-point type.
longValue (integer) --
When the longValue
type is set, the value of the resource must be an extended precision floating-point type.
integerValue (integer) --
When the integerValue
type is set, the value of the resource must be an integer.
stringSetValue (list) --
When the stringSetValue
type is set, the value of the resource must be a string type.
registeredResources (list) --
For CPU and memory resource types, this parameter describes the amount of each resource that was available on the container instance when the container agent registered it with Amazon ECS. This value represents the total amount of CPU and memory that can be allocated on this container instance to tasks. For port resource types, this parameter describes the ports that were reserved by the Amazon ECS container agent when it registered the container instance with Amazon ECS.
(dict) --
Describes the resources available for a container instance.
name (string) --
The name of the resource, such as CPU
, MEMORY
, PORTS
, PORTS_UDP
, or a user-defined resource.
type (string) --
The type of the resource. Valid values: INTEGER
, DOUBLE
, LONG
, or STRINGSET
.
doubleValue (float) --
When the doubleValue
type is set, the value of the resource must be a double precision floating-point type.
longValue (integer) --
When the longValue
type is set, the value of the resource must be an extended precision floating-point type.
integerValue (integer) --
When the integerValue
type is set, the value of the resource must be an integer.
stringSetValue (list) --
When the stringSetValue
type is set, the value of the resource must be a string type.
status (string) --
The status of the container instance. The valid values are REGISTERING
, REGISTRATION_FAILED
, ACTIVE
, INACTIVE
, DEREGISTERING
, or DRAINING
.
If your account has opted in to the awsvpcTrunking
account setting, then any newly registered container instance will transition to a REGISTERING
status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance will transition to a REGISTRATION_FAILED
status. You can describe the container instance and see the reason for failure in the statusReason
parameter. Once the container instance is terminated, the instance transitions to a DEREGISTERING
status while the trunk elastic network interface is deprovisioned. The instance then transitions to an INACTIVE
status.
The ACTIVE
status indicates that the container instance can accept tasks. The DRAINING
status indicates that new tasks aren't placed on the container instance, and any service tasks running on the container instance are removed if possible. For more information, see Container instance draining in the Amazon Elastic Container Service Developer Guide .
statusReason (string) --
The reason that the container instance reached its current status.
agentConnected (boolean) --
This parameter returns true
if the agent is connected to Amazon ECS. An instance with an agent that may be unhealthy or stopped returns false
. Only instances connected to an agent can accept task placement requests.
runningTasksCount (integer) --
The number of tasks on the container instance that are in the RUNNING
status.
pendingTasksCount (integer) --
The number of tasks on the container instance that are in the PENDING
status.
agentUpdateStatus (string) --
The status of the most recent agent update. If an update wasn't ever requested, this value is NULL
.
attributes (list) --
The attributes set for the container instance, either by the Amazon ECS container agent at instance registration or manually with the PutAttributes operation.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide .
name (string) --
The name of the attribute. The name
must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value
must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
registeredAt (datetime) --
The Unix timestamp for the time when the container instance was registered.
attachments (list) --
The resources attached to a container instance, such as an elastic network interface.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface
.
status (string) --
The status of the attachment. Valid values are PRECREATED
, CREATED
, ATTACHING
, ATTACHED
, DETACHING
, DETACHED
, DELETED
, and FAILED
.
details (list) --
Details of the attachment. For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
tags (list) --
The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
healthStatus (dict) --
An object representing the health status of the container instance.
overallStatus (string) --
The overall health status of the container instance. This is an aggregate status of all container instance health checks.
details (list) --
An array of objects representing the details of the container instance health status.
(dict) --
An object representing the result of a container instance health status check.
type (string) --
The type of container instance health status that was verified.
status (string) --
The container instance health status.
lastUpdated (datetime) --
The Unix timestamp for when the container instance health status was last updated.
lastStatusChange (datetime) --
The Unix timestamp for when the container instance health status last changed.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide .
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
Exceptions
ECS.Client.exceptions.ServerException
ECS.Client.exceptions.ClientException
ECS.Client.exceptions.InvalidParameterException
ECS.Client.exceptions.ClusterNotFoundException
Examples
This example provides a description of the specified container instance in your default region, using the container instance UUID as an identifier.
response = client.describe_container_instances(
cluster='default',
containerInstances=[
'f2756532-8f13-4d53-87c9-aed50dc94cd7',
],
)
print(response)
Expected Output:
{
'containerInstances': [
{
'agentConnected': True,
'containerInstanceArn': 'arn:aws:ecs:us-east-1:012345678910:container-instance/f2756532-8f13-4d53-87c9-aed50dc94cd7',
'ec2InstanceId': 'i-807f3249',
'pendingTasksCount': 0,
'registeredResources': [
{
'name': 'CPU',
'type': 'INTEGER',
'doubleValue': 0.0,
'integerValue': 2048,
'longValue': 0,
},
{
'name': 'MEMORY',
'type': 'INTEGER',
'doubleValue': 0.0,
'integerValue': 3768,
'longValue': 0,
},
{
'name': 'PORTS',
'type': 'STRINGSET',
'doubleValue': 0.0,
'integerValue': 0,
'longValue': 0,
'stringSetValue': [
'2376',
'22',
'51678',
'2375',
],
},
],
'remainingResources': [
{
'name': 'CPU',
'type': 'INTEGER',
'doubleValue': 0.0,
'integerValue': 1948,
'longValue': 0,
},
{
'name': 'MEMORY',
'type': 'INTEGER',
'doubleValue': 0.0,
'integerValue': 3668,
'longValue': 0,
},
{
'name': 'PORTS',
'type': 'STRINGSET',
'doubleValue': 0.0,
'integerValue': 0,
'longValue': 0,
'stringSetValue': [
'2376',
'22',
'80',
'51678',
'2375',
],
},
],
'runningTasksCount': 1,
'status': 'ACTIVE',
},
],
'failures': [
],
'ResponseMetadata': {
'...': '...',
},
}
describe_services
(**kwargs)¶Describes the specified services running in your cluster.
See also: AWS API Documentation
Request Syntax
response = client.describe_services(
cluster='string',
services=[
'string',
],
include=[
'TAGS',
]
)
[REQUIRED]
A list of services to describe. You may specify up to 10 services to describe in a single operation.
Determines whether you want to see the resource tags for the service. If TAGS
is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
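A sketch that describes up to 10 services from a cluster, including their tags; the cluster name is a placeholder.
import boto3

client = boto3.client('ecs')

# list_services returns service ARNs; describe_services accepts at most 10 services per call.
service_arns = client.list_services(cluster='default', maxResults=10)['serviceArns']
if service_arns:
    response = client.describe_services(
        cluster='default',
        services=service_arns,
        include=['TAGS']
    )
    for service in response['services']:
        print(service['serviceName'], service['status'],
              f"{service['runningCount']}/{service['desiredCount']} running")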
dict
Response Syntax
{
'services': [
{
'serviceArn': 'string',
'serviceName': 'string',
'clusterArn': 'string',
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'status': 'string',
'desiredCount': 123,
'runningCount': 123,
'pendingCount': 123,
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'taskDefinition': 'string',
'deploymentConfiguration': {
'deploymentCircuitBreaker': {
'enable': True|False,
'rollback': True|False
},
'maximumPercent': 123,
'minimumHealthyPercent': 123
},
'taskSets': [
{
'id': 'string',
'taskSetArn': 'string',
'serviceArn': 'string',
'clusterArn': 'string',
'startedBy': 'string',
'externalId': 'string',
'status': 'string',
'taskDefinition': 'string',
'computedDesiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'loadBalancers': [
{
'targetGroupArn': 'string',
'loadBalancerName': 'string',
'containerName': 'string',
'containerPort': 123
},
],
'serviceRegistries': [
{
'registryArn': 'string',
'port': 123,
'containerName': 'string',
'containerPort': 123
},
],
'scale': {
'value': 123.0,
'unit': 'PERCENT'
},
'stabilityStatus': 'STEADY_STATE'|'STABILIZING',
'stabilityStatusAt': datetime(2015, 1, 1),
'tags': [
{
'key': 'string',
'value': 'string'
},
]
},
],
'deployments': [
{
'id': 'string',
'status': 'string',
'taskDefinition': 'string',
'desiredCount': 123,
'pendingCount': 123,
'runningCount': 123,
'failedTasks': 123,
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'capacityProviderStrategy': [
{
'capacityProvider': 'string',
'weight': 123,
'base': 123
},
],
'launchType': 'EC2'|'FARGATE'|'EXTERNAL',
'platformVersion': 'string',
'platformFamily': 'string',
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS',
'rolloutStateReason': 'string',
'serviceConnectConfiguration': {
'enabled': True|False,
'namespace': 'string',
'services': [
{
'portName': 'string',
'discoveryName': 'string',
'clientAliases': [
{
'port': 123,
'dnsName': 'string'
},
],
'ingressPortOverride': 123
},
],
'logConfiguration': {
'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens',
'options': {
'string': 'string'
},
'secretOptions': [
{
'name': 'string',
'valueFrom': 'string'
},
]
}
},
'serviceConnectResources': [
{
'discoveryName': 'string',
'discoveryArn': 'string'
},
]
},
],
'roleArn': 'string',
'events': [
{
'id': 'string',
'createdAt': datetime(2015, 1, 1),
'message': 'string'
},
],
'createdAt': datetime(2015, 1, 1),
'placementConstraints': [
{
'type': 'distinctInstance'|'memberOf',
'expression': 'string'
},
],
'placementStrategy': [
{
'type': 'random'|'spread'|'binpack',
'field': 'string'
},
],
'networkConfiguration': {
'awsvpcConfiguration': {
'subnets': [
'string',
],
'securityGroups': [
'string',
],
'assignPublicIp': 'ENABLED'|'DISABLED'
}
},
'healthCheckGracePeriodSeconds': 123,
'schedulingStrategy': 'REPLICA'|'DAEMON',
'deploymentController': {
'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL'
},
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'createdBy': 'string',
'enableECSManagedTags': True|False,
'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE',
'enableExecuteCommand': True|False
},
],
'failures': [
{
'arn': 'string',
'reason': 'string',
'detail': 'string'
},
]
}
Response Structure
(dict) --
services (list) --
The list of services described.
(dict) --
Details on a service within a cluster.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide .
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
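For illustration only, a load balancer entry of this shape is what a service fronted by an Application Load Balancer would carry; the target group ARN, container name, and port below are placeholders and must match the task definition:
# Placeholder values; the target group must already exist, and containerName/
# containerPort must match a port mapping in the task definition.
load_balancers = [
    {
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:'
                          'targetgroup/example-targets/0123456789abcdef',
        'containerName': 'web',
        'containerPort': 80,
    },
]
A list of this shape would be passed as the loadBalancers argument to create_service or create_task_set; with a Classic Load Balancer you would set loadBalancerName instead of targetGroupArn.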
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
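As a hedged sketch, a service registry entry for a task that uses the awsvpc network mode with an A record only needs the registry ARN; the ARN below is a placeholder:
# awsvpc network mode with an A record: the registry ARN alone is sufficient.
# With an SRV record, also supply either 'port' or a containerName/containerPort
# pair taken from the task definition (but not both).
service_registries = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-placeholder',
    },
]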
status (string) --
The status of the service. The valid values are ACTIVE
, DRAINING
, or INACTIVE
.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING
state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING
state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
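To make the base and weight interaction concrete, here is an illustrative strategy that always keeps at least one task on FARGATE and then splits the remaining tasks 1:4 between FARGATE and FARGATE_SPOT:
# At least one task runs on FARGATE (base=1); beyond that, for every task
# placed on FARGATE, four are placed on FARGATE_SPOT (weights 1 and 4).
capacity_provider_strategy = [
    {'capacityProvider': 'FARGATE', 'base': 1, 'weight': 1},
    {'capacityProvider': 'FARGATE_SPOT', 'weight': 4},
]
A list of this shape is supplied as the capacityProviderStrategy argument to run_task or create_service in place of a launchType.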
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST
platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily
value as the service (for example, LINUX
).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
Note
The deployment circuit breaker can only be used for services using the rolling update ( ECS
) deployment type.
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If deployment circuit breaker is enabled, a service deployment will transition to a failed state and stop launching new tasks. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is enabled, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS
) deployment type, the maximumPercent
parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desiredCount
(rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA
service scheduler and has a desiredCount
of four tasks and a maximumPercent
value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent
value for a service using the REPLICA
service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS
) deployment type, the minimumHealthyPercent
represents a lower limit on the number of your service's tasks that must remain in the RUNNING
state during a deployment, as a percentage of the desiredCount
(rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount
of four tasks and a minimumHealthyPercent
of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, a task is counted towards the minimum healthy percent total once it reaches the RUNNING state. For services that do use a load balancer, a task is counted towards the minimum healthy percent total once it reaches the RUNNING state and the load balancer reports it as healthy.
If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If a service is using either the blue/green ( CODE_DEPLOY
) or EXTERNAL
deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
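A hedged sketch of setting these deployment parameters on an existing rolling-update (ECS deployment controller) service; the cluster and service names are placeholders:
import boto3

ecs = boto3.client('ecs')

ecs.update_service(
    cluster='default',       # placeholder
    service='my-service',    # placeholder
    deploymentConfiguration={
        'deploymentCircuitBreaker': {
            'enable': True,
            'rollback': True,       # roll back to the last successful deployment on failure
        },
        'maximumPercent': 200,      # allow up to twice desiredCount during the deployment
        'minimumHealthyPercent': 50 # allow up to half of the running tasks to be stopped first
    },
)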
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL
deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy
parameter is CODE_DEPLOY
. If an external deployment created the task set, the startedBy
field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId
parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId
parameter contains the ECS_TASK_SET_EXTERNAL_ID
Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount
by the task set's scale
percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
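For example, the rounding can be reproduced as follows (the counts are arbitrary):
import math

desired_count = 4        # the service's desiredCount
scale_percent = 30.0     # the task set's scale value, in PERCENT

# 4 * 30% = 1.2, which rounds up to 2 tasks.
computed_desired_count = math.ceil(desired_count * scale_percent / 100)
print(computed_desired_count)  # 2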
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING
status during a deployment. A task in the PENDING
state is preparing to enter the RUNNING
state. A task set enters the PENDING
status when it launches for the first time or when it's restarted after being in the STOPPED
state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING
status during a deployment. A task in the RUNNING
state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide .
capacityProviderStrategy (list) --
The capacity provider strategy that is associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE
or UPDATING
status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight
value is taken into consideration after the base
value, if defined, is satisfied.
If no weight
value is specified, the default value of 0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0
, any RunTask
or CreateService
actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1
, then when the base
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1
for capacityProviderA and a weight of 4
for capacityProviderB , then for every one task that's run using capacityProviderA , four tasks would use capacityProviderB .
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0
is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide .
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
Note
All specified subnets and security groups must be from the same VPC.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per AwsVpcConfiguration
.
Note
All specified subnets must be from the same VPC.
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per AwsVpcConfiguration
.
Note
All specified security groups must be from the same VPC.
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED
.
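For illustration, an awsvpc configuration of this shape might look as follows; the subnet and security group IDs are placeholders and must all belong to the same VPC:
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],      # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],   # up to 5 security groups
        'assignPublicIp': 'DISABLED',                 # the default
    }
}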
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
For specific notes and restrictions regarding the use of load balancers with services and task sets, see the CreateService and CreateTaskSet actions.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide .
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you're using a Classic Load Balancer, omit the target group ARN.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide .
For services using the CODE_DEPLOY
deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide .
Warning
If your service's task definition uses the awsvpc
network mode, you must choose ip
as the target type, not instance
. Do this when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort
of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc
network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition that your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge
or host
network mode, you must specify a containerName
and containerPort
combination from the task definition. If the task definition your service task specifies uses the awsvpc
network mode and a type SRV DNS record is used, you must specify either a containerName
and containerPort
combination or a port
value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
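A minimal sketch of polling describe_services until a given task set reports STEADY_STATE; the identifiers are placeholders, and a real script would add a timeout:
import time
import boto3

ecs = boto3.client('ecs')

def wait_for_steady_state(cluster, service, task_set_id, delay=15):
    """Poll the service until the named task set reports STEADY_STATE."""
    while True:
        svc = ecs.describe_services(cluster=cluster, services=[service])['services'][0]
        task_set = next(ts for ts in svc.get('taskSets', []) if ts['id'] == task_set_id)
        if task_set['stabilityStatus'] == 'STEADY_STATE':
            return task_set
        time.sleep(delay)

# Placeholder identifiers:
# wait_for_steady_state('default', 'my-service', 'ecs-svc/1234567890123456789')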
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags: