AutoScaling.Client
A low-level client representing Auto Scaling
Amazon EC2 Auto Scaling is designed to automatically launch and terminate EC2 instances based on user-defined scaling policies, scheduled actions, and health checks.
For more information, see the Amazon EC2 Auto Scaling User Guide and the Amazon EC2 Auto Scaling API Reference.
import boto3
client = boto3.client('autoscaling')
These are the available methods:
attach_instances()
attach_load_balancer_target_groups()
attach_load_balancers()
attach_traffic_sources()
batch_delete_scheduled_action()
batch_put_scheduled_update_group_action()
can_paginate()
cancel_instance_refresh()
close()
complete_lifecycle_action()
create_auto_scaling_group()
create_launch_configuration()
create_or_update_tags()
delete_auto_scaling_group()
delete_launch_configuration()
delete_lifecycle_hook()
delete_notification_configuration()
delete_policy()
delete_scheduled_action()
delete_tags()
delete_warm_pool()
describe_account_limits()
describe_adjustment_types()
describe_auto_scaling_groups()
describe_auto_scaling_instances()
describe_auto_scaling_notification_types()
describe_instance_refreshes()
describe_launch_configurations()
describe_lifecycle_hook_types()
describe_lifecycle_hooks()
describe_load_balancer_target_groups()
describe_load_balancers()
describe_metric_collection_types()
describe_notification_configurations()
describe_policies()
describe_scaling_activities()
describe_scaling_process_types()
describe_scheduled_actions()
describe_tags()
describe_termination_policy_types()
describe_traffic_sources()
describe_warm_pool()
detach_instances()
detach_load_balancer_target_groups()
detach_load_balancers()
detach_traffic_sources()
disable_metrics_collection()
enable_metrics_collection()
enter_standby()
execute_policy()
exit_standby()
get_paginator()
get_predictive_scaling_forecast()
get_waiter()
put_lifecycle_hook()
put_notification_configuration()
put_scaling_policy()
put_scheduled_update_group_action()
put_warm_pool()
record_lifecycle_action_heartbeat()
resume_processes()
rollback_instance_refresh()
set_desired_capacity()
set_instance_health()
set_instance_protection()
start_instance_refresh()
suspend_processes()
terminate_instance_in_auto_scaling_group()
update_auto_scaling_group()
attach_instances(**kwargs)
Attaches one or more EC2 instances to the specified Auto Scaling group.
When you attach instances, Amazon EC2 Auto Scaling increases the desired capacity of the group by the number of instances being attached. If the number of instances being attached plus the desired capacity of the group exceeds the maximum size of the group, the operation fails.
If there is a Classic Load Balancer attached to your Auto Scaling group, the instances are also registered with the load balancer. If there are target groups attached to your Auto Scaling group, the instances are also registered with the target groups.
For more information, see Attach EC2 instances to your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.attach_instances(
InstanceIds=[
'string',
],
AutoScalingGroupName='string'
)
The IDs of the instances. You can specify up to 20 instances.
[REQUIRED]
The name of the Auto Scaling group.
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example attaches the specified instance to the specified Auto Scaling group.
response = client.attach_instances(
AutoScalingGroupName='my-auto-scaling-group',
InstanceIds=[
'i-93633f9b',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
attach_load_balancer_target_groups(**kwargs)
Attaches one or more target groups to the specified Auto Scaling group.
This operation is used with the following load balancer types: Application Load Balancers, Network Load Balancers, and Gateway Load Balancers.
To describe the target groups for an Auto Scaling group, call the DescribeLoadBalancerTargetGroups API. To detach the target group from the Auto Scaling group, call the DetachLoadBalancerTargetGroups API.
This operation is additive and does not detach existing target groups or Classic Load Balancers from the Auto Scaling group.
For more information, see Use Elastic Load Balancing to distribute traffic across the instances in your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.attach_load_balancer_target_groups(
AutoScalingGroupName='string',
TargetGroupARNs=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The Amazon Resource Names (ARNs) of the target groups. You can specify up to 10 target groups. To get the ARN of a target group, use the Elastic Load Balancing DescribeTargetGroups API operation.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example attaches the specified target group to the specified Auto Scaling group.
response = client.attach_load_balancer_target_groups(
AutoScalingGroupName='my-auto-scaling-group',
TargetGroupARNs=[
'arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
attach_load_balancers(**kwargs)
Note
To attach an Application Load Balancer, Network Load Balancer, or Gateway Load Balancer, use the AttachLoadBalancerTargetGroups API operation instead.
Attaches one or more Classic Load Balancers to the specified Auto Scaling group. Amazon EC2 Auto Scaling registers the running instances with these Classic Load Balancers.
To describe the load balancers for an Auto Scaling group, call the DescribeLoadBalancers API. To detach a load balancer from the Auto Scaling group, call the DetachLoadBalancers API.
This operation is additive and does not detach existing Classic Load Balancers or target groups from the Auto Scaling group.
For more information, see Use Elastic Load Balancing to distribute traffic across the instances in your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.attach_load_balancers(
AutoScalingGroupName='string',
LoadBalancerNames=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The names of the load balancers. You can specify up to 10 load balancers.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example attaches the specified load balancer to the specified Auto Scaling group.
response = client.attach_load_balancers(
AutoScalingGroupName='my-auto-scaling-group',
LoadBalancerNames=[
'my-load-balancer',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
attach_traffic_sources(**kwargs)
Reserved for use with Amazon VPC Lattice, which is in preview and subject to change. Do not use this API for production workloads. This API is also subject to change.
Attaches one or more traffic sources to the specified Auto Scaling group.
To describe the traffic sources for an Auto Scaling group, call the DescribeTrafficSources API. To detach a traffic source from the Auto Scaling group, call the DetachTrafficSources API.
This operation is additive and does not detach existing traffic sources from the Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.attach_traffic_sources(
AutoScalingGroupName='string',
TrafficSources=[
{
'Identifier': 'string'
},
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The unique identifiers of one or more traffic sources. You can specify up to 10 traffic sources.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group. Amazon EC2 Auto Scaling registers the running instances with the attached target groups. The target groups receive incoming traffic and route requests to one or more registered targets.
Describes the identifier of a traffic source.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group.
The unique identifier of the traffic source.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
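No example is provided for this operation in the source; the following is a minimal sketch of how a call might look, assuming a hypothetical Auto Scaling group name and VPC Lattice target group ARN:
import boto3

client = boto3.client('autoscaling')

# Hypothetical group name and target group ARN for illustration only.
response = client.attach_traffic_sources(
    AutoScalingGroupName='my-auto-scaling-group',
    TrafficSources=[
        {
            'Identifier': 'arn:aws:vpc-lattice:us-west-2:123456789012:targetgroup/tg-0123456789EXAMPLE',
        },
    ],
)
print(response)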
batch_delete_scheduled_action(**kwargs)
Deletes one or more scheduled actions for the specified Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.batch_delete_scheduled_action(
AutoScalingGroupName='string',
ScheduledActionNames=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The names of the scheduled actions to delete. The maximum number allowed is 50.
dict
Response Syntax
{
'FailedScheduledActions': [
{
'ScheduledActionName': 'string',
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
(dict) --
FailedScheduledActions (list) --
The names of the scheduled actions that could not be deleted, including an error message.
(dict) --
Describes a scheduled action that could not be created, updated, or deleted.
ScheduledActionName (string) --
The name of the scheduled action.
ErrorCode (string) --
The error code.
ErrorMessage (string) --
The error message accompanying the error code.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
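No example is provided for this operation in the source; the following is a minimal sketch, assuming hypothetical group and scheduled action names. Any actions that could not be deleted are reported in FailedScheduledActions.
import boto3

client = boto3.client('autoscaling')

# Hypothetical names for illustration only.
response = client.batch_delete_scheduled_action(
    AutoScalingGroupName='my-auto-scaling-group',
    ScheduledActionNames=[
        'my-scheduled-action-1',
        'my-scheduled-action-2',
    ],
)
# Report any scheduled actions that could not be deleted.
for failed in response.get('FailedScheduledActions', []):
    print(failed['ScheduledActionName'], failed.get('ErrorCode'), failed.get('ErrorMessage'))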
batch_put_scheduled_update_group_action(**kwargs)
Creates or updates one or more scheduled scaling actions for an Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.batch_put_scheduled_update_group_action(
AutoScalingGroupName='string',
ScheduledUpdateGroupActions=[
{
'ScheduledActionName': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'Recurrence': 'string',
'MinSize': 123,
'MaxSize': 123,
'DesiredCapacity': 123,
'TimeZone': 'string'
},
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
One or more scheduled actions. The maximum number allowed is 50.
Describes information used for one or more scheduled scaling action updates in a BatchPutScheduledUpdateGroupAction operation.
The name of the scaling action.
The date and time for the action to start, in YYYY-MM-DDThh:mm:ssZ format in UTC/GMT only and in quotes (for example, "2019-06-01T00:00:00Z").
If you specify Recurrence and StartTime, Amazon EC2 Auto Scaling performs the action at this time, and then performs the action based on the specified recurrence.
If you try to schedule the action in the past, Amazon EC2 Auto Scaling returns an error message.
The date and time for the recurring schedule to end, in UTC.
The recurring schedule for the action, in Unix cron syntax format. This format consists of five fields separated by white spaces: [Minute] [Hour] [Day_of_Month] [Month_of_Year] [Day_of_Week]. The value must be in quotes (for example, "30 0 1 1,6,12 *"). For more information about this format, see Crontab.
When StartTime and EndTime are specified with Recurrence, they form the boundaries of when the recurring action starts and stops.
Cron expressions use Universal Coordinated Time (UTC) by default.
The minimum size of the Auto Scaling group.
The maximum size of the Auto Scaling group.
The desired capacity is the initial capacity of the Auto Scaling group after the scheduled action runs and the capacity it attempts to maintain.
Specifies the time zone for a cron expression. If a time zone is not provided, UTC is used by default.
Valid values are the canonical names of the IANA time zones, derived from the IANA Time Zone Database (such as Etc/GMT+9 or Pacific/Tahiti). For more information, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
dict
Response Syntax
{
'FailedScheduledUpdateGroupActions': [
{
'ScheduledActionName': 'string',
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
(dict) --
FailedScheduledUpdateGroupActions (list) --
The names of the scheduled actions that could not be created or updated, including an error message.
(dict) --
Describes a scheduled action that could not be created, updated, or deleted.
ScheduledActionName (string) --
The name of the scheduled action.
ErrorCode (string) --
The error code.
ErrorMessage (string) --
The error message accompanying the error code.
Exceptions
AutoScaling.Client.exceptions.AlreadyExistsFault
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
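No example is provided for this operation in the source; the following is a minimal sketch that creates two recurring scheduled actions (hypothetical action names, cron schedules, and capacities):
import boto3

client = boto3.client('autoscaling')

# Hypothetical action names and schedules for illustration only.
response = client.batch_put_scheduled_update_group_action(
    AutoScalingGroupName='my-auto-scaling-group',
    ScheduledUpdateGroupActions=[
        {
            'ScheduledActionName': 'scale-out-weekday-mornings',
            'Recurrence': '0 8 * * 1-5',   # 08:00 Monday-Friday
            'MinSize': 2,
            'MaxSize': 10,
            'DesiredCapacity': 4,
            'TimeZone': 'Etc/UTC',
        },
        {
            'ScheduledActionName': 'scale-in-weekday-evenings',
            'Recurrence': '0 20 * * 1-5',  # 20:00 Monday-Friday
            'MinSize': 1,
            'MaxSize': 10,
            'DesiredCapacity': 1,
            'TimeZone': 'Etc/UTC',
        },
    ],
)
# Report any scheduled actions that could not be created or updated.
for failed in response.get('FailedScheduledUpdateGroupActions', []):
    print(failed['ScheduledActionName'], failed.get('ErrorMessage'))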
can_paginate(operation_name)
Check if an operation can be paginated.
The operation_name is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns True if the operation can be paginated, False otherwise.
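For example, a minimal sketch that checks whether describe_auto_scaling_groups supports pagination and then iterates over every page with a paginator:
import boto3

client = boto3.client('autoscaling')

# Use a paginator to walk all pages of DescribeAutoScalingGroups.
if client.can_paginate('describe_auto_scaling_groups'):
    paginator = client.get_paginator('describe_auto_scaling_groups')
    for page in paginator.paginate():
        for group in page['AutoScalingGroups']:
            print(group['AutoScalingGroupName'])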
cancel_instance_refresh(**kwargs)
Cancels an instance refresh or rollback that is in progress. If an instance refresh or rollback is not in progress, an ActiveInstanceRefreshNotFound error occurs.
This operation is part of the instance refresh feature in Amazon EC2 Auto Scaling, which helps you update instances in your Auto Scaling group after you make configuration changes.
When you cancel an instance refresh, this does not roll back any changes that it made. Use the RollbackInstanceRefresh API to roll back instead.
See also: AWS API Documentation
Request Syntax
response = client.cancel_instance_refresh(
AutoScalingGroupName='string'
)
[REQUIRED]
The name of the Auto Scaling group.
{
'InstanceRefreshId': 'string'
}
Response Structure
The instance refresh ID associated with the request. This is the unique ID assigned to the instance refresh when it was started.
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ActiveInstanceRefreshNotFoundFault
Examples
This example cancels an instance refresh operation in progress.
response = client.cancel_instance_refresh(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'InstanceRefreshId': '08b91cf7-8fa6-48af-b6a6-d227f40f1b9b',
'ResponseMetadata': {
'...': '...',
},
}
close()
Closes underlying endpoint connections.
complete_lifecycle_action(**kwargs)
Completes the lifecycle action for the specified token or instance with the specified result.
This step is a part of the procedure for adding a lifecycle hook to an Auto Scaling group.
For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.complete_lifecycle_action(
LifecycleHookName='string',
AutoScalingGroupName='string',
LifecycleActionToken='string',
LifecycleActionResult='string',
InstanceId='string'
)
[REQUIRED]
The name of the lifecycle hook.
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The action for the group to take. You can specify either CONTINUE or ABANDON.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example notifies Auto Scaling that the specified lifecycle action is complete so that it can finish launching or terminating the instance.
response = client.complete_lifecycle_action(
AutoScalingGroupName='my-auto-scaling-group',
LifecycleActionResult='CONTINUE',
LifecycleActionToken='bcd2f1b8-9a78-44d3-8a7a-4dd07d7cf635',
LifecycleHookName='my-lifecycle-hook',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
create_auto_scaling_group(**kwargs)
We strongly recommend using a launch template when calling this operation to ensure full functionality for Amazon EC2 Auto Scaling and Amazon EC2.
Creates an Auto Scaling group with the specified name and attributes.
If you exceed your maximum limit of Auto Scaling groups, the call fails. To query this limit, call the DescribeAccountLimits API. For information about updating this limit, see Quotas for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
For introductory exercises for creating an Auto Scaling group, see Getting started with Amazon EC2 Auto Scaling and Tutorial: Set up a scaled and load-balanced application in the Amazon EC2 Auto Scaling User Guide . For more information, see Auto Scaling groups in the Amazon EC2 Auto Scaling User Guide .
Every Auto Scaling group has three size properties (DesiredCapacity, MaxSize, and MinSize). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
See also: AWS API Documentation
Request Syntax
response = client.create_auto_scaling_group(
AutoScalingGroupName='string',
LaunchConfigurationName='string',
LaunchTemplate={
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
MixedInstancesPolicy={
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'Overrides': [
{
'InstanceType': 'string',
'WeightedCapacity': 'string',
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'InstanceRequirements': {
'VCpuCount': {
'Min': 123,
'Max': 123
},
'MemoryMiB': {
'Min': 123,
'Max': 123
},
'CpuManufacturers': [
'intel'|'amd'|'amazon-web-services',
],
'MemoryGiBPerVCpu': {
'Min': 123.0,
'Max': 123.0
},
'ExcludedInstanceTypes': [
'string',
],
'InstanceGenerations': [
'current'|'previous',
],
'SpotMaxPricePercentageOverLowestPrice': 123,
'OnDemandMaxPricePercentageOverLowestPrice': 123,
'BareMetal': 'included'|'excluded'|'required',
'BurstablePerformance': 'included'|'excluded'|'required',
'RequireHibernateSupport': True|False,
'NetworkInterfaceCount': {
'Min': 123,
'Max': 123
},
'LocalStorage': 'included'|'excluded'|'required',
'LocalStorageTypes': [
'hdd'|'ssd',
],
'TotalLocalStorageGB': {
'Min': 123.0,
'Max': 123.0
},
'BaselineEbsBandwidthMbps': {
'Min': 123,
'Max': 123
},
'AcceleratorTypes': [
'gpu'|'fpga'|'inference',
],
'AcceleratorCount': {
'Min': 123,
'Max': 123
},
'AcceleratorManufacturers': [
'nvidia'|'amd'|'amazon-web-services'|'xilinx',
],
'AcceleratorNames': [
'a100'|'v100'|'k80'|'t4'|'m60'|'radeon-pro-v520'|'vu9p',
],
'AcceleratorTotalMemoryMiB': {
'Min': 123,
'Max': 123
},
'NetworkBandwidthGbps': {
'Min': 123.0,
'Max': 123.0
},
'AllowedInstanceTypes': [
'string',
]
}
},
]
},
'InstancesDistribution': {
'OnDemandAllocationStrategy': 'string',
'OnDemandBaseCapacity': 123,
'OnDemandPercentageAboveBaseCapacity': 123,
'SpotAllocationStrategy': 'string',
'SpotInstancePools': 123,
'SpotMaxPrice': 'string'
}
},
InstanceId='string',
MinSize=123,
MaxSize=123,
DesiredCapacity=123,
DefaultCooldown=123,
AvailabilityZones=[
'string',
],
LoadBalancerNames=[
'string',
],
TargetGroupARNs=[
'string',
],
HealthCheckType='string',
HealthCheckGracePeriod=123,
PlacementGroup='string',
VPCZoneIdentifier='string',
TerminationPolicies=[
'string',
],
NewInstancesProtectedFromScaleIn=True|False,
CapacityRebalance=True|False,
LifecycleHookSpecificationList=[
{
'LifecycleHookName': 'string',
'LifecycleTransition': 'string',
'NotificationMetadata': 'string',
'HeartbeatTimeout': 123,
'DefaultResult': 'string',
'NotificationTargetARN': 'string',
'RoleARN': 'string'
},
],
Tags=[
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
],
ServiceLinkedRoleARN='string',
MaxInstanceLifetime=123,
Context='string',
DesiredCapacityType='string',
DefaultInstanceWarmup=123,
TrafficSources=[
{
'Identifier': 'string'
},
]
)
[REQUIRED]
The name of the Auto Scaling group. This name must be unique per Region per account.
The name can contain any ASCII character 33 to 126 including most punctuation characters, digits, and uppercase and lowercase letters.
Note
You cannot use a colon (:) in the name.
The name of the launch configuration to use to launch instances.
Conditional: You must specify either a launch template (LaunchTemplate or MixedInstancesPolicy) or a launch configuration (LaunchConfigurationName or InstanceId).
Information used to specify the launch template and version to use to launch instances.
Conditional: You must specify either a launch template (LaunchTemplate or MixedInstancesPolicy) or a launch configuration (LaunchConfigurationName or InstanceId).
Note
The launch template that is specified must be configured for use with an Auto Scaling group. For more information, see Creating a launch template for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
The mixed instances policy. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide .
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
The launch template.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
Any properties that you specify override the same properties in the launch template.
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
The instance type, such as m3.xlarge. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon Elastic Compute Cloud User Guide.
You can specify up to 40 instance types per Auto Scaling group.
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a WeightedCapacity of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configuring instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide. Value must be in the range of 1–999.
If you specify a value for WeightedCapacity for one instance type, you must specify a value for WeightedCapacity for all of them.
Warning
Every Auto Scaling group has three size parameters (DesiredCapacity, MaxSize, and MinSize). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the LaunchTemplate definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide.
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the LaunchTemplate definition count towards this limit.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template.
Note
If you specify InstanceRequirements, you can't specify InstanceType.
The minimum and maximum number of vCPUs for an instance type.
The minimum number of vCPUs.
The maximum number of vCPUs.
The minimum and maximum instance memory size for an instance type, in MiB.
The memory minimum in MiB.
The memory maximum in MiB.
Lists which specific CPU manufacturers to include.
For instance types with Intel CPUs, specify intel.
For instance types with AMD CPUs, specify amd.
For instance types with Amazon Web Services CPUs, specify amazon-web-services.
Note
Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
The memory minimum in GiB.
The memory maximum in GiB.
The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (*), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types.
Note
If you specify ExcludedInstanceTypes, you can't specify AllowedInstanceTypes.
Default: No excluded instance types
Indicates whether current or previous generation instance types are included.
For current generation instance types, specify current. The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide for Linux Instances.
For previous generation instance types, specify previous.
Default: Any current or previous generation
The price protection threshold for Spot Instances. This is the maximum you'll pay for a Spot Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999.
If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price.
Default: 100
The price protection threshold for On-Demand Instances. This is the maximum you'll pay for an On-Demand Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999.
If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price.
Default: 20
Indicates whether bare metal instance types are included, excluded, or required.
Default: excluded
Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide for Linux Instances .
Default: excluded
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: false
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
The minimum number of network interfaces.
The maximum number of network interfaces.
Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide for Linux Instances .
Default: included
Indicates the type of local storage that is required.
For instance types with hard disk drive (HDD) storage, specify hdd.
For instance types with solid state drive (SSD) storage, specify ssd.
Default: Any local storage type
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
The storage minimum in GB.
The storage maximum in GB.
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide for Linux Instances .
Default: No minimum or maximum limits
The minimum value in Mbps.
The maximum value in Mbps.
Lists the accelerator types that must be on an instance type.
For instance types with GPU accelerators, specify gpu.
For instance types with FPGA accelerators, specify fpga.
For instance types with inference accelerators, specify inference.
Default: Any accelerator type
The minimum and maximum number of accelerators (GPUs, FPGAs, or Amazon Web Services Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set Max to 0.
Default: No minimum or maximum limits
The minimum value.
The maximum value.
Indicates whether instance types must have accelerators by specific manufacturers.
For instance types with NVIDIA devices, specify nvidia.
For instance types with AMD devices, specify amd.
For instance types with Amazon Web Services devices, specify amazon-web-services.
For instance types with Xilinx devices, specify xilinx.
Default: Any manufacturer
Lists the accelerators that must be on an instance type.
For instance types with NVIDIA A100 GPUs, specify a100.
For instance types with NVIDIA V100 GPUs, specify v100.
For instance types with NVIDIA K80 GPUs, specify k80.
For instance types with NVIDIA T4 GPUs, specify t4.
For instance types with NVIDIA M60 GPUs, specify m60.
For instance types with AMD Radeon Pro V520 GPUs, specify radeon-pro-v520.
For instance types with Xilinx VU9P FPGAs, specify vu9p.
Default: Any accelerator
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
The memory minimum in MiB.
The memory maximum in MiB.
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
The minimum amount of network bandwidth, in gigabits per second (Gbps).
The maximum amount of network bandwidth, in gigabits per second (Gbps).
The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wild cards, represented by an asterisk (*), to allow an instance type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types.
Note
If you specify AllowedInstanceTypes, you can't specify ExcludedInstanceTypes.
Default: All instance types
The instances distribution.
The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
lowest-price
Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify InstanceRequirements.
prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify InstanceRequirements and cannot be used for groups that do.
The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales.
This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond OnDemandBaseCapacity. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100
The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
capacity-optimized
Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use capacity-optimized-prioritized.
capacity-optimized-prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to prioritized, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify InstanceRequirements.
lowest-price
Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the SpotInstancePools property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
price-capacity-optimized (recommended)
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the SpotAllocationStrategy is lowest-price. Value must be in the range of 1–20.
Default: 2
The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value.
Warning
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
[REQUIRED]
The minimum size of the group.
[REQUIRED]
The maximum size of the group.
Note
With a mixed instances policy that uses instance weighting, Amazon EC2 Auto Scaling may need to go above MaxSize to meet your capacity requirements. In this event, Amazon EC2 Auto Scaling will never go above MaxSize by more than your largest instance weight (weights that define how many units each instance contributes to the desired capacity of the group).
Only needed if you use simple scaling policies.
The amount of time, in seconds, between one scaling activity ending and another one starting due to simple scaling policies. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.
Default: 300 seconds
A list of Availability Zones where instances in the Auto Scaling group can be created. Used for launching into the default VPC subnet in each Availability Zone when not using the VPCZoneIdentifier property, or for attaching a network interface when an existing network interface ID is specified in a launch template.
A list of Classic Load Balancers associated with this Auto Scaling group. For Application Load Balancers, Network Load Balancers, and Gateway Load Balancers, specify the TargetGroupARNs property instead.
The Amazon Resource Names (ARN) of the Elastic Load Balancing target groups to associate with the Auto Scaling group. Instances are registered as targets with the target groups. The target groups receive incoming traffic and route requests to one or more registered targets. For more information, see Use Elastic Load Balancing to distribute traffic across the instances in your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
Determines whether any additional health checks are performed on the instances in this group. Amazon EC2 health checks are always on. For more information, see Health checks for Auto Scaling instances in the Amazon EC2 Auto Scaling User Guide .
The valid values are EC2 (default), ELB, and VPC_LATTICE. The VPC_LATTICE health check type is reserved for use with VPC Lattice, which is in preview release and is subject to change.
The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking the health status of an EC2 instance that has come into service and marking it unhealthy due to a failed health check. This is useful if your instances do not immediately pass their health checks after they enter the InService state. For more information, see Set the health check grace period for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
Default: 0 seconds
The name of the placement group into which to launch your instances. For more information, see Placement groups in the Amazon EC2 User Guide for Linux Instances .
Note
A cluster placement group is a logical grouping of instances within a single Availability Zone. You cannot specify multiple Availability Zones and a cluster placement group.
A comma-separated list of subnet IDs for a virtual private cloud (VPC) where instances in the Auto Scaling group can be created. If you specify VPCZoneIdentifier with AvailabilityZones, the subnets that you specify must reside in those Availability Zones.
A policy or a list of policies that are used to select the instance to terminate. These policies are executed in the order that you list them. For more information, see Work with Amazon EC2 Auto Scaling termination policies in the Amazon EC2 Auto Scaling User Guide.
Valid values: Default | AllocationStrategy | ClosestToNextInstanceHour | NewestInstance | OldestInstance | OldestLaunchConfiguration | OldestLaunchTemplate | arn:aws:lambda:region:account-id:function:my-function:my-alias
One or more lifecycle hooks to add to the Auto Scaling group before instances are launched.
Describes information used to specify a lifecycle hook for an Auto Scaling group.
For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide .
The name of the lifecycle hook.
The lifecycle transition. For Auto Scaling groups, there are two major lifecycle transitions.
To create a lifecycle hook for scale-out events, specify autoscaling:EC2_INSTANCE_LAUNCHING.
To create a lifecycle hook for scale-in events, specify autoscaling:EC2_INSTANCE_TERMINATING.
Additional information that you want to include any time Amazon EC2 Auto Scaling sends a message to the notification target.
The maximum time, in seconds, that can elapse before the lifecycle hook times out. The range is from 30 to 7200 seconds. The default value is 3600 seconds (1 hour).
The action the Auto Scaling group takes when the lifecycle hook timeout elapses or if an unexpected failure occurs. The default value is ABANDON.
Valid values: CONTINUE | ABANDON
The Amazon Resource Name (ARN) of the notification target that Amazon EC2 Auto Scaling sends notifications to when an instance is in a wait state for the lifecycle hook. You can specify an Amazon SNS topic or an Amazon SQS queue.
The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target. For information about creating this role, see Configure a notification target for a lifecycle hook in the Amazon EC2 Auto Scaling User Guide .
Valid only if the notification target is an Amazon SNS topic or an Amazon SQS queue.
One or more tags. You can tag your Auto Scaling group and propagate the tags to the Amazon EC2 instances it launches. Tags are not propagated to Amazon EBS volumes. To add tags to Amazon EBS volumes, specify the tags in a launch template but use caution. If the launch template specifies an instance tag with a key that is also specified for the Auto Scaling group, Amazon EC2 Auto Scaling overrides the value of that instance tag with the value specified by the Auto Scaling group. For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
Describes a tag for an Auto Scaling group.
The name of the Auto Scaling group.
The type of resource. The only supported value is auto-scaling-group.
The tag key.
The tag value.
Determines whether the tag is added to new instances as they are launched in the group.
The Amazon Resource Name (ARN) of the service-linked role that the Auto Scaling group uses to call other Amazon Web Services services on your behalf. By default, Amazon EC2 Auto Scaling uses a service-linked role named AWSServiceRoleForAutoScaling, which it creates if it does not exist. For more information, see Service-linked roles in the Amazon EC2 Auto Scaling User Guide.
The unit of measurement for the value specified for desired capacity. Amazon EC2 Auto Scaling supports DesiredCapacityType for attribute-based instance type selection only. For more information, see Creating an Auto Scaling group using attribute-based instance type selection in the Amazon EC2 Auto Scaling User Guide.
By default, Amazon EC2 Auto Scaling specifies units, which translates into number of instances.
Valid values: units | vcpu | memory-mib
The amount of time, in seconds, until a new instance is considered to have finished initializing and resource consumption to become stable after it enters the InService state.
During an instance refresh, Amazon EC2 Auto Scaling waits for the warm-up period after it replaces an instance before it moves on to replacing the next instance. Amazon EC2 Auto Scaling also waits for the warm-up period before aggregating the metrics for new instances with existing instances in the Amazon CloudWatch metrics that are used for scaling, resulting in more reliable usage data. For more information, see Set the default instance warmup for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
Warning
To manage various warm-up settings at the group level, we recommend that you set the default instance warmup, even if it is set to 0 seconds. To remove a value that you previously set, include the property but specify -1 for the value. However, we strongly recommend keeping the default instance warmup enabled by specifying a value of 0 or other nominal value.
Default: None
Reserved for use with Amazon VPC Lattice, which is in preview release and is subject to change. Do not use this parameter for production workloads. It is also subject to change.
The unique identifiers of one or more traffic sources.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group. Amazon EC2 Auto Scaling registers the running instances with the attached target groups. The target groups receive incoming traffic and route requests to one or more registered targets.
Describes the identifier of a traffic source.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group.
The unique identifier of the traffic source.
None
Exceptions
AutoScaling.Client.exceptions.AlreadyExistsFault
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example creates an Auto Scaling group.
response = client.create_auto_scaling_group(
AutoScalingGroupName='my-auto-scaling-group',
LaunchTemplate={
'LaunchTemplateName': 'my-template-for-auto-scaling',
'Version': '$Latest',
},
MaxInstanceLifetime=2592000,
MaxSize=3,
MinSize=1,
VPCZoneIdentifier='subnet-057fa0918fEXAMPLE',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
This example creates an Auto Scaling group and attaches the specified target group.
response = client.create_auto_scaling_group(
AutoScalingGroupName='my-auto-scaling-group',
HealthCheckGracePeriod=300,
HealthCheckType='ELB',
LaunchTemplate={
'LaunchTemplateName': 'my-template-for-auto-scaling',
'Version': '$Latest',
},
MaxSize=3,
MinSize=1,
TargetGroupARNs=[
'arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067',
],
VPCZoneIdentifier='subnet-057fa0918fEXAMPLE, subnet-610acd08EXAMPLE',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
This example creates an Auto Scaling group with a mixed instances policy. It specifies the c5.large, c5a.large, and c6g.large instance types and defines a different launch template for the c6g.large instance type.
response = client.create_auto_scaling_group(
AutoScalingGroupName='my-asg',
DesiredCapacity=3,
MaxSize=5,
MinSize=1,
MixedInstancesPolicy={
'InstancesDistribution': {
'OnDemandBaseCapacity': 1,
'OnDemandPercentageAboveBaseCapacity': 50,
'SpotAllocationStrategy': 'capacity-optimized',
},
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateName': 'my-launch-template-for-x86',
'Version': '$Latest',
},
'Overrides': [
{
'InstanceType': 'c6g.large',
'LaunchTemplateSpecification': {
'LaunchTemplateName': 'my-launch-template-for-arm',
'Version': '$Latest',
},
},
{
'InstanceType': 'c5.large',
},
{
'InstanceType': 'c5a.large',
},
],
},
},
VPCZoneIdentifier='subnet-057fa0918fEXAMPLE, subnet-610acd08EXAMPLE',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
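The examples above specify instance types explicitly. The following is a minimal sketch of attribute-based instance type selection using InstanceRequirements, assuming a hypothetical launch template named my-launch-template-for-auto-scaling and example subnet IDs:
import boto3

client = boto3.client('autoscaling')

# Hypothetical launch template name and subnet IDs for illustration only.
response = client.create_auto_scaling_group(
    AutoScalingGroupName='my-asg',
    MinSize=1,
    MaxSize=5,
    DesiredCapacity=2,
    MixedInstancesPolicy={
        'LaunchTemplate': {
            'LaunchTemplateSpecification': {
                'LaunchTemplateName': 'my-launch-template-for-auto-scaling',
                'Version': '$Default',
            },
            'Overrides': [
                {
                    # Let Amazon EC2 Auto Scaling choose any instance type
                    # that satisfies these requirements.
                    'InstanceRequirements': {
                        'VCpuCount': {'Min': 2, 'Max': 4},
                        'MemoryMiB': {'Min': 4096},
                        'CpuManufacturers': ['intel', 'amd'],
                    },
                },
            ],
        },
        'InstancesDistribution': {
            'OnDemandPercentageAboveBaseCapacity': 50,
            'SpotAllocationStrategy': 'price-capacity-optimized',
        },
    },
    VPCZoneIdentifier='subnet-057fa0918fEXAMPLE, subnet-610acd08EXAMPLE',
)
print(response)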
create_launch_configuration(**kwargs)
Creates a launch configuration.
If you exceed your maximum limit of launch configurations, the call fails. To query this limit, call the DescribeAccountLimits API. For information about updating this limit, see Quotas for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
For more information, see Launch configurations in the Amazon EC2 Auto Scaling User Guide .
Note
Amazon EC2 Auto Scaling configures instances launched as part of an Auto Scaling group using either a launch template or a launch configuration. We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. For information about using launch templates, see Launch templates in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.create_launch_configuration(
LaunchConfigurationName='string',
ImageId='string',
KeyName='string',
SecurityGroups=[
'string',
],
ClassicLinkVPCId='string',
ClassicLinkVPCSecurityGroups=[
'string',
],
UserData='string',
InstanceId='string',
InstanceType='string',
KernelId='string',
RamdiskId='string',
BlockDeviceMappings=[
{
'VirtualName': 'string',
'DeviceName': 'string',
'Ebs': {
'SnapshotId': 'string',
'VolumeSize': 123,
'VolumeType': 'string',
'DeleteOnTermination': True|False,
'Iops': 123,
'Encrypted': True|False,
'Throughput': 123
},
'NoDevice': True|False
},
],
InstanceMonitoring={
'Enabled': True|False
},
SpotPrice='string',
IamInstanceProfile='string',
EbsOptimized=True|False,
AssociatePublicIpAddress=True|False,
PlacementTenancy='string',
MetadataOptions={
'HttpTokens': 'optional'|'required',
'HttpPutResponseHopLimit': 123,
'HttpEndpoint': 'disabled'|'enabled'
}
)
[REQUIRED]
The name of the launch configuration. This name must be unique per Region per account.
The ID of the Amazon Machine Image (AMI) that was assigned during registration. For more information, see Finding a Linux AMI in the Amazon EC2 User Guide for Linux Instances .
If you specify InstanceId, an ImageId is not required.
A list that contains the security group IDs to assign to the instances in the Auto Scaling group. For more information, see Control traffic to resources using security groups in the Amazon Virtual Private Cloud User Guide .
Available for backward compatibility.
The user data to make available to the launched EC2 instances. For more information, see Instance metadata and user data (Linux) and Instance metadata and user data (Windows). If you are using a command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text. User data is limited to 16 KB.
This value will be base64 encoded automatically. Do not base64 encode this value prior to performing the operation.
The ID of the instance to use to create the launch configuration. The new launch configuration derives attributes from the instance, except for the block device mapping.
To create a launch configuration with a block device mapping or override any other instance attributes, specify them as part of the same request.
For more information, see Creating a launch configuration using an EC2 instance in the Amazon EC2 Auto Scaling User Guide .
Specifies the instance type of the EC2 instance. For information about available instance types, see Available instance types in the Amazon EC2 User Guide for Linux Instances .
If you specify InstanceId, an InstanceType is not required.
The ID of the kernel associated with the AMI.
Note
We recommend that you use PV-GRUB instead of kernels and RAM disks. For more information, see User provided kernels in the Amazon EC2 User Guide for Linux Instances .
The ID of the RAM disk to select.
Note
We recommend that you use PV-GRUB instead of kernels and RAM disks. For more information, see User provided kernels in the Amazon EC2 User Guide for Linux Instances .
The block device mapping entries that define the block devices to attach to the instances at launch. By default, the block devices specified in the block device mapping for the AMI are used. For more information, see Block device mappings in the Amazon EC2 User Guide for Linux Instances .
Describes a block device mapping.
The name of the instance store volume (virtual device) to attach to an instance at launch. The name must be in the form ephemeralX where X is a number starting from zero (0), for example, ephemeral0.
The device name assigned to the volume (for example, /dev/sdh or xvdh). For more information, see Device naming on Linux instances in the Amazon EC2 User Guide for Linux Instances.
Note
To define a block device mapping, set the device name and exactly one of the following properties: Ebs, NoDevice, or VirtualName.
Information to attach an EBS volume to an instance at launch.
The snapshot ID of the volume to use.
You must specify either a VolumeSize or a SnapshotId.
The volume size, in GiBs. The following are the supported volume sizes for each volume type:
gp2 and gp3: 1-16,384
io1: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
You must specify either a SnapshotId or a VolumeSize. If you specify both SnapshotId and VolumeSize, the volume size must be equal or greater than the size of the snapshot.
The volume type. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide for Linux Instances .
Valid values: standard | io1 | gp2 | st1 | sc1 | gp3
Indicates whether the volume is deleted on instance termination. For Amazon EC2 Auto Scaling, the default value is true.
The number of input/output (I/O) operations per second (IOPS) to provision for the volume. For gp3 and io1 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type:
gp3: 3,000-16,000 IOPS
io1: 100-64,000 IOPS
For io1 volumes, we guarantee 64,000 IOPS only for Instances built on the Nitro System. Other instance families guarantee performance up to 32,000 IOPS.
Iops is supported when the volume type is gp3 or io1 and required only when the volume type is io1. (Not used with standard, gp2, st1, or sc1 volumes.)
Specifies whether the volume should be encrypted. Encrypted EBS volumes can only be attached to instances that support Amazon EBS encryption. For more information, see Supported instance types. If your AMI uses encrypted volumes, you can also only launch it on supported instance types.
Note
If you are creating a volume from a snapshot, you cannot create an unencrypted volume from an encrypted snapshot. Also, you cannot specify a KMS key ID when using a launch configuration.
If you enable encryption by default, the EBS volumes that you create are always encrypted, either using the Amazon Web Services managed KMS key or a customer-managed KMS key, regardless of whether the snapshot was encrypted.
For more information, see Use Amazon Web Services KMS keys to encrypt Amazon EBS volumes in the Amazon EC2 Auto Scaling User Guide .
The throughput (MiBps) to provision for a gp3 volume.
Setting this value to true prevents a volume that is included in the block device mapping of the AMI from being mapped to the specified device name at launch.
If NoDevice is true for the root device, instances might fail the EC2 health check. In that case, Amazon EC2 Auto Scaling launches replacement instances.
Controls whether instances in this group are launched with detailed ( true ) or basic ( false ) monitoring.
The default value is true (enabled).
Warning
When detailed monitoring is enabled, Amazon CloudWatch generates metrics every minute and your account is charged a fee. When you disable detailed monitoring, CloudWatch generates metrics every 5 minutes. For more information, see Configure Monitoring for Auto Scaling Instances in the Amazon EC2 Auto Scaling User Guide .
If true, detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
The maximum hourly price to be paid for any Spot Instance launched to fulfill the request. Spot Instances are launched when the price you specify exceeds the current Spot price. For more information, see Request Spot Instances for fault-tolerant and flexible applications in the Amazon EC2 Auto Scaling User Guide .
Valid Range: Minimum value of 0.001
Note
When you change your maximum price by creating a new launch configuration, running instances will continue to run as long as the maximum price for those running instances is higher than the current Spot price.
Specifies whether the launch configuration is optimized for EBS I/O ( true ) or not ( false ). The optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal I/O performance. This optimization is not available with all instance types. Additional fees are incurred when you enable EBS optimization for an instance type that is not EBS-optimized by default. For more information, see Amazon EBS-optimized instances in the Amazon EC2 User Guide for Linux Instances .
The default value is false.
Specifies whether to assign a public IPv4 address to the group's instances. If the instance is launched into a default subnet, the default is to assign a public IPv4 address, unless you disabled the option to assign a public IPv4 address on the subnet. If the instance is launched into a nondefault subnet, the default is not to assign a public IPv4 address, unless you enabled the option to assign a public IPv4 address on the subnet.
If you specify true, each instance in the Auto Scaling group receives a unique public IPv4 address. For more information, see Launching Auto Scaling instances in a VPC in the Amazon EC2 Auto Scaling User Guide .
If you specify this property, you must specify at least one subnet for VPCZoneIdentifier when you create your group.
The tenancy of the instance, either default or dedicated. An instance with dedicated tenancy runs on isolated, single-tenant hardware and can only be launched into a VPC. To launch dedicated instances into a shared tenancy VPC (a VPC with the instance placement tenancy attribute set to default ), you must set the value of this property to dedicated. For more information, see Configuring instance tenancy with Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
If you specify PlacementTenancy, you must specify at least one subnet for VPCZoneIdentifier when you create your group.
Valid values: default | dedicated
The metadata options for the instances. For more information, see Configuring the Instance Metadata Options in the Amazon EC2 Auto Scaling User Guide .
The state of token usage for your instance metadata requests. If the parameter is not specified in the request, the default state is optional.
If the state is optional, you can choose to retrieve instance metadata with or without a signed token header on your request. If you retrieve the IAM role credentials without a token, the version 1.0 role credentials are returned. If you retrieve the IAM role credentials using a valid signed token, the version 2.0 role credentials are returned.
If the state is required, you must send a signed token header with any instance metadata retrieval requests. In this state, retrieving the IAM role credentials always returns the version 2.0 credentials; the version 1.0 credentials are not available.
The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel.
Default: 1
This parameter enables or disables the HTTP metadata endpoint on your instances. If the parameter is not specified, the default state is enabled.
Note
If you specify a value of disabled, you will not be able to access your instance metadata.
None
Exceptions
AutoScaling.Client.exceptions.AlreadyExistsFault
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example creates a launch configuration.
response = client.create_launch_configuration(
IamInstanceProfile='my-iam-role',
ImageId='ami-12345678',
InstanceType='m3.medium',
LaunchConfigurationName='my-launch-config',
SecurityGroups=[
'sg-eb2af88e',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
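The example above sets only a few properties. The following is an additional sketch, not taken from the official examples, that also sets a block device mapping, basic monitoring, and instance metadata options as described above; the AMI ID, device name, and volume sizes are placeholder values.
import boto3

client = boto3.client('autoscaling')

# Placeholder AMI ID, device name, and sizes -- replace with values valid in your account.
response = client.create_launch_configuration(
    LaunchConfigurationName='my-launch-config-with-options',
    ImageId='ami-12345678',
    InstanceType='m3.medium',
    BlockDeviceMappings=[
        {
            'DeviceName': '/dev/sdh',
            'Ebs': {
                'VolumeSize': 100,           # GiB
                'VolumeType': 'gp3',
                'Iops': 3000,                # baseline gp3 IOPS
                'Throughput': 125,           # MiBps
                'DeleteOnTermination': True,
                'Encrypted': True,
            },
        },
    ],
    InstanceMonitoring={
        'Enabled': False,                    # basic (5-minute) monitoring
    },
    MetadataOptions={
        'HttpTokens': 'required',            # require a signed token (IMDSv2)
        'HttpPutResponseHopLimit': 1,
        'HttpEndpoint': 'enabled',
    },
)
print(response)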
create_or_update_tags
(**kwargs)¶Creates or updates tags for the specified Auto Scaling group.
When you specify a tag with a key that already exists, the operation overwrites the previous tag definition, and you do not get an error message.
For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.create_or_update_tags(
Tags=[
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
]
)
[REQUIRED]
One or more tags.
Describes a tag for an Auto Scaling group.
The name of the Auto Scaling group.
The type of resource. The only supported value is auto-scaling-group
.
The tag key.
The tag value.
Determines whether the tag is added to new instances as they are launched in the group.
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.AlreadyExistsFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ResourceInUseFault
Examples
This example adds two tags to the specified Auto Scaling group.
response = client.create_or_update_tags(
Tags=[
{
'Key': 'Role',
'PropagateAtLaunch': True,
'ResourceId': 'my-auto-scaling-group',
'ResourceType': 'auto-scaling-group',
'Value': 'WebServer',
},
{
'Key': 'Dept',
'PropagateAtLaunch': True,
'ResourceId': 'my-auto-scaling-group',
'ResourceType': 'auto-scaling-group',
'Value': 'Research',
},
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_auto_scaling_group
(**kwargs)¶Deletes the specified Auto Scaling group.
If the group has instances or scaling activities in progress, you must specify the option to force the deletion in order for it to succeed. The force delete operation will also terminate the EC2 instances. If the group has a warm pool, the force delete option also deletes the warm pool.
To remove instances from the Auto Scaling group before deleting it, call the DetachInstances API with the list of instances and the option to decrement the desired capacity. This ensures that Amazon EC2 Auto Scaling does not launch replacement instances.
To terminate all instances before deleting the Auto Scaling group, call the UpdateAutoScalingGroup API and set the minimum size and desired capacity of the Auto Scaling group to zero.
If the group has scaling policies, deleting the group deletes the policies, the underlying alarm actions, and any alarm that no longer has an associated action.
For more information, see Delete your Auto Scaling infrastructure in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.delete_auto_scaling_group(
AutoScalingGroupName='string',
ForceDelete=True|False
)
[REQUIRED]
The name of the Auto Scaling group.
None
Exceptions
AutoScaling.Client.exceptions.ScalingActivityInProgressFault
AutoScaling.Client.exceptions.ResourceInUseFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example deletes the specified Auto Scaling group.
response = client.delete_auto_scaling_group(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
This example deletes the specified Auto Scaling group and all its instances.
response = client.delete_auto_scaling_group(
AutoScalingGroupName='my-auto-scaling-group',
ForceDelete=True,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
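If you prefer to drain the group yourself instead of using ForceDelete, one possible sequence, sketched below under the assumption that scaling to zero is acceptable for your workload, is to set the minimum size and desired capacity to zero with UpdateAutoScalingGroup, wait for the instances to terminate, and then delete the group. The group name and polling interval are placeholders.
import time

import boto3

client = boto3.client('autoscaling')
group_name = 'my-auto-scaling-group'  # placeholder

# Scale the group to zero so Amazon EC2 Auto Scaling terminates its instances.
client.update_auto_scaling_group(
    AutoScalingGroupName=group_name,
    MinSize=0,
    DesiredCapacity=0,
)

# Poll until no instances remain in the group, then delete it.
while True:
    groups = client.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name],
    )['AutoScalingGroups']
    if not groups or not groups[0]['Instances']:
        break
    time.sleep(30)

client.delete_auto_scaling_group(AutoScalingGroupName=group_name)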
delete_launch_configuration
(**kwargs)¶Deletes the specified launch configuration.
The launch configuration must not be attached to an Auto Scaling group. When this call completes, the launch configuration is no longer available for use.
See also: AWS API Documentation
Request Syntax
response = client.delete_launch_configuration(
LaunchConfigurationName='string'
)
[REQUIRED]
The name of the launch configuration.
Exceptions
AutoScaling.Client.exceptions.ResourceInUseFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example deletes the specified launch configuration.
response = client.delete_launch_configuration(
LaunchConfigurationName='my-launch-config',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_lifecycle_hook
(**kwargs)¶Deletes the specified lifecycle hook.
If there are any outstanding lifecycle actions, they are completed first ( ABANDON for launching instances, CONTINUE for terminating instances).
See also: AWS API Documentation
Request Syntax
response = client.delete_lifecycle_hook(
LifecycleHookName='string',
AutoScalingGroupName='string'
)
[REQUIRED]
The name of the lifecycle hook.
[REQUIRED]
The name of the Auto Scaling group.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example deletes the specified lifecycle hook.
response = client.delete_lifecycle_hook(
AutoScalingGroupName='my-auto-scaling-group',
LifecycleHookName='my-lifecycle-hook',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_notification_configuration
(**kwargs)¶Deletes the specified notification.
See also: AWS API Documentation
Request Syntax
response = client.delete_notification_configuration(
AutoScalingGroupName='string',
TopicARN='string'
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The Amazon Resource Name (ARN) of the Amazon SNS topic.
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example deletes the specified notification from the specified Auto Scaling group.
response = client.delete_notification_configuration(
AutoScalingGroupName='my-auto-scaling-group',
TopicARN='arn:aws:sns:us-west-2:123456789012:my-sns-topic',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_policy
(**kwargs)¶Deletes the specified scaling policy.
Deleting either a step scaling policy or a simple scaling policy deletes the underlying alarm action, but does not delete the alarm, even if it no longer has an associated action.
For more information, see Deleting a scaling policy in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.delete_policy(
AutoScalingGroupName='string',
PolicyName='string'
)
[REQUIRED]
The name or Amazon Resource Name (ARN) of the policy.
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example deletes the specified Auto Scaling policy.
response = client.delete_policy(
AutoScalingGroupName='my-auto-scaling-group',
PolicyName='my-step-scale-out-policy',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_scheduled_action
(**kwargs)¶Deletes the specified scheduled action.
See also: AWS API Documentation
Request Syntax
response = client.delete_scheduled_action(
AutoScalingGroupName='string',
ScheduledActionName='string'
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The name of the action to delete.
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example deletes the specified scheduled action from the specified Auto Scaling group.
response = client.delete_scheduled_action(
AutoScalingGroupName='my-auto-scaling-group',
ScheduledActionName='my-scheduled-action',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_tags
(**kwargs)¶Deletes the specified tags.
See also: AWS API Documentation
Request Syntax
response = client.delete_tags(
Tags=[
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
]
)
[REQUIRED]
One or more tags.
Describes a tag for an Auto Scaling group.
The name of the Auto Scaling group.
The type of resource. The only supported value is auto-scaling-group
.
The tag key.
The tag value.
Determines whether the tag is added to new instances as they are launched in the group.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ResourceInUseFault
Examples
This example deletes the specified tag from the specified Auto Scaling group.
response = client.delete_tags(
Tags=[
{
'Key': 'Dept',
'ResourceId': 'my-auto-scaling-group',
'ResourceType': 'auto-scaling-group',
'Value': 'Research',
},
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
delete_warm_pool
(**kwargs)¶Deletes the warm pool for the specified Auto Scaling group.
For more information, see Warm pools for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.delete_warm_pool(
AutoScalingGroupName='string',
ForceDelete=True|False
)
[REQUIRED]
The name of the Auto Scaling group.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ScalingActivityInProgressFault
AutoScaling.Client.exceptions.ResourceInUseFault
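Examples
No official example is shown here; the following is a minimal sketch, assuming a group named my-auto-scaling-group that has a warm pool.
import boto3

client = boto3.client('autoscaling')

# ForceDelete=True deletes the warm pool and its instances without waiting
# for the instances to terminate.
response = client.delete_warm_pool(
    AutoScalingGroupName='my-auto-scaling-group',
    ForceDelete=True,
)
print(response)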
describe_account_limits
()¶Describes the current Amazon EC2 Auto Scaling resource quotas for your account.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum number of Auto Scaling groups and launch configurations that you can create in a given Region. For more information, see Quotas for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.describe_account_limits()
Response Syntax
{
'MaxNumberOfAutoScalingGroups': 123,
'MaxNumberOfLaunchConfigurations': 123,
'NumberOfAutoScalingGroups': 123,
'NumberOfLaunchConfigurations': 123
}
Response Structure
The maximum number of groups allowed for your account. The default is 200 groups per Region.
The maximum number of launch configurations allowed for your account. The default is 200 launch configurations per Region.
The current number of groups for your account.
The current number of launch configurations for your account.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the Amazon EC2 Auto Scaling service quotas for your account.
response = client.describe_account_limits(
)
print(response)
Expected Output:
{
'MaxNumberOfAutoScalingGroups': 20,
'MaxNumberOfLaunchConfigurations': 100,
'NumberOfAutoScalingGroups': 3,
'NumberOfLaunchConfigurations': 5,
'ResponseMetadata': {
'...': '...',
},
}
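The returned counts can be compared with the maximums to check how much quota headroom remains before you create more groups or launch configurations; a minimal sketch:
import boto3

client = boto3.client('autoscaling')
limits = client.describe_account_limits()

# Remaining headroom for each quota.
remaining_groups = (limits['MaxNumberOfAutoScalingGroups']
                    - limits['NumberOfAutoScalingGroups'])
remaining_configs = (limits['MaxNumberOfLaunchConfigurations']
                     - limits['NumberOfLaunchConfigurations'])

print(f'Auto Scaling groups remaining: {remaining_groups}')
print(f'Launch configurations remaining: {remaining_configs}')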
describe_adjustment_types
()¶Describes the available adjustment types for step scaling and simple scaling policies.
The following adjustment types are supported:
ChangeInCapacity
ExactCapacity
PercentChangeInCapacity
See also: AWS API Documentation
Request Syntax
response = client.describe_adjustment_types()
Response Syntax
{
'AdjustmentTypes': [
{
'AdjustmentType': 'string'
},
]
}
Response Structure
The policy adjustment types.
Describes a policy adjustment type.
The policy adjustment type. The valid values are ChangeInCapacity
, ExactCapacity
, and PercentChangeInCapacity
.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the available adjustment types.
response = client.describe_adjustment_types(
)
print(response)
Expected Output:
{
'AdjustmentTypes': [
{
'AdjustmentType': 'ChangeInCapacity',
},
{
'AdjustmentType': 'ExactCapacity',
},
{
'AdjustmentType': 'PercentChangeInCapacity',
},
],
'ResponseMetadata': {
'...': '...',
},
}
describe_auto_scaling_groups
(**kwargs)¶Gets information about the Auto Scaling groups in the account and Region.
If you specify Auto Scaling group names, the output includes information for only the specified Auto Scaling groups. If you specify filters, the output includes information for only those Auto Scaling groups that meet the filter criteria. If you do not specify group names or filters, the output includes information for all Auto Scaling groups.
This operation also returns information about instances in Auto Scaling groups. To retrieve information about the instances in a warm pool, you must call the DescribeWarmPool API.
See also: AWS API Documentation
Request Syntax
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=[
'string',
],
NextToken='string',
MaxRecords=123,
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
]
)
The names of the Auto Scaling groups. By default, you can only specify up to 50 names. You can optionally increase this limit using the MaxRecords property.
If you omit this property, all Auto Scaling groups are described.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
One or more filters to limit the results based on specific tags.
Describes a filter that is used to return a more specific list of results from a describe operation.
If you specify multiple filters, the filters are automatically logically joined with an AND
, and the request returns only the results that match all of the specified filters.
For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
The name of the filter.
The valid values for Name depend on which API operation you're using with the filter ( DescribeAutoScalingGroups or DescribeTags ).
DescribeAutoScalingGroups - Valid values for Name include the following:
tag-key - Accepts tag keys. The results only include information about the Auto Scaling groups associated with these tag keys.
tag-value - Accepts tag values. The results only include information about the Auto Scaling groups associated with these tag values.
tag:<key> - Accepts the key/value combination of the tag. Use the tag key in the filter name and the tag value as the filter value. The results only include information about the Auto Scaling groups associated with the specified key/value combination.
DescribeTags - Valid values for Name include the following:
auto-scaling-group - Accepts the names of Auto Scaling groups. The results only include information about the tags associated with these Auto Scaling groups.
key - Accepts tag keys. The results only include information about the tags associated with these tag keys.
value - Accepts tag values. The results only include information about the tags associated with these tag values.
propagate-at-launch - Accepts a Boolean value, which specifies whether tags propagate to instances at launch. The results only include information about the tags associated with the specified Boolean value.
One or more filter values. Filter values are case-sensitive.
If you specify multiple values for a filter, the values are automatically logically joined with an OR
, and the request returns all results that match any of the specified values. For example, specify "tag:environment" for the filter name and "production,development" for the filter values to find Auto Scaling groups with the tag "environment=production" or "environment=development".
dict
Response Syntax
{
'AutoScalingGroups': [
{
'AutoScalingGroupName': 'string',
'AutoScalingGroupARN': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'MixedInstancesPolicy': {
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'Overrides': [
{
'InstanceType': 'string',
'WeightedCapacity': 'string',
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'InstanceRequirements': {
'VCpuCount': {
'Min': 123,
'Max': 123
},
'MemoryMiB': {
'Min': 123,
'Max': 123
},
'CpuManufacturers': [
'intel'|'amd'|'amazon-web-services',
],
'MemoryGiBPerVCpu': {
'Min': 123.0,
'Max': 123.0
},
'ExcludedInstanceTypes': [
'string',
],
'InstanceGenerations': [
'current'|'previous',
],
'SpotMaxPricePercentageOverLowestPrice': 123,
'OnDemandMaxPricePercentageOverLowestPrice': 123,
'BareMetal': 'included'|'excluded'|'required',
'BurstablePerformance': 'included'|'excluded'|'required',
'RequireHibernateSupport': True|False,
'NetworkInterfaceCount': {
'Min': 123,
'Max': 123
},
'LocalStorage': 'included'|'excluded'|'required',
'LocalStorageTypes': [
'hdd'|'ssd',
],
'TotalLocalStorageGB': {
'Min': 123.0,
'Max': 123.0
},
'BaselineEbsBandwidthMbps': {
'Min': 123,
'Max': 123
},
'AcceleratorTypes': [
'gpu'|'fpga'|'inference',
],
'AcceleratorCount': {
'Min': 123,
'Max': 123
},
'AcceleratorManufacturers': [
'nvidia'|'amd'|'amazon-web-services'|'xilinx',
],
'AcceleratorNames': [
'a100'|'v100'|'k80'|'t4'|'m60'|'radeon-pro-v520'|'vu9p',
],
'AcceleratorTotalMemoryMiB': {
'Min': 123,
'Max': 123
},
'NetworkBandwidthGbps': {
'Min': 123.0,
'Max': 123.0
},
'AllowedInstanceTypes': [
'string',
]
}
},
]
},
'InstancesDistribution': {
'OnDemandAllocationStrategy': 'string',
'OnDemandBaseCapacity': 123,
'OnDemandPercentageAboveBaseCapacity': 123,
'SpotAllocationStrategy': 'string',
'SpotInstancePools': 123,
'SpotMaxPrice': 'string'
}
},
'MinSize': 123,
'MaxSize': 123,
'DesiredCapacity': 123,
'PredictedCapacity': 123,
'DefaultCooldown': 123,
'AvailabilityZones': [
'string',
],
'LoadBalancerNames': [
'string',
],
'TargetGroupARNs': [
'string',
],
'HealthCheckType': 'string',
'HealthCheckGracePeriod': 123,
'Instances': [
{
'InstanceId': 'string',
'InstanceType': 'string',
'AvailabilityZone': 'string',
'LifecycleState': 'Pending'|'Pending:Wait'|'Pending:Proceed'|'Quarantined'|'InService'|'Terminating'|'Terminating:Wait'|'Terminating:Proceed'|'Terminated'|'Detaching'|'Detached'|'EnteringStandby'|'Standby'|'Warmed:Pending'|'Warmed:Pending:Wait'|'Warmed:Pending:Proceed'|'Warmed:Terminating'|'Warmed:Terminating:Wait'|'Warmed:Terminating:Proceed'|'Warmed:Terminated'|'Warmed:Stopped'|'Warmed:Running'|'Warmed:Hibernated',
'HealthStatus': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'ProtectedFromScaleIn': True|False,
'WeightedCapacity': 'string'
},
],
'CreatedTime': datetime(2015, 1, 1),
'SuspendedProcesses': [
{
'ProcessName': 'string',
'SuspensionReason': 'string'
},
],
'PlacementGroup': 'string',
'VPCZoneIdentifier': 'string',
'EnabledMetrics': [
{
'Metric': 'string',
'Granularity': 'string'
},
],
'Status': 'string',
'Tags': [
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
],
'TerminationPolicies': [
'string',
],
'NewInstancesProtectedFromScaleIn': True|False,
'ServiceLinkedRoleARN': 'string',
'MaxInstanceLifetime': 123,
'CapacityRebalance': True|False,
'WarmPoolConfiguration': {
'MaxGroupPreparedCapacity': 123,
'MinSize': 123,
'PoolState': 'Stopped'|'Running'|'Hibernated',
'Status': 'PendingDelete',
'InstanceReusePolicy': {
'ReuseOnScaleIn': True|False
}
},
'WarmPoolSize': 123,
'Context': 'string',
'DesiredCapacityType': 'string',
'DefaultInstanceWarmup': 123,
'TrafficSources': [
{
'Identifier': 'string'
},
]
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
AutoScalingGroups (list) --
The groups.
(dict) --
Describes an Auto Scaling group.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
LaunchConfigurationName (string) --
The name of the associated launch configuration.
LaunchTemplate (dict) --
The launch template for the group.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
MixedInstancesPolicy (dict) --
The mixed instances policy for the group.
LaunchTemplate (dict) --
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
LaunchTemplateSpecification (dict) --
The launch template.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
Overrides (list) --
Any properties that you specify override the same properties in the launch template.
(dict) --
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
InstanceType (string) --
The instance type, such as m3.xlarge
. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon Elastic Compute Cloud User Guide .
You can specify up to 40 instance types per Auto Scaling group.
WeightedCapacity (string) --
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a WeightedCapacity
of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configuring instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide . Value must be in the range of 1–999.
If you specify a value for WeightedCapacity
for one instance type, you must specify a value for WeightedCapacity
for all of them.
Warning
Every Auto Scaling group has three size parameters ( DesiredCapacity
, MaxSize
, and MinSize
). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
LaunchTemplateSpecification (dict) --
Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the LaunchTemplate
definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide .
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the LaunchTemplate
definition count towards this limit.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
InstanceRequirements (dict) --
The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template.
Note
If you specify InstanceRequirements
, you can't specify InstanceType
.
VCpuCount (dict) --
The minimum and maximum number of vCPUs for an instance type.
Min (integer) --
The minimum number of vCPUs.
Max (integer) --
The maximum number of vCPUs.
MemoryMiB (dict) --
The minimum and maximum instance memory size for an instance type, in MiB.
Min (integer) --
The memory minimum in MiB.
Max (integer) --
The memory maximum in MiB.
CpuManufacturers (list) --
Lists which specific CPU manufacturers to include.
Valid values: intel | amd | amazon-web-services
Note
Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
MemoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
Min (float) --
The memory minimum in GiB.
Max (float) --
The memory maximum in GiB.
ExcludedInstanceTypes (list) --
The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk ( * ), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*
, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*
, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types.
Note
If you specify ExcludedInstanceTypes
, you can't specify AllowedInstanceTypes
.
Default: No excluded instance types
InstanceGenerations (list) --
Indicates whether current or previous generation instance types are included. Valid values: current | previous.
The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide for Linux Instances .
Default: Any current or previous generation
SpotMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999
.
If you set DesiredCapacityType
to vcpu
or memory-mib
, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 100
OnDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999
.
If you set DesiredCapacityType
to vcpu
or memory-mib
, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 20
BareMetal (string) --
Indicates whether bare metal instance types are included, excluded, or required.
Default: excluded
BurstablePerformance (string) --
Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide for Linux Instances .
Default: excluded
RequireHibernateSupport (boolean) --
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: false
NetworkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
Min (integer) --
The minimum number of network interfaces.
Max (integer) --
The maximum number of network interfaces.
LocalStorage (string) --
Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide for Linux Instances .
Default: included
LocalStorageTypes (list) --
Indicates the type of local storage that is required.
Valid values: hdd | ssd
Default: Any local storage type
TotalLocalStorageGB (dict) --
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
Min (float) --
The storage minimum in GB.
Max (float) --
The storage maximum in GB.
BaselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide for Linux Instances .
Default: No minimum or maximum limits
Min (integer) --
The minimum value in Mbps.
Max (integer) --
The maximum value in Mbps.
AcceleratorTypes (list) --
Lists the accelerator types that must be on an instance type.
Valid values: gpu | fpga | inference
Default: Any accelerator type
AcceleratorCount (dict) --
The minimum and maximum number of accelerators (GPUs, FPGAs, or Amazon Web Services Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set Max
to 0
.
Default: No minimum or maximum limits
Min (integer) --
The minimum value.
Max (integer) --
The maximum value.
AcceleratorManufacturers (list) --
Indicates whether instance types must have accelerators by specific manufacturers.
Valid values: nvidia | amd | amazon-web-services | xilinx
Default: Any manufacturer
AcceleratorNames (list) --
Lists the accelerators that must be on an instance type.
Valid values: a100 | v100 | k80 | t4 | m60 | radeon-pro-v520 | vu9p
Default: Any accelerator
AcceleratorTotalMemoryMiB (dict) --
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
Min (integer) --
The memory minimum in MiB.
Max (integer) --
The memory maximum in MiB.
NetworkBandwidthGbps (dict) --
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
Min (float) --
The minimum amount of network bandwidth, in gigabits per second (Gbps).
Max (float) --
The maximum amount of network bandwidth, in gigabits per second (Gbps).
AllowedInstanceTypes (list) --
The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wild cards, represented by an asterisk ( * ), to allow an instance type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*
, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*
, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types.
Note
If you specify AllowedInstanceTypes
, you can't specify ExcludedInstanceTypes
.
Default: All instance types
InstancesDistribution (dict) --
The instances distribution.
OnDemandAllocationStrategy (string) --
The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
lowest-price
Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify InstanceRequirements.
prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify InstanceRequirements and cannot be used for groups that do.
OnDemandBaseCapacity (integer) --
The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales.
This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0
OnDemandPercentageAboveBaseCapacity (integer) --
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond OnDemandBaseCapacity
. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100
SpotAllocationStrategy (string) --
The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
capacity-optimized
Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use capacity-optimized-prioritized
.
capacity-optimized-prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to prioritized
, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify InstanceRequirements.
lowest-price
Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the SpotInstancePools
property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
price-capacity-optimized (recommended)
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
SpotInstancePools (integer) --
The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the SpotAllocationStrategy
is lowest-price
. Value must be in the range of 1–20.
Default: 2
SpotMaxPrice (string) --
The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value.
Warning
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
MinSize (integer) --
The minimum size of the group.
MaxSize (integer) --
The maximum size of the group.
DesiredCapacity (integer) --
The desired size of the group.
PredictedCapacity (integer) --
The predicted capacity of the group when it has a predictive scaling policy.
DefaultCooldown (integer) --
The duration of the default cooldown period, in seconds.
AvailabilityZones (list) --
One or more Availability Zones for the group.
LoadBalancerNames (list) --
One or more load balancers associated with the group.
TargetGroupARNs (list) --
The Amazon Resource Names (ARN) of the target groups for your load balancer.
HealthCheckType (string) --
Determines whether any additional health checks are performed on the instances in this group. Amazon EC2 health checks are always on.
The valid values are EC2
(default), ELB
, and VPC_LATTICE
. The VPC_LATTICE
health check type is reserved for use with VPC Lattice, which is in preview release and is subject to change.
HealthCheckGracePeriod (integer) --
The duration of the health check grace period, in seconds.
Instances (list) --
The EC2 instances associated with the group.
(dict) --
Describes an EC2 instance.
InstanceId (string) --
The ID of the instance.
InstanceType (string) --
The instance type of the EC2 instance.
AvailabilityZone (string) --
The Availability Zone in which the instance is running.
LifecycleState (string) --
A description of the current lifecycle state. The Quarantined
state is not used. For information about lifecycle states, see Instance lifecycle in the Amazon EC2 Auto Scaling User Guide .
HealthStatus (string) --
The last reported health status of the instance. "Healthy" means that the instance is healthy and should remain in service. "Unhealthy" means that the instance is unhealthy and that Amazon EC2 Auto Scaling should terminate and replace it.
LaunchConfigurationName (string) --
The launch configuration associated with the instance.
LaunchTemplate (dict) --
The launch template for the instance.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
ProtectedFromScaleIn (boolean) --
Indicates whether the instance is protected from termination by Amazon EC2 Auto Scaling when scaling in.
WeightedCapacity (string) --
The number of capacity units contributed by the instance based on its instance type.
Valid Range: Minimum value of 1. Maximum value of 999.
CreatedTime (datetime) --
The date and time the group was created.
SuspendedProcesses (list) --
The suspended processes associated with the group.
(dict) --
Describes an auto scaling process that has been suspended.
For more information, see Scaling processes in the Amazon EC2 Auto Scaling User Guide .
ProcessName (string) --
The name of the suspended process.
SuspensionReason (string) --
The reason that the process was suspended.
PlacementGroup (string) --
The name of the placement group into which to launch your instances, if any.
VPCZoneIdentifier (string) --
One or more subnet IDs, if applicable, separated by commas.
EnabledMetrics (list) --
The metrics enabled for the group.
(dict) --
Describes an enabled Auto Scaling group metric.
Metric (string) --
One of the following metrics:
GroupMinSize
GroupMaxSize
GroupDesiredCapacity
GroupInServiceInstances
GroupPendingInstances
GroupStandbyInstances
GroupTerminatingInstances
GroupTotalInstances
GroupInServiceCapacity
GroupPendingCapacity
GroupStandbyCapacity
GroupTerminatingCapacity
GroupTotalCapacity
WarmPoolDesiredCapacity
WarmPoolWarmedCapacity
WarmPoolPendingCapacity
WarmPoolTerminatingCapacity
WarmPoolTotalCapacity
GroupAndWarmPoolDesiredCapacity
GroupAndWarmPoolTotalCapacity
For more information, see Auto Scaling group metrics in the Amazon EC2 Auto Scaling User Guide .
Granularity (string) --
The granularity of the metric. The only valid value is 1Minute
.
Status (string) --
The current state of the group when the DeleteAutoScalingGroup operation is in progress.
Tags (list) --
The tags for the group.
(dict) --
Describes a tag for an Auto Scaling group.
ResourceId (string) --
The name of the group.
ResourceType (string) --
The type of resource. The only supported value is auto-scaling-group
.
Key (string) --
The tag key.
Value (string) --
The tag value.
PropagateAtLaunch (boolean) --
Determines whether the tag is added to new instances as they are launched in the group.
TerminationPolicies (list) --
The termination policies for the group.
NewInstancesProtectedFromScaleIn (boolean) --
Indicates whether newly launched instances are protected from termination by Amazon EC2 Auto Scaling when scaling in.
ServiceLinkedRoleARN (string) --
The Amazon Resource Name (ARN) of the service-linked role that the Auto Scaling group uses to call other Amazon Web Services on your behalf.
MaxInstanceLifetime (integer) --
The maximum amount of time, in seconds, that an instance can be in service.
Valid Range: Minimum value of 0.
CapacityRebalance (boolean) --
Indicates whether Capacity Rebalancing is enabled.
WarmPoolConfiguration (dict) --
The warm pool for the group.
MaxGroupPreparedCapacity (integer) --
The maximum number of instances that are allowed to be in the warm pool or in any state except Terminated
for the Auto Scaling group.
MinSize (integer) --
The minimum number of instances to maintain in the warm pool.
PoolState (string) --
The instance state to transition to after the lifecycle actions are complete.
Status (string) --
The status of a warm pool that is marked for deletion.
InstanceReusePolicy (dict) --
The instance reuse policy.
ReuseOnScaleIn (boolean) --
Specifies whether instances in the Auto Scaling group can be returned to the warm pool on scale in.
WarmPoolSize (integer) --
The current size of the warm pool.
Context (string) --
Reserved.
DesiredCapacityType (string) --
The unit of measurement for the value specified for desired capacity. Amazon EC2 Auto Scaling supports DesiredCapacityType
for attribute-based instance type selection only.
DefaultInstanceWarmup (integer) --
The duration of the default instance warmup, in seconds.
TrafficSources (list) --
Reserved for use with Amazon VPC Lattice, which is in preview release and is subject to change. Do not use this parameter for production workloads.
The unique identifiers of the traffic sources.
(dict) --
Describes the identifier of a traffic source.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group.
Identifier (string) --
The unique identifier of the traffic source.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the specified Auto Scaling group.
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=[
'my-auto-scaling-group',
],
)
print(response)
Expected Output:
{
'AutoScalingGroups': [
{
'AutoScalingGroupARN': 'arn:aws:autoscaling:us-west-2:123456789012:autoScalingGroup:930d940e-891e-4781-a11a-7b0acd480f03:autoScalingGroupName/my-auto-scaling-group',
'AutoScalingGroupName': 'my-auto-scaling-group',
'AvailabilityZones': [
'us-west-2c',
],
'CreatedTime': datetime(2013, 8, 19, 20, 53, 25, 0, 231, 0),
'DefaultCooldown': 300,
'DesiredCapacity': 1,
'EnabledMetrics': [
],
'HealthCheckGracePeriod': 300,
'HealthCheckType': 'EC2',
'Instances': [
{
'AvailabilityZone': 'us-west-2c',
'HealthStatus': 'Healthy',
'InstanceId': 'i-4ba0837f',
'LaunchConfigurationName': 'my-launch-config',
'LifecycleState': 'InService',
'ProtectedFromScaleIn': False,
},
],
'LaunchConfigurationName': 'my-launch-config',
'LoadBalancerNames': [
],
'MaxSize': 1,
'MinSize': 0,
'NewInstancesProtectedFromScaleIn': False,
'SuspendedProcesses': [
],
'Tags': [
],
'TerminationPolicies': [
'Default',
],
'VPCZoneIdentifier': 'subnet-12345678',
},
],
'ResponseMetadata': {
'...': '...',
},
}
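Because the results are paginated through NextToken, it is often simpler to use the client's paginator for this operation. The following sketch combines the paginator with the tag filters described above; the tag key and value are placeholders.
import boto3

client = boto3.client('autoscaling')

# Iterate over every group whose 'environment' tag equals 'production' (placeholder tag).
paginator = client.get_paginator('describe_auto_scaling_groups')
pages = paginator.paginate(
    Filters=[
        {
            'Name': 'tag:environment',
            'Values': ['production'],
        },
    ],
)

for page in pages:
    for group in page['AutoScalingGroups']:
        print(group['AutoScalingGroupName'], group['DesiredCapacity'])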
describe_auto_scaling_instances
(**kwargs)¶Gets information about the Auto Scaling instances in the account and Region.
See also: AWS API Documentation
Request Syntax
response = client.describe_auto_scaling_instances(
InstanceIds=[
'string',
],
MaxRecords=123,
NextToken='string'
)
The IDs of the instances. If you omit this property, all Auto Scaling instances are described. If you specify an ID that does not exist, it is ignored with no error.
Array Members: Maximum number of 50 items.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 50.
dict
Response Syntax
{
'AutoScalingInstances': [
{
'InstanceId': 'string',
'InstanceType': 'string',
'AutoScalingGroupName': 'string',
'AvailabilityZone': 'string',
'LifecycleState': 'string',
'HealthStatus': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'ProtectedFromScaleIn': True|False,
'WeightedCapacity': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
AutoScalingInstances (list) --
The instances.
(dict) --
Describes an EC2 instance associated with an Auto Scaling group.
InstanceId (string) --
The ID of the instance.
InstanceType (string) --
The instance type of the EC2 instance.
AutoScalingGroupName (string) --
The name of the Auto Scaling group for the instance.
AvailabilityZone (string) --
The Availability Zone for the instance.
LifecycleState (string) --
The lifecycle state for the instance. The Quarantined
state is not used. For information about lifecycle states, see Instance lifecycle in the Amazon EC2 Auto Scaling User Guide .
Valid values: Pending | Pending:Wait | Pending:Proceed | Quarantined | InService | Terminating | Terminating:Wait | Terminating:Proceed | Terminated | Detaching | Detached | EnteringStandby | Standby | Warmed:Pending | Warmed:Pending:Wait | Warmed:Pending:Proceed | Warmed:Terminating | Warmed:Terminating:Wait | Warmed:Terminating:Proceed | Warmed:Terminated | Warmed:Stopped | Warmed:Running
HealthStatus (string) --
The last reported health status of this instance. "Healthy" means that the instance is healthy and should remain in service. "Unhealthy" means that the instance is unhealthy and Amazon EC2 Auto Scaling should terminate and replace it.
LaunchConfigurationName (string) --
The launch configuration used to launch the instance. This value is not available if you attached the instance to the Auto Scaling group.
LaunchTemplate (dict) --
The launch template for the instance.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
ProtectedFromScaleIn (boolean) --
Indicates whether the instance is protected from termination by Amazon EC2 Auto Scaling when scaling in.
WeightedCapacity (string) --
The number of capacity units contributed by the instance based on its instance type.
Valid Range: Minimum value of 1. Maximum value of 999.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
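Because the response can span multiple pages, the following minimal sketch (not part of the API reference) follows NextToken until every Auto Scaling instance has been collected; it assumes the client created as shown at the top of this page.
instances = []
kwargs = {'MaxRecords': 50}
while True:
    page = client.describe_auto_scaling_instances(**kwargs)
    instances.extend(page['AutoScalingInstances'])
    next_token = page.get('NextToken')
    if not next_token:
        break
    kwargs['NextToken'] = next_token  # request the next page of results

print(len(instances), 'Auto Scaling instances described')
Where a paginator is available in your SDK version, client.get_paginator('describe_auto_scaling_instances') provides the same behavior with less bookkeeping.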
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the specified Auto Scaling instance.
response = client.describe_auto_scaling_instances(
InstanceIds=[
'i-4ba0837f',
],
)
print(response)
Expected Output:
{
'AutoScalingInstances': [
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'AvailabilityZone': 'us-west-2c',
'HealthStatus': 'HEALTHY',
'InstanceId': 'i-4ba0837f',
'LaunchConfigurationName': 'my-launch-config',
'LifecycleState': 'InService',
'ProtectedFromScaleIn': False,
},
],
'ResponseMetadata': {
'...': '...',
},
}
describe_auto_scaling_notification_types
()¶Describes the notification types that are supported by Amazon EC2 Auto Scaling.
See also: AWS API Documentation
Request Syntax
response = client.describe_auto_scaling_notification_types()
Response Syntax
{
'AutoScalingNotificationTypes': [
'string',
]
}
Response Structure
The notification types.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the available notification types.
response = client.describe_auto_scaling_notification_types(
)
print(response)
Expected Output:
{
'AutoScalingNotificationTypes': [
'autoscaling:EC2_INSTANCE_LAUNCH',
'autoscaling:EC2_INSTANCE_LAUNCH_ERROR',
'autoscaling:EC2_INSTANCE_TERMINATE',
'autoscaling:EC2_INSTANCE_TERMINATE_ERROR',
'autoscaling:TEST_NOTIFICATION',
],
'ResponseMetadata': {
'...': '...',
},
}
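As a hedged follow-up, the notification types returned above can be passed to put_notification_configuration; the group name and SNS topic ARN below are placeholders.
response = client.put_notification_configuration(
    AutoScalingGroupName='my-auto-scaling-group',
    TopicARN='arn:aws:sns:us-west-2:123456789012:my-sns-topic',
    NotificationTypes=[
        'autoscaling:EC2_INSTANCE_LAUNCH',
        'autoscaling:EC2_INSTANCE_TERMINATE',
    ],
)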
describe_instance_refreshes
(**kwargs)¶Gets information about the instance refreshes for the specified Auto Scaling group.
This operation is part of the instance refresh feature in Amazon EC2 Auto Scaling, which helps you update instances in your Auto Scaling group after you make configuration changes.
To help you determine the status of an instance refresh, Amazon EC2 Auto Scaling returns information about the instance refreshes you previously initiated, including their status, start time, end time, the percentage of the instance refresh that is complete, and the number of instances remaining to update before the instance refresh is complete. If a rollback is initiated while an instance refresh is in progress, Amazon EC2 Auto Scaling also returns information about the rollback of the instance refresh.
See also: AWS API Documentation
Request Syntax
response = client.describe_instance_refreshes(
AutoScalingGroupName='string',
InstanceRefreshIds=[
'string',
],
NextToken='string',
MaxRecords=123
)
[REQUIRED]
The name of the Auto Scaling group.
One or more instance refresh IDs.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
Return type: dict
Response Syntax
{
'InstanceRefreshes': [
{
'InstanceRefreshId': 'string',
'AutoScalingGroupName': 'string',
'Status': 'Pending'|'InProgress'|'Successful'|'Failed'|'Cancelling'|'Cancelled'|'RollbackInProgress'|'RollbackFailed'|'RollbackSuccessful',
'StatusReason': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'PercentageComplete': 123,
'InstancesToUpdate': 123,
'ProgressDetails': {
'LivePoolProgress': {
'PercentageComplete': 123,
'InstancesToUpdate': 123
},
'WarmPoolProgress': {
'PercentageComplete': 123,
'InstancesToUpdate': 123
}
},
'Preferences': {
'MinHealthyPercentage': 123,
'InstanceWarmup': 123,
'CheckpointPercentages': [
123,
],
'CheckpointDelay': 123,
'SkipMatching': True|False,
'AutoRollback': True|False,
'ScaleInProtectedInstances': 'Refresh'|'Ignore'|'Wait',
'StandbyInstances': 'Terminate'|'Ignore'|'Wait'
},
'DesiredConfiguration': {
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'MixedInstancesPolicy': {
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'Overrides': [
{
'InstanceType': 'string',
'WeightedCapacity': 'string',
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'InstanceRequirements': {
'VCpuCount': {
'Min': 123,
'Max': 123
},
'MemoryMiB': {
'Min': 123,
'Max': 123
},
'CpuManufacturers': [
'intel'|'amd'|'amazon-web-services',
],
'MemoryGiBPerVCpu': {
'Min': 123.0,
'Max': 123.0
},
'ExcludedInstanceTypes': [
'string',
],
'InstanceGenerations': [
'current'|'previous',
],
'SpotMaxPricePercentageOverLowestPrice': 123,
'OnDemandMaxPricePercentageOverLowestPrice': 123,
'BareMetal': 'included'|'excluded'|'required',
'BurstablePerformance': 'included'|'excluded'|'required',
'RequireHibernateSupport': True|False,
'NetworkInterfaceCount': {
'Min': 123,
'Max': 123
},
'LocalStorage': 'included'|'excluded'|'required',
'LocalStorageTypes': [
'hdd'|'ssd',
],
'TotalLocalStorageGB': {
'Min': 123.0,
'Max': 123.0
},
'BaselineEbsBandwidthMbps': {
'Min': 123,
'Max': 123
},
'AcceleratorTypes': [
'gpu'|'fpga'|'inference',
],
'AcceleratorCount': {
'Min': 123,
'Max': 123
},
'AcceleratorManufacturers': [
'nvidia'|'amd'|'amazon-web-services'|'xilinx',
],
'AcceleratorNames': [
'a100'|'v100'|'k80'|'t4'|'m60'|'radeon-pro-v520'|'vu9p',
],
'AcceleratorTotalMemoryMiB': {
'Min': 123,
'Max': 123
},
'NetworkBandwidthGbps': {
'Min': 123.0,
'Max': 123.0
},
'AllowedInstanceTypes': [
'string',
]
}
},
]
},
'InstancesDistribution': {
'OnDemandAllocationStrategy': 'string',
'OnDemandBaseCapacity': 123,
'OnDemandPercentageAboveBaseCapacity': 123,
'SpotAllocationStrategy': 'string',
'SpotInstancePools': 123,
'SpotMaxPrice': 'string'
}
}
},
'RollbackDetails': {
'RollbackReason': 'string',
'RollbackStartTime': datetime(2015, 1, 1),
'PercentageCompleteOnRollback': 123,
'InstancesToUpdateOnRollback': 123,
'ProgressDetailsOnRollback': {
'LivePoolProgress': {
'PercentageComplete': 123,
'InstancesToUpdate': 123
},
'WarmPoolProgress': {
'PercentageComplete': 123,
'InstancesToUpdate': 123
}
}
}
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
InstanceRefreshes (list) --
The instance refreshes for the specified group, sorted by creation timestamp in descending order.
(dict) --
Describes an instance refresh for an Auto Scaling group.
InstanceRefreshId (string) --
The instance refresh ID.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Status (string) --
The current status for the instance refresh operation:
Pending - The request was created, but the instance refresh has not started.
InProgress - An instance refresh is in progress.
Successful - An instance refresh completed successfully.
Failed - An instance refresh failed to complete. You can troubleshoot using the status reason and the scaling activities.
Cancelling - An ongoing instance refresh is being cancelled.
Cancelled - The instance refresh is cancelled.
RollbackInProgress - An instance refresh is being rolled back.
RollbackFailed - The rollback failed to complete. You can troubleshoot using the status reason and the scaling activities.
RollbackSuccessful - The rollback completed successfully.
StatusReason (string) --
The explanation for the specific status assigned to this operation.
StartTime (datetime) --
The date and time at which the instance refresh began.
EndTime (datetime) --
The date and time at which the instance refresh ended.
PercentageComplete (integer) --
The percentage of the instance refresh that is complete. For each instance replacement, Amazon EC2 Auto Scaling tracks the instance's health status and warm-up time. When the instance's health status changes to healthy and the specified warm-up time passes, the instance is considered updated and is added to the percentage complete.
Note
PercentageComplete
does not include instances that are replaced during a rollback. This value gradually goes back down to zero during a rollback.
InstancesToUpdate (integer) --
The number of instances remaining to update before the instance refresh is complete.
Note
If you roll back the instance refresh, InstancesToUpdate
shows you the number of instances that were not yet updated by the instance refresh. Therefore, these instances don't need to be replaced as part of the rollback.
ProgressDetails (dict) --
Additional progress details for an Auto Scaling group that has a warm pool.
LivePoolProgress (dict) --
Reports progress on replacing instances that are in the Auto Scaling group.
PercentageComplete (integer) --
The percentage of instances in the Auto Scaling group that have been replaced. For each instance replacement, Amazon EC2 Auto Scaling tracks the instance's health status and warm-up time. When the instance's health status changes to healthy and the specified warm-up time passes, the instance is considered updated and is added to the percentage complete.
InstancesToUpdate (integer) --
The number of instances remaining to update.
WarmPoolProgress (dict) --
Reports progress on replacing instances that are in the warm pool.
PercentageComplete (integer) --
The percentage of instances in the warm pool that have been replaced. For each instance replacement, Amazon EC2 Auto Scaling tracks the instance's health status and warm-up time. When the instance's health status changes to healthy and the specified warm-up time passes, the instance is considered updated and is added to the percentage complete.
InstancesToUpdate (integer) --
The number of instances remaining to update.
Preferences (dict) --
Describes the preferences for an instance refresh.
MinHealthyPercentage (integer) --
The amount of capacity in the Auto Scaling group that must pass your group's health checks to allow the operation to continue. The value is expressed as a percentage of the desired capacity of the Auto Scaling group (rounded up to the nearest integer). The default is 90
.
Setting the minimum healthy percentage to 100 percent limits the rate of replacement to one instance at a time. In contrast, setting it to 0 percent has the effect of replacing all instances at the same time.
InstanceWarmup (integer) --
A time period, in seconds, during which an instance refresh waits before moving on to replacing the next instance after a new instance enters the InService
state.
This property is not required for normal usage. Instead, use the DefaultInstanceWarmup
property of the Auto Scaling group. The InstanceWarmup
and DefaultInstanceWarmup
properties work the same way. Only specify this property if you must override the DefaultInstanceWarmup
property.
If you do not specify this property, the instance warmup by default is the value of the DefaultInstanceWarmup
property, if defined (which is recommended in all cases), or the HealthCheckGracePeriod
property otherwise.
CheckpointPercentages (list) --
(Optional) Threshold values for each checkpoint in ascending order. Each number must be unique. To replace all instances in the Auto Scaling group, the last number in the array must be 100
.
For usage examples, see Adding checkpoints to an instance refresh in the Amazon EC2 Auto Scaling User Guide .
CheckpointDelay (integer) --
(Optional) The amount of time, in seconds, to wait after a checkpoint before continuing. This property is optional, but if you specify a value for it, you must also specify a value for CheckpointPercentages
. If you specify a value for CheckpointPercentages
and not for CheckpointDelay
, the CheckpointDelay
defaults to 3600
(1 hour).
SkipMatching (boolean) --
(Optional) Indicates whether skip matching is enabled. If enabled ( true
), then Amazon EC2 Auto Scaling skips replacing instances that match the desired configuration. If no desired configuration is specified, then it skips replacing instances that have the same launch template and instance types that the Auto Scaling group was using before the start of the instance refresh. The default is false
.
For more information, see Use an instance refresh with skip matching in the Amazon EC2 Auto Scaling User Guide .
AutoRollback (boolean) --
(Optional) Indicates whether to roll back the Auto Scaling group to its previous configuration if the instance refresh fails. The default is false
.
A rollback is not supported in the following situations:
There is no desired configuration specified for the instance refresh.
The Auto Scaling group has a launch template that uses an Amazon Web Services Systems Manager parameter instead of an AMI ID for the ImageId property.
The Auto Scaling group uses the launch template's $Latest or $Default version.
ScaleInProtectedInstances (string) --
Choose the behavior that you want Amazon EC2 Auto Scaling to use if instances protected from scale in are found.
The following lists the valid values:
Refresh
Amazon EC2 Auto Scaling replaces instances that are protected from scale in.
Ignore
Amazon EC2 Auto Scaling ignores instances that are protected from scale in and continues to replace instances that are not protected.
Wait (default)
Amazon EC2 Auto Scaling waits one hour for you to remove scale-in protection. Otherwise, the instance refresh will fail.
StandbyInstances (string) --
Choose the behavior that you want Amazon EC2 Auto Scaling to use if instances in Standby
state are found.
The following lists the valid values:
Terminate
Amazon EC2 Auto Scaling terminates instances that are in Standby
.
Ignore
Amazon EC2 Auto Scaling ignores instances that are in Standby
and continues to replace instances that are in the InService
state.
Wait (default)
Amazon EC2 Auto Scaling waits one hour for you to return the instances to service. Otherwise, the instance refresh will fail.
DesiredConfiguration (dict) --
Describes the desired configuration for the instance refresh.
LaunchTemplate (dict) --
Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling uses to launch Amazon EC2 instances. For more information about launch templates, see Launch templates in the Amazon EC2 Auto Scaling User Guide .
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
MixedInstancesPolicy (dict) --
Use this structure to launch multiple instance types and On-Demand Instances and Spot Instances within a single Auto Scaling group.
A mixed instances policy contains information that Amazon EC2 Auto Scaling can use to launch instances and help optimize your costs. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide .
LaunchTemplate (dict) --
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
LaunchTemplateSpecification (dict) --
The launch template.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
Overrides (list) --
Any properties that you specify override the same properties in the launch template.
(dict) --
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
InstanceType (string) --
The instance type, such as m3.xlarge
. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon Elastic Compute Cloud User Guide .
You can specify up to 40 instance types per Auto Scaling group.
WeightedCapacity (string) --
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a WeightedCapacity
of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configuring instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide . Value must be in the range of 1–999.
If you specify a value for WeightedCapacity
for one instance type, you must specify a value for WeightedCapacity
for all of them.
Warning
Every Auto Scaling group has three size parameters ( DesiredCapacity
, MaxSize
, and MinSize
). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
LaunchTemplateSpecification (dict) --
Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the LaunchTemplate
definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide .
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the LaunchTemplate
definition count towards this limit.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
InstanceRequirements (dict) --
The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template.
Note
If you specify InstanceRequirements
, you can't specify InstanceType
.
VCpuCount (dict) --
The minimum and maximum number of vCPUs for an instance type.
Min (integer) --
The minimum number of vCPUs.
Max (integer) --
The maximum number of vCPUs.
MemoryMiB (dict) --
The minimum and maximum instance memory size for an instance type, in MiB.
Min (integer) --
The memory minimum in MiB.
Max (integer) --
The memory maximum in MiB.
CpuManufacturers (list) --
Lists which specific CPU manufacturers to include.
Valid values: intel | amd | amazon-web-services
Note
Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
MemoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
Min (float) --
The memory minimum in GiB.
Max (float) --
The memory maximum in GiB.
ExcludedInstanceTypes (list) --
The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (*), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types.
Note
If you specify ExcludedInstanceTypes
, you can't specify AllowedInstanceTypes
.
Default: No excluded instance types
InstanceGenerations (list) --
Indicates whether current or previous generation instance types are included.
For current generation instance types, specify current. The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide for Linux Instances.
For previous generation instance types, specify previous.
Default: Any current or previous generation
SpotMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999.
If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 100
OnDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999.
If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 20
BareMetal (string) --
Indicates whether bare metal instance types are included, excluded, or required.
Default: excluded
BurstablePerformance (string) --
Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide for Linux Instances .
Default: excluded
RequireHibernateSupport (boolean) --
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: false
NetworkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
Min (integer) --
The minimum number of network interfaces.
Max (integer) --
The maximum number of network interfaces.
LocalStorage (string) --
Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide for Linux Instances .
Default: included
LocalStorageTypes (list) --
Indicates the type of local storage that is required.
Valid values: hdd | ssd
Default: Any local storage type
TotalLocalStorageGB (dict) --
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
Min (float) --
The storage minimum in GB.
Max (float) --
The storage maximum in GB.
BaselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide for Linux Instances .
Default: No minimum or maximum limits
Min (integer) --
The minimum value in Mbps.
Max (integer) --
The maximum value in Mbps.
AcceleratorTypes (list) --
Lists the accelerator types that must be on an instance type.
Valid values: gpu | fpga | inference
Default: Any accelerator type
AcceleratorCount (dict) --
The minimum and maximum number of accelerators (GPUs, FPGAs, or Amazon Web Services Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set Max to 0.
Default: No minimum or maximum limits
Min (integer) --
The minimum value.
Max (integer) --
The maximum value.
AcceleratorManufacturers (list) --
Indicates whether instance types must have accelerators by specific manufacturers.
Valid values: nvidia | amd | amazon-web-services | xilinx
Default: Any manufacturer
AcceleratorNames (list) --
Lists the accelerators that must be on an instance type.
Valid values: a100 | v100 | k80 | t4 | m60 | radeon-pro-v520 | vu9p
Default: Any accelerator
AcceleratorTotalMemoryMiB (dict) --
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
Min (integer) --
The memory minimum in MiB.
Max (integer) --
The memory maximum in MiB.
NetworkBandwidthGbps (dict) --
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
Min (float) --
The minimum amount of network bandwidth, in gigabits per second (Gbps).
Max (float) --
The maximum amount of network bandwidth, in gigabits per second (Gbps).
AllowedInstanceTypes (list) --
The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wild cards, represented by an asterisk (*), to allow an instance type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types.
Note
If you specify AllowedInstanceTypes
, you can't specify ExcludedInstanceTypes
.
Default: All instance types
InstancesDistribution (dict) --
The instances distribution.
OnDemandAllocationStrategy (string) --
The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
lowest-price
Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify InstanceRequirements.
prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify InstanceRequirements and cannot be used for groups that do.
OnDemandBaseCapacity (integer) --
The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales.
This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0
OnDemandPercentageAboveBaseCapacity (integer) --
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond OnDemandBaseCapacity
. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100
SpotAllocationStrategy (string) --
The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
capacity-optimized
Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use capacity-optimized-prioritized
.
capacity-optimized-prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to prioritized
, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify InstanceRequirements.
lowest-price
Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the SpotInstancePools
property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
price-capacity-optimized (recommended)
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
SpotInstancePools (integer) --
The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the SpotAllocationStrategy
is lowest-price
. Value must be in the range of 1–20.
Default: 2
SpotMaxPrice (string) --
The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value.
Warning
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
RollbackDetails (dict) --
The rollback details.
RollbackReason (string) --
The reason for this instance refresh rollback (for example, whether a manual or automatic rollback was initiated).
RollbackStartTime (datetime) --
The date and time at which the rollback began.
PercentageCompleteOnRollback (integer) --
Indicates the value of PercentageComplete
at the time the rollback started.
InstancesToUpdateOnRollback (integer) --
Indicates the value of InstancesToUpdate
at the time the rollback started.
ProgressDetailsOnRollback (dict) --
Reports progress on replacing instances in an Auto Scaling group that has a warm pool. This includes separate details for instances in the warm pool and instances in the Auto Scaling group (the live pool).
LivePoolProgress (dict) --
Reports progress on replacing instances that are in the Auto Scaling group.
PercentageComplete (integer) --
The percentage of instances in the Auto Scaling group that have been replaced. For each instance replacement, Amazon EC2 Auto Scaling tracks the instance's health status and warm-up time. When the instance's health status changes to healthy and the specified warm-up time passes, the instance is considered updated and is added to the percentage complete.
InstancesToUpdate (integer) --
The number of instances remaining to update.
WarmPoolProgress (dict) --
Reports progress on replacing instances that are in the warm pool.
PercentageComplete (integer) --
The percentage of instances in the warm pool that have been replaced. For each instance replacement, Amazon EC2 Auto Scaling tracks the instance's health status and warm-up time. When the instance's health status changes to healthy and the specified warm-up time passes, the instance is considered updated and is added to the percentage complete.
InstancesToUpdate (integer) --
The number of instances remaining to update.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
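The Preferences and DesiredConfiguration structures described above mirror the parameters accepted by start_instance_refresh. The following is a minimal, hedged sketch of such a call; the group name, launch template name, and the specific preference values are placeholder assumptions, not recommendations.
response = client.start_instance_refresh(
    AutoScalingGroupName='my-auto-scaling-group',
    DesiredConfiguration={
        'LaunchTemplate': {
            'LaunchTemplateName': 'my-launch-template',  # placeholder template name
            'Version': '1',  # pinned version; $Latest/$Default cannot be rolled back
        },
    },
    Preferences={
        'MinHealthyPercentage': 90,              # keep 90% of desired capacity healthy
        'CheckpointPercentages': [25, 50, 100],  # pause at 25%, 50%, and 100%
        'CheckpointDelay': 3600,                 # wait one hour at each checkpoint
        'SkipMatching': True,                    # skip instances already on the desired configuration
        'AutoRollback': True,                    # roll back automatically if the refresh fails
    },
)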
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the instance refreshes for the specified Auto Scaling group.
response = client.describe_instance_refreshes(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'InstanceRefreshes': [
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'InstanceRefreshId': '08b91cf7-8fa6-48af-b6a6-d227f40f1b9b',
'InstancesToUpdate': 5,
'PercentageComplete': 0,
'StartTime': datetime(2020, 6, 2, 18, 11, 27, 1, 154, 0),
'Status': 'InProgress',
},
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'EndTime': datetime(2020, 6, 2, 16, 53, 37, 1, 154, 0),
'InstanceRefreshId': 'dd7728d0-5bc4-4575-96a3-1b2c52bf8bb1',
'InstancesToUpdate': 0,
'PercentageComplete': 100,
'StartTime': datetime(2020, 6, 2, 16, 43, 19, 1, 154, 0),
'Status': 'Successful',
},
],
'ResponseMetadata': {
'...': '...',
},
}
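As a follow-up sketch (not part of the reference), the status values listed above can be polled until the refresh reaches a terminal state; the refresh ID is taken from the example output and the sleep interval is an arbitrary assumption.
import time

TERMINAL_STATUSES = {'Successful', 'Failed', 'Cancelled', 'RollbackSuccessful', 'RollbackFailed'}

while True:
    response = client.describe_instance_refreshes(
        AutoScalingGroupName='my-auto-scaling-group',
        InstanceRefreshIds=[
            '08b91cf7-8fa6-48af-b6a6-d227f40f1b9b',
        ],
    )
    refresh = response['InstanceRefreshes'][0]
    print(refresh['Status'], refresh.get('PercentageComplete'))
    if refresh['Status'] in TERMINAL_STATUSES:
        break
    time.sleep(30)  # poll roughly every 30 seconds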
describe_launch_configurations
(**kwargs)¶Gets information about the launch configurations in the account and Region.
See also: AWS API Documentation
Request Syntax
response = client.describe_launch_configurations(
LaunchConfigurationNames=[
'string',
],
NextToken='string',
MaxRecords=123
)
The launch configuration names. If you omit this property, all launch configurations are described.
Array Members: Maximum number of 50 items.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
Return type: dict
Response Syntax
{
'LaunchConfigurations': [
{
'LaunchConfigurationName': 'string',
'LaunchConfigurationARN': 'string',
'ImageId': 'string',
'KeyName': 'string',
'SecurityGroups': [
'string',
],
'ClassicLinkVPCId': 'string',
'ClassicLinkVPCSecurityGroups': [
'string',
],
'UserData': 'string',
'InstanceType': 'string',
'KernelId': 'string',
'RamdiskId': 'string',
'BlockDeviceMappings': [
{
'VirtualName': 'string',
'DeviceName': 'string',
'Ebs': {
'SnapshotId': 'string',
'VolumeSize': 123,
'VolumeType': 'string',
'DeleteOnTermination': True|False,
'Iops': 123,
'Encrypted': True|False,
'Throughput': 123
},
'NoDevice': True|False
},
],
'InstanceMonitoring': {
'Enabled': True|False
},
'SpotPrice': 'string',
'IamInstanceProfile': 'string',
'CreatedTime': datetime(2015, 1, 1),
'EbsOptimized': True|False,
'AssociatePublicIpAddress': True|False,
'PlacementTenancy': 'string',
'MetadataOptions': {
'HttpTokens': 'optional'|'required',
'HttpPutResponseHopLimit': 123,
'HttpEndpoint': 'disabled'|'enabled'
}
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
LaunchConfigurations (list) --
The launch configurations.
(dict) --
Describes a launch configuration.
LaunchConfigurationName (string) --
The name of the launch configuration.
LaunchConfigurationARN (string) --
The Amazon Resource Name (ARN) of the launch configuration.
ImageId (string) --
The ID of the Amazon Machine Image (AMI) to use to launch your EC2 instances. For more information, see Find a Linux AMI in the Amazon EC2 User Guide for Linux Instances .
KeyName (string) --
The name of the key pair.
For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances .
SecurityGroups (list) --
A list that contains the security groups to assign to the instances in the Auto Scaling group. For more information, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide .
ClassicLinkVPCId (string) --
Available for backward compatibility.
ClassicLinkVPCSecurityGroups (list) --
Available for backward compatibility.
UserData (string) --
The user data to make available to the launched EC2 instances. For more information, see Instance metadata and user data (Linux) and Instance metadata and user data (Windows). If you are using a command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text. User data is limited to 16 KB.
InstanceType (string) --
The instance type for the instances. For information about available instance types, see Available instance types in the Amazon EC2 User Guide for Linux Instances .
KernelId (string) --
The ID of the kernel associated with the AMI.
RamdiskId (string) --
The ID of the RAM disk associated with the AMI.
BlockDeviceMappings (list) --
The block device mapping entries that define the block devices to attach to the instances at launch. By default, the block devices specified in the block device mapping for the AMI are used. For more information, see Block Device Mapping in the Amazon EC2 User Guide for Linux Instances .
(dict) --
Describes a block device mapping.
VirtualName (string) --
The name of the instance store volume (virtual device) to attach to an instance at launch. The name must be in the form ephemeralX where X is a number starting from zero (0), for example, ephemeral0.
DeviceName (string) --
The device name assigned to the volume (for example, /dev/sdh
or xvdh
). For more information, see Device naming on Linux instances in the Amazon EC2 User Guide for Linux Instances .
Note
To define a block device mapping, set the device name and exactly one of the following properties: Ebs
, NoDevice
, or VirtualName
.
Ebs (dict) --
Information to attach an EBS volume to an instance at launch.
SnapshotId (string) --
The snapshot ID of the volume to use.
You must specify either a VolumeSize
or a SnapshotId
.
VolumeSize (integer) --
The volume size, in GiBs. The following are the supported volume sizes for each volume type:
gp2 and gp3: 1-16,384
io1: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
You must specify either a SnapshotId or a VolumeSize. If you specify both SnapshotId and VolumeSize, the volume size must be equal or greater than the size of the snapshot.
VolumeType (string) --
The volume type. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide for Linux Instances .
Valid values: standard | io1 | gp2 | st1 | sc1 | gp3
DeleteOnTermination (boolean) --
Indicates whether the volume is deleted on instance termination. For Amazon EC2 Auto Scaling, the default value is true
.
Iops (integer) --
The number of input/output (I/O) operations per second (IOPS) to provision for the volume. For gp3 and io1 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type:
gp3: 3,000-16,000 IOPS
io1: 100-64,000 IOPS
For io1 volumes, we guarantee 64,000 IOPS only for Instances built on the Nitro System. Other instance families guarantee performance up to 32,000 IOPS.
Iops is supported when the volume type is gp3 or io1 and required only when the volume type is io1. (Not used with standard, gp2, st1, or sc1 volumes.)
Encrypted (boolean) --
Specifies whether the volume should be encrypted. Encrypted EBS volumes can only be attached to instances that support Amazon EBS encryption. For more information, see Supported instance types. If your AMI uses encrypted volumes, you can also only launch it on supported instance types.
Note
If you are creating a volume from a snapshot, you cannot create an unencrypted volume from an encrypted snapshot. Also, you cannot specify a KMS key ID when using a launch configuration.
If you enable encryption by default, the EBS volumes that you create are always encrypted, either using the Amazon Web Services managed KMS key or a customer-managed KMS key, regardless of whether the snapshot was encrypted.
For more information, see Use Amazon Web Services KMS keys to encrypt Amazon EBS volumes in the Amazon EC2 Auto Scaling User Guide .
Throughput (integer) --
The throughput (MiBps) to provision for a gp3
volume.
NoDevice (boolean) --
Setting this value to true
prevents a volume that is included in the block device mapping of the AMI from being mapped to the specified device name at launch.
If NoDevice
is true
for the root device, instances might fail the EC2 health check. In that case, Amazon EC2 Auto Scaling launches replacement instances.
InstanceMonitoring (dict) --
Controls whether instances in this group are launched with detailed ( true
) or basic ( false
) monitoring.
For more information, see Configure Monitoring for Auto Scaling Instances in the Amazon EC2 Auto Scaling User Guide .
Enabled (boolean) --
If true
, detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
SpotPrice (string) --
The maximum hourly price to be paid for any Spot Instance launched to fulfill the request. Spot Instances are launched when the price you specify exceeds the current Spot price. For more information, see Requesting Spot Instances in the Amazon EC2 Auto Scaling User Guide .
IamInstanceProfile (string) --
The name or the Amazon Resource Name (ARN) of the instance profile associated with the IAM role for the instance. The instance profile contains the IAM role. For more information, see IAM role for applications that run on Amazon EC2 instances in the Amazon EC2 Auto Scaling User Guide .
CreatedTime (datetime) --
The creation date and time for the launch configuration.
EbsOptimized (boolean) --
Specifies whether the launch configuration is optimized for EBS I/O ( true
) or not ( false
). For more information, see Amazon EBS-Optimized Instances in the Amazon EC2 User Guide for Linux Instances .
AssociatePublicIpAddress (boolean) --
Specifies whether to assign a public IPv4 address to the group's instances. If the instance is launched into a default subnet, the default is to assign a public IPv4 address, unless you disabled the option to assign a public IPv4 address on the subnet. If the instance is launched into a nondefault subnet, the default is not to assign a public IPv4 address, unless you enabled the option to assign a public IPv4 address on the subnet. For more information, see Launching Auto Scaling instances in a VPC in the Amazon EC2 Auto Scaling User Guide .
PlacementTenancy (string) --
The tenancy of the instance, either default
or dedicated
. An instance with dedicated
tenancy runs on isolated, single-tenant hardware and can only be launched into a VPC.
For more information, see Configuring instance tenancy with Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
MetadataOptions (dict) --
The metadata options for the instances. For more information, see Configuring the Instance Metadata Options in the Amazon EC2 Auto Scaling User Guide .
HttpTokens (string) --
The state of token usage for your instance metadata requests. If the parameter is not specified in the request, the default state is optional
.
If the state is optional
, you can choose to retrieve instance metadata with or without a signed token header on your request. If you retrieve the IAM role credentials without a token, the version 1.0 role credentials are returned. If you retrieve the IAM role credentials using a valid signed token, the version 2.0 role credentials are returned.
If the state is required
, you must send a signed token header with any instance metadata retrieval requests. In this state, retrieving the IAM role credentials always returns the version 2.0 credentials; the version 1.0 credentials are not available.
HttpPutResponseHopLimit (integer) --
The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel.
Default: 1
HttpEndpoint (string) --
This parameter enables or disables the HTTP metadata endpoint on your instances. If the parameter is not specified, the default state is enabled
.
Note
If you specify a value of disabled
, you will not be able to access your instance metadata.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the specified launch configuration.
response = client.describe_launch_configurations(
LaunchConfigurationNames=[
'my-launch-config',
],
)
print(response)
Expected Output:
{
'LaunchConfigurations': [
{
'AssociatePublicIpAddress': True,
'BlockDeviceMappings': [
],
'CreatedTime': datetime(2014, 5, 7, 17, 39, 28, 2, 127, 0),
'EbsOptimized': False,
'ImageId': 'ami-043a5034',
'InstanceMonitoring': {
'Enabled': True,
},
'InstanceType': 't1.micro',
'LaunchConfigurationARN': 'arn:aws:autoscaling:us-west-2:123456789012:launchConfiguration:98d3b196-4cf9-4e88-8ca1-8547c24ced8b:launchConfigurationName/my-launch-config',
'LaunchConfigurationName': 'my-launch-config',
'SecurityGroups': [
'sg-67ef0308',
],
},
],
'ResponseMetadata': {
'...': '...',
},
}
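Because the UserData field described above is returned base64-encoded, the following hedged sketch shows one way to decode it from the response; it assumes the launch configuration name from the example.
import base64

response = client.describe_launch_configurations(
    LaunchConfigurationNames=[
        'my-launch-config',
    ],
)
for config in response['LaunchConfigurations']:
    user_data = config.get('UserData')
    if user_data:
        # UserData is returned as base64-encoded text
        print(config['LaunchConfigurationName'])
        print(base64.b64decode(user_data).decode('utf-8'))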
describe_lifecycle_hook_types
()¶Describes the available types of lifecycle hooks.
The following hook types are supported:
autoscaling:EC2_INSTANCE_LAUNCHING
autoscaling:EC2_INSTANCE_TERMINATING
See also: AWS API Documentation
Request Syntax
response = client.describe_lifecycle_hook_types()
Response Syntax
{
'LifecycleHookTypes': [
'string',
]
}
Response Structure
The lifecycle hook types.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the available lifecycle hook types.
response = client.describe_lifecycle_hook_types(
)
print(response)
Expected Output:
{
'LifecycleHookTypes': [
'autoscaling:EC2_INSTANCE_LAUNCHING',
'autoscaling:EC2_INSTANCE_TERMINATING',
],
'ResponseMetadata': {
'...': '...',
},
}
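As a hedged sketch of how these hook types are used, the call below creates a launch lifecycle hook with put_lifecycle_hook; the hook name, group name, and timeout are placeholder assumptions.
response = client.put_lifecycle_hook(
    AutoScalingGroupName='my-auto-scaling-group',
    LifecycleHookName='my-launch-hook',
    LifecycleTransition='autoscaling:EC2_INSTANCE_LAUNCHING',
    HeartbeatTimeout=300,       # seconds to wait before DefaultResult applies
    DefaultResult='CONTINUE',   # proceed with the launch if the hook times out
)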
describe_lifecycle_hooks
(**kwargs)¶Gets information about the lifecycle hooks for the specified Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.describe_lifecycle_hooks(
AutoScalingGroupName='string',
LifecycleHookNames=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
The names of one or more lifecycle hooks. If you omit this property, all lifecycle hooks are described.
dict
Response Syntax
{
'LifecycleHooks': [
{
'LifecycleHookName': 'string',
'AutoScalingGroupName': 'string',
'LifecycleTransition': 'string',
'NotificationTargetARN': 'string',
'RoleARN': 'string',
'NotificationMetadata': 'string',
'HeartbeatTimeout': 123,
'GlobalTimeout': 123,
'DefaultResult': 'string'
},
]
}
Response Structure
(dict) --
LifecycleHooks (list) --
The lifecycle hooks for the specified group.
(dict) --
Describes a lifecycle hook. A lifecycle hook lets you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs.
LifecycleHookName (string) --
The name of the lifecycle hook.
AutoScalingGroupName (string) --
The name of the Auto Scaling group for the lifecycle hook.
LifecycleTransition (string) --
The lifecycle transition.
Valid values: autoscaling:EC2_INSTANCE_LAUNCHING
| autoscaling:EC2_INSTANCE_TERMINATING
NotificationTargetARN (string) --
The ARN of the target that Amazon EC2 Auto Scaling sends notifications to when an instance is in a wait state for the lifecycle hook.
RoleARN (string) --
The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target (an Amazon SNS topic or an Amazon SQS queue).
NotificationMetadata (string) --
Additional information that is included any time Amazon EC2 Auto Scaling sends a message to the notification target.
HeartbeatTimeout (integer) --
The maximum time, in seconds, that can elapse before the lifecycle hook times out. If the lifecycle hook times out, Amazon EC2 Auto Scaling performs the action that you specified in the DefaultResult
property.
GlobalTimeout (integer) --
The maximum time, in seconds, that an instance can remain in a wait state. The maximum is 172800 seconds (48 hours) or 100 times HeartbeatTimeout
, whichever is smaller.
DefaultResult (string) --
The action the Auto Scaling group takes when the lifecycle hook timeout elapses or if an unexpected failure occurs.
Valid values: CONTINUE
| ABANDON
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the lifecycle hooks for the specified Auto Scaling group.
response = client.describe_lifecycle_hooks(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'LifecycleHooks': [
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'DefaultResult': 'ABANDON',
'GlobalTimeout': 172800,
'HeartbeatTimeout': 3600,
'LifecycleHookName': 'my-lifecycle-hook',
'LifecycleTransition': 'autoscaling:EC2_INSTANCE_LAUNCHING',
'NotificationTargetARN': 'arn:aws:sns:us-west-2:123456789012:my-sns-topic',
'RoleARN': 'arn:aws:iam::123456789012:role/my-auto-scaling-role',
},
],
'ResponseMetadata': {
'...': '...',
},
}
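Building on the hook shown above, the following hedged sketch extends the wait period and then completes the lifecycle action; the instance ID is a placeholder.
# Keep the instance in the wait state while custom setup work is still running.
client.record_lifecycle_action_heartbeat(
    AutoScalingGroupName='my-auto-scaling-group',
    LifecycleHookName='my-lifecycle-hook',
    InstanceId='i-1234567890abcdef0',
)

# Signal that the custom action finished and the launch can continue.
client.complete_lifecycle_action(
    AutoScalingGroupName='my-auto-scaling-group',
    LifecycleHookName='my-lifecycle-hook',
    LifecycleActionResult='CONTINUE',
    InstanceId='i-1234567890abcdef0',
)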
describe_load_balancer_target_groups
(**kwargs)¶Gets information about the Elastic Load Balancing target groups for the specified Auto Scaling group.
To determine the attachment status of the target group, use the State element in the response. When you attach a target group to an Auto Scaling group, the initial State value is Adding. The state transitions to Added after all Auto Scaling instances are registered with the target group. If Elastic Load Balancing health checks are enabled for the Auto Scaling group, the state transitions to InService after at least one Auto Scaling instance passes the health check. When the target group is in the InService state, Amazon EC2 Auto Scaling can terminate and replace any instances that are reported as unhealthy. If no registered instances pass the health checks, the target group doesn't enter the InService state.
Target groups also have an InService state if you attach them in the CreateAutoScalingGroup API call. If your target group state is InService, but it is not working properly, check the scaling activities by calling DescribeScalingActivities and take any corrective actions necessary.
For help with failed health checks, see Troubleshooting Amazon EC2 Auto Scaling: Health checks in the Amazon EC2 Auto Scaling User Guide . For more information, see Use Elastic Load Balancing to distribute traffic across the instances in your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
Note
You can use this operation to describe target groups that were attached by using AttachLoadBalancerTargetGroups, but not for target groups that were attached by using AttachTrafficSources.
See also: AWS API Documentation
Request Syntax
response = client.describe_load_balancer_target_groups(
AutoScalingGroupName='string',
NextToken='string',
MaxRecords=123
)
[REQUIRED]
The name of the Auto Scaling group.
The maximum number of items to return with this call. The default value is 100 and the maximum value is 100.
Return type: dict
Response Syntax
{
'LoadBalancerTargetGroups': [
{
'LoadBalancerTargetGroupARN': 'string',
'State': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
LoadBalancerTargetGroups (list) --
Information about the target groups.
(dict) --
Describes the state of a target group.
LoadBalancerTargetGroupARN (string) --
The Amazon Resource Name (ARN) of the target group.
State (string) --
The state of the target group.
Adding - The Auto Scaling instances are being registered with the target group.
Added - All Auto Scaling instances are registered with the target group.
InService - At least one Auto Scaling instance passed an ELB health check.
Removing - The Auto Scaling instances are being deregistered from the target group. If connection draining is enabled, Elastic Load Balancing waits for in-flight requests to complete before deregistering the instances.
Removed - All Auto Scaling instances are deregistered from the target group.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.InvalidNextToken
Examples
This example describes the target groups attached to the specified Auto Scaling group.
response = client.describe_load_balancer_target_groups(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'LoadBalancerTargetGroups': [
{
'LoadBalancerTargetGroupARN': 'arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067',
'State': 'Added',
},
],
'ResponseMetadata': {
'...': '...',
},
}
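For illustration only (not part of the API reference), the following minimal sketch polls this operation until every attached target group leaves the Adding state; the group name, the accepted states, and the 10-second polling interval are assumptions.
import time

import boto3

client = boto3.client('autoscaling')

def wait_for_target_group_state(group_name, done_states=('Added', 'InService')):
    # Poll describe_load_balancer_target_groups until every attached target
    # group reports one of the desired states.
    while True:
        response = client.describe_load_balancer_target_groups(
            AutoScalingGroupName=group_name,
        )
        states = [tg['State'] for tg in response['LoadBalancerTargetGroups']]
        if states and all(state in done_states for state in states):
            return states
        time.sleep(10)

print(wait_for_target_group_state('my-auto-scaling-group'))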
describe_load_balancers
(**kwargs)¶Gets information about the load balancers for the specified Auto Scaling group.
This operation describes only Classic Load Balancers. If you have Application Load Balancers, Network Load Balancers, or Gateway Load Balancers, use the DescribeLoadBalancerTargetGroups API instead.
To determine the attachment status of the load balancer, use the State element in the response. When you attach a load balancer to an Auto Scaling group, the initial State value is Adding. The state transitions to Added after all Auto Scaling instances are registered with the load balancer. If Elastic Load Balancing health checks are enabled for the Auto Scaling group, the state transitions to InService after at least one Auto Scaling instance passes the health check. When the load balancer is in the InService state, Amazon EC2 Auto Scaling can terminate and replace any instances that are reported as unhealthy. If no registered instances pass the health checks, the load balancer doesn't enter the InService state.
Load balancers also have an InService state if you attach them in the CreateAutoScalingGroup API call. If your load balancer state is InService, but it is not working properly, check the scaling activities by calling DescribeScalingActivities and take any corrective actions necessary.
For help with failed health checks, see Troubleshooting Amazon EC2 Auto Scaling: Health checks in the Amazon EC2 Auto Scaling User Guide. For more information, see Use Elastic Load Balancing to distribute traffic across the instances in your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
See also: AWS API Documentation
Request Syntax
response = client.describe_load_balancers(
AutoScalingGroupName='string',
NextToken='string',
MaxRecords=123
)
[REQUIRED]
The name of the Auto Scaling group.
The maximum number of items to return with this call. The default value is 100 and the maximum value is 100.
Return type: dict
Response Syntax
{
'LoadBalancers': [
{
'LoadBalancerName': 'string',
'State': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
LoadBalancers (list) --
The load balancers.
(dict) --
Describes the state of a Classic Load Balancer.
LoadBalancerName (string) --
The name of the load balancer.
State (string) --
One of the following load balancer states:
Adding - The Auto Scaling instances are being registered with the load balancer.
Added - All Auto Scaling instances are registered with the load balancer.
InService - At least one Auto Scaling instance passed an ELB health check.
Removing - The Auto Scaling instances are being deregistered from the load balancer. If connection draining is enabled, Elastic Load Balancing waits for in-flight requests to complete before deregistering the instances.
Removed - All Auto Scaling instances are deregistered from the load balancer.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.InvalidNextToken
Examples
This example describes the load balancers attached to the specified Auto Scaling group.
response = client.describe_load_balancers(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'LoadBalancers': [
{
'LoadBalancerName': 'my-load-balancer',
'State': 'Added',
},
],
'ResponseMetadata': {
'...': '...',
},
}
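As an illustrative sketch only, the loop below pages through results using the documented NextToken and MaxRecords parameters; the group name and page size are assumptions.
import boto3

client = boto3.client('autoscaling')

def list_classic_load_balancers(group_name):
    # Collect every Classic Load Balancer attached to the group by following
    # NextToken until the service stops returning one.
    load_balancers = []
    kwargs = {'AutoScalingGroupName': group_name, 'MaxRecords': 100}
    while True:
        response = client.describe_load_balancers(**kwargs)
        load_balancers.extend(response['LoadBalancers'])
        next_token = response.get('NextToken')
        if not next_token:
            return load_balancers
        kwargs['NextToken'] = next_token

print(list_classic_load_balancers('my-auto-scaling-group'))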
describe_metric_collection_types
()¶Describes the available CloudWatch metrics for Amazon EC2 Auto Scaling.
See also: AWS API Documentation
Request Syntax
response = client.describe_metric_collection_types()
Response Syntax
{
'Metrics': [
{
'Metric': 'string'
},
],
'Granularities': [
{
'Granularity': 'string'
},
]
}
Response Structure
The metrics.
Describes a metric.
One of the following metrics:
GroupMinSize
GroupMaxSize
GroupDesiredCapacity
GroupInServiceInstances
GroupPendingInstances
GroupStandbyInstances
GroupTerminatingInstances
GroupTotalInstances
GroupInServiceCapacity
GroupPendingCapacity
GroupStandbyCapacity
GroupTerminatingCapacity
GroupTotalCapacity
WarmPoolDesiredCapacity
WarmPoolWarmedCapacity
WarmPoolPendingCapacity
WarmPoolTerminatingCapacity
WarmPoolTotalCapacity
GroupAndWarmPoolDesiredCapacity
GroupAndWarmPoolTotalCapacity
The granularities for the metrics.
Describes a granularity of a metric.
The granularity. The only valid value is 1Minute.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the available metric collection types.
response = client.describe_metric_collection_types(
)
print(response)
Expected Output:
{
'Granularities': [
{
'Granularity': '1Minute',
},
],
'Metrics': [
{
'Metric': 'GroupMinSize',
},
{
'Metric': 'GroupMaxSize',
},
{
'Metric': 'GroupDesiredCapacity',
},
{
'Metric': 'GroupInServiceInstances',
},
{
'Metric': 'GroupPendingInstances',
},
{
'Metric': 'GroupTerminatingInstances',
},
{
'Metric': 'GroupStandbyInstances',
},
{
'Metric': 'GroupTotalInstances',
},
],
'ResponseMetadata': {
'...': '...',
},
}
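Purely as a sketch of how this output can be used (not prescribed by the reference), the snippet below enables every reported metric for a group at the reported granularity; the group name is an assumption.
import boto3

client = boto3.client('autoscaling')

# Look up the metric names and granularity that Amazon EC2 Auto Scaling supports.
types = client.describe_metric_collection_types()
metric_names = [m['Metric'] for m in types['Metrics']]
granularity = types['Granularities'][0]['Granularity']  # currently '1Minute'

# Turn on group metrics collection for the assumed Auto Scaling group.
client.enable_metrics_collection(
    AutoScalingGroupName='my-auto-scaling-group',
    Metrics=metric_names,
    Granularity=granularity,
)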
describe_notification_configurations
(**kwargs)¶Gets information about the Amazon SNS notifications that are configured for one or more Auto Scaling groups.
See also: AWS API Documentation
Request Syntax
response = client.describe_notification_configurations(
AutoScalingGroupNames=[
'string',
],
NextToken='string',
MaxRecords=123
)
The name of the Auto Scaling group.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
Return type: dict
Response Syntax
{
'NotificationConfigurations': [
{
'AutoScalingGroupName': 'string',
'TopicARN': 'string',
'NotificationType': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
NotificationConfigurations (list) --
The notification configurations.
(dict) --
Describes a notification.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
TopicARN (string) --
The Amazon Resource Name (ARN) of the Amazon SNS topic.
NotificationType (string) --
One of the following event notification types:
autoscaling:EC2_INSTANCE_LAUNCH
autoscaling:EC2_INSTANCE_LAUNCH_ERROR
autoscaling:EC2_INSTANCE_TERMINATE
autoscaling:EC2_INSTANCE_TERMINATE_ERROR
autoscaling:TEST_NOTIFICATION
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the notification configurations for the specified Auto Scaling group.
response = client.describe_notification_configurations(
AutoScalingGroupNames=[
'my-auto-scaling-group',
],
)
print(response)
Expected Output:
{
'NotificationConfigurations': [
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'NotificationType': 'autoscaling:TEST_NOTIFICATION',
'TopicARN': 'arn:aws:sns:us-west-2:123456789012:my-sns-topic-2',
},
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'NotificationType': 'autoscaling:TEST_NOTIFICATION',
'TopicARN': 'arn:aws:sns:us-west-2:123456789012:my-sns-topic',
},
],
'ResponseMetadata': {
'...': '...',
},
}
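A minimal illustrative sketch, assuming a describe_notification_configurations paginator is available through get_paginator; the group name is also an assumption.
import boto3

client = boto3.client('autoscaling')

# Iterate over every page of SNS notification configurations for the group.
paginator = client.get_paginator('describe_notification_configurations')
for page in paginator.paginate(AutoScalingGroupNames=['my-auto-scaling-group']):
    for config in page['NotificationConfigurations']:
        print(config['NotificationType'], config['TopicARN'])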
describe_policies
(**kwargs)¶Gets information about the scaling policies in the account and Region.
See also: AWS API Documentation
Request Syntax
response = client.describe_policies(
AutoScalingGroupName='string',
PolicyNames=[
'string',
],
PolicyTypes=[
'string',
],
NextToken='string',
MaxRecords=123
)
The names of one or more policies. If you omit this property, all policies are described. If a group name is provided, the results are limited to that group. If you specify an unknown policy name, it is ignored with no error.
Array Members: Maximum number of 50 items.
One or more policy types. The valid values are SimpleScaling, StepScaling, TargetTrackingScaling, and PredictiveScaling.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
Return type: dict
Response Syntax
{
'ScalingPolicies': [
{
'AutoScalingGroupName': 'string',
'PolicyName': 'string',
'PolicyARN': 'string',
'PolicyType': 'string',
'AdjustmentType': 'string',
'MinAdjustmentStep': 123,
'MinAdjustmentMagnitude': 123,
'ScalingAdjustment': 123,
'Cooldown': 123,
'StepAdjustments': [
{
'MetricIntervalLowerBound': 123.0,
'MetricIntervalUpperBound': 123.0,
'ScalingAdjustment': 123
},
],
'MetricAggregationType': 'string',
'EstimatedInstanceWarmup': 123,
'Alarms': [
{
'AlarmName': 'string',
'AlarmARN': 'string'
},
],
'TargetTrackingConfiguration': {
'PredefinedMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'CustomizedMetricSpecification': {
'MetricName': 'string',
'Namespace': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
],
'Statistic': 'Average'|'Minimum'|'Maximum'|'SampleCount'|'Sum',
'Unit': 'string',
'Metrics': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'TargetValue': 123.0,
'DisableScaleIn': True|False
},
'Enabled': True|False,
'PredictiveScalingConfiguration': {
'MetricSpecifications': [
{
'TargetValue': 123.0,
'PredefinedMetricPairSpecification': {
'PredefinedMetricType': 'ASGCPUUtilization'|'ASGNetworkIn'|'ASGNetworkOut'|'ALBRequestCount',
'ResourceLabel': 'string'
},
'PredefinedScalingMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'PredefinedLoadMetricSpecification': {
'PredefinedMetricType': 'ASGTotalCPUUtilization'|'ASGTotalNetworkIn'|'ASGTotalNetworkOut'|'ALBTargetGroupRequestCount',
'ResourceLabel': 'string'
},
'CustomizedScalingMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedLoadMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedCapacityMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
}
},
],
'Mode': 'ForecastAndScale'|'ForecastOnly',
'SchedulingBufferTime': 123,
'MaxCapacityBreachBehavior': 'HonorMaxCapacity'|'IncreaseMaxCapacity',
'MaxCapacityBuffer': 123
}
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
ScalingPolicies (list) --
The scaling policies.
(dict) --
Describes a scaling policy.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
PolicyName (string) --
The name of the scaling policy.
PolicyARN (string) --
The Amazon Resource Name (ARN) of the policy.
PolicyType (string) --
One of the following policy types:
TargetTrackingScaling
StepScaling
SimpleScaling (default)
PredictiveScaling
For more information, see Target tracking scaling policies and Step and simple scaling policies in the Amazon EC2 Auto Scaling User Guide .
AdjustmentType (string) --
Specifies how the scaling adjustment is interpreted (for example, an absolute number or a percentage). The valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.
MinAdjustmentStep (integer) --
Available for backward compatibility. Use MinAdjustmentMagnitude instead.
MinAdjustmentMagnitude (integer) --
The minimum value to scale by when the adjustment type is PercentChangeInCapacity.
ScalingAdjustment (integer) --
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity.
Cooldown (integer) --
The duration of the policy's cooldown period, in seconds.
StepAdjustments (list) --
A set of adjustments that enable you to scale based on the size of the alarm breach.
(dict) --
Describes information used to create a step adjustment for a step scaling policy.
For examples of step adjustments and the rules that apply to them, see Step adjustments in the Amazon EC2 Auto Scaling User Guide.
MetricIntervalLowerBound (float) --
The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
MetricIntervalUpperBound (float) --
The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.
The upper bound must be greater than the lower bound.
ScalingAdjustment (integer) --
The amount by which to scale. The adjustment is based on the value that you specified in the AdjustmentType property (either an absolute number or a percentage). A positive value adds to the current capacity and a negative number subtracts from the current capacity.
MetricAggregationType (string) --
The aggregation type for the CloudWatch metrics. The valid values are Minimum, Maximum, and Average.
EstimatedInstanceWarmup (integer) --
The estimated time, in seconds, until a newly launched instance can contribute to the CloudWatch metrics.
Alarms (list) --
The CloudWatch alarms related to the policy.
(dict) --
Describes an alarm.
AlarmName (string) --
The name of the alarm.
AlarmARN (string) --
The Amazon Resource Name (ARN) of the alarm.
TargetTrackingConfiguration (dict) --
A target tracking scaling policy.
PredefinedMetricSpecification (dict) --
A predefined metric. You must specify either a predefined metric or a customized metric.
PredefinedMetricType (string) --
The metric type. The following predefined metrics are available:
ASGAverageCPUUtilization - Average CPU utilization of the Auto Scaling group.
ASGAverageNetworkIn - Average number of bytes received on all network interfaces by the Auto Scaling group.
ASGAverageNetworkOut - Average number of bytes sent out on all network interfaces by the Auto Scaling group.
ALBRequestCountPerTarget - Average Application Load Balancer request count per target for your Auto Scaling group.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
where app/my-alb/778d41231b141a0f is the final portion of the load balancer ARN and targetgroup/my-alb-target-group/943f017f100becff is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
CustomizedMetricSpecification (dict) --
A customized metric. You must specify either a predefined metric or a customized metric.
MetricName (string) --
The name of the metric. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric.
Dimensions (list) --
The dimensions of the metric.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Statistic (string) --
The statistic of the metric.
Unit (string) --
The unit of the metric. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Metrics (list) --
The metrics to include in the target tracking scaling policy, as a metric data query. This can include both raw metric and metric math expressions.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all TargetTrackingMetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
Represents a specific metric.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metric for scaling is Average.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
TargetValue (float) --
The target value for the metric.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
DisableScaleIn (boolean) --
Indicates whether scaling in by the target tracking scaling policy is disabled. If scaling in is disabled, the target tracking scaling policy doesn't remove instances from the Auto Scaling group. Otherwise, the target tracking scaling policy can remove instances from the Auto Scaling group. The default is false.
Enabled (boolean) --
Indicates whether the policy is enabled (true) or disabled (false).
PredictiveScalingConfiguration (dict) --
A predictive scaling policy.
MetricSpecifications (list) --
This structure includes the metrics and target utilization to use for predictive scaling.
This is an array, but we currently only support a single metric specification. That is, you can specify a target value and a single metric pair, or a target value and one scaling metric and one load metric.
(dict) --
This structure specifies the metrics and target utilization settings for a predictive scaling policy.
You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric.
Example: You might specify ALBRequestCount as the value for the metric pair and 1000.0 as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group. In CloudWatch, the load and scaling metrics then correspond to the RequestCount and RequestCountPerTarget metrics, respectively.
For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.
TargetValue (float) --
Specifies the target utilization.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
PredefinedMetricPairSpecification (dict) --
The predefined metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
PredefinedMetricType (string) --
Indicates which metrics to use. There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization, the Auto Scaling group's total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the total and average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
where app/my-alb/778d41231b141a0f is the final portion of the load balancer ARN and targetgroup/my-alb-target-group/943f017f100becff is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredefinedScalingMetricSpecification (dict) --
The predefined scaling metric specification.
PredefinedMetricType (string) --
The metric type.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
where app/my-alb/778d41231b141a0f is the final portion of the load balancer ARN and targetgroup/my-alb-target-group/943f017f100becff is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredefinedLoadMetricSpecification (dict) --
The predefined load metric specification.
PredefinedMetricType (string) --
The metric type.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
where app/my-alb/778d41231b141a0f is the final portion of the load balancer ARN and targetgroup/my-alb-target-group/943f017f100becff is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
CustomizedScalingMetricSpecification (dict) --
The customized scaling metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a scaling metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
CustomizedLoadMetricSpecification (dict) --
The customized load metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a load metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
CustomizedCapacityMetricSpecification (dict) --
The customized capacity metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a capacity metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
Mode (string) --
The predictive scaling mode. Defaults to ForecastOnly if not specified.
SchedulingBufferTime (integer) --
The amount of time, in seconds, by which the instance launch time can be advanced. For example, the forecast says to add capacity at 10:00 AM, and you choose to pre-launch instances by 5 minutes. In that case, the instances will be launched at 9:55 AM. The intention is to give resources time to be provisioned. It can take a few minutes to launch an EC2 instance. The actual amount of time required depends on several factors, such as the size of the instance and whether there are startup scripts to complete.
The value must be less than the forecast interval duration of 3600 seconds (60 minutes). Defaults to 300 seconds if not specified.
MaxCapacityBreachBehavior (string) --
Defines the behavior that should be applied if the forecast capacity approaches or exceeds the maximum capacity of the Auto Scaling group. Defaults to HonorMaxCapacity if not specified.
The following are possible values:
HonorMaxCapacity - Amazon EC2 Auto Scaling cannot scale out capacity higher than the maximum capacity. The maximum capacity is enforced as a hard limit.
IncreaseMaxCapacity - Amazon EC2 Auto Scaling can scale out capacity higher than the maximum capacity when the forecast capacity is close to or exceeds the maximum capacity. The upper limit is determined by the forecasted capacity and the value for MaxCapacityBuffer.
MaxCapacityBuffer (integer) --
The size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity. The value is specified as a percentage relative to the forecast capacity. For example, if the buffer is 10, this means a 10 percent buffer, such that if the forecast capacity is 50, and the maximum capacity is 40, then the effective maximum capacity is 55.
If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed forecast capacity.
Required if the MaxCapacityBreachBehavior property is set to IncreaseMaxCapacity, and cannot be used otherwise.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example describes the policies for the specified Auto Scaling group.
response = client.describe_policies(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'ScalingPolicies': [
{
'AdjustmentType': 'ChangeInCapacity',
'Alarms': [
],
'AutoScalingGroupName': 'my-auto-scaling-group',
'PolicyARN': 'arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:2233f3d7-6290-403b-b632-93c553560106:autoScalingGroupName/my-auto-scaling-group:policyName/ScaleIn',
'PolicyName': 'ScaleIn',
'ScalingAdjustment': -1,
},
{
'AdjustmentType': 'PercentChangeInCapacity',
'Alarms': [
],
'AutoScalingGroupName': 'my-auto-scaling-group',
'Cooldown': 60,
'MinAdjustmentStep': 2,
'PolicyARN': 'arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:2b435159-cf77-4e89-8c0e-d63b497baad7:autoScalingGroupName/my-auto-scaling-group:policyName/ScalePercentChange',
'PolicyName': 'ScalePercentChange',
'ScalingAdjustment': 25,
},
],
'ResponseMetadata': {
'...': '...',
},
}
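As an illustrative sketch only, the call below narrows the results with the documented PolicyTypes parameter to list just the target tracking policies of an assumed group.
import boto3

client = boto3.client('autoscaling')

# Describe only the target tracking scaling policies attached to the group.
response = client.describe_policies(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyTypes=['TargetTrackingScaling'],
)
for policy in response['ScalingPolicies']:
    target = policy.get('TargetTrackingConfiguration', {}).get('TargetValue')
    print(policy['PolicyName'], target)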
describe_scaling_activities
(**kwargs)¶Gets information about the scaling activities in the account and Region.
When scaling events occur, you see a record of the scaling activity in the scaling activities. For more information, see Verifying a scaling activity for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
If the scaling event succeeds, the value of the StatusCode element in the response is Successful. If an attempt to launch instances failed, the StatusCode value is Failed or Cancelled and the StatusMessage element in the response indicates the cause of the failure. For help interpreting the StatusMessage, see Troubleshooting Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.
See also: AWS API Documentation
Request Syntax
response = client.describe_scaling_activities(
ActivityIds=[
'string',
],
AutoScalingGroupName='string',
IncludeDeletedGroups=True|False,
MaxRecords=123,
NextToken='string'
)
The activity IDs of the desired scaling activities. If you omit this property, all activities for the past six weeks are described. If unknown activities are requested, they are ignored with no error. If you specify an Auto Scaling group, the results are limited to that group.
Array Members: Maximum number of 50 IDs.
The maximum number of items to return with this call. The default value is 100 and the maximum value is 100.
Return type: dict
Response Syntax
{
'Activities': [
{
'ActivityId': 'string',
'AutoScalingGroupName': 'string',
'Description': 'string',
'Cause': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'StatusCode': 'PendingSpotBidPlacement'|'WaitingForSpotInstanceRequestId'|'WaitingForSpotInstanceId'|'WaitingForInstanceId'|'PreInService'|'InProgress'|'WaitingForELBConnectionDraining'|'MidLifecycleAction'|'WaitingForInstanceWarmup'|'Successful'|'Failed'|'Cancelled',
'StatusMessage': 'string',
'Progress': 123,
'Details': 'string',
'AutoScalingGroupState': 'string',
'AutoScalingGroupARN': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Activities (list) --
The scaling activities. Activities are sorted by start time. Activities still in progress are described first.
(dict) --
Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.
ActivityId (string) --
The ID of the activity.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Description (string) --
A friendly, more verbose description of the activity.
Cause (string) --
The reason the activity began.
StartTime (datetime) --
The start time of the activity.
EndTime (datetime) --
The end time of the activity.
StatusCode (string) --
The current status of the activity.
StatusMessage (string) --
A friendly, more verbose description of the activity status.
Progress (integer) --
A value between 0 and 100 that indicates the progress of the activity.
Details (string) --
The details about the activity.
AutoScalingGroupState (string) --
The state of the Auto Scaling group, which is either InService or Deleted.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the scaling activities for the specified Auto Scaling group.
response = client.describe_scaling_activities(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'Activities': [
{
'ActivityId': 'f9f2d65b-f1f2-43e7-b46d-d86756459699',
'AutoScalingGroupName': 'my-auto-scaling-group',
'Cause': 'At 2013-08-19T20:53:25Z a user request created an AutoScalingGroup changing the desired capacity from 0 to 1. At 2013-08-19T20:53:29Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.',
'Description': 'Launching a new EC2 instance: i-4ba0837f',
'Details': 'details',
'EndTime': datetime(2013, 8, 19, 20, 54, 2, 0, 231, 0),
'Progress': 100,
'StartTime': datetime(2013, 8, 19, 20, 53, 29, 0, 231, 0),
'StatusCode': 'Successful',
},
],
'ResponseMetadata': {
'...': '...',
},
}
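For illustration only (not part of the reference), a small sketch that surfaces the StatusMessage for any recent activity that did not succeed; the group name is an assumption.
import boto3

client = boto3.client('autoscaling')

# Report recent scaling activities that failed or were cancelled, along with
# the cause recorded in StatusMessage when the service provides one.
response = client.describe_scaling_activities(
    AutoScalingGroupName='my-auto-scaling-group',
    MaxRecords=100,
)
for activity in response['Activities']:
    if activity['StatusCode'] in ('Failed', 'Cancelled'):
        print(activity['ActivityId'], activity['StatusCode'],
              activity.get('StatusMessage', 'no status message'))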
describe_scaling_process_types
()¶Describes the scaling process types for use with the ResumeProcesses and SuspendProcesses APIs.
See also: AWS API Documentation
Request Syntax
response = client.describe_scaling_process_types()
Response Syntax
{
'Processes': [
{
'ProcessName': 'string'
},
]
}
Response Structure
The names of the process types.
Describes a process type.
For more information, see Scaling processes in the Amazon EC2 Auto Scaling User Guide .
One of the following processes:
Launch
Terminate
AddToLoadBalancer
AlarmNotification
AZRebalance
HealthCheck
InstanceRefresh
ReplaceUnhealthy
ScheduledActions
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the Auto Scaling process types.
response = client.describe_scaling_process_types(
)
print(response)
Expected Output:
{
'Processes': [
{
'ProcessName': 'AZRebalance',
},
{
'ProcessName': 'AddToLoadBalancer',
},
{
'ProcessName': 'AlarmNotification',
},
{
'ProcessName': 'HealthCheck',
},
{
'ProcessName': 'Launch',
},
{
'ProcessName': 'ReplaceUnhealthy',
},
{
'ProcessName': 'ScheduledActions',
},
{
'ProcessName': 'Terminate',
},
],
'ResponseMetadata': {
'...': '...',
},
}
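As a sketch only, the snippet below shows one way these process names can be fed to the SuspendProcesses and ResumeProcesses APIs; the group name and the choice of processes are assumptions.
import boto3

client = boto3.client('autoscaling')

# Temporarily suspend rebalancing and scheduled actions for the assumed group,
# then resume them once maintenance is finished.
client.suspend_processes(
    AutoScalingGroupName='my-auto-scaling-group',
    ScalingProcesses=['AZRebalance', 'ScheduledActions'],
)
# ... perform maintenance ...
client.resume_processes(
    AutoScalingGroupName='my-auto-scaling-group',
    ScalingProcesses=['AZRebalance', 'ScheduledActions'],
)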
describe_scheduled_actions
(**kwargs)¶Gets information about the scheduled actions that haven't run or that have not reached their end time.
To describe the scaling activities for scheduled actions that have already run, call the DescribeScalingActivities API.
See also: AWS API Documentation
Request Syntax
response = client.describe_scheduled_actions(
AutoScalingGroupName='string',
ScheduledActionNames=[
'string',
],
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
NextToken='string',
MaxRecords=123
)
The names of one or more scheduled actions. If you omit this property, all scheduled actions are described. If you specify an unknown scheduled action, it is ignored with no error.
Array Members: Maximum number of 50 actions.
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
Return type: dict
Response Syntax
{
'ScheduledUpdateGroupActions': [
{
'AutoScalingGroupName': 'string',
'ScheduledActionName': 'string',
'ScheduledActionARN': 'string',
'Time': datetime(2015, 1, 1),
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'Recurrence': 'string',
'MinSize': 123,
'MaxSize': 123,
'DesiredCapacity': 123,
'TimeZone': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
ScheduledUpdateGroupActions (list) --
The scheduled actions.
(dict) --
Describes a scheduled scaling action.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
ScheduledActionName (string) --
The name of the scheduled action.
ScheduledActionARN (string) --
The Amazon Resource Name (ARN) of the scheduled action.
Time (datetime) --
This property is no longer used.
StartTime (datetime) --
The date and time in UTC for this action to start. For example, "2019-06-01T00:00:00Z".
EndTime (datetime) --
The date and time in UTC for the recurring schedule to end. For example, "2019-06-01T00:00:00Z".
Recurrence (string) --
The recurring schedule for the action, in Unix cron syntax format.
When StartTime and EndTime are specified with Recurrence, they form the boundaries of when the recurring action starts and stops.
MinSize (integer) --
The minimum size of the Auto Scaling group.
MaxSize (integer) --
The maximum size of the Auto Scaling group.
DesiredCapacity (integer) --
The desired capacity is the initial capacity of the Auto Scaling group after the scheduled action runs and the capacity it attempts to maintain.
TimeZone (string) --
The time zone for the cron expression.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the scheduled actions for the specified Auto Scaling group.
response = client.describe_scheduled_actions(
AutoScalingGroupName='my-auto-scaling-group',
)
print(response)
Expected Output:
{
'ScheduledUpdateGroupActions': [
{
'AutoScalingGroupName': 'my-auto-scaling-group',
'DesiredCapacity': 4,
'MaxSize': 6,
'MinSize': 2,
'Recurrence': '30 0 1 12 0',
'ScheduledActionARN': 'arn:aws:autoscaling:us-west-2:123456789012:scheduledUpdateGroupAction:8e86b655-b2e6-4410-8f29-b4f094d6871c:autoScalingGroupName/my-auto-scaling-group:scheduledActionName/my-scheduled-action',
'ScheduledActionName': 'my-scheduled-action',
'StartTime': datetime(2016, 12, 1, 0, 30, 0, 3, 336, 0),
'Time': datetime(2016, 12, 1, 0, 30, 0, 3, 336, 0),
},
],
'ResponseMetadata': {
'...': '...',
},
}
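A minimal sketch for illustration, restricting the results to a time window with the documented StartTime and EndTime parameters; the group name and the dates are assumptions.
from datetime import datetime, timezone

import boto3

client = boto3.client('autoscaling')

# Describe only the scheduled actions that fall within the assumed window.
response = client.describe_scheduled_actions(
    AutoScalingGroupName='my-auto-scaling-group',
    StartTime=datetime(2024, 1, 1, tzinfo=timezone.utc),
    EndTime=datetime(2024, 12, 31, tzinfo=timezone.utc),
)
for action in response['ScheduledUpdateGroupActions']:
    print(action['ScheduledActionName'], action.get('Recurrence'))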
describe_tags
(**kwargs)¶Describes the specified tags.
You can use filters to limit the results. For example, you can query for the tags for a specific Auto Scaling group. You can specify multiple values for a filter. A tag must match at least one of the specified values for it to be included in the results.
You can also specify multiple filters. The result includes information for a particular tag only if it matches all the filters. If there's no match, no special message is returned.
For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.describe_tags(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
NextToken='string',
MaxRecords=123
)
One or more filters to scope the tags to return. The maximum number of filters per filter type (for example, auto-scaling-group) is 1000.
Describes a filter that is used to return a more specific list of results from a describe operation.
If you specify multiple filters, the filters are automatically logically joined with an AND, and the request returns only the results that match all of the specified filters.
For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
The name of the filter.
The valid values for Name depend on which API operation you're using with the filter (DescribeAutoScalingGroups or DescribeTags).
DescribeAutoScalingGroups
Valid values for Name include the following:
tag-key - Accepts tag keys. The results only include information about the Auto Scaling groups associated with these tag keys.
tag-value - Accepts tag values. The results only include information about the Auto Scaling groups associated with these tag values.
tag:<key> - Accepts the key/value combination of the tag. Use the tag key in the filter name and the tag value as the filter value. The results only include information about the Auto Scaling groups associated with the specified key/value combination.
DescribeTags
Valid values for Name include the following:
auto-scaling-group - Accepts the names of Auto Scaling groups. The results only include information about the tags associated with these Auto Scaling groups.
key - Accepts tag keys. The results only include information about the tags associated with these tag keys.
value - Accepts tag values. The results only include information about the tags associated with these tag values.
propagate-at-launch - Accepts a Boolean value, which specifies whether tags propagate to instances at launch. The results only include information about the tags associated with the specified Boolean value.
One or more filter values. Filter values are case-sensitive.
If you specify multiple values for a filter, the values are automatically logically joined with an OR, and the request returns all results that match any of the specified values. For example, specify "tag:environment" for the filter name and "production,development" for the filter values to find Auto Scaling groups with the tag "environment=production" or "environment=development".
The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
Return type: dict
Response Syntax
{
'Tags': [
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Tags (list) --
One or more tags.
(dict) --
Describes a tag for an Auto Scaling group.
ResourceId (string) --
The name of the group.
ResourceType (string) --
The type of resource. The only supported value is auto-scaling-group.
Key (string) --
The tag key.
Value (string) --
The tag value.
PropagateAtLaunch (boolean) --
Determines whether the tag is added to new instances as they are launched in the group.
NextToken (string) --
A string that indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the tags for the specified Auto Scaling group.
response = client.describe_tags(
Filters=[
{
'Name': 'auto-scaling-group',
'Values': [
'my-auto-scaling-group',
],
},
],
)
print(response)
Expected Output:
{
'Tags': [
{
'Key': 'Dept',
'PropagateAtLaunch': True,
'ResourceId': 'my-auto-scaling-group',
'ResourceType': 'auto-scaling-group',
'Value': 'Research',
},
{
'Key': 'Role',
'PropagateAtLaunch': True,
'ResourceId': 'my-auto-scaling-group',
'ResourceType': 'auto-scaling-group',
'Value': 'WebServer',
},
],
'ResponseMetadata': {
'...': '...',
},
}
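For illustration only, a sketch combining two of the documented filters to find tags with an assumed key that propagate at launch.
import boto3

client = boto3.client('autoscaling')

# Describe tags with the assumed key 'Dept' that are propagated to instances
# at launch; a tag must match both filters to be returned.
response = client.describe_tags(
    Filters=[
        {'Name': 'key', 'Values': ['Dept']},
        {'Name': 'propagate-at-launch', 'Values': ['true']},
    ],
)
for tag in response['Tags']:
    print(tag['ResourceId'], tag['Key'], tag['Value'])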
describe_termination_policy_types
()¶Describes the termination policies supported by Amazon EC2 Auto Scaling.
For more information, see Work with Amazon EC2 Auto Scaling termination policies in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.describe_termination_policy_types()
Response Syntax
{
'TerminationPolicyTypes': [
'string',
]
}
Response Structure
The termination policies supported by Amazon EC2 Auto Scaling: OldestInstance, OldestLaunchConfiguration, NewestInstance, ClosestToNextInstanceHour, Default, OldestLaunchTemplate, and AllocationStrategy.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example describes the available termination policy types.
response = client.describe_termination_policy_types(
)
print(response)
Expected Output:
{
'TerminationPolicyTypes': [
'ClosestToNextInstanceHour',
'Default',
'NewestInstance',
'OldestInstance',
'OldestLaunchConfiguration',
],
'ResponseMetadata': {
'...': '...',
},
}
describe_traffic_sources
(**kwargs)¶Reserved for use with Amazon VPC Lattice, which is in preview and subject to change. Do not use this API for production workloads. This API is also subject to change.
Gets information about the traffic sources for the specified Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.describe_traffic_sources(
AutoScalingGroupName='string',
TrafficSourceType='string',
NextToken='string',
MaxRecords=123
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The type of traffic source you are describing. Currently, the only valid value is vpc-lattice.
The maximum number of items to return with this call. The default value is 50.
dict
Response Syntax
{
'TrafficSources': [
{
'TrafficSource': 'string',
'State': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
TrafficSources (list) --
Information about the traffic sources.
(dict) --
Describes the state of a traffic source.
TrafficSource (string) --
The unique identifier of the traffic source. Currently, this is the Amazon Resource Name (ARN) for a VPC Lattice target group.
State (string) --
The following are the possible states for a VPC Lattice target group:
Adding - The Auto Scaling instances are being registered with the target group.
Added - All Auto Scaling instances are registered with the target group.
InService - At least one Auto Scaling instance passed the VPC_LATTICE health check.
Removing - The Auto Scaling instances are being deregistered from the target group. If connection draining is enabled, VPC Lattice waits for in-flight requests to complete before deregistering the instances.
Removed - All Auto Scaling instances are deregistered from the target group.
NextToken (string) --
This string indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.InvalidNextToken
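The section above includes no sample call, so the following is a minimal sketch that assumes an existing Auto Scaling group named my-auto-scaling-group with a VPC Lattice target group attached (both names are illustrative):
import boto3

client = boto3.client('autoscaling')

# List the VPC Lattice traffic sources attached to the group and print each
# target group ARN together with its registration state.
response = client.describe_traffic_sources(
    AutoScalingGroupName='my-auto-scaling-group',
    TrafficSourceType='vpc-lattice',
)
for source in response['TrafficSources']:
    print(source['TrafficSource'], source['State'])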
describe_warm_pool
(**kwargs)¶Gets information about a warm pool and its instances.
For more information, see Warm pools for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.describe_warm_pool(
AutoScalingGroupName='string',
MaxRecords=123,
NextToken='string'
)
[REQUIRED]
The name of the Auto Scaling group.
The maximum number of instances to return with this call. The maximum value is 50.
dict
Response Syntax
{
'WarmPoolConfiguration': {
'MaxGroupPreparedCapacity': 123,
'MinSize': 123,
'PoolState': 'Stopped'|'Running'|'Hibernated',
'Status': 'PendingDelete',
'InstanceReusePolicy': {
'ReuseOnScaleIn': True|False
}
},
'Instances': [
{
'InstanceId': 'string',
'InstanceType': 'string',
'AvailabilityZone': 'string',
'LifecycleState': 'Pending'|'Pending:Wait'|'Pending:Proceed'|'Quarantined'|'InService'|'Terminating'|'Terminating:Wait'|'Terminating:Proceed'|'Terminated'|'Detaching'|'Detached'|'EnteringStandby'|'Standby'|'Warmed:Pending'|'Warmed:Pending:Wait'|'Warmed:Pending:Proceed'|'Warmed:Terminating'|'Warmed:Terminating:Wait'|'Warmed:Terminating:Proceed'|'Warmed:Terminated'|'Warmed:Stopped'|'Warmed:Running'|'Warmed:Hibernated',
'HealthStatus': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'ProtectedFromScaleIn': True|False,
'WeightedCapacity': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
WarmPoolConfiguration (dict) --
The warm pool configuration details.
MaxGroupPreparedCapacity (integer) --
The maximum number of instances that are allowed to be in the warm pool or in any state except Terminated
for the Auto Scaling group.
MinSize (integer) --
The minimum number of instances to maintain in the warm pool.
PoolState (string) --
The instance state to transition to after the lifecycle actions are complete.
Status (string) --
The status of a warm pool that is marked for deletion.
InstanceReusePolicy (dict) --
The instance reuse policy.
ReuseOnScaleIn (boolean) --
Specifies whether instances in the Auto Scaling group can be returned to the warm pool on scale in.
Instances (list) --
The instances that are currently in the warm pool.
(dict) --
Describes an EC2 instance.
InstanceId (string) --
The ID of the instance.
InstanceType (string) --
The instance type of the EC2 instance.
AvailabilityZone (string) --
The Availability Zone in which the instance is running.
LifecycleState (string) --
A description of the current lifecycle state. The Quarantined
state is not used. For information about lifecycle states, see Instance lifecycle in the Amazon EC2 Auto Scaling User Guide .
HealthStatus (string) --
The last reported health status of the instance. "Healthy" means that the instance is healthy and should remain in service. "Unhealthy" means that the instance is unhealthy and that Amazon EC2 Auto Scaling should terminate and replace it.
LaunchConfigurationName (string) --
The launch configuration associated with the instance.
LaunchTemplate (dict) --
The launch template for the instance.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
ProtectedFromScaleIn (boolean) --
Indicates whether the instance is protected from termination by Amazon EC2 Auto Scaling when scaling in.
WeightedCapacity (string) --
The number of capacity units contributed by the instance based on its instance type.
Valid Range: Minimum value of 1. Maximum value of 999.
NextToken (string) --
This string indicates that the response contains more items than can be returned in a single response. To receive additional items, specify this string for the NextToken
value when requesting the next set of items. This value is null when there are no more items to return.
Exceptions
AutoScaling.Client.exceptions.InvalidNextToken
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
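No example is provided above, so here is a minimal sketch that assumes a group named my-auto-scaling-group with a warm pool already configured (the group name is illustrative):
import boto3

client = boto3.client('autoscaling')

# Retrieve the warm pool configuration and the instances currently in it.
response = client.describe_warm_pool(
    AutoScalingGroupName='my-auto-scaling-group',
)
print(response.get('WarmPoolConfiguration'))
for instance in response.get('Instances', []):
    print(instance['InstanceId'], instance['LifecycleState'])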
detach_instances
(**kwargs)¶Removes one or more instances from the specified Auto Scaling group.
After the instances are detached, you can manage them independently of the Auto Scaling group.
If you do not specify the option to decrement the desired capacity, Amazon EC2 Auto Scaling launches instances to replace the ones that are detached.
If there is a Classic Load Balancer attached to the Auto Scaling group, the instances are deregistered from the load balancer. If there are target groups attached to the Auto Scaling group, the instances are deregistered from the target groups.
For more information, see Detach EC2 instances from your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.detach_instances(
InstanceIds=[
'string',
],
AutoScalingGroupName='string',
ShouldDecrementDesiredCapacity=True|False
)
The IDs of the instances. You can specify up to 20 instances.
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
Indicates whether the Auto Scaling group decrements the desired capacity value by the number of instances detached.
dict
Response Syntax
{
'Activities': [
{
'ActivityId': 'string',
'AutoScalingGroupName': 'string',
'Description': 'string',
'Cause': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'StatusCode': 'PendingSpotBidPlacement'|'WaitingForSpotInstanceRequestId'|'WaitingForSpotInstanceId'|'WaitingForInstanceId'|'PreInService'|'InProgress'|'WaitingForELBConnectionDraining'|'MidLifecycleAction'|'WaitingForInstanceWarmup'|'Successful'|'Failed'|'Cancelled',
'StatusMessage': 'string',
'Progress': 123,
'Details': 'string',
'AutoScalingGroupState': 'string',
'AutoScalingGroupARN': 'string'
},
]
}
Response Structure
(dict) --
Activities (list) --
The activities related to detaching the instances from the Auto Scaling group.
(dict) --
Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.
ActivityId (string) --
The ID of the activity.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Description (string) --
A friendly, more verbose description of the activity.
Cause (string) --
The reason the activity began.
StartTime (datetime) --
The start time of the activity.
EndTime (datetime) --
The end time of the activity.
StatusCode (string) --
The current status of the activity.
StatusMessage (string) --
A friendly, more verbose description of the activity status.
Progress (integer) --
A value between 0 and 100 that indicates the progress of the activity.
Details (string) --
The details about the activity.
AutoScalingGroupState (string) --
The state of the Auto Scaling group, which is either InService
or Deleted
.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example detaches the specified instance from the specified Auto Scaling group.
response = client.detach_instances(
AutoScalingGroupName='my-auto-scaling-group',
InstanceIds=[
'i-93633f9b',
],
ShouldDecrementDesiredCapacity=True,
)
print(response)
Expected Output:
{
'Activities': [
{
'ActivityId': '5091cb52-547a-47ce-a236-c9ccbc2cb2c9',
'AutoScalingGroupName': 'my-auto-scaling-group',
'Cause': 'At 2015-04-12T15:02:16Z instance i-93633f9b was detached in response to a user request, shrinking the capacity from 2 to 1.',
'Description': 'Detaching EC2 instance: i-93633f9b',
'Details': 'details',
'Progress': 50,
'StartTime': datetime(2015, 4, 12, 15, 2, 16, 6, 102, 0),
'StatusCode': 'InProgress',
},
],
'ResponseMetadata': {
'...': '...',
},
}
detach_load_balancer_target_groups
(**kwargs)¶Detaches one or more target groups from the specified Auto Scaling group.
When you detach a target group, it enters the Removing
state while deregistering the instances in the group. When all instances are deregistered, then you can no longer describe the target group using the DescribeLoadBalancerTargetGroups API call. The instances remain running.
Note
You can use this operation to detach target groups that were attached by using AttachLoadBalancerTargetGroups, but not for target groups that were attached by using AttachTrafficSources.
See also: AWS API Documentation
Request Syntax
response = client.detach_load_balancer_target_groups(
AutoScalingGroupName='string',
TargetGroupARNs=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The Amazon Resource Names (ARN) of the target groups. You can specify up to 10 target groups.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example detaches the specified target group from the specified Auto Scaling group.
response = client.detach_load_balancer_target_groups(
AutoScalingGroupName='my-auto-scaling-group',
TargetGroupARNs=[
'arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
detach_load_balancers
(**kwargs)¶Detaches one or more Classic Load Balancers from the specified Auto Scaling group.
This operation detaches only Classic Load Balancers. If you have Application Load Balancers, Network Load Balancers, or Gateway Load Balancers, use the DetachLoadBalancerTargetGroups API instead.
When you detach a load balancer, it enters the Removing
state while deregistering the instances in the group. When all instances are deregistered, then you can no longer describe the load balancer using the DescribeLoadBalancers API call. The instances remain running.
See also: AWS API Documentation
Request Syntax
response = client.detach_load_balancers(
AutoScalingGroupName='string',
LoadBalancerNames=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The names of the load balancers. You can specify up to 10 load balancers.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example detaches the specified load balancer from the specified Auto Scaling group.
response = client.detach_load_balancers(
AutoScalingGroupName='my-auto-scaling-group',
LoadBalancerNames=[
'my-load-balancer',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
detach_traffic_sources
(**kwargs)¶Reserved for use with Amazon VPC Lattice, which is in preview and subject to change. Do not use this API for production workloads. This API is also subject to change.
Detaches one or more traffic sources from the specified Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.detach_traffic_sources(
AutoScalingGroupName='string',
TrafficSources=[
{
'Identifier': 'string'
},
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The unique identifiers of one or more traffic sources you are detaching. You can specify up to 10 traffic sources.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group. When you detach a target group, it enters the Removing
state while deregistering the instances in the group. When all instances are deregistered, then you can no longer describe the target group using the DescribeTrafficSources API call. The instances continue to run.
Describes the identifier of a traffic source.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group.
The unique identifier of the traffic source.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
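As an illustrative sketch only, the call below detaches a single VPC Lattice target group; the group name and the target group ARN are placeholders and must refer to resources that already exist in your account:
import boto3

client = boto3.client('autoscaling')

# Detach one VPC Lattice target group from the group. The ARN is a placeholder.
response = client.detach_traffic_sources(
    AutoScalingGroupName='my-auto-scaling-group',
    TrafficSources=[
        {
            'Identifier': 'arn:aws:vpc-lattice:us-west-2:123456789012:targetgroup/tg-0123456789EXAMPLE',
        },
    ],
)
print(response)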
disable_metrics_collection
(**kwargs)¶Disables group metrics collection for the specified Auto Scaling group.
See also: AWS API Documentation
Request Syntax
response = client.disable_metrics_collection(
AutoScalingGroupName='string',
Metrics=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
Identifies the metrics to disable.
You can specify one or more of the following metrics:
GroupMinSize
GroupMaxSize
GroupDesiredCapacity
GroupInServiceInstances
GroupPendingInstances
GroupStandbyInstances
GroupTerminatingInstances
GroupTotalInstances
GroupInServiceCapacity
GroupPendingCapacity
GroupStandbyCapacity
GroupTerminatingCapacity
GroupTotalCapacity
WarmPoolDesiredCapacity
WarmPoolWarmedCapacity
WarmPoolPendingCapacity
WarmPoolTerminatingCapacity
WarmPoolTotalCapacity
GroupAndWarmPoolDesiredCapacity
GroupAndWarmPoolTotalCapacity
If you omit this property, all metrics are disabled.
For more information, see Auto Scaling group metrics in the Amazon EC2 Auto Scaling User Guide .
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example disables collecting data for the GroupDesiredCapacity metric for the specified Auto Scaling group.
response = client.disable_metrics_collection(
AutoScalingGroupName='my-auto-scaling-group',
Metrics=[
'GroupDesiredCapacity',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
enable_metrics_collection
(**kwargs)¶Enables group metrics collection for the specified Auto Scaling group.
You can use these metrics to track changes in an Auto Scaling group and to set alarms on threshold values. You can view group metrics using the Amazon EC2 Auto Scaling console or the CloudWatch console. For more information, see Monitor CloudWatch metrics for your Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.enable_metrics_collection(
AutoScalingGroupName='string',
Metrics=[
'string',
],
Granularity='string'
)
[REQUIRED]
The name of the Auto Scaling group.
Identifies the metrics to enable.
You can specify one or more of the following metrics:
GroupMinSize
GroupMaxSize
GroupDesiredCapacity
GroupInServiceInstances
GroupPendingInstances
GroupStandbyInstances
GroupTerminatingInstances
GroupTotalInstances
GroupInServiceCapacity
GroupPendingCapacity
GroupStandbyCapacity
GroupTerminatingCapacity
GroupTotalCapacity
WarmPoolDesiredCapacity
WarmPoolWarmedCapacity
WarmPoolPendingCapacity
WarmPoolTerminatingCapacity
WarmPoolTotalCapacity
GroupAndWarmPoolDesiredCapacity
GroupAndWarmPoolTotalCapacity
If you specify Granularity
and don't specify any metrics, all metrics are enabled.
For more information, see Auto Scaling group metrics in the Amazon EC2 Auto Scaling User Guide .
[REQUIRED]
The frequency at which Amazon EC2 Auto Scaling sends aggregated data to CloudWatch. The only valid value is 1Minute
.
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example enables data collection for the specified Auto Scaling group.
response = client.enable_metrics_collection(
AutoScalingGroupName='my-auto-scaling-group',
Granularity='1Minute',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
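The example above enables every group metric by omitting the Metrics property. As a variation, the sketch below enables only two of the metrics listed earlier; the group name is illustrative:
import boto3

client = boto3.client('autoscaling')

# Enable only the desired-capacity and in-service metrics at one-minute granularity.
response = client.enable_metrics_collection(
    AutoScalingGroupName='my-auto-scaling-group',
    Metrics=[
        'GroupDesiredCapacity',
        'GroupInServiceInstances',
    ],
    Granularity='1Minute',
)
print(response)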
enter_standby
(**kwargs)¶Moves the specified instances into the standby state.
If you choose to decrement the desired capacity of the Auto Scaling group, the instances can enter standby as long as the desired capacity of the Auto Scaling group after the instances are placed into standby is equal to or greater than the minimum capacity of the group.
If you choose not to decrement the desired capacity of the Auto Scaling group, the Auto Scaling group launches new instances to replace the instances on standby.
For more information, see Temporarily removing instances from your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.enter_standby(
InstanceIds=[
'string',
],
AutoScalingGroupName='string',
ShouldDecrementDesiredCapacity=True|False
)
The IDs of the instances. You can specify up to 20 instances.
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
Indicates whether to decrement the desired capacity of the Auto Scaling group by the number of instances moved to Standby
mode.
dict
Response Syntax
{
'Activities': [
{
'ActivityId': 'string',
'AutoScalingGroupName': 'string',
'Description': 'string',
'Cause': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'StatusCode': 'PendingSpotBidPlacement'|'WaitingForSpotInstanceRequestId'|'WaitingForSpotInstanceId'|'WaitingForInstanceId'|'PreInService'|'InProgress'|'WaitingForELBConnectionDraining'|'MidLifecycleAction'|'WaitingForInstanceWarmup'|'Successful'|'Failed'|'Cancelled',
'StatusMessage': 'string',
'Progress': 123,
'Details': 'string',
'AutoScalingGroupState': 'string',
'AutoScalingGroupARN': 'string'
},
]
}
Response Structure
(dict) --
Activities (list) --
The activities related to moving instances into Standby
mode.
(dict) --
Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.
ActivityId (string) --
The ID of the activity.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Description (string) --
A friendly, more verbose description of the activity.
Cause (string) --
The reason the activity began.
StartTime (datetime) --
The start time of the activity.
EndTime (datetime) --
The end time of the activity.
StatusCode (string) --
The current status of the activity.
StatusMessage (string) --
A friendly, more verbose description of the activity status.
Progress (integer) --
A value between 0 and 100 that indicates the progress of the activity.
Details (string) --
The details about the activity.
AutoScalingGroupState (string) --
The state of the Auto Scaling group, which is either InService
or Deleted
.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example puts the specified instance into standby mode.
response = client.enter_standby(
AutoScalingGroupName='my-auto-scaling-group',
InstanceIds=[
'i-93633f9b',
],
ShouldDecrementDesiredCapacity=True,
)
print(response)
Expected Output:
{
'Activities': [
{
'ActivityId': 'ffa056b4-6ed3-41ba-ae7c-249dfae6eba1',
'AutoScalingGroupName': 'my-auto-scaling-group',
'Cause': 'At 2015-04-12T15:10:23Z instance i-93633f9b was moved to standby in response to a user request, shrinking the capacity from 2 to 1.',
'Description': 'Moving EC2 instance to Standby: i-93633f9b',
'Details': 'details',
'Progress': 50,
'StartTime': datetime(2015, 4, 12, 15, 10, 23, 6, 102, 0),
'StatusCode': 'InProgress',
},
],
'ResponseMetadata': {
'...': '...',
},
}
execute_policy
(**kwargs)¶Executes the specified policy. This can be useful for testing the design of your scaling policy.
See also: AWS API Documentation
Request Syntax
response = client.execute_policy(
AutoScalingGroupName='string',
PolicyName='string',
HonorCooldown=True|False,
MetricValue=123.0,
BreachThreshold=123.0
)
[REQUIRED]
The name or ARN of the policy.
Indicates whether Amazon EC2 Auto Scaling waits for the cooldown period to complete before executing the policy.
Valid only if the policy type is SimpleScaling
. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
The metric value to compare to BreachThreshold
. This enables you to execute a policy of type StepScaling
and determine which step adjustment to use. For example, if the breach threshold is 50 and you want to use a step adjustment with a lower bound of 0 and an upper bound of 10, you can set the metric value to 59.
If you specify a metric value that doesn't correspond to a step adjustment for the policy, the call returns an error.
Required if the policy type is StepScaling
and not supported otherwise.
The breach threshold for the alarm.
Required if the policy type is StepScaling
and not supported otherwise.
None
Exceptions
AutoScaling.Client.exceptions.ScalingActivityInProgressFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example executes the specified policy.
response = client.execute_policy(
AutoScalingGroupName='my-auto-scaling-group',
BreachThreshold=50.0,
MetricValue=59.0,
PolicyName='my-step-scale-out-policy',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
exit_standby
(**kwargs)¶Moves the specified instances out of the standby state.
After you put the instances back in service, the desired capacity is incremented.
For more information, see Temporarily removing instances from your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.exit_standby(
InstanceIds=[
'string',
],
AutoScalingGroupName='string'
)
The IDs of the instances. You can specify up to 20 instances.
[REQUIRED]
The name of the Auto Scaling group.
dict
Response Syntax
{
'Activities': [
{
'ActivityId': 'string',
'AutoScalingGroupName': 'string',
'Description': 'string',
'Cause': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'StatusCode': 'PendingSpotBidPlacement'|'WaitingForSpotInstanceRequestId'|'WaitingForSpotInstanceId'|'WaitingForInstanceId'|'PreInService'|'InProgress'|'WaitingForELBConnectionDraining'|'MidLifecycleAction'|'WaitingForInstanceWarmup'|'Successful'|'Failed'|'Cancelled',
'StatusMessage': 'string',
'Progress': 123,
'Details': 'string',
'AutoScalingGroupState': 'string',
'AutoScalingGroupARN': 'string'
},
]
}
Response Structure
(dict) --
Activities (list) --
The activities related to moving instances out of Standby
mode.
(dict) --
Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.
ActivityId (string) --
The ID of the activity.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Description (string) --
A friendly, more verbose description of the activity.
Cause (string) --
The reason the activity began.
StartTime (datetime) --
The start time of the activity.
EndTime (datetime) --
The end time of the activity.
StatusCode (string) --
The current status of the activity.
StatusMessage (string) --
A friendly, more verbose description of the activity status.
Progress (integer) --
A value between 0 and 100 that indicates the progress of the activity.
Details (string) --
The details about the activity.
AutoScalingGroupState (string) --
The state of the Auto Scaling group, which is either InService
or Deleted
.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example moves the specified instance out of standby mode.
response = client.exit_standby(
AutoScalingGroupName='my-auto-scaling-group',
InstanceIds=[
'i-93633f9b',
],
)
print(response)
Expected Output:
{
'Activities': [
{
'ActivityId': '142928e1-a2dc-453a-9b24-b85ad6735928',
'AutoScalingGroupName': 'my-auto-scaling-group',
'Cause': 'At 2015-04-12T15:14:29Z instance i-93633f9b was moved out of standby in response to a user request, increasing the capacity from 1 to 2.',
'Description': 'Moving EC2 instance out of Standby: i-93633f9b',
'Details': 'details',
'Progress': 30,
'StartTime': datetime(2015, 4, 12, 15, 14, 29, 6, 102, 0),
'StatusCode': 'PreInService',
},
],
'ResponseMetadata': {
'...': '...',
},
}
get_paginator
(operation_name)¶Create a paginator for an operation.
The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo"). You can use the client.can_paginate method to check if an operation is pageable.
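As a concrete illustration, describe_auto_scaling_groups supports pagination, so a paginator can iterate over all groups without handling NextToken manually. A minimal sketch:
import boto3

client = boto3.client('autoscaling')

# Page through every Auto Scaling group, 50 records per page, and print the names.
paginator = client.get_paginator('describe_auto_scaling_groups')
for page in paginator.paginate(PaginationConfig={'PageSize': 50}):
    for group in page['AutoScalingGroups']:
        print(group['AutoScalingGroupName'])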
get_predictive_scaling_forecast
(**kwargs)¶Retrieves the forecast data for a predictive scaling policy.
Load forecasts are predictions of the hourly load values using historical load data from CloudWatch and an analysis of historical trends. Capacity forecasts are represented as predicted values for the minimum capacity that is needed on an hourly basis, based on the hourly load forecast.
A minimum of 24 hours of data is required to create the initial forecasts. However, having a full 14 days of historical data results in more accurate forecasts.
For more information, see Predictive scaling for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.get_predictive_scaling_forecast(
AutoScalingGroupName='string',
PolicyName='string',
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1)
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The name of the policy.
[REQUIRED]
The inclusive start time of the time range for the forecast data to get. At most, the date and time can be one year before the current date and time.
[REQUIRED]
The exclusive end time of the time range for the forecast data to get. The maximum time duration between the start and end time is 30 days.
Although this parameter can accept a date and time that is more than two days in the future, the availability of forecast data has limits. Amazon EC2 Auto Scaling only issues forecasts for periods of two days in advance.
dict
Response Syntax
{
'LoadForecast': [
{
'Timestamps': [
datetime(2015, 1, 1),
],
'Values': [
123.0,
],
'MetricSpecification': {
'TargetValue': 123.0,
'PredefinedMetricPairSpecification': {
'PredefinedMetricType': 'ASGCPUUtilization'|'ASGNetworkIn'|'ASGNetworkOut'|'ALBRequestCount',
'ResourceLabel': 'string'
},
'PredefinedScalingMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'PredefinedLoadMetricSpecification': {
'PredefinedMetricType': 'ASGTotalCPUUtilization'|'ASGTotalNetworkIn'|'ASGTotalNetworkOut'|'ALBTargetGroupRequestCount',
'ResourceLabel': 'string'
},
'CustomizedScalingMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedLoadMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedCapacityMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
}
}
},
],
'CapacityForecast': {
'Timestamps': [
datetime(2015, 1, 1),
],
'Values': [
123.0,
]
},
'UpdateTime': datetime(2015, 1, 1)
}
Response Structure
(dict) --
LoadForecast (list) --
The load forecast.
(dict) --
A GetPredictiveScalingForecast
call returns the load forecast for a predictive scaling policy. This structure includes the data points for that load forecast, along with the timestamps of those data points and the metric specification.
Timestamps (list) --
The timestamps for the data points, in UTC format.
Values (list) --
The values of the data points.
MetricSpecification (dict) --
The metric specification for the load forecast.
TargetValue (float) --
Specifies the target utilization.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
PredefinedMetricPairSpecification (dict) --
The predefined metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
PredefinedMetricType (string) --
Indicates which metrics to use. There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization
, the Auto Scaling group's total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the total and average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN, and
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredefinedScalingMetricSpecification (dict) --
The predefined scaling metric specification.
PredefinedMetricType (string) --
The metric type.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN, and
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredefinedLoadMetricSpecification (dict) --
The predefined load metric specification.
PredefinedMetricType (string) --
The metric type.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff
.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN, and
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
CustomizedScalingMetricSpecification (dict) --
The customized scaling metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a scaling metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery
object, you must specify either Expression
or MetricStat
, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery
object, you must specify either Expression
or MetricStat
, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average
and Sum
.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData
. This sets it to its default ( true
).
CustomizedLoadMetricSpecification (dict) --
The customized load metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a load metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery
object, you must specify either Expression
or MetricStat
, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery
object, you must specify either Expression
or MetricStat
, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average
and Sum
.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData
. This sets it to its default ( true
).
CustomizedCapacityMetricSpecification (dict) --
The customized capacity metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a capacity metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery
objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id
of the other metrics to refer to those metrics, and can also use the Id
of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery
object, you must specify either Expression
or MetricStat
, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery
object, you must specify either Expression
or MetricStat
, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average
and Sum
.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true
for this value for only the final math expression that the metric specification is based on. You must specify false
for ReturnData
for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData
. This sets it to its default ( true
).
CapacityForecast (dict) --
The capacity forecast.
Timestamps (list) --
The timestamps for the data points, in UTC format.
Values (list) --
The values of the data points.
UpdateTime (datetime) --
The time the forecast was made.
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
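There is no sample call above, so the following minimal sketch requests the forecast for the next two days for a hypothetical predictive scaling policy; the group and policy names are illustrative:
import datetime

import boto3

client = boto3.client('autoscaling')

# Request the forecast covering the next two days, which is the furthest
# period for which Amazon EC2 Auto Scaling issues forecasts.
now = datetime.datetime.now(datetime.timezone.utc)
response = client.get_predictive_scaling_forecast(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyName='my-predictive-scaling-policy',
    StartTime=now,
    EndTime=now + datetime.timedelta(days=2),
)
print(response['CapacityForecast'])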
get_waiter
(waiter_name)¶Returns an object that can wait for some condition.
put_lifecycle_hook
(**kwargs)¶Creates or updates a lifecycle hook for the specified Auto Scaling group.
Lifecycle hooks let you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs.
This step is a part of the procedure for adding a lifecycle hook to an Auto Scaling group.
For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide .
If you exceed your maximum limit of lifecycle hooks, which by default is 50 per Auto Scaling group, the call fails.
You can view the lifecycle hooks for an Auto Scaling group using the DescribeLifecycleHooks API call. If you are no longer using a lifecycle hook, you can delete it by calling the DeleteLifecycleHook API.
See also: AWS API Documentation
Request Syntax
response = client.put_lifecycle_hook(
LifecycleHookName='string',
AutoScalingGroupName='string',
LifecycleTransition='string',
RoleARN='string',
NotificationTargetARN='string',
NotificationMetadata='string',
HeartbeatTimeout=123,
DefaultResult='string'
)
[REQUIRED]
The name of the lifecycle hook.
[REQUIRED]
The name of the Auto Scaling group.
The lifecycle transition. For Auto Scaling groups, there are two major lifecycle transitions.
To create a lifecycle hook for scale-out events, specify autoscaling:EC2_INSTANCE_LAUNCHING.
To create a lifecycle hook for scale-in events, specify autoscaling:EC2_INSTANCE_TERMINATING.
Required for new lifecycle hooks, but optional when updating existing hooks.
The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target.
Valid only if the notification target is an Amazon SNS topic or an Amazon SQS queue. Required for new lifecycle hooks, but optional when updating existing hooks.
The Amazon Resource Name (ARN) of the notification target that Amazon EC2 Auto Scaling uses to notify you when an instance is in a wait state for the lifecycle hook. You can specify either an Amazon SNS topic or an Amazon SQS queue.
If you specify an empty string, this overrides the current ARN.
This operation uses the JSON format when sending notifications to an Amazon SQS queue, and an email key-value pair format when sending notifications to an Amazon SNS topic.
When you specify a notification target, Amazon EC2 Auto Scaling sends it a test message. Test messages contain the following additional key-value pair: "Event": "autoscaling:TEST_NOTIFICATION"
.
The maximum time, in seconds, that can elapse before the lifecycle hook times out. The range is from 30 to 7200 seconds. The default value is 3600 seconds (1 hour).
The action the Auto Scaling group takes when the lifecycle hook timeout elapses or if an unexpected failure occurs. The default value is ABANDON.
Valid values: CONTINUE | ABANDON
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example creates a lifecycle hook for instance launch.
response = client.put_lifecycle_hook(
AutoScalingGroupName='my-auto-scaling-group',
DefaultResult='CONTINUE',
HeartbeatTimeout=300,
LifecycleHookName='my-launch-lifecycle-hook',
LifecycleTransition='autoscaling:EC2_INSTANCE_LAUNCHING',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
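The example above creates a launch hook without a notification target. To also illustrate the notification-related parameters, here is a hedged sketch of a termination hook that publishes to an Amazon SQS queue; the queue ARN and role ARN are placeholders and must already exist in your account:
import boto3

client = boto3.client('autoscaling')

# Create a termination lifecycle hook that notifies an SQS queue and waits
# up to 10 minutes for a callback before continuing the termination.
response = client.put_lifecycle_hook(
    AutoScalingGroupName='my-auto-scaling-group',
    LifecycleHookName='my-termination-lifecycle-hook',
    LifecycleTransition='autoscaling:EC2_INSTANCE_TERMINATING',
    NotificationTargetARN='arn:aws:sqs:us-west-2:123456789012:my-lifecycle-queue',
    RoleARN='arn:aws:iam::123456789012:role/my-notification-role',
    HeartbeatTimeout=600,
    DefaultResult='CONTINUE',
)
print(response)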
put_notification_configuration
(**kwargs)¶Configures an Auto Scaling group to send notifications when specified events take place. Subscribers to the specified topic can have messages delivered to an endpoint such as a web server or an email address.
This configuration overwrites any existing configuration.
For more information, see Getting Amazon SNS notifications when your Auto Scaling group scales in the Amazon EC2 Auto Scaling User Guide .
If you exceed your maximum limit of SNS topics, which is 10 per Auto Scaling group, the call fails.
See also: AWS API Documentation
Request Syntax
response = client.put_notification_configuration(
AutoScalingGroupName='string',
TopicARN='string',
NotificationTypes=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The Amazon Resource Name (ARN) of the Amazon SNS topic.
[REQUIRED]
The type of event that causes the notification to be sent. To query the notification types supported by Amazon EC2 Auto Scaling, call the DescribeAutoScalingNotificationTypes API.
None
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example adds the specified notification to the specified Auto Scaling group.
response = client.put_notification_configuration(
AutoScalingGroupName='my-auto-scaling-group',
NotificationTypes=[
'autoscaling:TEST_NOTIFICATION',
],
TopicARN='arn:aws:sns:us-west-2:123456789012:my-sns-topic',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
put_scaling_policy
(**kwargs)¶Creates or updates a scaling policy for an Auto Scaling group. Scaling policies are used to scale an Auto Scaling group based on configurable metrics. If no policies are defined, the dynamic scaling and predictive scaling features are not used.
For more information about using dynamic scaling, see Target tracking scaling policies and Step and simple scaling policies in the Amazon EC2 Auto Scaling User Guide .
For more information about using predictive scaling, see Predictive scaling for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
You can view the scaling policies for an Auto Scaling group using the DescribePolicies API call. If you are no longer using a scaling policy, you can delete it by calling the DeletePolicy API.
See also: AWS API Documentation
Request Syntax
response = client.put_scaling_policy(
AutoScalingGroupName='string',
PolicyName='string',
PolicyType='string',
AdjustmentType='string',
MinAdjustmentStep=123,
MinAdjustmentMagnitude=123,
ScalingAdjustment=123,
Cooldown=123,
MetricAggregationType='string',
StepAdjustments=[
{
'MetricIntervalLowerBound': 123.0,
'MetricIntervalUpperBound': 123.0,
'ScalingAdjustment': 123
},
],
EstimatedInstanceWarmup=123,
TargetTrackingConfiguration={
'PredefinedMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'CustomizedMetricSpecification': {
'MetricName': 'string',
'Namespace': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
],
'Statistic': 'Average'|'Minimum'|'Maximum'|'SampleCount'|'Sum',
'Unit': 'string',
'Metrics': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'TargetValue': 123.0,
'DisableScaleIn': True|False
},
Enabled=True|False,
PredictiveScalingConfiguration={
'MetricSpecifications': [
{
'TargetValue': 123.0,
'PredefinedMetricPairSpecification': {
'PredefinedMetricType': 'ASGCPUUtilization'|'ASGNetworkIn'|'ASGNetworkOut'|'ALBRequestCount',
'ResourceLabel': 'string'
},
'PredefinedScalingMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'PredefinedLoadMetricSpecification': {
'PredefinedMetricType': 'ASGTotalCPUUtilization'|'ASGTotalNetworkIn'|'ASGTotalNetworkOut'|'ALBTargetGroupRequestCount',
'ResourceLabel': 'string'
},
'CustomizedScalingMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedLoadMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedCapacityMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
}
},
],
'Mode': 'ForecastAndScale'|'ForecastOnly',
'SchedulingBufferTime': 123,
'MaxCapacityBreachBehavior': 'HonorMaxCapacity'|'IncreaseMaxCapacity',
'MaxCapacityBuffer': 123
}
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The name of the policy.
One of the following policy types:
TargetTrackingScaling
StepScaling
SimpleScaling (default)
PredictiveScaling
Specifies how the scaling adjustment is interpreted (for example, an absolute number or a percentage). The valid values are ChangeInCapacity
, ExactCapacity
, and PercentChangeInCapacity
.
Required if the policy type is StepScaling
or SimpleScaling
. For more information, see Scaling adjustment types in the Amazon EC2 Auto Scaling User Guide .
Available for backward compatibility. Use MinAdjustmentMagnitude instead.
The minimum value to scale by when the adjustment type is PercentChangeInCapacity
. For example, suppose that you create a step scaling policy to scale out an Auto Scaling group by 25 percent and you specify a MinAdjustmentMagnitude
of 2. If the group has 4 instances and the scaling policy is performed, 25 percent of 4 is 1. However, because you specified a MinAdjustmentMagnitude
of 2, Amazon EC2 Auto Scaling scales out the group by 2 instances.
Valid only if the policy type is StepScaling
or SimpleScaling
. For more information, see Scaling adjustment types in the Amazon EC2 Auto Scaling User Guide .
Note
Some Auto Scaling groups use instance weights. In this case, set the MinAdjustmentMagnitude
to a value that is at least as large as your largest instance weight.
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity. For exact capacity, you must specify a positive value.
Required if the policy type is SimpleScaling. (Not used with any other policy type.)
A cooldown period, in seconds, that applies to a specific simple scaling policy. When a cooldown period is specified here, it overrides the default cooldown.
Valid only if the policy type is SimpleScaling. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.
Default: None
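To illustrate how these simple scaling properties fit together, the following is a hedged sketch (not an official example) of a SimpleScaling policy; the group and policy names are placeholders.
response = client.put_scaling_policy(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyName='my-simple-scale-out-policy',
    PolicyType='SimpleScaling',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=2,
    Cooldown=300,
)
# The associated CloudWatch alarm (created separately) invokes this policy,
# adding two instances and then waiting 300 seconds before allowing another
# simple scaling activity.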
The aggregation type for the CloudWatch metrics. The valid values are Minimum, Maximum, and Average. If the aggregation type is null, the value is treated as Average.
Valid only if the policy type is StepScaling.
A set of adjustments that enable you to scale based on the size of the alarm breach.
Required if the policy type is StepScaling. (Not used with any other policy type.)
Describes information used to create a step adjustment for a step scaling policy.
For the following examples, suppose that you have an alarm with a breach threshold of 50:
To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.
To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.
There are a few rules for the step adjustments for your step policy:
The ranges of your step adjustments can't overlap or have a gap.
At most, one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.
At most, one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.
The upper and lower bound can't be null in the same step adjustment.
For more information, see Step adjustments in the Amazon EC2 Auto Scaling User Guide.
The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.
The upper bound must be greater than the lower bound.
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity.
The amount by which to scale. The adjustment is based on the value that you specified in the AdjustmentType
property (either an absolute number or a percentage). A positive value adds to the current capacity and a negative number subtracts from the current capacity.
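As a hedged sketch of how the adjustment type, step adjustments, and bounds combine in a request (the group and policy names are placeholders):
response = client.put_scaling_policy(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyName='my-step-scale-out-policy',
    PolicyType='StepScaling',
    AdjustmentType='PercentChangeInCapacity',
    MinAdjustmentMagnitude=2,
    MetricAggregationType='Average',
    StepAdjustments=[
        # Breach of 0 to 10 above the alarm threshold: add 10 percent.
        {
            'MetricIntervalLowerBound': 0.0,
            'MetricIntervalUpperBound': 10.0,
            'ScalingAdjustment': 10,
        },
        # Breach of 10 or more above the threshold: add 30 percent
        # (no upper bound means positive infinity).
        {
            'MetricIntervalLowerBound': 10.0,
            'ScalingAdjustment': 30,
        },
    ],
)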
Not needed if the default instance warmup is defined for the group.
The estimated time, in seconds, until a newly launched instance can contribute to the CloudWatch metrics. This warm-up period applies to instances launched due to a specific target tracking or step scaling policy. When a warm-up period is specified here, it overrides the default instance warmup.
Valid only if the policy type is TargetTrackingScaling or StepScaling.
Note
The default is to use the value for the default instance warmup defined for the group. If default instance warmup is null, then EstimatedInstanceWarmup falls back to the value of default cooldown.
A target tracking scaling policy. Provides support for predefined or custom metrics.
The following predefined metrics are available:
ASGAverageCPUUtilization
ASGAverageNetworkIn
ASGAverageNetworkOut
ALBRequestCountPerTarget
If you specify ALBRequestCountPerTarget for the metric, you must specify the ResourceLabel property with the PredefinedMetricSpecification.
For more information, see TargetTrackingConfiguration in the Amazon EC2 Auto Scaling API Reference.
Required if the policy type is TargetTrackingScaling.
A predefined metric. You must specify either a predefined metric or a customized metric.
The metric type. The following predefined metrics are available:
ASGAverageCPUUtilization - Average CPU utilization of the Auto Scaling group.
ASGAverageNetworkIn - Average number of bytes received on all network interfaces by the Auto Scaling group.
ASGAverageNetworkOut - Average number of bytes sent out on all network interfaces by the Auto Scaling group.
ALBRequestCountPerTarget - Average Application Load Balancer request count per target for your Auto Scaling group.
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN.
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
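If you already have the two ARNs, the resource label can be derived with plain string handling; the following sketch uses placeholder ARNs.
lb_arn = 'arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/778d41231b141a0f'
tg_arn = 'arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-alb-target-group/943f017f100becff'

# Final portion of the load balancer ARN: app/<name>/<id>
lb_part = '/'.join(lb_arn.split('/')[-3:])
# Final portion of the target group ARN: targetgroup/<name>/<id>
tg_part = tg_arn.split(':')[-1]

resource_label = f'{lb_part}/{tg_part}'
# app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff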
A customized metric. You must specify either a predefined metric or a customized metric.
The name of the metric. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
The namespace of the metric.
The dimensions of the metric.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
Describes the dimension of a metric.
The name of the dimension.
The value of the dimension.
The statistic of the metric.
The unit of the metric. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
The metrics to include in the target tracking scaling policy, as a metric data query. This can include both raw metric and metric math expressions.
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
A short name that identifies the object's results in the response. This name must be unique among all TargetTrackingMetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Information about the metric data to return.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Represents a specific metric.
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
The name of the metric.
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
Describes the dimension of a metric.
The name of the dimension.
The value of the dimension.
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide.
The most commonly used statistic for scaling is Average.
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
The target value for the metric.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
Indicates whether scaling in by the target tracking scaling policy is disabled. If scaling in is disabled, the target tracking scaling policy doesn't remove instances from the Auto Scaling group. Otherwise, the target tracking scaling policy can remove instances from the Auto Scaling group. The default is false.
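As a hedged sketch of the customized path, the following builds a target tracking policy from a metric math expression (backlog per instance for an SQS-driven group); the group, policy, and queue names are placeholders, and the GroupInServiceInstances metric assumes that group metrics collection is enabled.
response = client.put_scaling_policy(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyName='sqs-backlog-target-tracking-policy',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'CustomizedMetricSpecification': {
            'Metrics': [
                {
                    # Only the final expression returns data.
                    'Id': 'backlog_per_instance',
                    'Expression': 'queue_depth / group_capacity',
                    'Label': 'Backlog per instance',
                    'ReturnData': True,
                },
                {
                    'Id': 'queue_depth',
                    'MetricStat': {
                        'Metric': {
                            'Namespace': 'AWS/SQS',
                            'MetricName': 'ApproximateNumberOfMessagesVisible',
                            'Dimensions': [
                                {'Name': 'QueueName', 'Value': 'my-queue'},
                            ],
                        },
                        'Stat': 'Sum',
                    },
                    'ReturnData': False,
                },
                {
                    'Id': 'group_capacity',
                    'MetricStat': {
                        'Metric': {
                            'Namespace': 'AWS/AutoScaling',
                            'MetricName': 'GroupInServiceInstances',
                            'Dimensions': [
                                {'Name': 'AutoScalingGroupName', 'Value': 'my-auto-scaling-group'},
                            ],
                        },
                        'Stat': 'Average',
                    },
                    'ReturnData': False,
                },
            ],
        },
        'TargetValue': 100.0,
        'DisableScaleIn': False,
    },
)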
A predictive scaling policy. Provides support for predefined and custom metrics.
Predefined metrics include CPU utilization, network in/out, and the Application Load Balancer request count.
For more information, see PredictiveScalingConfiguration in the Amazon EC2 Auto Scaling API Reference .
Required if the policy type is PredictiveScaling.
This structure includes the metrics and target utilization to use for predictive scaling.
This is an array, but we currently only support a single metric specification. That is, you can specify a target value and a single metric pair, or a target value and one scaling metric and one load metric.
This structure specifies the metrics and target utilization settings for a predictive scaling policy.
You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric.
Example
You create a predictive scaling policy and specify ALBRequestCount as the value for the metric pair and 1000.0 as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group.
The number of requests the target group receives per minute provides the load metric, and the request count averaged between the members of the target group provides the scaling metric. In CloudWatch, this refers to the RequestCount and RequestCountPerTarget metrics, respectively.
For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide.
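As a hedged sketch of a predictive scaling policy that uses a predefined metric pair (the group and policy names are placeholders):
response = client.put_scaling_policy(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyName='cpu40-predictive-scaling-policy',
    PolicyType='PredictiveScaling',
    PredictiveScalingConfiguration={
        'MetricSpecifications': [
            {
                'TargetValue': 40.0,
                'PredefinedMetricPairSpecification': {
                    'PredefinedMetricType': 'ASGCPUUtilization',
                },
            },
        ],
        'Mode': 'ForecastAndScale',
        # Pre-launch forecasted capacity five minutes ahead of the forecast.
        'SchedulingBufferTime': 300,
    },
)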
Specifies the target utilization.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
The predefined metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
Indicates which metrics to use. There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization, the Auto Scaling group's total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric.
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the total and average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN.
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
The predefined scaling metric specification.
The metric type.
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN.
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
The predefined load metric specification.
The metric type.
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN.
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
The customized scaling metric specification.
One or more metric data queries to provide the data points for a scaling metric. Use multiple metric data queries only if you are performing a math expression on returned data.
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
The name of the metric.
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
Describes the dimension of a metric.
The name of the dimension.
The value of the dimension.
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide.
The most commonly used statistics for predictive scaling are Average and Sum.
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
The customized load metric specification.
One or more metric data queries to provide the data points for a load metric. Use multiple metric data queries only if you are performing a math expression on returned data.
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
The name of the metric.
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
Describes the dimension of a metric.
The name of the dimension.
The value of the dimension.
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide.
The most commonly used statistics for predictive scaling are Average and Sum.
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
The customized capacity metric specification.
One or more metric data queries to provide the data points for a capacity metric. Use multiple metric data queries only if you are performing a math expression on returned data.
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
The name of the metric.
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
Describes the dimension of a metric.
The name of the dimension.
The value of the dimension.
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide.
The most commonly used statistics for predictive scaling are Average and Sum.
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
The predictive scaling mode. Defaults to ForecastOnly if not specified.
The amount of time, in seconds, by which the instance launch time can be advanced. For example, the forecast says to add capacity at 10:00 AM, and you choose to pre-launch instances by 5 minutes. In that case, the instances will be launched at 9:55 AM. The intention is to give resources time to be provisioned. It can take a few minutes to launch an EC2 instance. The actual amount of time required depends on several factors, such as the size of the instance and whether there are startup scripts to complete.
The value must be less than the forecast interval duration of 3600 seconds (60 minutes). Defaults to 300 seconds if not specified.
Defines the behavior that should be applied if the forecast capacity approaches or exceeds the maximum capacity of the Auto Scaling group. Defaults to HonorMaxCapacity if not specified.
The following are possible values:
HonorMaxCapacity - Amazon EC2 Auto Scaling cannot scale out capacity higher than the maximum capacity. The maximum capacity is enforced as a hard limit.
IncreaseMaxCapacity - Amazon EC2 Auto Scaling can scale out capacity higher than the maximum capacity when the forecast capacity is close to or exceeds the maximum capacity. The upper limit is determined by the forecasted capacity and the value for MaxCapacityBuffer.
The size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity. The value is specified as a percentage relative to the forecast capacity. For example, if the buffer is 10, this means a 10 percent buffer, such that if the forecast capacity is 50, and the maximum capacity is 40, then the effective maximum capacity is 55.
If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed forecast capacity.
Required if the MaxCapacityBreachBehavior property is set to IncreaseMaxCapacity, and cannot be used otherwise.
dict
Response Syntax
{
'PolicyARN': 'string',
'Alarms': [
{
'AlarmName': 'string',
'AlarmARN': 'string'
},
]
}
Response Structure
(dict) --
Contains the output of PutScalingPolicy.
PolicyARN (string) --
The Amazon Resource Name (ARN) of the policy.
Alarms (list) --
The CloudWatch alarms created for the target tracking scaling policy.
(dict) --
Describes an alarm.
AlarmName (string) --
The name of the alarm.
AlarmARN (string) --
The Amazon Resource Name (ARN) of the alarm.
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example adds the specified policy to the specified Auto Scaling group.
response = client.put_scaling_policy(
AutoScalingGroupName='my-auto-scaling-group',
PolicyName='alb1000-target-tracking-scaling-policy',
PolicyType='TargetTrackingScaling',
TargetTrackingConfiguration={
'PredefinedMetricSpecification': {
'PredefinedMetricType': 'ALBRequestCountPerTarget',
'ResourceLabel': 'app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff',
},
'TargetValue': 1000.0,
},
)
print(response)
Expected Output:
{
'Alarms': [
{
'AlarmARN': 'arn:aws:cloudwatch:us-west-2:123456789012:alarm:TargetTracking-my-asg-AlarmHigh-fc0e4183-23ac-497e-9992-691c9980c38e',
'AlarmName': 'TargetTracking-my-asg-AlarmHigh-fc0e4183-23ac-497e-9992-691c9980c38e',
},
{
'AlarmARN': 'arn:aws:cloudwatch:us-west-2:123456789012:alarm:TargetTracking-my-asg-AlarmLow-61a39305-ed0c-47af-bd9e-471a352ee1a2',
'AlarmName': 'TargetTracking-my-asg-AlarmLow-61a39305-ed0c-47af-bd9e-471a352ee1a2',
},
],
'PolicyARN': 'arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:228f02c2-c665-4bfd-aaac-8b04080bea3c:autoScalingGroupName/my-auto-scaling-group:policyName/alb1000-target-tracking-scaling-policy',
'ResponseMetadata': {
'...': '...',
},
}
put_scheduled_update_group_action
(**kwargs)¶Creates or updates a scheduled scaling action for an Auto Scaling group.
For more information, see Scheduled scaling in the Amazon EC2 Auto Scaling User Guide .
You can view the scheduled actions for an Auto Scaling group using the DescribeScheduledActions API call. If you are no longer using a scheduled action, you can delete it by calling the DeleteScheduledAction API.
If you try to schedule your action in the past, Amazon EC2 Auto Scaling returns an error message.
See also: AWS API Documentation
Request Syntax
response = client.put_scheduled_update_group_action(
AutoScalingGroupName='string',
ScheduledActionName='string',
Time=datetime(2015, 1, 1),
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
Recurrence='string',
MinSize=123,
MaxSize=123,
DesiredCapacity=123,
TimeZone='string'
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The name of this scaling action.
The date and time for this action to start, in YYYY-MM-DDThh:mm:ssZ format in UTC/GMT only and in quotes (for example, "2021-06-01T00:00:00Z").
If you specify Recurrence and StartTime, Amazon EC2 Auto Scaling performs the action at this time, and then performs the action based on the specified recurrence.
The date and time for the recurring schedule to end, in UTC. For example, "2021-06-01T00:00:00Z".
The recurring schedule for this action. This format consists of five fields separated by white spaces: [Minute] [Hour] [Day_of_Month] [Month_of_Year] [Day_of_Week]. The value must be in quotes (for example, "30 0 1 1,6,12 *"). For more information about this format, see Crontab.
When StartTime and EndTime are specified with Recurrence, they form the boundaries of when the recurring action starts and stops.
Cron expressions use Universal Coordinated Time (UTC) by default.
The desired capacity is the initial capacity of the Auto Scaling group after the scheduled action runs and the capacity it attempts to maintain. It can scale beyond this capacity if you add more scaling conditions.
Note
You must specify at least one of the following properties: MaxSize, MinSize, or DesiredCapacity.
Specifies the time zone for a cron expression. If a time zone is not provided, UTC is used by default.
Valid values are the canonical names of the IANA time zones, derived from the IANA Time Zone Database (such as Etc/GMT+9 or Pacific/Tahiti). For more information, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
None
Exceptions
AutoScaling.Client.exceptions.AlreadyExistsFault
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example adds the specified scheduled action to the specified Auto Scaling group.
response = client.put_scheduled_update_group_action(
AutoScalingGroupName='my-auto-scaling-group',
DesiredCapacity=4,
EndTime=datetime(2014, 5, 12, 8, 0, 0, 0, 132, 0),
MaxSize=6,
MinSize=2,
ScheduledActionName='my-scheduled-action',
StartTime=datetime(2014, 5, 12, 8, 0, 0, 0, 132, 0),
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
put_warm_pool
(**kwargs)¶Creates or updates a warm pool for the specified Auto Scaling group. A warm pool is a pool of pre-initialized EC2 instances that sits alongside the Auto Scaling group. Whenever your application needs to scale out, the Auto Scaling group can draw on the warm pool to meet its new desired capacity. For more information and example configurations, see Warm pools for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
This operation must be called from the Region in which the Auto Scaling group was created. This operation cannot be called on an Auto Scaling group that has a mixed instances policy or a launch template or launch configuration that requests Spot Instances.
You can view the instances in the warm pool using the DescribeWarmPool API call. If you are no longer using a warm pool, you can delete it by calling the DeleteWarmPool API.
See also: AWS API Documentation
Request Syntax
response = client.put_warm_pool(
AutoScalingGroupName='string',
MaxGroupPreparedCapacity=123,
MinSize=123,
PoolState='Stopped'|'Running'|'Hibernated',
InstanceReusePolicy={
'ReuseOnScaleIn': True|False
}
)
[REQUIRED]
The name of the Auto Scaling group.
Specifies the maximum number of instances that are allowed to be in the warm pool or in any state except Terminated for the Auto Scaling group. This is an optional property. Specify it only if you do not want the warm pool size to be determined by the difference between the group's maximum capacity and its desired capacity.
Warning
If a value for MaxGroupPreparedCapacity is not specified, Amazon EC2 Auto Scaling launches and maintains the difference between the group's maximum capacity and its desired capacity. If you specify a value for MaxGroupPreparedCapacity, Amazon EC2 Auto Scaling uses the difference between the MaxGroupPreparedCapacity and the desired capacity instead.
The size of the warm pool is dynamic. Only when MaxGroupPreparedCapacity and MinSize are set to the same value does the warm pool have an absolute size.
If the desired capacity of the Auto Scaling group is higher than the MaxGroupPreparedCapacity, the capacity of the warm pool is 0, unless you specify a value for MinSize. To remove a value that you previously set, include the property but specify -1 for the value.
Sets the instance state to transition to after the lifecycle actions are complete. Default is Stopped.
Indicates whether instances in the Auto Scaling group can be returned to the warm pool on scale in. The default is to terminate instances in the Auto Scaling group when the group scales in.
Specifies whether instances in the Auto Scaling group can be returned to the warm pool on scale in.
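The example at the end of this entry sizes the warm pool with MinSize only. As a hedged sketch of the alternative approach, the following caps the warm pool with MaxGroupPreparedCapacity (the group name and values are placeholders):
response = client.put_warm_pool(
    AutoScalingGroupName='my-auto-scaling-group',
    # The warm pool maintains the difference between MaxGroupPreparedCapacity
    # and the group's current desired capacity.
    MaxGroupPreparedCapacity=10,
    PoolState='Stopped',
    InstanceReusePolicy={
        'ReuseOnScaleIn': False,
    },
)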
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example creates a warm pool for the specified Auto Scaling group.
response = client.put_warm_pool(
AutoScalingGroupName='my-auto-scaling-group',
InstanceReusePolicy={
'ReuseOnScaleIn': True,
},
MinSize=30,
PoolState='Hibernated',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
record_lifecycle_action_heartbeat
(**kwargs)¶Records a heartbeat for the lifecycle action associated with the specified token or instance. This extends the timeout by the length of time defined using the PutLifecycleHook API call.
This step is a part of the procedure for adding a lifecycle hook to an Auto Scaling group.
For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide.
See also: AWS API Documentation
Request Syntax
response = client.record_lifecycle_action_heartbeat(
LifecycleHookName='string',
AutoScalingGroupName='string',
LifecycleActionToken='string',
InstanceId='string'
)
[REQUIRED]
The name of the lifecycle hook.
[REQUIRED]
The name of the Auto Scaling group.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example records a lifecycle action heartbeat to keep the instance in a pending state.
response = client.record_lifecycle_action_heartbeat(
AutoScalingGroupName='my-auto-scaling-group',
LifecycleActionToken='bcd2f1b8-9a78-44d3-8a7a-4dd07d7cf635',
LifecycleHookName='my-lifecycle-hook',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
resume_processes
(**kwargs)¶Resumes the specified suspended auto scaling processes, or all suspended processes, for the specified Auto Scaling group.
For more information, see Suspending and resuming scaling processes in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.resume_processes(
AutoScalingGroupName='string',
ScalingProcesses=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
One or more of the following processes:
Launch
Terminate
AddToLoadBalancer
AlarmNotification
AZRebalance
HealthCheck
InstanceRefresh
ReplaceUnhealthy
ScheduledActions
If you omit this property, all processes are specified.
None
Exceptions
AutoScaling.Client.exceptions.ResourceInUseFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example resumes the specified suspended scaling process for the specified Auto Scaling group.
response = client.resume_processes(
AutoScalingGroupName='my-auto-scaling-group',
ScalingProcesses=[
'AlarmNotification',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
rollback_instance_refresh
(**kwargs)¶Cancels an instance refresh that is in progress and rolls back any changes that it made. Amazon EC2 Auto Scaling replaces any instances that were replaced during the instance refresh. This restores your Auto Scaling group to the configuration that it was using before the start of the instance refresh.
This operation is part of the instance refresh feature in Amazon EC2 Auto Scaling, which helps you update instances in your Auto Scaling group after you make configuration changes.
A rollback is not supported in the following situations:
There is no desired configuration specified for the instance refresh.
The Auto Scaling group has a launch template that uses an Amazon Web Services Systems Manager parameter instead of an AMI ID for the ImageId property.
The Auto Scaling group uses the launch template's $Latest or $Default version.
When you receive a successful response from this operation, Amazon EC2 Auto Scaling immediately begins replacing instances. You can check the status of this operation through the DescribeInstanceRefreshes API operation.
See also: AWS API Documentation
Request Syntax
response = client.rollback_instance_refresh(
AutoScalingGroupName='string'
)
[REQUIRED]
The name of the Auto Scaling group.
dict
Response Syntax
{
'InstanceRefreshId': 'string'
}
Response Structure
The instance refresh ID associated with the request. This is the unique ID assigned to the instance refresh when it was started.
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ActiveInstanceRefreshNotFoundFault
AutoScaling.Client.exceptions.IrreversibleInstanceRefreshFault
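This entry does not include a canned example; the following is a hedged sketch of a rollback call with a placeholder group name.
response = client.rollback_instance_refresh(
    AutoScalingGroupName='my-auto-scaling-group',
)
# The unique ID of the instance refresh that is being rolled back.
print(response['InstanceRefreshId'])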
set_desired_capacity
(**kwargs)¶Sets the size of the specified Auto Scaling group.
If a scale-in activity occurs as a result of a new DesiredCapacity value that is lower than the current size of the group, the Auto Scaling group uses its termination policy to determine which instances to terminate.
For more information, see Manual scaling in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.set_desired_capacity(
AutoScalingGroupName='string',
DesiredCapacity=123,
HonorCooldown=True|False
)
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
The desired capacity is the initial capacity of the Auto Scaling group after this operation completes and the capacity it attempts to maintain.
None
Exceptions
AutoScaling.Client.exceptions.ScalingActivityInProgressFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example sets the desired capacity for the specified Auto Scaling group.
response = client.set_desired_capacity(
AutoScalingGroupName='my-auto-scaling-group',
DesiredCapacity=2,
HonorCooldown=True,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
set_instance_health
(**kwargs)¶Sets the health status of the specified instance.
For more information, see Health checks for Auto Scaling instances in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.set_instance_health(
InstanceId='string',
HealthStatus='string',
ShouldRespectGracePeriod=True|False
)
[REQUIRED]
The ID of the instance.
[REQUIRED]
The health status of the instance. Set to Healthy to have the instance remain in service. Set to Unhealthy to have the instance be out of service. Amazon EC2 Auto Scaling terminates and replaces the unhealthy instance.
If the Auto Scaling group of the specified instance has a HealthCheckGracePeriod specified for the group, by default, this call respects the grace period. Set this to False to have the call not respect the grace period associated with the group.
For more information about the health check grace period, see CreateAutoScalingGroup in the Amazon EC2 Auto Scaling API Reference.
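As a hedged sketch that ignores the grace period (the instance ID is a placeholder):
response = client.set_instance_health(
    InstanceId='i-1234567890abcdef0',
    HealthStatus='Unhealthy',
    # Replace the instance even if it is still within the group's
    # health check grace period.
    ShouldRespectGracePeriod=False,
)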
None
Exceptions
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example sets the health status of the specified instance to Unhealthy.
response = client.set_instance_health(
HealthStatus='Unhealthy',
InstanceId='i-93633f9b',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
set_instance_protection
(**kwargs)¶Updates the instance protection settings of the specified instances. This operation cannot be called on instances in a warm pool.
For more information about preventing instances that are part of an Auto Scaling group from terminating on scale in, see Using instance scale-in protection in the Amazon EC2 Auto Scaling User Guide .
If you exceed your maximum limit of instance IDs, which is 50 per Auto Scaling group, the call fails.
See also: AWS API Documentation
Request Syntax
response = client.set_instance_protection(
InstanceIds=[
'string',
],
AutoScalingGroupName='string',
ProtectedFromScaleIn=True|False
)
[REQUIRED]
One or more instance IDs. You can specify up to 50 instances.
[REQUIRED]
The name of the Auto Scaling group.
[REQUIRED]
Indicates whether the instance is protected from termination by Amazon EC2 Auto Scaling when scaling in.
dict
Response Syntax
{}
Response Structure
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example enables instance protection for the specified instance.
response = client.set_instance_protection(
AutoScalingGroupName='my-auto-scaling-group',
InstanceIds=[
'i-93633f9b',
],
ProtectedFromScaleIn=True,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
This example disables instance protection for the specified instance.
response = client.set_instance_protection(
AutoScalingGroupName='my-auto-scaling-group',
InstanceIds=[
'i-93633f9b',
],
ProtectedFromScaleIn=False,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
start_instance_refresh
(**kwargs)¶Starts an instance refresh. During an instance refresh, Amazon EC2 Auto Scaling performs a rolling update of instances in an Auto Scaling group. Instances are terminated first and then replaced, which temporarily reduces the capacity available within your Auto Scaling group.
This operation is part of the instance refresh feature in Amazon EC2 Auto Scaling, which helps you update instances in your Auto Scaling group. This feature is helpful, for example, when you have a new AMI or a new user data script. You just need to create a new launch template that specifies the new AMI or user data script. Then start an instance refresh to immediately begin the process of updating instances in the group.
If successful, the request's response contains a unique ID that you can use to track the progress of the instance refresh. To query its status, call the DescribeInstanceRefreshes API. To describe the instance refreshes that have already run, call the DescribeInstanceRefreshes API. To cancel an instance refresh that is in progress, use the CancelInstanceRefresh API.
An instance refresh might fail for several reasons, such as EC2 launch failures, misconfigured health checks, or not ignoring or allowing the termination of instances that are in Standby state or protected from scale in. You can monitor for failed EC2 launches using the scaling activities. To find the scaling activities, call the DescribeScalingActivities API.
If you enable auto rollback, your Auto Scaling group will be rolled back automatically when the instance refresh fails. You can enable this feature before starting an instance refresh by specifying the AutoRollback property in the instance refresh preferences. Otherwise, to roll back an instance refresh before it finishes, use the RollbackInstanceRefresh API.
See also: AWS API Documentation
Request Syntax
response = client.start_instance_refresh(
AutoScalingGroupName='string',
Strategy='Rolling',
DesiredConfiguration={
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'MixedInstancesPolicy': {
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'Overrides': [
{
'InstanceType': 'string',
'WeightedCapacity': 'string',
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'InstanceRequirements': {
'VCpuCount': {
'Min': 123,
'Max': 123
},
'MemoryMiB': {
'Min': 123,
'Max': 123
},
'CpuManufacturers': [
'intel'|'amd'|'amazon-web-services',
],
'MemoryGiBPerVCpu': {
'Min': 123.0,
'Max': 123.0
},
'ExcludedInstanceTypes': [
'string',
],
'InstanceGenerations': [
'current'|'previous',
],
'SpotMaxPricePercentageOverLowestPrice': 123,
'OnDemandMaxPricePercentageOverLowestPrice': 123,
'BareMetal': 'included'|'excluded'|'required',
'BurstablePerformance': 'included'|'excluded'|'required',
'RequireHibernateSupport': True|False,
'NetworkInterfaceCount': {
'Min': 123,
'Max': 123
},
'LocalStorage': 'included'|'excluded'|'required',
'LocalStorageTypes': [
'hdd'|'ssd',
],
'TotalLocalStorageGB': {
'Min': 123.0,
'Max': 123.0
},
'BaselineEbsBandwidthMbps': {
'Min': 123,
'Max': 123
},
'AcceleratorTypes': [
'gpu'|'fpga'|'inference',
],
'AcceleratorCount': {
'Min': 123,
'Max': 123
},
'AcceleratorManufacturers': [
'nvidia'|'amd'|'amazon-web-services'|'xilinx',
],
'AcceleratorNames': [
'a100'|'v100'|'k80'|'t4'|'m60'|'radeon-pro-v520'|'vu9p',
],
'AcceleratorTotalMemoryMiB': {
'Min': 123,
'Max': 123
},
'NetworkBandwidthGbps': {
'Min': 123.0,
'Max': 123.0
},
'AllowedInstanceTypes': [
'string',
]
}
},
]
},
'InstancesDistribution': {
'OnDemandAllocationStrategy': 'string',
'OnDemandBaseCapacity': 123,
'OnDemandPercentageAboveBaseCapacity': 123,
'SpotAllocationStrategy': 'string',
'SpotInstancePools': 123,
'SpotMaxPrice': 'string'
}
}
},
Preferences={
'MinHealthyPercentage': 123,
'InstanceWarmup': 123,
'CheckpointPercentages': [
123,
],
'CheckpointDelay': 123,
'SkipMatching': True|False,
'AutoRollback': True|False,
'ScaleInProtectedInstances': 'Refresh'|'Ignore'|'Wait',
'StandbyInstances': 'Terminate'|'Ignore'|'Wait'
}
)
[REQUIRED]
The name of the Auto Scaling group.
The strategy to use for the instance refresh. The only valid value is Rolling.
The desired configuration. For example, the desired configuration can specify a new launch template or a new version of the current launch template.
Once the instance refresh succeeds, Amazon EC2 Auto Scaling updates the settings of the Auto Scaling group to reflect the new desired configuration.
Note
When you specify a new launch template or a new version of the current launch template for your desired configuration, consider enabling the SkipMatching property in preferences. If it's enabled, Amazon EC2 Auto Scaling skips replacing instances that already use the specified launch template and instance types. This can help you reduce the number of replacements that are required to apply updates.
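As a hedged sketch of a minimal rolling refresh onto a new launch template version (the group name and launch template ID are placeholders):
response = client.start_instance_refresh(
    AutoScalingGroupName='my-auto-scaling-group',
    Strategy='Rolling',
    DesiredConfiguration={
        'LaunchTemplate': {
            'LaunchTemplateId': 'lt-0123456789abcdef0',
            'Version': '$Latest',
        },
    },
    Preferences={
        'MinHealthyPercentage': 90,
        'InstanceWarmup': 300,
        # Skip instances that already use the desired configuration.
        'SkipMatching': True,
        # Roll back automatically if the refresh fails.
        'AutoRollback': True,
    },
)
print(response['InstanceRefreshId'])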
Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling uses to launch Amazon EC2 instances. For more information about launch templates, see Launch templates in the Amazon EC2 Auto Scaling User Guide .
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
Use this structure to launch multiple instance types and On-Demand Instances and Spot Instances within a single Auto Scaling group.
A mixed instances policy contains information that Amazon EC2 Auto Scaling can use to launch instances and help optimize your costs. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide .
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
The launch template.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
Any properties that you specify override the same properties in the launch template.
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
Override the instance type that is specified in the launch template.
Use multiple instance types.
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
The instance type, such as m3.xlarge. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon Elastic Compute Cloud User Guide.
You can specify up to 40 instance types per Auto Scaling group.
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a WeightedCapacity of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configuring instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide. Value must be in the range of 1–999.
If you specify a value for WeightedCapacity for one instance type, you must specify a value for WeightedCapacity for all of them.
Warning
Every Auto Scaling group has three size parameters (DesiredCapacity, MaxSize, and MinSize). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the LaunchTemplate definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide.
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the LaunchTemplate definition count towards this limit.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template.
Note
If you specify InstanceRequirements, you can't specify InstanceType.
The minimum and maximum number of vCPUs for an instance type.
The minimum number of vCPUs.
The maximum number of vCPUs.
The minimum and maximum instance memory size for an instance type, in MiB.
The memory minimum in MiB.
The memory maximum in MiB.
Lists which specific CPU manufacturers to include.
For instance types with Intel CPUs, specify intel.
For instance types with AMD CPUs, specify amd.
For instance types with Amazon Web Services CPUs, specify amazon-web-services.
Note
Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
The memory minimum in GiB.
The memory maximum in GiB.
The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (*), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types.
Note
If you specify ExcludedInstanceTypes, you can't specify AllowedInstanceTypes.
Default: No excluded instance types
Indicates whether current or previous generation instance types are included.
Valid values: current | previous
The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide for Linux Instances.
Default: Any current or previous generation
The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999
.
If you set DesiredCapacityType
to vcpu
or memory-mib
, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 100
The price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999
.
If you set DesiredCapacityType
to vcpu
or memory-mib
, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 20
Indicates whether bare metal instance types are included, excluded, or required.
Default: excluded
Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide for Linux Instances .
Default: excluded
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: false
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
The minimum number of network interfaces.
The maximum number of network interfaces.
Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide for Linux Instances .
Default: included
Indicates the type of local storage that is required.
Valid values: hdd | ssd
Default: Any local storage type
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
The storage minimum in GB.
The storage maximum in GB.
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide for Linux Instances .
Default: No minimum or maximum limits
The minimum value in Mbps.
The maximum value in Mbps.
Lists the accelerator types that must be on an instance type.
Valid values: gpu | fpga | inference
Default: Any accelerator type
The minimum and maximum number of accelerators (GPUs, FPGAs, or Amazon Web Services Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set Max
to 0
.
Default: No minimum or maximum limits
The minimum value.
The maximum value.
Indicates whether instance types must have accelerators by specific manufacturers.
Valid values: nvidia | amd | amazon-web-services | xilinx
Default: Any manufacturer
Lists the accelerators that must be on an instance type.
Valid values: a100 | v100 | k80 | t4 | m60 | radeon-pro-v520 | vu9p
Default: Any accelerator
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
The memory minimum in MiB.
The memory maximum in MiB.
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
The minimum amount of network bandwidth, in gigabits per second (Gbps).
The maximum amount of network bandwidth, in gigabits per second (Gbps).
The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wild cards, represented by an asterisk ( *
), to allow an instance type, size, or generation. The following are examples: m5.8xlarge
, c5*.*
, m5a.*
, r*
, *3*
.
For example, if you specify c5*
, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*
, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types.
Note
If you specify AllowedInstanceTypes
, you can't specify ExcludedInstanceTypes
.
Default: All instance types
The instances distribution.
The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
lowest-price
Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify InstanceRequirements.
prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify InstanceRequirements and cannot be used for groups that do.
The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales.
This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond OnDemandBaseCapacity
. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100
The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
capacity-optimized
Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use capacity-optimized-prioritized
.
capacity-optimized-prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to prioritized
, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify InstanceRequirements.
lowest-price
Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the SpotInstancePools
property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
price-capacity-optimized (recommended)
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the SpotAllocationStrategy
is lowest-price
. Value must be in the range of 1–20.
Default: 2
The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value.
Warning
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
Sets your preferences for the instance refresh so that it performs as expected when you start it. Includes the instance warmup time, the minimum healthy percentage, and the behaviors that you want Amazon EC2 Auto Scaling to use if instances that are in Standby
state or protected from scale in are found. You can also choose to enable additional features, such as checkpoints, skip matching, and automatic rollback, which are described below.
The amount of capacity in the Auto Scaling group that must pass your group's health checks to allow the operation to continue. The value is expressed as a percentage of the desired capacity of the Auto Scaling group (rounded up to the nearest integer). The default is 90
.
Setting the minimum healthy percentage to 100 percent limits the rate of replacement to one instance at a time. In contrast, setting it to 0 percent has the effect of replacing all instances at the same time.
A time period, in seconds, during which an instance refresh waits before moving on to replacing the next instance after a new instance enters the InService
state.
This property is not required for normal usage. Instead, use the DefaultInstanceWarmup
property of the Auto Scaling group. The InstanceWarmup
and DefaultInstanceWarmup
properties work the same way. Only specify this property if you must override the DefaultInstanceWarmup
property.
If you do not specify this property, the instance warmup by default is the value of the DefaultInstanceWarmup
property, if defined (which is recommended in all cases), or the HealthCheckGracePeriod
property otherwise.
(Optional) Threshold values for each checkpoint in ascending order. Each number must be unique. To replace all instances in the Auto Scaling group, the last number in the array must be 100
.
For usage examples, see Adding checkpoints to an instance refresh in the Amazon EC2 Auto Scaling User Guide .
(Optional) The amount of time, in seconds, to wait after a checkpoint before continuing. This property is optional, but if you specify a value for it, you must also specify a value for CheckpointPercentages
. If you specify a value for CheckpointPercentages
and not for CheckpointDelay
, the CheckpointDelay
defaults to 3600
(1 hour).
(Optional) Indicates whether skip matching is enabled. If enabled ( true
), then Amazon EC2 Auto Scaling skips replacing instances that match the desired configuration. If no desired configuration is specified, then it skips replacing instances that have the same launch template and instance types that the Auto Scaling group was using before the start of the instance refresh. The default is false
.
For more information, see Use an instance refresh with skip matching in the Amazon EC2 Auto Scaling User Guide .
(Optional) Indicates whether to roll back the Auto Scaling group to its previous configuration if the instance refresh fails. The default is false
.
A rollback is not supported in the following situations:
- The Auto Scaling group has a launch template that uses an Amazon Web Services Systems Manager parameter instead of an AMI ID for the ImageId property.
- The Auto Scaling group uses the launch template's $Latest or $Default version.
Choose the behavior that you want Amazon EC2 Auto Scaling to use if instances protected from scale in are found.
The following lists the valid values:
Refresh
Amazon EC2 Auto Scaling replaces instances that are protected from scale in.
Ignore
Amazon EC2 Auto Scaling ignores instances that are protected from scale in and continues to replace instances that are not protected.
Wait (default)
Amazon EC2 Auto Scaling waits one hour for you to remove scale-in protection. Otherwise, the instance refresh will fail.
Choose the behavior that you want Amazon EC2 Auto Scaling to use if instances in Standby
state are found.
The following lists the valid values:
Terminate
Amazon EC2 Auto Scaling terminates instances that are in Standby
.
Ignore
Amazon EC2 Auto Scaling ignores instances that are in Standby
and continues to replace instances that are in the InService
state.
Wait (default)
Amazon EC2 Auto Scaling waits one hour for you to return the instances to service. Otherwise, the instance refresh will fail.
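For reference, one way these preferences might be combined in a single call is sketched below. This is a minimal, illustrative sketch: the group and template names are hypothetical, and the checkpoint, rollback, and percentage values are assumptions rather than recommendations.
import boto3

client = boto3.client('autoscaling')

response = client.start_instance_refresh(
    AutoScalingGroupName='my-auto-scaling-group',        # hypothetical group name
    DesiredConfiguration={
        'LaunchTemplate': {
            'LaunchTemplateName': 'my-template-for-auto-scaling',  # hypothetical template
            'Version': '1',   # a pinned version; rollback is not supported with $Latest or $Default
        },
    },
    Preferences={
        'MinHealthyPercentage': 90,          # keep 90% of desired capacity healthy during replacement
        'CheckpointPercentages': [50, 100],  # pause at 50% and again when all instances are replaced
        'CheckpointDelay': 3600,             # wait one hour at each checkpoint
        'SkipMatching': True,                # skip instances that already match the desired configuration
        'AutoRollback': True,                # requires a desired configuration to be specified
        'ScaleInProtectedInstances': 'Ignore',
        'StandbyInstances': 'Ignore',
    },
)
print(response['InstanceRefreshId'])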
dict
Response Syntax
{
'InstanceRefreshId': 'string'
}
Response Structure
(dict) --
InstanceRefreshId (string) --
A unique ID for tracking the progress of the instance refresh.
Exceptions
AutoScaling.Client.exceptions.LimitExceededFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.InstanceRefreshInProgressFault
Examples
This example starts an instance refresh for the specified Auto Scaling group.
response = client.start_instance_refresh(
AutoScalingGroupName='my-auto-scaling-group',
DesiredConfiguration={
'LaunchTemplate': {
'LaunchTemplateName': 'my-template-for-auto-scaling',
'Version': '$Latest',
},
},
Preferences={
'InstanceWarmup': 400,
'MinHealthyPercentage': 90,
'SkipMatching': True,
},
)
print(response)
Expected Output:
{
'InstanceRefreshId': '08b91cf7-8fa6-48af-b6a6-d227f40f1b9b',
'ResponseMetadata': {
'...': '...',
},
}
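A common follow-up is to track the refresh with the DescribeInstanceRefreshes API, using the returned InstanceRefreshId. The sketch below assumes the hypothetical group name and refresh ID from the example above; the 30-second polling interval is an arbitrary choice.
import time
import boto3

client = boto3.client('autoscaling')

refresh_id = '08b91cf7-8fa6-48af-b6a6-d227f40f1b9b'  # taken from response['InstanceRefreshId']

# Poll until the refresh reaches a terminal status.
while True:
    result = client.describe_instance_refreshes(
        AutoScalingGroupName='my-auto-scaling-group',
        InstanceRefreshIds=[refresh_id],
    )
    refresh = result['InstanceRefreshes'][0]
    print(refresh['Status'], refresh.get('PercentageComplete'))
    if refresh['Status'] in ('Successful', 'Failed', 'Cancelled',
                             'RollbackSuccessful', 'RollbackFailed'):
        break
    time.sleep(30)  # arbitrary polling interval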
suspend_processes
(**kwargs)¶Suspends the specified auto scaling processes, or all processes, for the specified Auto Scaling group.
If you suspend either the Launch
or Terminate
process types, it can prevent other process types from functioning properly. For more information, see Suspending and resuming scaling processes in the Amazon EC2 Auto Scaling User Guide .
To resume processes that have been suspended, call the ResumeProcesses API.
See also: AWS API Documentation
Request Syntax
response = client.suspend_processes(
AutoScalingGroupName='string',
ScalingProcesses=[
'string',
]
)
[REQUIRED]
The name of the Auto Scaling group.
One or more of the following processes:
Launch
Terminate
AddToLoadBalancer
AlarmNotification
AZRebalance
HealthCheck
InstanceRefresh
ReplaceUnhealthy
ScheduledActions
If you omit this property, all processes are specified.
None
Exceptions
AutoScaling.Client.exceptions.ResourceInUseFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example suspends the specified scaling process for the specified Auto Scaling group.
response = client.suspend_processes(
AutoScalingGroupName='my-auto-scaling-group',
ScalingProcesses=[
'AlarmNotification',
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
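To lift the suspension later, a corresponding call to resume_processes with the same arguments is one option; this sketch assumes the same hypothetical group and process as the example above.
import boto3

client = boto3.client('autoscaling')

# Resume the process that was suspended in the previous example.
response = client.resume_processes(
    AutoScalingGroupName='my-auto-scaling-group',
    ScalingProcesses=[
        'AlarmNotification',
    ],
)
print(response)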
terminate_instance_in_auto_scaling_group
(**kwargs)¶Terminates the specified instance and optionally adjusts the desired group size. This operation cannot be called on instances in a warm pool.
This call simply makes a termination request. The instance is not terminated immediately. When an instance is terminated, the instance status changes to terminated
. You can't connect to or start an instance after you've terminated it.
If you do not specify the option to decrement the desired capacity, Amazon EC2 Auto Scaling launches instances to replace the ones that are terminated.
By default, Amazon EC2 Auto Scaling balances instances across all Availability Zones. If you decrement the desired capacity, your Auto Scaling group can become unbalanced between Availability Zones. Amazon EC2 Auto Scaling tries to rebalance the group, and rebalancing might terminate instances in other zones. For more information, see Rebalancing activities in the Amazon EC2 Auto Scaling User Guide .
See also: AWS API Documentation
Request Syntax
response = client.terminate_instance_in_auto_scaling_group(
InstanceId='string',
ShouldDecrementDesiredCapacity=True|False
)
[REQUIRED]
The ID of the instance.
[REQUIRED]
Indicates whether terminating the instance also decrements the size of the Auto Scaling group.
dict
Response Syntax
{
'Activity': {
'ActivityId': 'string',
'AutoScalingGroupName': 'string',
'Description': 'string',
'Cause': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'StatusCode': 'PendingSpotBidPlacement'|'WaitingForSpotInstanceRequestId'|'WaitingForSpotInstanceId'|'WaitingForInstanceId'|'PreInService'|'InProgress'|'WaitingForELBConnectionDraining'|'MidLifecycleAction'|'WaitingForInstanceWarmup'|'Successful'|'Failed'|'Cancelled',
'StatusMessage': 'string',
'Progress': 123,
'Details': 'string',
'AutoScalingGroupState': 'string',
'AutoScalingGroupARN': 'string'
}
}
Response Structure
(dict) --
Activity (dict) --
A scaling activity.
ActivityId (string) --
The ID of the activity.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Description (string) --
A friendly, more verbose description of the activity.
Cause (string) --
The reason the activity began.
StartTime (datetime) --
The start time of the activity.
EndTime (datetime) --
The end time of the activity.
StatusCode (string) --
The current status of the activity.
StatusMessage (string) --
A friendly, more verbose description of the activity status.
Progress (integer) --
A value between 0 and 100 that indicates the progress of the activity.
Details (string) --
The details about the activity.
AutoScalingGroupState (string) --
The state of the Auto Scaling group, which is either InService
or Deleted
.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
Exceptions
AutoScaling.Client.exceptions.ScalingActivityInProgressFault
AutoScaling.Client.exceptions.ResourceContentionFault
Examples
This example terminates the specified instance from the specified Auto Scaling group without updating the size of the group. Auto Scaling launches a replacement instance after the specified instance terminates.
response = client.terminate_instance_in_auto_scaling_group(
InstanceId='i-93633f9b',
ShouldDecrementDesiredCapacity=False,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
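If the intent is to shrink the group rather than have the instance replaced, the same call can decrement the desired capacity. The sketch below reuses the hypothetical instance ID from the example above and inspects the returned scaling activity.
import boto3

client = boto3.client('autoscaling')

response = client.terminate_instance_in_auto_scaling_group(
    InstanceId='i-93633f9b',              # hypothetical instance ID from the example above
    ShouldDecrementDesiredCapacity=True,  # shrink the group; do not launch a replacement
)
activity = response['Activity']
print(activity['StatusCode'], activity['Progress'], activity['Description'])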
update_auto_scaling_group
(**kwargs)¶We strongly recommend that all Auto Scaling groups use launch templates to ensure full functionality for Amazon EC2 Auto Scaling and Amazon EC2.
Updates the configuration for the specified Auto Scaling group.
To update an Auto Scaling group, specify the name of the group and the property that you want to change. Any properties that you don't specify are not changed by this update request. The new settings take effect on any scaling activities after this call returns.
If you associate a new launch configuration or template with an Auto Scaling group, all new instances will get the updated configuration. Existing instances continue to run with the configuration that they were originally launched with. When you update a group to specify a mixed instances policy instead of a launch configuration or template, existing instances may be replaced to match the new purchasing options that you specified in the policy. For example, if the group currently has 100% On-Demand capacity and the policy specifies 50% Spot capacity, this means that half of your instances will be gradually terminated and relaunched as Spot Instances. When replacing instances, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that updating your group does not compromise the performance or availability of your application.
Note the following about changing DesiredCapacity, MaxSize, or MinSize:
- If a scale-in activity occurs as a result of a new DesiredCapacity value that is lower than the current size of the group, the Auto Scaling group uses its termination policy to determine which instances to terminate.
- If you specify a new value for MinSize without specifying a value for DesiredCapacity, and the new MinSize is larger than the current size of the group, this sets the group's DesiredCapacity to the new MinSize value.
- If you specify a new value for MaxSize without specifying a value for DesiredCapacity, and the new MaxSize is smaller than the current size of the group, this sets the group's DesiredCapacity to the new MaxSize value.
To see which properties have been set, call the DescribeAutoScalingGroups API. To view the scaling policies for an Auto Scaling group, call the DescribePolicies API. If the group has scaling policies, you can update them by calling the PutScalingPolicy API.
See also: AWS API Documentation
Request Syntax
response = client.update_auto_scaling_group(
AutoScalingGroupName='string',
LaunchConfigurationName='string',
LaunchTemplate={
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
MixedInstancesPolicy={
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'Overrides': [
{
'InstanceType': 'string',
'WeightedCapacity': 'string',
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'InstanceRequirements': {
'VCpuCount': {
'Min': 123,
'Max': 123
},
'MemoryMiB': {
'Min': 123,
'Max': 123
},
'CpuManufacturers': [
'intel'|'amd'|'amazon-web-services',
],
'MemoryGiBPerVCpu': {
'Min': 123.0,
'Max': 123.0
},
'ExcludedInstanceTypes': [
'string',
],
'InstanceGenerations': [
'current'|'previous',
],
'SpotMaxPricePercentageOverLowestPrice': 123,
'OnDemandMaxPricePercentageOverLowestPrice': 123,
'BareMetal': 'included'|'excluded'|'required',
'BurstablePerformance': 'included'|'excluded'|'required',
'RequireHibernateSupport': True|False,
'NetworkInterfaceCount': {
'Min': 123,
'Max': 123
},
'LocalStorage': 'included'|'excluded'|'required',
'LocalStorageTypes': [
'hdd'|'ssd',
],
'TotalLocalStorageGB': {
'Min': 123.0,
'Max': 123.0
},
'BaselineEbsBandwidthMbps': {
'Min': 123,
'Max': 123
},
'AcceleratorTypes': [
'gpu'|'fpga'|'inference',
],
'AcceleratorCount': {
'Min': 123,
'Max': 123
},
'AcceleratorManufacturers': [
'nvidia'|'amd'|'amazon-web-services'|'xilinx',
],
'AcceleratorNames': [
'a100'|'v100'|'k80'|'t4'|'m60'|'radeon-pro-v520'|'vu9p',
],
'AcceleratorTotalMemoryMiB': {
'Min': 123,
'Max': 123
},
'NetworkBandwidthGbps': {
'Min': 123.0,
'Max': 123.0
},
'AllowedInstanceTypes': [
'string',
]
}
},
]
},
'InstancesDistribution': {
'OnDemandAllocationStrategy': 'string',
'OnDemandBaseCapacity': 123,
'OnDemandPercentageAboveBaseCapacity': 123,
'SpotAllocationStrategy': 'string',
'SpotInstancePools': 123,
'SpotMaxPrice': 'string'
}
},
MinSize=123,
MaxSize=123,
DesiredCapacity=123,
DefaultCooldown=123,
AvailabilityZones=[
'string',
],
HealthCheckType='string',
HealthCheckGracePeriod=123,
PlacementGroup='string',
VPCZoneIdentifier='string',
TerminationPolicies=[
'string',
],
NewInstancesProtectedFromScaleIn=True|False,
ServiceLinkedRoleARN='string',
MaxInstanceLifetime=123,
CapacityRebalance=True|False,
Context='string',
DesiredCapacityType='string',
DefaultInstanceWarmup=123
)
[REQUIRED]
The name of the Auto Scaling group.
The name of the launch configuration. If you specify LaunchConfigurationName in your update request, you can't specify LaunchTemplate or MixedInstancesPolicy.
The launch template and version to use to specify the updates. If you specify LaunchTemplate in your update request, you can't specify LaunchConfigurationName or MixedInstancesPolicy.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
The mixed instances policy. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide .
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
The launch template.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
Any properties that you specify override the same properties in the launch template.
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
The instance type, such as m3.xlarge
. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon Elastic Compute Cloud User Guide .
You can specify up to 40 instance types per Auto Scaling group.
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a WeightedCapacity
of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configuring instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide . Value must be in the range of 1–999.
If you specify a value for WeightedCapacity
for one instance type, you must specify a value for WeightedCapacity
for all of them.
Warning
Every Auto Scaling group has three size parameters ( DesiredCapacity
, MaxSize
, and MinSize
). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the LaunchTemplate
definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide .
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the LaunchTemplate
definition count towards this limit.
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template.
Note
If you specify InstanceRequirements
, you can't specify InstanceType
.
The minimum and maximum number of vCPUs for an instance type.
The minimum number of vCPUs.
The maximum number of vCPUs.
The minimum and maximum instance memory size for an instance type, in MiB.
The memory minimum in MiB.
The memory maximum in MiB.
Lists which specific CPU manufacturers to include.
Valid values: intel | amd | amazon-web-services
Note
Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
The memory minimum in GiB.
The memory maximum in GiB.
The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk ( *
), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge
, c5*.*
, m5a.*
, r*
, *3*
.
For example, if you specify c5*
, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*
, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types.
Note
If you specify ExcludedInstanceTypes
, you can't specify AllowedInstanceTypes
.
Default: No excluded instance types
Indicates whether current or previous generation instance types are included.
Valid values: current | previous
The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide for Linux Instances.
Default: Any current or previous generation
The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999
.
If you set DesiredCapacityType
to vcpu
or memory-mib
, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 100
The price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999
.
If you set DesiredCapacityType
to vcpu
or memory-mib
, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 20
Indicates whether bare metal instance types are included, excluded, or required.
Default: excluded
Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide for Linux Instances .
Default: excluded
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: false
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
The minimum number of network interfaces.
The maximum number of network interfaces.
Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide for Linux Instances .
Default: included
Indicates the type of local storage that is required.
Valid values: hdd | ssd
Default: Any local storage type
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
The storage minimum in GB.
The storage maximum in GB.
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide for Linux Instances .
Default: No minimum or maximum limits
The minimum value in Mbps.
The maximum value in Mbps.
Lists the accelerator types that must be on an instance type.
Valid values: gpu | fpga | inference
Default: Any accelerator type
The minimum and maximum number of accelerators (GPUs, FPGAs, or Amazon Web Services Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set Max
to 0
.
Default: No minimum or maximum limits
The minimum value.
The maximum value.
Indicates whether instance types must have accelerators by specific manufacturers.
Valid values: nvidia | amd | amazon-web-services | xilinx
Default: Any manufacturer
Lists the accelerators that must be on an instance type.
Valid values: a100 | v100 | k80 | t4 | m60 | radeon-pro-v520 | vu9p
Default: Any accelerator
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
The memory minimum in MiB.
The memory maximum in MiB.
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
The minimum amount of network bandwidth, in gigabits per second (Gbps).
The maximum amount of network bandwidth, in gigabits per second (Gbps).
The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wild cards, represented by an asterisk ( *
), to allow an instance type, size, or generation. The following are examples: m5.8xlarge
, c5*.*
, m5a.*
, r*
, *3*
.
For example, if you specify c5*
, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*
, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types.
Note
If you specify AllowedInstanceTypes
, you can't specify ExcludedInstanceTypes
.
Default: All instance types
The instances distribution.
The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
lowest-price
Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify InstanceRequirements.
prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify InstanceRequirements and cannot be used for groups that do.
The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales.
This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond OnDemandBaseCapacity
. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100
The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
capacity-optimized
Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use capacity-optimized-prioritized
.
capacity-optimized-prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to prioritized
, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify InstanceRequirements.
lowest-price
Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the SpotInstancePools
property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
price-capacity-optimized (recommended)
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the SpotAllocationStrategy
is lowest-price
. Value must be in the range of 1–20.
Default: 2
The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value.
Warning
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
The maximum size of the Auto Scaling group.
Note
With a mixed instances policy that uses instance weighting, Amazon EC2 Auto Scaling may need to go above MaxSize
to meet your capacity requirements. In this event, Amazon EC2 Auto Scaling will never go above MaxSize
by more than your largest instance weight (weights that define how many units each instance contributes to the desired capacity of the group).
Only needed if you use simple scaling policies.
The amount of time, in seconds, between one scaling activity ending and another one starting due to simple scaling policies. For more information, see Scaling cooldowns for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
One or more Availability Zones for the group.
Determines whether any additional health checks are performed on the instances in this group. Amazon EC2 health checks are always on.
The valid values are EC2
(default), ELB
, and VPC_LATTICE
. The VPC_LATTICE
health check type is reserved for use with VPC Lattice, which is in preview release and is subject to change.
The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking the health status of an EC2 instance that has come into service and marking it unhealthy due to a failed health check. This is useful if your instances do not immediately pass their health checks after they enter the InService state. For more information, see Set the health check grace period for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
The name of an existing placement group into which to launch your instances. For more information, see Placement groups in the Amazon EC2 User Guide for Linux Instances.
Note
A cluster placement group is a logical grouping of instances within a single Availability Zone. You cannot specify multiple Availability Zones and a cluster placement group.
A comma-separated list of subnet IDs for a virtual private cloud (VPC). If you specify VPCZoneIdentifier with AvailabilityZones, the subnets that you specify must reside in those Availability Zones.
Valid values: Default
| AllocationStrategy
| ClosestToNextInstanceHour
| NewestInstance
| OldestInstance
| OldestLaunchConfiguration
| OldestLaunchTemplate
| arn:aws:lambda:region:account-id:function:my-function:my-alias
The unit of measurement for the value specified for desired capacity. Amazon EC2 Auto Scaling supports DesiredCapacityType
for attribute-based instance type selection only. For more information, see Creating an Auto Scaling group using attribute-based instance type selection in the Amazon EC2 Auto Scaling User Guide .
By default, Amazon EC2 Auto Scaling specifies units
, which translates into number of instances.
Valid values: units
| vcpu
| memory-mib
The amount of time, in seconds, until a new instance is considered to have finished initializing and resource consumption to become stable after it enters the InService
state.
During an instance refresh, Amazon EC2 Auto Scaling waits for the warm-up period after it replaces an instance before it moves on to replacing the next instance. Amazon EC2 Auto Scaling also waits for the warm-up period before aggregating the metrics for new instances with existing instances in the Amazon CloudWatch metrics that are used for scaling, resulting in more reliable usage data. For more information, see Set the default instance warmup for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
Warning
To manage various warm-up settings at the group level, we recommend that you set the default instance warmup, even if it is set to 0 seconds . To remove a value that you previously set, include the property but specify -1
for the value. However, we strongly recommend keeping the default instance warmup enabled by specifying a value of 0
or other nominal value.
None
Exceptions
AutoScaling.Client.exceptions.ScalingActivityInProgressFault
AutoScaling.Client.exceptions.ResourceContentionFault
AutoScaling.Client.exceptions.ServiceLinkedRoleFailure
Examples
This example updates multiple properties at the same time.
response = client.update_auto_scaling_group(
AutoScalingGroupName='my-auto-scaling-group',
LaunchTemplate={
'LaunchTemplateName': 'my-template-for-auto-scaling',
'Version': '2',
},
MaxSize=5,
MinSize=1,
NewInstancesProtectedFromScaleIn=True,
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
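The same call can also switch a group to a mixed instances policy. The sketch below is a hypothetical configuration, assuming the template name used above; the weights, base capacity, and Spot settings are illustrative assumptions, not recommendations.
import boto3

client = boto3.client('autoscaling')

response = client.update_auto_scaling_group(
    AutoScalingGroupName='my-auto-scaling-group',
    MixedInstancesPolicy={
        'LaunchTemplate': {
            'LaunchTemplateSpecification': {
                'LaunchTemplateName': 'my-template-for-auto-scaling',
                'Version': '$Default',
            },
            'Overrides': [
                # Weighted capacities: a c5.xlarge counts as 2 units toward desired capacity.
                {'InstanceType': 'c5.large', 'WeightedCapacity': '1'},
                {'InstanceType': 'c5.xlarge', 'WeightedCapacity': '2'},
            ],
        },
        'InstancesDistribution': {
            'OnDemandBaseCapacity': 2,                 # first 2 units are always On-Demand
            'OnDemandPercentageAboveBaseCapacity': 50, # split the rest 50/50 On-Demand/Spot
            'SpotAllocationStrategy': 'price-capacity-optimized',
        },
    },
)
print(response)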
The available paginators are:
AutoScaling.Paginator.DescribeAutoScalingGroups
AutoScaling.Paginator.DescribeAutoScalingInstances
AutoScaling.Paginator.DescribeLaunchConfigurations
AutoScaling.Paginator.DescribeLoadBalancerTargetGroups
AutoScaling.Paginator.DescribeLoadBalancers
AutoScaling.Paginator.DescribeNotificationConfigurations
AutoScaling.Paginator.DescribePolicies
AutoScaling.Paginator.DescribeScalingActivities
AutoScaling.Paginator.DescribeScheduledActions
AutoScaling.Paginator.DescribeTags
AutoScaling.Paginator.DescribeAutoScalingGroups
paginator = client.get_paginator('describe_auto_scaling_groups')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_auto_scaling_groups()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
AutoScalingGroupNames=[
'string',
],
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The names of the Auto Scaling groups. By default, you can only specify up to 50 names. You can optionally increase this limit using the MaxRecords
property.
If you omit this property, all Auto Scaling groups are described.
One or more filters to limit the results based on specific tags.
Describes a filter that is used to return a more specific list of results from a describe operation.
If you specify multiple filters, the filters are automatically logically joined with an AND
, and the request returns only the results that match all of the specified filters.
For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
The name of the filter.
The valid values for Name
depend on which API operation you're using with the filter ( DescribeAutoScalingGroups or DescribeTags ).
DescribeAutoScalingGroups
Valid values for Name
include the following:
tag-key - Accepts tag keys. The results only include information about the Auto Scaling groups associated with these tag keys.
tag-value - Accepts tag values. The results only include information about the Auto Scaling groups associated with these tag values.
tag:<key> - Accepts the key/value combination of the tag. Use the tag key in the filter name and the tag value as the filter value. The results only include information about the Auto Scaling groups associated with the specified key/value combination.
DescribeTags
Valid values for Name include the following:
auto-scaling-group - Accepts the names of Auto Scaling groups. The results only include information about the tags associated with these Auto Scaling groups.
key - Accepts tag keys. The results only include information about the tags associated with these tag keys.
value - Accepts tag values. The results only include information about the tags associated with these tag values.
propagate-at-launch - Accepts a Boolean value, which specifies whether tags propagate to instances at launch. The results only include information about the tags associated with the specified Boolean value.
One or more filter values. Filter values are case-sensitive.
If you specify multiple values for a filter, the values are automatically logically joined with an OR
, and the request returns all results that match any of the specified values. For example, specify "tag:environment" for the filter name and "production,development" for the filter values to find Auto Scaling groups with the tag "environment=production" or "environment=development".
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
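Putting these pieces together, a paginator call might filter on a tag and cap the page size; the tag key/value and page size below are assumptions for illustration.
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_auto_scaling_groups')

# Iterate over every page of groups tagged environment=production.
for page in paginator.paginate(
    Filters=[
        {'Name': 'tag:environment', 'Values': ['production']},
    ],
    PaginationConfig={'PageSize': 50},
):
    for group in page['AutoScalingGroups']:
        print(group['AutoScalingGroupName'], group['DesiredCapacity'])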
dict
Response Syntax
{
'AutoScalingGroups': [
{
'AutoScalingGroupName': 'string',
'AutoScalingGroupARN': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'MixedInstancesPolicy': {
'LaunchTemplate': {
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'Overrides': [
{
'InstanceType': 'string',
'WeightedCapacity': 'string',
'LaunchTemplateSpecification': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'InstanceRequirements': {
'VCpuCount': {
'Min': 123,
'Max': 123
},
'MemoryMiB': {
'Min': 123,
'Max': 123
},
'CpuManufacturers': [
'intel'|'amd'|'amazon-web-services',
],
'MemoryGiBPerVCpu': {
'Min': 123.0,
'Max': 123.0
},
'ExcludedInstanceTypes': [
'string',
],
'InstanceGenerations': [
'current'|'previous',
],
'SpotMaxPricePercentageOverLowestPrice': 123,
'OnDemandMaxPricePercentageOverLowestPrice': 123,
'BareMetal': 'included'|'excluded'|'required',
'BurstablePerformance': 'included'|'excluded'|'required',
'RequireHibernateSupport': True|False,
'NetworkInterfaceCount': {
'Min': 123,
'Max': 123
},
'LocalStorage': 'included'|'excluded'|'required',
'LocalStorageTypes': [
'hdd'|'ssd',
],
'TotalLocalStorageGB': {
'Min': 123.0,
'Max': 123.0
},
'BaselineEbsBandwidthMbps': {
'Min': 123,
'Max': 123
},
'AcceleratorTypes': [
'gpu'|'fpga'|'inference',
],
'AcceleratorCount': {
'Min': 123,
'Max': 123
},
'AcceleratorManufacturers': [
'nvidia'|'amd'|'amazon-web-services'|'xilinx',
],
'AcceleratorNames': [
'a100'|'v100'|'k80'|'t4'|'m60'|'radeon-pro-v520'|'vu9p',
],
'AcceleratorTotalMemoryMiB': {
'Min': 123,
'Max': 123
},
'NetworkBandwidthGbps': {
'Min': 123.0,
'Max': 123.0
},
'AllowedInstanceTypes': [
'string',
]
}
},
]
},
'InstancesDistribution': {
'OnDemandAllocationStrategy': 'string',
'OnDemandBaseCapacity': 123,
'OnDemandPercentageAboveBaseCapacity': 123,
'SpotAllocationStrategy': 'string',
'SpotInstancePools': 123,
'SpotMaxPrice': 'string'
}
},
'MinSize': 123,
'MaxSize': 123,
'DesiredCapacity': 123,
'PredictedCapacity': 123,
'DefaultCooldown': 123,
'AvailabilityZones': [
'string',
],
'LoadBalancerNames': [
'string',
],
'TargetGroupARNs': [
'string',
],
'HealthCheckType': 'string',
'HealthCheckGracePeriod': 123,
'Instances': [
{
'InstanceId': 'string',
'InstanceType': 'string',
'AvailabilityZone': 'string',
'LifecycleState': 'Pending'|'Pending:Wait'|'Pending:Proceed'|'Quarantined'|'InService'|'Terminating'|'Terminating:Wait'|'Terminating:Proceed'|'Terminated'|'Detaching'|'Detached'|'EnteringStandby'|'Standby'|'Warmed:Pending'|'Warmed:Pending:Wait'|'Warmed:Pending:Proceed'|'Warmed:Terminating'|'Warmed:Terminating:Wait'|'Warmed:Terminating:Proceed'|'Warmed:Terminated'|'Warmed:Stopped'|'Warmed:Running'|'Warmed:Hibernated',
'HealthStatus': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'ProtectedFromScaleIn': True|False,
'WeightedCapacity': 'string'
},
],
'CreatedTime': datetime(2015, 1, 1),
'SuspendedProcesses': [
{
'ProcessName': 'string',
'SuspensionReason': 'string'
},
],
'PlacementGroup': 'string',
'VPCZoneIdentifier': 'string',
'EnabledMetrics': [
{
'Metric': 'string',
'Granularity': 'string'
},
],
'Status': 'string',
'Tags': [
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
],
'TerminationPolicies': [
'string',
],
'NewInstancesProtectedFromScaleIn': True|False,
'ServiceLinkedRoleARN': 'string',
'MaxInstanceLifetime': 123,
'CapacityRebalance': True|False,
'WarmPoolConfiguration': {
'MaxGroupPreparedCapacity': 123,
'MinSize': 123,
'PoolState': 'Stopped'|'Running'|'Hibernated',
'Status': 'PendingDelete',
'InstanceReusePolicy': {
'ReuseOnScaleIn': True|False
}
},
'WarmPoolSize': 123,
'Context': 'string',
'DesiredCapacityType': 'string',
'DefaultInstanceWarmup': 123,
'TrafficSources': [
{
'Identifier': 'string'
},
]
},
],
}
Response Structure
(dict) --
AutoScalingGroups (list) --
The groups.
(dict) --
Describes an Auto Scaling group.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
LaunchConfigurationName (string) --
The name of the associated launch configuration.
LaunchTemplate (dict) --
The launch template for the group.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId
or a LaunchTemplateName
.
Version (string) --
The version number, $Latest
, or $Default
. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest
, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default
, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default
.
MixedInstancesPolicy (dict) --
The mixed instances policy for the group.
LaunchTemplate (dict) --
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
LaunchTemplateSpecification (dict) --
The launch template.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
Version (string) --
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
Overrides (list) --
Any properties that you specify override the same properties in the launch template.
(dict) --
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don't have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
InstanceType (string) --
The instance type, such as m3.xlarge. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon Elastic Compute Cloud User Guide .
You can specify up to 40 instance types per Auto Scaling group.
WeightedCapacity (string) --
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a WeightedCapacity of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configuring instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide . Value must be in the range of 1–999.
If you specify a value for WeightedCapacity for one instance type, you must specify a value for WeightedCapacity for all of them.
Warning
Every Auto Scaling group has three size parameters (DesiredCapacity, MaxSize, and MinSize). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
LaunchTemplateSpecification (dict) --
Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the LaunchTemplate definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide .
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the LaunchTemplate definition count towards this limit.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
Version (string) --
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
InstanceRequirements (dict) --
The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template.
Note
If you specify InstanceRequirements, you can't specify InstanceType.
VCpuCount (dict) --
The minimum and maximum number of vCPUs for an instance type.
Min (integer) --
The minimum number of vCPUs.
Max (integer) --
The maximum number of vCPUs.
MemoryMiB (dict) --
The minimum and maximum instance memory size for an instance type, in MiB.
Min (integer) --
The memory minimum in MiB.
Max (integer) --
The memory maximum in MiB.
CpuManufacturers (list) --
Lists which specific CPU manufacturers to include. The valid values are intel, amd, and amazon-web-services.
Note
Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
MemoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
Min (float) --
The memory minimum in GiB.
Max (float) --
The memory maximum in GiB.
ExcludedInstanceTypes (list) --
The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (*), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types.
Note
If you specify ExcludedInstanceTypes, you can't specify AllowedInstanceTypes.
Default: No excluded instance types
InstanceGenerations (list) --
Indicates whether current or previous generation instance types are included.
The valid values are current and previous. The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide for Linux Instances .
Default: Any current or previous generation
SpotMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999.
If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 100
OnDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the least expensive current generation M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999.
If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price.
Default: 20
BareMetal (string) --
Indicates whether bare metal instance types are included, excluded, or required.
Default: excluded
BurstablePerformance (string) --
Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide for Linux Instances .
Default: excluded
RequireHibernateSupport (boolean) --
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: false
NetworkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
Min (integer) --
The minimum number of network interfaces.
Max (integer) --
The maximum number of network interfaces.
LocalStorage (string) --
Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide for Linux Instances .
Default: included
LocalStorageTypes (list) --
Indicates the type of local storage that is required.
The valid values are hdd and ssd.
Default: Any local storage type
TotalLocalStorageGB (dict) --
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
Min (float) --
The storage minimum in GB.
Max (float) --
The storage maximum in GB.
BaselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide for Linux Instances .
Default: No minimum or maximum limits
Min (integer) --
The minimum value in Mbps.
Max (integer) --
The maximum value in Mbps.
AcceleratorTypes (list) --
Lists the accelerator types that must be on an instance type.
The valid values are gpu, fpga, and inference.
Default: Any accelerator type
AcceleratorCount (dict) --
The minimum and maximum number of accelerators (GPUs, FPGAs, or Amazon Web Services Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set Max to 0.
Default: No minimum or maximum limits
Min (integer) --
The minimum value.
Max (integer) --
The maximum value.
AcceleratorManufacturers (list) --
Indicates whether instance types must have accelerators by specific manufacturers.
The valid values are nvidia, amd, amazon-web-services, and xilinx.
Default: Any manufacturer
AcceleratorNames (list) --
Lists the accelerators that must be on an instance type.
The valid values are a100, v100, k80, t4, m60, radeon-pro-v520, and vu9p.
Default: Any accelerator
AcceleratorTotalMemoryMiB (dict) --
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
Min (integer) --
The memory minimum in MiB.
Max (integer) --
The memory maximum in MiB.
NetworkBandwidthGbps (dict) --
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
Min (float) --
The minimum amount of network bandwidth, in gigabits per second (Gbps).
Max (float) --
The maximum amount of network bandwidth, in gigabits per second (Gbps).
AllowedInstanceTypes (list) --
The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wild cards, represented by an asterisk (*), to allow an instance type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*.
For example, if you specify c5*, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types.
Note
If you specify AllowedInstanceTypes, you can't specify ExcludedInstanceTypes.
Default: All instance types
InstancesDistribution (dict) --
The instances distribution.
OnDemandAllocationStrategy (string) --
The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
lowest-price
Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify InstanceRequirements.
prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify InstanceRequirements and cannot be used for groups that do.
OnDemandBaseCapacity (integer) --
The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales.
This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0
OnDemandPercentageAboveBaseCapacity (integer) --
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond OnDemandBaseCapacity. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100
SpotAllocationStrategy (string) --
The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
capacity-optimized
Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use capacity-optimized-prioritized.
capacity-optimized-prioritized
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to prioritized, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify InstanceRequirements.
lowest-price
Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the SpotInstancePools property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
price-capacity-optimized (recommended)
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
SpotInstancePools (integer) --
The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the SpotAllocationStrategy is lowest-price. Value must be in the range of 1–20.
Default: 2
SpotMaxPrice (string) --
The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value.
Warning
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
MinSize (integer) --
The minimum size of the group.
MaxSize (integer) --
The maximum size of the group.
DesiredCapacity (integer) --
The desired size of the group.
PredictedCapacity (integer) --
The predicted capacity of the group when it has a predictive scaling policy.
DefaultCooldown (integer) --
The duration of the default cooldown period, in seconds.
AvailabilityZones (list) --
One or more Availability Zones for the group.
LoadBalancerNames (list) --
One or more load balancers associated with the group.
TargetGroupARNs (list) --
The Amazon Resource Names (ARN) of the target groups for your load balancer.
HealthCheckType (string) --
Determines whether any additional health checks are performed on the instances in this group. Amazon EC2 health checks are always on.
The valid values are EC2 (default), ELB, and VPC_LATTICE. The VPC_LATTICE health check type is reserved for use with VPC Lattice, which is in preview release and is subject to change.
HealthCheckGracePeriod (integer) --
The duration of the health check grace period, in seconds.
Instances (list) --
The EC2 instances associated with the group.
(dict) --
Describes an EC2 instance.
InstanceId (string) --
The ID of the instance.
InstanceType (string) --
The instance type of the EC2 instance.
AvailabilityZone (string) --
The Availability Zone in which the instance is running.
LifecycleState (string) --
A description of the current lifecycle state. The Quarantined state is not used. For information about lifecycle states, see Instance lifecycle in the Amazon EC2 Auto Scaling User Guide .
HealthStatus (string) --
The last reported health status of the instance. "Healthy" means that the instance is healthy and should remain in service. "Unhealthy" means that the instance is unhealthy and that Amazon EC2 Auto Scaling should terminate and replace it.
LaunchConfigurationName (string) --
The launch configuration associated with the instance.
LaunchTemplate (dict) --
The launch template for the instance.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
Version (string) --
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
ProtectedFromScaleIn (boolean) --
Indicates whether the instance is protected from termination by Amazon EC2 Auto Scaling when scaling in.
WeightedCapacity (string) --
The number of capacity units contributed by the instance based on its instance type.
Valid Range: Minimum value of 1. Maximum value of 999.
CreatedTime (datetime) --
The date and time the group was created.
SuspendedProcesses (list) --
The suspended processes associated with the group.
(dict) --
Describes an auto scaling process that has been suspended.
For more information, see Scaling processes in the Amazon EC2 Auto Scaling User Guide .
ProcessName (string) --
The name of the suspended process.
SuspensionReason (string) --
The reason that the process was suspended.
PlacementGroup (string) --
The name of the placement group into which to launch your instances, if any.
VPCZoneIdentifier (string) --
One or more subnet IDs, if applicable, separated by commas.
EnabledMetrics (list) --
The metrics enabled for the group.
(dict) --
Describes an enabled Auto Scaling group metric.
Metric (string) --
One of the following metrics:
GroupMinSize
GroupMaxSize
GroupDesiredCapacity
GroupInServiceInstances
GroupPendingInstances
GroupStandbyInstances
GroupTerminatingInstances
GroupTotalInstances
GroupInServiceCapacity
GroupPendingCapacity
GroupStandbyCapacity
GroupTerminatingCapacity
GroupTotalCapacity
WarmPoolDesiredCapacity
WarmPoolWarmedCapacity
WarmPoolPendingCapacity
WarmPoolTerminatingCapacity
WarmPoolTotalCapacity
GroupAndWarmPoolDesiredCapacity
GroupAndWarmPoolTotalCapacity
For more information, see Auto Scaling group metrics in the Amazon EC2 Auto Scaling User Guide .
Granularity (string) --
The granularity of the metric. The only valid value is 1Minute.
Status (string) --
The current state of the group when the DeleteAutoScalingGroup operation is in progress.
Tags (list) --
The tags for the group.
(dict) --
Describes a tag for an Auto Scaling group.
ResourceId (string) --
The name of the group.
ResourceType (string) --
The type of resource. The only supported value is auto-scaling-group.
Key (string) --
The tag key.
Value (string) --
The tag value.
PropagateAtLaunch (boolean) --
Determines whether the tag is added to new instances as they are launched in the group.
TerminationPolicies (list) --
The termination policies for the group.
NewInstancesProtectedFromScaleIn (boolean) --
Indicates whether newly launched instances are protected from termination by Amazon EC2 Auto Scaling when scaling in.
ServiceLinkedRoleARN (string) --
The Amazon Resource Name (ARN) of the service-linked role that the Auto Scaling group uses to call other Amazon Web Services on your behalf.
MaxInstanceLifetime (integer) --
The maximum amount of time, in seconds, that an instance can be in service.
Valid Range: Minimum value of 0.
CapacityRebalance (boolean) --
Indicates whether Capacity Rebalancing is enabled.
WarmPoolConfiguration (dict) --
The warm pool for the group.
MaxGroupPreparedCapacity (integer) --
The maximum number of instances that are allowed to be in the warm pool or in any state except Terminated for the Auto Scaling group.
MinSize (integer) --
The minimum number of instances to maintain in the warm pool.
PoolState (string) --
The instance state to transition to after the lifecycle actions are complete.
Status (string) --
The status of a warm pool that is marked for deletion.
InstanceReusePolicy (dict) --
The instance reuse policy.
ReuseOnScaleIn (boolean) --
Specifies whether instances in the Auto Scaling group can be returned to the warm pool on scale in.
WarmPoolSize (integer) --
The current size of the warm pool.
Context (string) --
Reserved.
DesiredCapacityType (string) --
The unit of measurement for the value specified for desired capacity. Amazon EC2 Auto Scaling supports DesiredCapacityType for attribute-based instance type selection only.
DefaultInstanceWarmup (integer) --
The duration of the default instance warmup, in seconds.
TrafficSources (list) --
Reserved for use with Amazon VPC Lattice, which is in preview release and is subject to change. Do not use this parameter for production workloads.
The unique identifiers of the traffic sources.
(dict) --
Describes the identifier of a traffic source.
Currently, you must specify an Amazon Resource Name (ARN) for an existing VPC Lattice target group.
Identifier (string) --
The unique identifier of the traffic source.
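As an illustrative usage sketch (not part of the generated API reference), the describe_auto_scaling_groups paginator whose response structure is documented above can be consumed as follows; the fields printed are arbitrary examples of the documented keys:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_auto_scaling_groups')

# Walk every page of results and print a few of the documented fields.
for page in paginator.paginate():
    for group in page['AutoScalingGroups']:
        print(group['AutoScalingGroupName'],
              group['MinSize'],
              group['MaxSize'],
              group['DesiredCapacity'])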
AutoScaling.Paginator.DescribeAutoScalingInstances
¶paginator = client.get_paginator('describe_auto_scaling_instances')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_auto_scaling_instances().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
InstanceIds=[
'string',
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The IDs of the instances. If you omit this property, all Auto Scaling instances are described. If you specify an ID that does not exist, it is ignored with no error.
Array Members: Maximum number of 50 items.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'AutoScalingInstances': [
{
'InstanceId': 'string',
'InstanceType': 'string',
'AutoScalingGroupName': 'string',
'AvailabilityZone': 'string',
'LifecycleState': 'string',
'HealthStatus': 'string',
'LaunchConfigurationName': 'string',
'LaunchTemplate': {
'LaunchTemplateId': 'string',
'LaunchTemplateName': 'string',
'Version': 'string'
},
'ProtectedFromScaleIn': True|False,
'WeightedCapacity': 'string'
},
],
}
Response Structure
(dict) --
AutoScalingInstances (list) --
The instances.
(dict) --
Describes an EC2 instance associated with an Auto Scaling group.
InstanceId (string) --
The ID of the instance.
InstanceType (string) --
The instance type of the EC2 instance.
AutoScalingGroupName (string) --
The name of the Auto Scaling group for the instance.
AvailabilityZone (string) --
The Availability Zone for the instance.
LifecycleState (string) --
The lifecycle state for the instance. The Quarantined state is not used. For information about lifecycle states, see Instance lifecycle in the Amazon EC2 Auto Scaling User Guide .
Valid values: Pending | Pending:Wait | Pending:Proceed | Quarantined | InService | Terminating | Terminating:Wait | Terminating:Proceed | Terminated | Detaching | Detached | EnteringStandby | Standby | Warmed:Pending | Warmed:Pending:Wait | Warmed:Pending:Proceed | Warmed:Terminating | Warmed:Terminating:Wait | Warmed:Terminating:Proceed | Warmed:Terminated | Warmed:Stopped | Warmed:Running
HealthStatus (string) --
The last reported health status of this instance. "Healthy" means that the instance is healthy and should remain in service. "Unhealthy" means that the instance is unhealthy and Amazon EC2 Auto Scaling should terminate and replace it.
LaunchConfigurationName (string) --
The launch configuration used to launch the instance. This value is not available if you attached the instance to the Auto Scaling group.
LaunchTemplate (dict) --
The launch template for the instance.
LaunchTemplateId (string) --
The ID of the launch template. To get the template ID, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
LaunchTemplateName (string) --
The name of the launch template. To get the template name, use the Amazon EC2 DescribeLaunchTemplates API operation. New launch templates can be created using the Amazon EC2 CreateLaunchTemplate API.
Conditional: You must specify either a LaunchTemplateId or a LaunchTemplateName.
Version (string) --
The version number, $Latest, or $Default. To get the version number, use the Amazon EC2 DescribeLaunchTemplateVersions API operation. New launch template versions can be created using the Amazon EC2 CreateLaunchTemplateVersion API. If the value is $Latest, Amazon EC2 Auto Scaling selects the latest version of the launch template when launching instances. If the value is $Default, Amazon EC2 Auto Scaling selects the default version of the launch template when launching instances. The default value is $Default.
ProtectedFromScaleIn (boolean) --
Indicates whether the instance is protected from termination by Amazon EC2 Auto Scaling when scaling in.
WeightedCapacity (string) --
The number of capacity units contributed by the instance based on its instance type.
Valid Range: Minimum value of 1. Maximum value of 999.
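As an illustrative usage sketch (not part of the generated API reference), the following shows one way to page through all Auto Scaling instances with this paginator; the page size of 50 is an arbitrary example:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_auto_scaling_instances')

# Page through all Auto Scaling instances, 50 at a time.
for page in paginator.paginate(PaginationConfig={'PageSize': 50}):
    for instance in page['AutoScalingInstances']:
        print(instance['InstanceId'],
              instance['LifecycleState'],
              instance['HealthStatus'])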
AutoScaling.Paginator.DescribeLaunchConfigurations
¶paginator = client.get_paginator('describe_launch_configurations')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_launch_configurations().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
LaunchConfigurationNames=[
'string',
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The launch configuration names. If you omit this property, all launch configurations are described.
Array Members: Maximum number of 50 items.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'LaunchConfigurations': [
{
'LaunchConfigurationName': 'string',
'LaunchConfigurationARN': 'string',
'ImageId': 'string',
'KeyName': 'string',
'SecurityGroups': [
'string',
],
'ClassicLinkVPCId': 'string',
'ClassicLinkVPCSecurityGroups': [
'string',
],
'UserData': 'string',
'InstanceType': 'string',
'KernelId': 'string',
'RamdiskId': 'string',
'BlockDeviceMappings': [
{
'VirtualName': 'string',
'DeviceName': 'string',
'Ebs': {
'SnapshotId': 'string',
'VolumeSize': 123,
'VolumeType': 'string',
'DeleteOnTermination': True|False,
'Iops': 123,
'Encrypted': True|False,
'Throughput': 123
},
'NoDevice': True|False
},
],
'InstanceMonitoring': {
'Enabled': True|False
},
'SpotPrice': 'string',
'IamInstanceProfile': 'string',
'CreatedTime': datetime(2015, 1, 1),
'EbsOptimized': True|False,
'AssociatePublicIpAddress': True|False,
'PlacementTenancy': 'string',
'MetadataOptions': {
'HttpTokens': 'optional'|'required',
'HttpPutResponseHopLimit': 123,
'HttpEndpoint': 'disabled'|'enabled'
}
},
],
}
Response Structure
(dict) --
LaunchConfigurations (list) --
The launch configurations.
(dict) --
Describes a launch configuration.
LaunchConfigurationName (string) --
The name of the launch configuration.
LaunchConfigurationARN (string) --
The Amazon Resource Name (ARN) of the launch configuration.
ImageId (string) --
The ID of the Amazon Machine Image (AMI) to use to launch your EC2 instances. For more information, see Find a Linux AMI in the Amazon EC2 User Guide for Linux Instances .
KeyName (string) --
The name of the key pair.
For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances .
SecurityGroups (list) --
A list that contains the security groups to assign to the instances in the Auto Scaling group. For more information, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide .
ClassicLinkVPCId (string) --
Available for backward compatibility.
ClassicLinkVPCSecurityGroups (list) --
Available for backward compatibility.
UserData (string) --
The user data to make available to the launched EC2 instances. For more information, see Instance metadata and user data (Linux) and Instance metadata and user data (Windows). If you are using a command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text. User data is limited to 16 KB.
InstanceType (string) --
The instance type for the instances. For information about available instance types, see Available instance types in the Amazon EC2 User Guide for Linux Instances .
KernelId (string) --
The ID of the kernel associated with the AMI.
RamdiskId (string) --
The ID of the RAM disk associated with the AMI.
BlockDeviceMappings (list) --
The block device mapping entries that define the block devices to attach to the instances at launch. By default, the block devices specified in the block device mapping for the AMI are used. For more information, see Block Device Mapping in the Amazon EC2 User Guide for Linux Instances .
(dict) --
Describes a block device mapping.
VirtualName (string) --
The name of the instance store volume (virtual device) to attach to an instance at launch. The name must be in the form ephemeralX where X is a number starting from zero (0), for example, ephemeral0.
DeviceName (string) --
The device name assigned to the volume (for example, /dev/sdh or xvdh). For more information, see Device naming on Linux instances in the Amazon EC2 User Guide for Linux Instances .
Note
To define a block device mapping, set the device name and exactly one of the following properties: Ebs, NoDevice, or VirtualName.
Ebs (dict) --
Information to attach an EBS volume to an instance at launch.
SnapshotId (string) --
The snapshot ID of the volume to use.
You must specify either a VolumeSize or a SnapshotId.
VolumeSize (integer) --
The volume size, in GiBs. The following are the supported volume sizes for each volume type:
gp2 and gp3: 1-16,384
io1: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
You must specify either a SnapshotId or a VolumeSize. If you specify both SnapshotId and VolumeSize, the volume size must be equal or greater than the size of the snapshot.
VolumeType (string) --
The volume type. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide for Linux Instances .
Valid values: standard | io1 | gp2 | st1 | sc1 | gp3
DeleteOnTermination (boolean) --
Indicates whether the volume is deleted on instance termination. For Amazon EC2 Auto Scaling, the default value is true.
Iops (integer) --
The number of input/output (I/O) operations per second (IOPS) to provision for the volume. For gp3 and io1 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type:
gp3: 3,000-16,000 IOPS
io1: 100-64,000 IOPS
For io1 volumes, we guarantee 64,000 IOPS only for Instances built on the Nitro System. Other instance families guarantee performance up to 32,000 IOPS.
Iops is supported when the volume type is gp3 or io1 and required only when the volume type is io1. (Not used with standard, gp2, st1, or sc1 volumes.)
Encrypted (boolean) --
Specifies whether the volume should be encrypted. Encrypted EBS volumes can only be attached to instances that support Amazon EBS encryption. For more information, see Supported instance types. If your AMI uses encrypted volumes, you can also only launch it on supported instance types.
Note
If you are creating a volume from a snapshot, you cannot create an unencrypted volume from an encrypted snapshot. Also, you cannot specify a KMS key ID when using a launch configuration.
If you enable encryption by default, the EBS volumes that you create are always encrypted, either using the Amazon Web Services managed KMS key or a customer-managed KMS key, regardless of whether the snapshot was encrypted.
For more information, see Use Amazon Web Services KMS keys to encrypt Amazon EBS volumes in the Amazon EC2 Auto Scaling User Guide .
Throughput (integer) --
The throughput (MiBps) to provision for a gp3 volume.
NoDevice (boolean) --
Setting this value to true prevents a volume that is included in the block device mapping of the AMI from being mapped to the specified device name at launch.
If NoDevice is true for the root device, instances might fail the EC2 health check. In that case, Amazon EC2 Auto Scaling launches replacement instances.
InstanceMonitoring (dict) --
Controls whether instances in this group are launched with detailed (true) or basic (false) monitoring.
For more information, see Configure Monitoring for Auto Scaling Instances in the Amazon EC2 Auto Scaling User Guide .
Enabled (boolean) --
If true, detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
SpotPrice (string) --
The maximum hourly price to be paid for any Spot Instance launched to fulfill the request. Spot Instances are launched when the price you specify exceeds the current Spot price. For more information, see Requesting Spot Instances in the Amazon EC2 Auto Scaling User Guide .
IamInstanceProfile (string) --
The name or the Amazon Resource Name (ARN) of the instance profile associated with the IAM role for the instance. The instance profile contains the IAM role. For more information, see IAM role for applications that run on Amazon EC2 instances in the Amazon EC2 Auto Scaling User Guide .
CreatedTime (datetime) --
The creation date and time for the launch configuration.
EbsOptimized (boolean) --
Specifies whether the launch configuration is optimized for EBS I/O (true) or not (false). For more information, see Amazon EBS-Optimized Instances in the Amazon EC2 User Guide for Linux Instances .
AssociatePublicIpAddress (boolean) --
Specifies whether to assign a public IPv4 address to the group's instances. If the instance is launched into a default subnet, the default is to assign a public IPv4 address, unless you disabled the option to assign a public IPv4 address on the subnet. If the instance is launched into a nondefault subnet, the default is not to assign a public IPv4 address, unless you enabled the option to assign a public IPv4 address on the subnet. For more information, see Launching Auto Scaling instances in a VPC in the Amazon EC2 Auto Scaling User Guide .
PlacementTenancy (string) --
The tenancy of the instance, either default or dedicated. An instance with dedicated tenancy runs on isolated, single-tenant hardware and can only be launched into a VPC.
For more information, see Configuring instance tenancy with Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide .
MetadataOptions (dict) --
The metadata options for the instances. For more information, see Configuring the Instance Metadata Options in the Amazon EC2 Auto Scaling User Guide .
HttpTokens (string) --
The state of token usage for your instance metadata requests. If the parameter is not specified in the request, the default state is optional.
If the state is optional, you can choose to retrieve instance metadata with or without a signed token header on your request. If you retrieve the IAM role credentials without a token, the version 1.0 role credentials are returned. If you retrieve the IAM role credentials using a valid signed token, the version 2.0 role credentials are returned.
If the state is required, you must send a signed token header with any instance metadata retrieval requests. In this state, retrieving the IAM role credentials always returns the version 2.0 credentials; the version 1.0 credentials are not available.
HttpPutResponseHopLimit (integer) --
The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel.
Default: 1
HttpEndpoint (string) --
This parameter enables or disables the HTTP metadata endpoint on your instances. If the parameter is not specified, the default state is enabled.
Note
If you specify a value of disabled, you will not be able to access your instance metadata.
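As an illustrative usage sketch (not part of the generated API reference), the following lists every launch configuration via this paginator; the fields printed are arbitrary examples of the documented keys:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_launch_configurations')

# List every launch configuration with its AMI and instance type.
for page in paginator.paginate():
    for config in page['LaunchConfigurations']:
        print(config['LaunchConfigurationName'],
              config['ImageId'],
              config['InstanceType'])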
AutoScaling.Paginator.DescribeLoadBalancerTargetGroups
¶paginator = client.get_paginator('describe_load_balancer_target_groups')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_load_balancer_target_groups().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
AutoScalingGroupName='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
The name of the Auto Scaling group.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'LoadBalancerTargetGroups': [
{
'LoadBalancerTargetGroupARN': 'string',
'State': 'string'
},
],
}
Response Structure
(dict) --
LoadBalancerTargetGroups (list) --
Information about the target groups.
(dict) --
Describes the state of a target group.
LoadBalancerTargetGroupARN (string) --
The Amazon Resource Name (ARN) of the target group.
State (string) --
The state of the target group.
Adding - The Auto Scaling instances are being registered with the target group.
Added - All Auto Scaling instances are registered with the target group.
InService - At least one Auto Scaling instance passed an ELB health check.
Removing - The Auto Scaling instances are being deregistered from the target group. If connection draining is enabled, Elastic Load Balancing waits for in-flight requests to complete before deregistering the instances.
Removed - All Auto Scaling instances are deregistered from the target group.
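As an illustrative usage sketch (not part of the generated API reference), the following shows how this paginator might be used; the group name my-auto-scaling-group is a placeholder:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_load_balancer_target_groups')

# Print the ARN and state of each target group attached to the group.
for page in paginator.paginate(AutoScalingGroupName='my-auto-scaling-group'):
    for target_group in page['LoadBalancerTargetGroups']:
        print(target_group['LoadBalancerTargetGroupARN'], target_group['State'])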
AutoScaling.Paginator.DescribeLoadBalancers
¶paginator = client.get_paginator('describe_load_balancers')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_load_balancers().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
AutoScalingGroupName='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
The name of the Auto Scaling group.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'LoadBalancers': [
{
'LoadBalancerName': 'string',
'State': 'string'
},
],
}
Response Structure
(dict) --
LoadBalancers (list) --
The load balancers.
(dict) --
Describes the state of a Classic Load Balancer.
LoadBalancerName (string) --
The name of the load balancer.
State (string) --
One of the following load balancer states:
Adding - The Auto Scaling instances are being registered with the load balancer.
Added - All Auto Scaling instances are registered with the load balancer.
InService - At least one Auto Scaling instance passed an ELB health check.
Removing - The Auto Scaling instances are being deregistered from the load balancer. If connection draining is enabled, Elastic Load Balancing waits for in-flight requests to complete before deregistering the instances.
Removed - All Auto Scaling instances are deregistered from the load balancer.
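As an illustrative usage sketch (not part of the generated API reference), the following shows how this paginator might be used; the group name my-auto-scaling-group is a placeholder:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_load_balancers')

# Print the name and state of each Classic Load Balancer attached to the group.
for page in paginator.paginate(AutoScalingGroupName='my-auto-scaling-group'):
    for load_balancer in page['LoadBalancers']:
        print(load_balancer['LoadBalancerName'], load_balancer['State'])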
AutoScaling.Paginator.DescribeNotificationConfigurations
¶paginator = client.get_paginator('describe_notification_configurations')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_notification_configurations().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
AutoScalingGroupNames=[
'string',
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The name of the Auto Scaling group.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'NotificationConfigurations': [
{
'AutoScalingGroupName': 'string',
'TopicARN': 'string',
'NotificationType': 'string'
},
],
}
Response Structure
(dict) --
NotificationConfigurations (list) --
The notification configurations.
(dict) --
Describes a notification.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
TopicARN (string) --
The Amazon Resource Name (ARN) of the Amazon SNS topic.
NotificationType (string) --
One of the following event notification types:
autoscaling:EC2_INSTANCE_LAUNCH
autoscaling:EC2_INSTANCE_LAUNCH_ERROR
autoscaling:EC2_INSTANCE_TERMINATE
autoscaling:EC2_INSTANCE_TERMINATE_ERROR
autoscaling:TEST_NOTIFICATION
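As an illustrative usage sketch (not part of the generated API reference), the following shows how this paginator might be used; the group name my-auto-scaling-group is a placeholder:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_notification_configurations')

# Restrict the results to a single group and print topic ARNs and event types.
for page in paginator.paginate(AutoScalingGroupNames=['my-auto-scaling-group']):
    for notification in page['NotificationConfigurations']:
        print(notification['TopicARN'], notification['NotificationType'])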
AutoScaling.Paginator.DescribePolicies
¶paginator = client.get_paginator('describe_policies')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from AutoScaling.Client.describe_policies().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
AutoScalingGroupName='string',
PolicyNames=[
'string',
],
PolicyTypes=[
'string',
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The names of one or more policies. If you omit this property, all policies are described. If a group name is provided, the results are limited to that group. If you specify an unknown policy name, it is ignored with no error.
Array Members: Maximum number of 50 items.
One or more policy types. The valid values are SimpleScaling, StepScaling, TargetTrackingScaling, and PredictiveScaling.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
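As an illustrative usage sketch (not part of the generated API reference), the request parameters described above can be combined as follows; the group name my-auto-scaling-group and the policy type filter are placeholders, and each page contains the ScalingPolicies list documented below:
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_policies')

# Describe only the target tracking policies of one group.
for page in paginator.paginate(AutoScalingGroupName='my-auto-scaling-group',
                               PolicyTypes=['TargetTrackingScaling']):
    for policy in page['ScalingPolicies']:
        print(policy['PolicyName'], policy['PolicyARN'])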
dict
Response Syntax
{
'ScalingPolicies': [
{
'AutoScalingGroupName': 'string',
'PolicyName': 'string',
'PolicyARN': 'string',
'PolicyType': 'string',
'AdjustmentType': 'string',
'MinAdjustmentStep': 123,
'MinAdjustmentMagnitude': 123,
'ScalingAdjustment': 123,
'Cooldown': 123,
'StepAdjustments': [
{
'MetricIntervalLowerBound': 123.0,
'MetricIntervalUpperBound': 123.0,
'ScalingAdjustment': 123
},
],
'MetricAggregationType': 'string',
'EstimatedInstanceWarmup': 123,
'Alarms': [
{
'AlarmName': 'string',
'AlarmARN': 'string'
},
],
'TargetTrackingConfiguration': {
'PredefinedMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'CustomizedMetricSpecification': {
'MetricName': 'string',
'Namespace': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
],
'Statistic': 'Average'|'Minimum'|'Maximum'|'SampleCount'|'Sum',
'Unit': 'string',
'Metrics': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'TargetValue': 123.0,
'DisableScaleIn': True|False
},
'Enabled': True|False,
'PredictiveScalingConfiguration': {
'MetricSpecifications': [
{
'TargetValue': 123.0,
'PredefinedMetricPairSpecification': {
'PredefinedMetricType': 'ASGCPUUtilization'|'ASGNetworkIn'|'ASGNetworkOut'|'ALBRequestCount',
'ResourceLabel': 'string'
},
'PredefinedScalingMetricSpecification': {
'PredefinedMetricType': 'ASGAverageCPUUtilization'|'ASGAverageNetworkIn'|'ASGAverageNetworkOut'|'ALBRequestCountPerTarget',
'ResourceLabel': 'string'
},
'PredefinedLoadMetricSpecification': {
'PredefinedMetricType': 'ASGTotalCPUUtilization'|'ASGTotalNetworkIn'|'ASGTotalNetworkOut'|'ALBTargetGroupRequestCount',
'ResourceLabel': 'string'
},
'CustomizedScalingMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedLoadMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
},
'CustomizedCapacityMetricSpecification': {
'MetricDataQueries': [
{
'Id': 'string',
'Expression': 'string',
'MetricStat': {
'Metric': {
'Namespace': 'string',
'MetricName': 'string',
'Dimensions': [
{
'Name': 'string',
'Value': 'string'
},
]
},
'Stat': 'string',
'Unit': 'string'
},
'Label': 'string',
'ReturnData': True|False
},
]
}
},
],
'Mode': 'ForecastAndScale'|'ForecastOnly',
'SchedulingBufferTime': 123,
'MaxCapacityBreachBehavior': 'HonorMaxCapacity'|'IncreaseMaxCapacity',
'MaxCapacityBuffer': 123
}
},
],
}
Response Structure
(dict) --
ScalingPolicies (list) --
The scaling policies.
(dict) --
Describes a scaling policy.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
PolicyName (string) --
The name of the scaling policy.
PolicyARN (string) --
The Amazon Resource Name (ARN) of the policy.
PolicyType (string) --
One of the following policy types:
TargetTrackingScaling
StepScaling
SimpleScaling (default)
PredictiveScaling
For more information, see Target tracking scaling policies and Step and simple scaling policies in the Amazon EC2 Auto Scaling User Guide .
AdjustmentType (string) --
Specifies how the scaling adjustment is interpreted (for example, an absolute number or a percentage). The valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.
MinAdjustmentStep (integer) --
Available for backward compatibility. Use MinAdjustmentMagnitude instead.
MinAdjustmentMagnitude (integer) --
The minimum value to scale by when the adjustment type is PercentChangeInCapacity.
ScalingAdjustment (integer) --
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity.
Cooldown (integer) --
The duration of the policy's cooldown period, in seconds.
StepAdjustments (list) --
A set of adjustments that enable you to scale based on the size of the alarm breach.
(dict) --
Describes information used to create a step adjustment for a step scaling policy.
For example, suppose that you have an alarm with a breach threshold of 50. Each step adjustment is defined relative to that threshold, and a few rules apply to the step adjustments for your step policy.
For more information, see Step adjustments in the Amazon EC2 Auto Scaling User Guide .
MetricIntervalLowerBound (float) --
The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
MetricIntervalUpperBound (float) --
The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.
The upper bound must be greater than the lower bound.
ScalingAdjustment (integer) --
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity.
The amount by which to scale. The adjustment is based on the value that you specified in the AdjustmentType
property (either an absolute number or a percentage). A positive value adds to the current capacity and a negative number subtracts from the current capacity.
MetricAggregationType (string) --
The aggregation type for the CloudWatch metrics. The valid values are Minimum, Maximum, and Average.
EstimatedInstanceWarmup (integer) --
The estimated time, in seconds, until a newly launched instance can contribute to the CloudWatch metrics.
Alarms (list) --
The CloudWatch alarms related to the policy.
(dict) --
Describes an alarm.
AlarmName (string) --
The name of the alarm.
AlarmARN (string) --
The Amazon Resource Name (ARN) of the alarm.
TargetTrackingConfiguration (dict) --
A target tracking scaling policy.
PredefinedMetricSpecification (dict) --
A predefined metric. You must specify either a predefined metric or a customized metric.
PredefinedMetricType (string) --
The metric type. The following predefined metrics are available:
ASGAverageCPUUtilization - Average CPU utilization of the Auto Scaling group.
ASGAverageNetworkIn - Average number of bytes received on all network interfaces by the Auto Scaling group.
ASGAverageNetworkOut - Average number of bytes sent out on all network interfaces by the Auto Scaling group.
ALBRequestCountPerTarget - Average Application Load Balancer request count per target for your Auto Scaling group.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff, where app/my-alb/778d41231b141a0f is the final portion of the load balancer ARN, and targetgroup/my-alb-target-group/943f017f100becff is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
CustomizedMetricSpecification (dict) --
A customized metric. You must specify either a predefined metric or a customized metric.
MetricName (string) --
The name of the metric. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric.
Dimensions (list) --
The dimensions of the metric.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Statistic (string) --
The statistic of the metric.
Unit (string) --
The unit of the metric. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Metrics (list) --
The metrics to include in the target tracking scaling policy, as a metric data query. This can include both raw metric and metric math expressions.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all TargetTrackingMetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
Represents a specific metric.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metric for scaling is Average.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
TargetValue (float) --
The target value for the metric.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
DisableScaleIn (boolean) --
Indicates whether scaling in by the target tracking scaling policy is disabled. If scaling in is disabled, the target tracking scaling policy doesn't remove instances from the Auto Scaling group. Otherwise, the target tracking scaling policy can remove instances from the Auto Scaling group. The default is false.
Enabled (boolean) --
Indicates whether the policy is enabled (true) or disabled (false).
PredictiveScalingConfiguration (dict) --
A predictive scaling policy.
MetricSpecifications (list) --
This structure includes the metrics and target utilization to use for predictive scaling.
This is an array, but we currently only support a single metric specification. That is, you can specify a target value and a single metric pair, or a target value and one scaling metric and one load metric.
(dict) --
This structure specifies the metrics and target utilization settings for a predictive scaling policy.
You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric.
Example: You specify ALBRequestCount as the value for the metric pair and 1000.0 as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group. These correspond to the CloudWatch RequestCount and RequestCountPerTarget metrics, respectively.
For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
TargetValue (float) --
Specifies the target utilization.
Note
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
PredefinedMetricPairSpecification (dict) --
The predefined metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
PredefinedMetricType (string) --
Indicates which metrics to use. There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization, the Auto Scaling group's total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the total and average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN, and
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredefinedScalingMetricSpecification (dict) --
The predefined scaling metric specification.
PredefinedMetricType (string) --
The metric type.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN, and
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredefinedLoadMetricSpecification (dict) --
The predefined load metric specification.
PredefinedMetricType (string) --
The metric type.
ResourceLabel (string) --
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group. You can't specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN, and
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
CustomizedScalingMetricSpecification (dict) --
The customized scaling metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a scaling metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
CustomizedLoadMetricSpecification (dict) --
The customized load metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a load metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
CustomizedCapacityMetricSpecification (dict) --
The customized capacity metric specification.
MetricDataQueries (list) --
One or more metric data queries to provide the data points for a capacity metric. Use multiple metric data queries only if you are performing a math expression on returned data.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricName (string) --
The name of the metric.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
Mode (string) --
The predictive scaling mode. Defaults to ForecastOnly if not specified.
SchedulingBufferTime (integer) --
The amount of time, in seconds, by which the instance launch time can be advanced. For example, the forecast says to add capacity at 10:00 AM, and you choose to pre-launch instances by 5 minutes. In that case, the instances will be launched at 9:55 AM. The intention is to give resources time to be provisioned. It can take a few minutes to launch an EC2 instance. The actual amount of time required depends on several factors, such as the size of the instance and whether there are startup scripts to complete.
The value must be less than the forecast interval duration of 3600 seconds (60 minutes). Defaults to 300 seconds if not specified.
MaxCapacityBreachBehavior (string) --
Defines the behavior that should be applied if the forecast capacity approaches or exceeds the maximum capacity of the Auto Scaling group. Defaults to HonorMaxCapacity if not specified.
The following are possible values:
HonorMaxCapacity - Amazon EC2 Auto Scaling cannot scale out capacity higher than the maximum capacity. The maximum capacity is enforced as a hard limit.
IncreaseMaxCapacity - Amazon EC2 Auto Scaling can scale out capacity higher than the maximum capacity when the forecast capacity is close to or exceeds the maximum capacity. The upper limit is determined by the forecasted capacity and the value for MaxCapacityBuffer.
MaxCapacityBuffer (integer) --
The size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity. The value is specified as a percentage relative to the forecast capacity. For example, if the buffer is 10, this means a 10 percent buffer, such that if the forecast capacity is 50, and the maximum capacity is 40, then the effective maximum capacity is 55.
If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed forecast capacity.
Required if the MaxCapacityBreachBehavior property is set to IncreaseMaxCapacity, and cannot be used otherwise.
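For illustration, the following is a minimal sketch (not part of the official examples) of how the target tracking fields described above could be supplied to put_scaling_policy. The group and policy names are hypothetical placeholders.
import boto3

client = boto3.client('autoscaling')

# Sketch only: 'my-auto-scaling-group' and 'cpu50-target-tracking' are
# hypothetical placeholder names for an existing group and a new policy.
response = client.put_scaling_policy(
    AutoScalingGroupName='my-auto-scaling-group',
    PolicyName='cpu50-target-tracking',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'
        },
        'TargetValue': 50.0,
        'DisableScaleIn': False
    }
)
print(response['PolicyARN'])
The call returns the ARN of the created or updated policy, along with any CloudWatch alarms created for it.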
AutoScaling.Paginator.DescribeScalingActivities
paginator = client.get_paginator('describe_scaling_activities')
paginate(**kwargs)
Creates an iterator that will paginate through responses from AutoScaling.Client.describe_scaling_activities().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
ActivityIds=[
'string',
],
AutoScalingGroupName='string',
IncludeDeletedGroups=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The activity IDs of the desired scaling activities. If you omit this property, all activities for the past six weeks are described. If unknown activities are requested, they are ignored with no error. If you specify an Auto Scaling group, the results are limited to that group.
Array Members: Maximum number of 50 IDs.
A dictionary that provides parameters to control pagination.
MaxItems (integer) -- The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
PageSize (integer) -- The size of each page.
StartingToken (string) -- A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Activities': [
{
'ActivityId': 'string',
'AutoScalingGroupName': 'string',
'Description': 'string',
'Cause': 'string',
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'StatusCode': 'PendingSpotBidPlacement'|'WaitingForSpotInstanceRequestId'|'WaitingForSpotInstanceId'|'WaitingForInstanceId'|'PreInService'|'InProgress'|'WaitingForELBConnectionDraining'|'MidLifecycleAction'|'WaitingForInstanceWarmup'|'Successful'|'Failed'|'Cancelled',
'StatusMessage': 'string',
'Progress': 123,
'Details': 'string',
'AutoScalingGroupState': 'string',
'AutoScalingGroupARN': 'string'
},
],
}
Response Structure
(dict) --
Activities (list) --
The scaling activities. Activities are sorted by start time. Activities still in progress are described first.
(dict) --
Describes scaling activity, which is a long-running process that represents a change to your Auto Scaling group, such as changing its size or replacing an instance.
ActivityId (string) --
The ID of the activity.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
Description (string) --
A friendly, more verbose description of the activity.
Cause (string) --
The reason the activity began.
StartTime (datetime) --
The start time of the activity.
EndTime (datetime) --
The end time of the activity.
StatusCode (string) --
The current status of the activity.
StatusMessage (string) --
A friendly, more verbose description of the activity status.
Progress (integer) --
A value between 0 and 100 that indicates the progress of the activity.
Details (string) --
The details about the activity.
AutoScalingGroupState (string) --
The state of the Auto Scaling group, which is either InService or Deleted.
AutoScalingGroupARN (string) --
The Amazon Resource Name (ARN) of the Auto Scaling group.
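For illustration, a usage sketch for this paginator; the group name my-auto-scaling-group is a hypothetical placeholder.
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_scaling_activities')

# Iterate all scaling activities for one group, 50 activities per page.
for page in paginator.paginate(
    AutoScalingGroupName='my-auto-scaling-group',
    PaginationConfig={'PageSize': 50},
):
    for activity in page['Activities']:
        print(activity['ActivityId'], activity['StatusCode'])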
AutoScaling.Paginator.DescribeScheduledActions
paginator = client.get_paginator('describe_scheduled_actions')
paginate(**kwargs)
Creates an iterator that will paginate through responses from AutoScaling.Client.describe_scheduled_actions().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
AutoScalingGroupName='string',
ScheduledActionNames=[
'string',
],
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The names of one or more scheduled actions. If you omit this property, all scheduled actions are described. If you specify an unknown scheduled action, it is ignored with no error.
Array Members: Maximum number of 50 actions.
A dictionary that provides parameters to control pagination.
MaxItems (integer) -- The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
PageSize (integer) -- The size of each page.
StartingToken (string) -- A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'ScheduledUpdateGroupActions': [
{
'AutoScalingGroupName': 'string',
'ScheduledActionName': 'string',
'ScheduledActionARN': 'string',
'Time': datetime(2015, 1, 1),
'StartTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'Recurrence': 'string',
'MinSize': 123,
'MaxSize': 123,
'DesiredCapacity': 123,
'TimeZone': 'string'
},
],
}
Response Structure
(dict) --
ScheduledUpdateGroupActions (list) --
The scheduled actions.
(dict) --
Describes a scheduled scaling action.
AutoScalingGroupName (string) --
The name of the Auto Scaling group.
ScheduledActionName (string) --
The name of the scheduled action.
ScheduledActionARN (string) --
The Amazon Resource Name (ARN) of the scheduled action.
Time (datetime) --
This property is no longer used.
StartTime (datetime) --
The date and time in UTC for this action to start. For example, "2019-06-01T00:00:00Z".
EndTime (datetime) --
The date and time in UTC for the recurring schedule to end. For example, "2019-06-01T00:00:00Z".
Recurrence (string) --
The recurring schedule for the action, in Unix cron syntax format.
When StartTime and EndTime are specified with Recurrence, they form the boundaries of when the recurring action starts and stops.
MinSize (integer) --
The minimum size of the Auto Scaling group.
MaxSize (integer) --
The maximum size of the Auto Scaling group.
DesiredCapacity (integer) --
The desired capacity is the initial capacity of the Auto Scaling group after the scheduled action runs and the capacity it attempts to maintain.
TimeZone (string) --
The time zone for the cron expression.
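For illustration, a usage sketch for this paginator; the group name my-auto-scaling-group is a hypothetical placeholder.
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_scheduled_actions')

# List every scheduled action configured for one group.
for page in paginator.paginate(AutoScalingGroupName='my-auto-scaling-group'):
    for action in page['ScheduledUpdateGroupActions']:
        print(action['ScheduledActionName'], action.get('Recurrence'))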
AutoScaling.Paginator.DescribeTags
paginator = client.get_paginator('describe_tags')
paginate(**kwargs)
Creates an iterator that will paginate through responses from AutoScaling.Client.describe_tags().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
One or more filters to scope the tags to return. The maximum number of filters per filter type (for example, auto-scaling-group) is 1000.
Describes a filter that is used to return a more specific list of results from a describe operation.
If you specify multiple filters, the filters are automatically logically joined with an AND, and the request returns only the results that match all of the specified filters.
For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide .
The name of the filter.
The valid values for Name depend on which API operation you're using with the filter (DescribeAutoScalingGroups or DescribeTags).
DescribeAutoScalingGroups
Valid values for Name include the following:
tag-key - Accepts tag keys. The results only include information about the Auto Scaling groups associated with these tag keys.
tag-value - Accepts tag values. The results only include information about the Auto Scaling groups associated with these tag values.
tag:<key> - Accepts the key/value combination of the tag. Use the tag key in the filter name and the tag value as the filter value. The results only include information about the Auto Scaling groups associated with the specified key/value combination.
DescribeTags
Valid values for Name include the following:
auto-scaling-group - Accepts the names of Auto Scaling groups. The results only include information about the tags associated with these Auto Scaling groups.
key - Accepts tag keys. The results only include information about the tags associated with these tag keys.
value - Accepts tag values. The results only include information about the tags associated with these tag values.
propagate-at-launch - Accepts a Boolean value, which specifies whether tags propagate to instances at launch. The results only include information about the tags associated with the specified Boolean value.
One or more filter values. Filter values are case-sensitive.
If you specify multiple values for a filter, the values are automatically logically joined with an OR, and the request returns all results that match any of the specified values. For example, specify "tag:environment" for the filter name and "production,development" for the filter values to find Auto Scaling groups with the tag "environment=production" or "environment=development".
A dictionary that provides parameters to control pagination.
MaxItems (integer) -- The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
PageSize (integer) -- The size of each page.
StartingToken (string) -- A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Tags': [
{
'ResourceId': 'string',
'ResourceType': 'string',
'Key': 'string',
'Value': 'string',
'PropagateAtLaunch': True|False
},
],
}
Response Structure
(dict) --
Tags (list) --
One or more tags.
(dict) --
Describes a tag for an Auto Scaling group.
ResourceId (string) --
The name of the group.
ResourceType (string) --
The type of resource. The only supported value is auto-scaling-group.
Key (string) --
The tag key.
Value (string) --
The tag value.
PropagateAtLaunch (boolean) --
Determines whether the tag is added to new instances as they are launched in the group.
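For illustration, a usage sketch that filters tags by Auto Scaling group name; my-auto-scaling-group is a hypothetical placeholder.
import boto3

client = boto3.client('autoscaling')
paginator = client.get_paginator('describe_tags')

# Return only the tags attached to one group.
for page in paginator.paginate(
    Filters=[
        {'Name': 'auto-scaling-group', 'Values': ['my-auto-scaling-group']},
    ]
):
    for tag in page['Tags']:
        print(tag['Key'], '=', tag['Value'], 'propagate:', tag['PropagateAtLaunch'])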