ApplicationAutoScaling.Client.put_scaling_policy(**kwargs)
Creates or updates a scaling policy for an Application Auto Scaling scalable target.
Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scaling policy applies to the scalable target identified by those three attributes. You cannot create a scaling policy until you have registered the resource as a scalable target.
Multiple scaling policies can be in force at the same time for the same scalable target. You can have one or more target tracking scaling policies, one or more step scaling policies, or both. However, there is a chance that multiple policies could conflict, instructing the scalable target to scale out or in at the same time. Application Auto Scaling gives precedence to the policy that provides the largest capacity for both scale out and scale in. For example, if one policy increases capacity by 3, another policy increases capacity by 200 percent, and the current capacity is 10, Application Auto Scaling uses the policy with the highest calculated capacity (200% of 10 = 20) and scales out to 30.
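The arithmetic in that example can be sketched as follows; the figures are illustrative only and are not produced by any API call.

current_capacity = 10
# Candidate capacities computed by the two example policies.
candidates = [
    current_capacity + 3,                      # policy that adds 3 units -> 13
    current_capacity + 2 * current_capacity,   # policy that adds 200 percent -> 30
]
# Application Auto Scaling uses the policy that yields the largest capacity.
print(max(candidates))  # 30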
We recommend caution, however, when using target tracking scaling policies with step scaling policies because conflicts between these policies can cause undesirable behavior. For example, if the step scaling policy initiates a scale-in activity before the target tracking policy is ready to scale in, the scale-in activity will not be blocked. After the scale-in activity completes, the target tracking policy could instruct the scalable target to scale out again.
For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide.
Note
If a scalable target is deregistered, the scalable target is no longer available to execute scaling policies. Any scaling policies that were specified for the scalable target are deleted.
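As noted above, a policy can only be attached to a resource that is already registered as a scalable target. A minimal sketch of that prerequisite step, using placeholder resource and capacity values:

import boto3

client = boto3.client('application-autoscaling')

# Register the ECS service as a scalable target before calling put_scaling_policy.
client.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/default/web-app',
    ScalableDimension='ecs:service:DesiredCount',
    MinCapacity=1,
    MaxCapacity=10,
)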
See also: AWS API Documentation
Request Syntax
response = client.put_scaling_policy(
    PolicyName='string',
    ServiceNamespace='ecs'|'elasticmapreduce'|'ec2'|'appstream'|'dynamodb'|'rds'|'sagemaker'|'custom-resource'|'comprehend'|'lambda'|'cassandra'|'kafka'|'elasticache'|'neptune',
    ResourceId='string',
    ScalableDimension='ecs:service:DesiredCount'|'ec2:spot-fleet-request:TargetCapacity'|'elasticmapreduce:instancegroup:InstanceCount'|'appstream:fleet:DesiredCapacity'|'dynamodb:table:ReadCapacityUnits'|'dynamodb:table:WriteCapacityUnits'|'dynamodb:index:ReadCapacityUnits'|'dynamodb:index:WriteCapacityUnits'|'rds:cluster:ReadReplicaCount'|'sagemaker:variant:DesiredInstanceCount'|'custom-resource:ResourceType:Property'|'comprehend:document-classifier-endpoint:DesiredInferenceUnits'|'comprehend:entity-recognizer-endpoint:DesiredInferenceUnits'|'lambda:function:ProvisionedConcurrency'|'cassandra:table:ReadCapacityUnits'|'cassandra:table:WriteCapacityUnits'|'kafka:broker-storage:VolumeSize'|'elasticache:replication-group:NodeGroups'|'elasticache:replication-group:Replicas'|'neptune:cluster:ReadReplicaCount',
    PolicyType='StepScaling'|'TargetTrackingScaling',
    StepScalingPolicyConfiguration={
        'AdjustmentType': 'ChangeInCapacity'|'PercentChangeInCapacity'|'ExactCapacity',
        'StepAdjustments': [
            {
                'MetricIntervalLowerBound': 123.0,
                'MetricIntervalUpperBound': 123.0,
                'ScalingAdjustment': 123
            },
        ],
        'MinAdjustmentMagnitude': 123,
        'Cooldown': 123,
        'MetricAggregationType': 'Average'|'Minimum'|'Maximum'
    },
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 123.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBReadCapacityUtilization'|'DynamoDBWriteCapacityUtilization'|'ALBRequestCountPerTarget'|'RDSReaderAverageCPUUtilization'|'RDSReaderAverageDatabaseConnections'|'EC2SpotFleetRequestAverageCPUUtilization'|'EC2SpotFleetRequestAverageNetworkIn'|'EC2SpotFleetRequestAverageNetworkOut'|'SageMakerVariantInvocationsPerInstance'|'ECSServiceAverageCPUUtilization'|'ECSServiceAverageMemoryUtilization'|'AppStreamAverageCapacityUtilization'|'ComprehendInferenceUtilization'|'LambdaProvisionedConcurrencyUtilization'|'CassandraReadCapacityUtilization'|'CassandraWriteCapacityUtilization'|'KafkaBrokerStorageUtilization'|'ElastiCachePrimaryEngineCPUUtilization'|'ElastiCacheReplicaEngineCPUUtilization'|'ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage'|'NeptuneReaderAverageCPUUtilization',
            'ResourceLabel': 'string'
        },
        'CustomizedMetricSpecification': {
            'MetricName': 'string',
            'Namespace': 'string',
            'Dimensions': [
                {
                    'Name': 'string',
                    'Value': 'string'
                },
            ],
            'Statistic': 'Average'|'Minimum'|'Maximum'|'SampleCount'|'Sum',
            'Unit': 'string',
            'Metrics': [
                {
                    'Expression': 'string',
                    'Id': 'string',
                    'Label': 'string',
                    'MetricStat': {
                        'Metric': {
                            'Dimensions': [
                                {
                                    'Name': 'string',
                                    'Value': 'string'
                                },
                            ],
                            'MetricName': 'string',
                            'Namespace': 'string'
                        },
                        'Stat': 'string',
                        'Unit': 'string'
                    },
                    'ReturnData': True|False
                },
            ]
        },
        'ScaleOutCooldown': 123,
        'ScaleInCooldown': 123,
        'DisableScaleIn': True|False
    }
)
Parameters
PolicyName (string) -- [REQUIRED]
The name of the scaling policy.
You cannot change the name of a scaling policy, but you can delete the original scaling policy and create a new scaling policy with the same settings and a different name.
ServiceNamespace (string) -- [REQUIRED]
The namespace of the Amazon Web Services service that provides the resource. For a resource provided by your own application or service, use custom-resource instead.
ResourceId (string) -- [REQUIRED]
The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.
ECS service - The resource type is service and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp.
Spot Fleet - The resource type is spot-fleet-request and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE.
EMR cluster - The resource type is instancegroup and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0.
AppStream 2.0 fleet - The resource type is fleet and the unique identifier is the fleet name. Example: fleet/sample-fleet.
DynamoDB table - The resource type is table and the unique identifier is the table name. Example: table/my-table.
DynamoDB global secondary index - The resource type is index and the unique identifier is the index name. Example: table/my-table/index/my-table-index.
Aurora DB cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:my-db-cluster.
SageMaker endpoint variant - The resource type is variant and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering.
Custom resources - The resource type is not used. This parameter must specify the OutputValue from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.
Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE.
Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE.
Lambda provisioned concurrency - The resource type is function and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST. Example: function:my-function:prod or function:my-function:1.
Amazon Keyspaces table - The resource type is table and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable.
Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5.
Amazon ElastiCache replication group - The resource type is replication-group and the unique identifier is the replication group name. Example: replication-group/mycluster.
Neptune cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:mycluster.
ScalableDimension (string) -- [REQUIRED]
The scalable dimension. This string consists of the service namespace, resource type, and scaling property.
ecs:service:DesiredCount - The desired task count of an ECS service.
elasticmapreduce:instancegroup:InstanceCount - The instance count of an EMR Instance Group.
ec2:spot-fleet-request:TargetCapacity - The target capacity of a Spot Fleet.
appstream:fleet:DesiredCapacity - The desired capacity of an AppStream 2.0 fleet.
dynamodb:table:ReadCapacityUnits - The provisioned read capacity for a DynamoDB table.
dynamodb:table:WriteCapacityUnits - The provisioned write capacity for a DynamoDB table.
dynamodb:index:ReadCapacityUnits - The provisioned read capacity for a DynamoDB global secondary index.
dynamodb:index:WriteCapacityUnits - The provisioned write capacity for a DynamoDB global secondary index.
rds:cluster:ReadReplicaCount - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.
sagemaker:variant:DesiredInstanceCount - The number of EC2 instances for a SageMaker model endpoint variant.
custom-resource:ResourceType:Property - The scalable dimension for a custom resource provided by your own application or service.
comprehend:document-classifier-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend document classification endpoint.
comprehend:entity-recognizer-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend entity recognizer endpoint.
lambda:function:ProvisionedConcurrency - The provisioned concurrency for a Lambda function.
cassandra:table:ReadCapacityUnits - The provisioned read capacity for an Amazon Keyspaces table.
cassandra:table:WriteCapacityUnits - The provisioned write capacity for an Amazon Keyspaces table.
kafka:broker-storage:VolumeSize - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.
elasticache:replication-group:NodeGroups - The number of node groups for an Amazon ElastiCache replication group.
elasticache:replication-group:Replicas - The number of replicas per node group for an Amazon ElastiCache replication group.
neptune:cluster:ReadReplicaCount - The count of read replicas in an Amazon Neptune DB cluster.
PolicyType (string) --
The scaling policy type. This parameter is required if you are creating a scaling policy.
The following policy types are supported:
TargetTrackingScaling - Not supported for Amazon EMR.
StepScaling - Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.
For more information, see Target tracking scaling policies and Step scaling policies in the Application Auto Scaling User Guide.
StepScalingPolicyConfiguration (dict) --
A step scaling policy.
This parameter is required if you are creating a policy and the policy type is StepScaling.
AdjustmentType (string) --
Specifies how the ScalingAdjustment value in a StepAdjustment is interpreted (for example, an absolute number or a percentage). The valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.
AdjustmentType is required if you are adding a new step scaling policy configuration.
StepAdjustments (list) --
A set of adjustments that enable you to scale based on the size of the alarm breach.
At least one step adjustment is required if you are adding a new step scaling policy configuration.
(dict) --
Represents a step adjustment for a StepScalingPolicyConfiguration. Describes an adjustment based on the difference between the value of the aggregated CloudWatch metric and the breach threshold that you've defined for the alarm.
For the following examples, suppose that you have an alarm with a breach threshold of 50:
To trigger the adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.
To trigger the adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.
There are a few rules for the step adjustments for your step policy:
The ranges of your step adjustments can't overlap or have a gap.
At most one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.
At most one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.
The upper and lower bound can't be null in the same step adjustment.
MetricIntervalLowerBound (float) --
The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
MetricIntervalUpperBound (float) --
The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.
The upper bound must be greater than the lower bound.
ScalingAdjustment (integer) --
The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity. For exact capacity, you must specify a positive value.
MinAdjustmentMagnitude (integer) --
The minimum value to scale by when the adjustment type is PercentChangeInCapacity. For example, suppose that you create a step scaling policy to scale out an Amazon ECS service by 25 percent and you specify a MinAdjustmentMagnitude of 2. If the service has 4 tasks and the scaling policy is performed, 25 percent of 4 is 1. However, because you specified a MinAdjustmentMagnitude of 2, Application Auto Scaling scales out the service by 2 tasks.
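The arithmetic in that example, sketched with illustrative values only:

import math

current_tasks = 4
percent_adjustment = 25
min_adjustment_magnitude = 2

raw_adjustment = math.floor(current_tasks * percent_adjustment / 100)  # 25 percent of 4 is 1
adjustment = max(raw_adjustment, min_adjustment_magnitude)             # raised to the minimum of 2
print(adjustment)  # 2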
Cooldown (integer) --
The amount of time, in seconds, to wait for a previous scaling activity to take effect.
With scale-out policies, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a step scaling policy, it starts to calculate the cooldown time. The scaling policy won't increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity. For example, when an alarm triggers a step scaling policy to increase the capacity by 2, the scaling activity completes successfully, and a cooldown period starts. If the alarm triggers again during the cooldown period but at a more aggressive step adjustment of 3, the previous increase of 2 is considered part of the current capacity. Therefore, only 1 is added to the capacity.
With scale-in policies, the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the cooldown period after a scale-in activity, Application Auto Scaling scales out the target immediately. In this case, the cooldown period for the scale-in activity stops and doesn't complete.
Application Auto Scaling provides a default value of 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets: AppStream 2.0 fleets, Aurora DB clusters, ECS services, EMR clusters, Neptune clusters, SageMaker endpoint variants, Spot Fleets, and custom resources.
For all other scalable targets, the default value is 0: Amazon Comprehend document classification and entity recognizer endpoints, DynamoDB tables and global secondary indexes, Amazon Keyspaces tables, Lambda provisioned concurrency, and Amazon MSK broker storage.
MetricAggregationType (string) --
The aggregation type for the CloudWatch metrics. Valid values are Minimum, Maximum, and Average. If the aggregation type is null, the value is treated as Average.
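To tie these fields together, here is a minimal sketch of a StepScaling policy for a hypothetical ECS service, reusing the client from the earlier sketch; the policy name, resource ID, and adjustment values are placeholders, and the step adjustments mirror the breach-threshold-of-50 discussion above.

response = client.put_scaling_policy(
    PolicyName='step-scaling-example-policy',
    ServiceNamespace='ecs',
    ResourceId='service/default/web-app',
    ScalableDimension='ecs:service:DesiredCount',
    PolicyType='StepScaling',
    StepScalingPolicyConfiguration={
        'AdjustmentType': 'ChangeInCapacity',
        'StepAdjustments': [
            # Metric between threshold + 0 and threshold + 10: add 1 task.
            {'MetricIntervalLowerBound': 0.0, 'MetricIntervalUpperBound': 10.0, 'ScalingAdjustment': 1},
            # Metric at threshold + 10 or higher (no upper bound): add 3 tasks.
            {'MetricIntervalLowerBound': 10.0, 'ScalingAdjustment': 3},
        ],
        'Cooldown': 60,
        'MetricAggregationType': 'Average',
    },
)

The returned PolicyARN is then attached to a CloudWatch alarm as an alarm action; creating that alarm is a separate step outside this call.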
TargetTrackingScalingPolicyConfiguration (dict) --
A target tracking scaling policy. Includes support for predefined or customized metrics.
This parameter is required if you are creating a policy and the policy type is TargetTrackingScaling.
TargetValue (float) --
The target value for the metric. Although this property accepts numbers of type Double, it won't accept values that are either too small or too large. Values must be in the range of -2^360 to 2^360. The value must be a valid number based on the choice of metric. For example, if the metric is CPU utilization, then the target value is a percent value that represents how much of the CPU can be used before scaling out.
Note
If the scaling policy specifies the ALBRequestCountPerTarget predefined metric, specify the target utilization as the optimal average request count per target during any one-minute interval.
PredefinedMetricSpecification (dict) --
A predefined metric. You can specify either a predefined metric or a customized metric.
PredefinedMetricType (string) --
The metric type. The ALBRequestCountPerTarget metric type applies only to Spot Fleets and ECS services.
ResourceLabel (string) --
Identifies the resource associated with the metric type. You can't specify a resource label unless the metric type is ALBRequestCountPerTarget and there is a target group attached to the Spot Fleet or ECS service.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN.
targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
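As an illustration of how that label can be assembled, the sketch below derives a ResourceLabel from hypothetical load balancer and target group ARNs and pairs it with the ALBRequestCountPerTarget predefined metric; the ARNs and names are placeholders.

# Placeholder ARNs for an Application Load Balancer and one of its target groups.
alb_arn = 'arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/778d41231b141a0f'
tg_arn = 'arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-alb-target-group/943f017f100becff'

# The resource label joins the final portion of each ARN with a forward slash.
resource_label = '/'.join([
    alb_arn.split('loadbalancer/')[1],  # app/my-alb/778d41231b141a0f
    tg_arn.split(':')[-1],              # targetgroup/my-alb-target-group/943f017f100becff
])

predefined_metric = {
    'PredefinedMetricType': 'ALBRequestCountPerTarget',
    'ResourceLabel': resource_label,
}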
CustomizedMetricSpecification (dict) --
A customized metric. You can specify either a predefined metric or a customized metric.
MetricName (string) --
The name of the metric. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Namespace (string) --
The namespace of the metric.
Dimensions (list) --
The dimensions of the metric.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension names and values associated with a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
Statistic (string) --
The statistic of the metric.
Unit (string) --
The unit of the metric. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference.
Metrics (list) --
The metrics to include in the target tracking scaling policy, as a metric data query. This can include both raw metric and metric math expressions.
(dict) --
The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
For more information and examples, see Create a target tracking scaling policy for Application Auto Scaling using metric math in the Application Auto Scaling User Guide.
Expression (string) --
The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Id (string) --
A short name that identifies the object's results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
Label (string) --
A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
MetricStat (dict) --
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
Metric (dict) --
The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
Dimensions (list) --
The dimensions for the metric. For the list of available dimensions, see the Amazon Web Services documentation available from the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
(dict) --
Describes the dimension of a metric.
Name (string) --
The name of the dimension.
Value (string) --
The value of the dimension.
MetricName (string) --
The name of the metric.
Namespace (string) --
The namespace of the metric. For more information, see the table in Amazon Web Services services that publish CloudWatch metrics in the Amazon CloudWatch User Guide.
Stat (string) --
The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide.
The most commonly used statistic for scaling is Average.
Unit (string) --
The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference.
ReturnData (boolean) --
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
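A minimal sketch of a customized metric specification that uses metric math, with a placeholder SQS queue metric and expression; note that only the final expression sets ReturnData to true.

customized_metric = {
    'Metrics': [
        {
            'Id': 'm1',
            'MetricStat': {
                'Metric': {
                    'MetricName': 'ApproximateNumberOfMessagesVisible',
                    'Namespace': 'AWS/SQS',
                    'Dimensions': [{'Name': 'QueueName', 'Value': 'my-queue'}],
                },
                'Stat': 'Sum',
            },
            'ReturnData': False,  # intermediate series, not returned directly
        },
        {
            'Id': 'e1',
            'Expression': 'm1 / 10',  # placeholder math expression
            'Label': 'Backlog per task',
            'ReturnData': True,       # only the final expression returns data
        },
    ],
}

This dict would be passed as the CustomizedMetricSpecification of a TargetTrackingScalingPolicyConfiguration, together with a TargetValue for the computed series.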
ScaleOutCooldown (integer) --
The amount of time, in seconds, to wait for a previous scale-out activity to take effect.
With the scale-out cooldown period , the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a target tracking scaling policy, it starts to calculate the cooldown time. The scaling policy won't increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. While the cooldown period is in effect, the capacity added by the initiating scale-out activity is calculated as part of the desired capacity for the next scale-out activity.
Application Auto Scaling provides a default value of 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets: AppStream 2.0 fleets, Aurora DB clusters, ECS services, EMR clusters, Neptune clusters, SageMaker endpoint variants, Spot Fleets, and custom resources.
For all other scalable targets, the default value is 0: Amazon Comprehend document classification and entity recognizer endpoints, DynamoDB tables and global secondary indexes, Amazon Keyspaces tables, Lambda provisioned concurrency, and Amazon MSK broker storage.
ScaleInCooldown (integer) --
The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
With the scale-in cooldown period , the intention is to scale in conservatively to protect your application’s availability, so scale-in activities are blocked until the cooldown period has expired. However, if another alarm triggers a scale-out activity during the scale-in cooldown period, Application Auto Scaling scales out the target immediately. In this case, the scale-in cooldown period stops and doesn't complete.
Application Auto Scaling provides a default value of 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets: AppStream 2.0 fleets, Aurora DB clusters, ECS services, EMR clusters, Neptune clusters, SageMaker endpoint variants, Spot Fleets, and custom resources.
For all other scalable targets, the default value is 0: Amazon Comprehend document classification and entity recognizer endpoints, DynamoDB tables and global secondary indexes, Amazon Keyspaces tables, Lambda provisioned concurrency, and Amazon MSK broker storage.
DisableScaleIn (boolean) --
Indicates whether scale in by the target tracking scaling policy is disabled. If the value is true, scale in is disabled and the target tracking scaling policy won't remove capacity from the scalable target. Otherwise, scale in is enabled and the target tracking scaling policy can remove capacity from the scalable target. The default value is false.
Return type: dict
Response Syntax
{
    'PolicyARN': 'string',
    'Alarms': [
        {
            'AlarmName': 'string',
            'AlarmARN': 'string'
        },
    ]
}
Response Structure
(dict) --
PolicyARN (string) --
The Amazon Resource Name (ARN) of the resulting scaling policy.
Alarms (list) --
The CloudWatch alarms created for the target tracking scaling policy.
(dict) --
Represents a CloudWatch alarm associated with a scaling policy.
AlarmName (string) --
The name of the alarm.
AlarmARN (string) --
The Amazon Resource Name (ARN) of the alarm.
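For reference, a sketch of how the response fields above might be consumed after a successful call:

policy_arn = response['PolicyARN']
alarm_names = [alarm['AlarmName'] for alarm in response.get('Alarms', [])]
print(policy_arn, alarm_names)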
Exceptions
ApplicationAutoScaling.Client.exceptions.ValidationException
ApplicationAutoScaling.Client.exceptions.LimitExceededException
ApplicationAutoScaling.Client.exceptions.ObjectNotFoundException
ApplicationAutoScaling.Client.exceptions.ConcurrentUpdateException
ApplicationAutoScaling.Client.exceptions.FailedResourceAccessException
ApplicationAutoScaling.Client.exceptions.InternalServiceException
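These exceptions are exposed on the client object, so a call can be guarded in the usual botocore style. A brief sketch, assuming the client and scalable target from the earlier snippets:

try:
    client.put_scaling_policy(
        PolicyName='cpu75-target-tracking-scaling-policy',
        ServiceNamespace='ecs',
        ResourceId='service/default/web-app',
        ScalableDimension='ecs:service:DesiredCount',
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 75.0,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'ECSServiceAverageCPUUtilization',
            },
        },
    )
except client.exceptions.ObjectNotFoundException:
    # The resource was never registered as a scalable target.
    raise
except client.exceptions.LimitExceededException:
    # Too many scaling policies exist for this scalable target.
    raise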
Examples
The following example applies a target tracking scaling policy with a predefined metric specification to an Amazon ECS service called web-app in the default cluster. The policy keeps the average CPU utilization of the service at 75 percent, with scale-out and scale-in cooldown periods of 60 seconds.
response = client.put_scaling_policy(
    PolicyName='cpu75-target-tracking-scaling-policy',
    PolicyType='TargetTrackingScaling',
    ResourceId='service/default/web-app',
    ScalableDimension='ecs:service:DesiredCount',
    ServiceNamespace='ecs',
    TargetTrackingScalingPolicyConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ECSServiceAverageCPUUtilization',
        },
        'ScaleInCooldown': 60,
        'ScaleOutCooldown': 60,
        'TargetValue': 75,
    },
)
print(response)
Expected Output:
{
    'Alarms': [
        {
            'AlarmARN': 'arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca',
            'AlarmName': 'TargetTracking-service/default/web-app-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca',
        },
        {
            'AlarmARN': 'arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-service/default/web-app-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d',
            'AlarmName': 'TargetTracking-service/default/web-app-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d',
        },
    ],
    'PolicyARN': 'arn:aws:autoscaling:us-west-2:012345678910:scalingPolicy:6d8972f3-efc8-437c-92d1-6270f29a66e7:resource/ecs/service/default/web-app:policyName/cpu75-target-tracking-scaling-policy',
    'ResponseMetadata': {
        '...': '...',
    },
}