register_scalable_target(**kwargs)
Registers or updates a scalable target, the resource that you want to scale.
Scalable targets are uniquely identified by the combination of resource ID, scalable dimension, and namespace, which represents some capacity dimension of the underlying service.
When you register a new scalable target, you must specify values for the minimum and maximum capacity. If the specified resource is not active in the target service, this operation does not change the resource's current capacity. Otherwise, it changes the resource's current capacity to a value that is inside of this range.
If you choose to add a scaling policy, current capacity is adjustable within the specified range when scaling starts. Application Auto Scaling scaling policies will not scale capacity to values that are outside of the minimum and maximum range.
After you register a scalable target, you do not need to register it again to use other Application Auto Scaling operations. To see which resources have been registered, use DescribeScalableTargets. You can also view the scaling policies for a service namespace by using DescribeScalingPolicies. If you no longer need a scalable target, you can deregister it by using DeregisterScalableTarget.
To update a scalable target, specify the parameters that you want to change. Include the parameters that identify the scalable target: resource ID, scalable dimension, and namespace. Any parameters that you don't specify are not changed by this update request.
Note
If you call the RegisterScalableTarget API to update an existing scalable target, Application Auto Scaling retrieves the current capacity of the resource. If it is below the minimum capacity or above the maximum capacity, Application Auto Scaling adjusts the capacity of the scalable target to place it within these bounds, even if you don't include the MinCapacity or MaxCapacity request parameters.
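For illustration, a minimal sketch (using a hypothetical ECS service) of an update request that raises only the maximum capacity of an already registered scalable target; any parameter that is omitted, such as MinCapacity, is left unchanged:

import boto3

# A minimal sketch, assuming an ECS service "web-app" in the "default" cluster
# has already been registered as a scalable target.
client = boto3.client('application-autoscaling')

# Only MaxCapacity is supplied, so MinCapacity and the other settings of the
# existing scalable target are not changed by this request.
response = client.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/default/web-app',
    ScalableDimension='ecs:service:DesiredCount',
    MaxCapacity=20,
)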
See also: AWS API Documentation
Request Syntax
response = client.register_scalable_target(
    ServiceNamespace='ecs'|'elasticmapreduce'|'ec2'|'appstream'|'dynamodb'|'rds'|'sagemaker'|'custom-resource'|'comprehend'|'lambda'|'cassandra'|'kafka'|'elasticache'|'neptune',
    ResourceId='string',
    ScalableDimension='ecs:service:DesiredCount'|'ec2:spot-fleet-request:TargetCapacity'|'elasticmapreduce:instancegroup:InstanceCount'|'appstream:fleet:DesiredCapacity'|'dynamodb:table:ReadCapacityUnits'|'dynamodb:table:WriteCapacityUnits'|'dynamodb:index:ReadCapacityUnits'|'dynamodb:index:WriteCapacityUnits'|'rds:cluster:ReadReplicaCount'|'sagemaker:variant:DesiredInstanceCount'|'custom-resource:ResourceType:Property'|'comprehend:document-classifier-endpoint:DesiredInferenceUnits'|'comprehend:entity-recognizer-endpoint:DesiredInferenceUnits'|'lambda:function:ProvisionedConcurrency'|'cassandra:table:ReadCapacityUnits'|'cassandra:table:WriteCapacityUnits'|'kafka:broker-storage:VolumeSize'|'elasticache:replication-group:NodeGroups'|'elasticache:replication-group:Replicas'|'neptune:cluster:ReadReplicaCount',
    MinCapacity=123,
    MaxCapacity=123,
    RoleARN='string',
    SuspendedState={
        'DynamicScalingInSuspended': True|False,
        'DynamicScalingOutSuspended': True|False,
        'ScheduledScalingSuspended': True|False
    }
)
Parameters
ServiceNamespace (string) --
[REQUIRED]
The namespace of the Amazon Web Services service that provides the resource. For a resource provided by your own application or service, use custom-resource instead.
ResourceId (string) --
[REQUIRED]
The identifier of the resource that is associated with the scalable target. This string consists of the resource type and unique identifier (a short example follows this list).
ECS service - The resource type is service and the unique identifier is the cluster name and service name. Example: service/default/sample-webapp.
Spot Fleet - The resource type is spot-fleet-request and the unique identifier is the Spot Fleet request ID. Example: spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE.
EMR cluster - The resource type is instancegroup and the unique identifier is the cluster ID and instance group ID. Example: instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0.
AppStream 2.0 fleet - The resource type is fleet and the unique identifier is the fleet name. Example: fleet/sample-fleet.
DynamoDB table - The resource type is table and the unique identifier is the table name. Example: table/my-table.
DynamoDB global secondary index - The resource type is index and the unique identifier is the index name. Example: table/my-table/index/my-table-index.
Aurora DB cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:my-db-cluster.
SageMaker endpoint variant - The resource type is variant and the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering.
Custom resources are not supported with a resource type. This parameter must specify the OutputValue from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our GitHub repository.
Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE.
Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE.
Lambda provisioned concurrency - The resource type is function and the unique identifier is the function name with a function version or alias name suffix that is not $LATEST. Example: function:my-function:prod or function:my-function:1.
Amazon Keyspaces table - The resource type is table and the unique identifier is the table name. Example: keyspace/mykeyspace/table/mytable.
Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5.
Amazon ElastiCache replication group - The resource type is replication-group and the unique identifier is the replication group name. Example: replication-group/mycluster.
Neptune cluster - The resource type is cluster and the unique identifier is the cluster name. Example: cluster:mycluster.
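For example, a minimal sketch (hypothetical table name) showing how a resource ID pairs with a matching scalable dimension when registering a DynamoDB table for write capacity scaling; client is the Application Auto Scaling client created earlier:

# A minimal sketch, assuming a DynamoDB table named "my-table" exists.
# The resource ID "table/my-table" pairs with the
# dynamodb:table:WriteCapacityUnits scalable dimension.
response = client.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/my-table',
    ScalableDimension='dynamodb:table:WriteCapacityUnits',
    MinCapacity=5,
    MaxCapacity=100,
)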
ScalableDimension (string) --
[REQUIRED]
The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.
ecs:service:DesiredCount - The desired task count of an ECS service.
elasticmapreduce:instancegroup:InstanceCount - The instance count of an EMR Instance Group.
ec2:spot-fleet-request:TargetCapacity - The target capacity of a Spot Fleet.
appstream:fleet:DesiredCapacity - The desired capacity of an AppStream 2.0 fleet.
dynamodb:table:ReadCapacityUnits - The provisioned read capacity for a DynamoDB table.
dynamodb:table:WriteCapacityUnits - The provisioned write capacity for a DynamoDB table.
dynamodb:index:ReadCapacityUnits - The provisioned read capacity for a DynamoDB global secondary index.
dynamodb:index:WriteCapacityUnits - The provisioned write capacity for a DynamoDB global secondary index.
rds:cluster:ReadReplicaCount - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.
sagemaker:variant:DesiredInstanceCount - The number of EC2 instances for a SageMaker model endpoint variant.
custom-resource:ResourceType:Property - The scalable dimension for a custom resource provided by your own application or service.
comprehend:document-classifier-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend document classification endpoint.
comprehend:entity-recognizer-endpoint:DesiredInferenceUnits - The number of inference units for an Amazon Comprehend entity recognizer endpoint.
lambda:function:ProvisionedConcurrency - The provisioned concurrency for a Lambda function.
cassandra:table:ReadCapacityUnits - The provisioned read capacity for an Amazon Keyspaces table.
cassandra:table:WriteCapacityUnits - The provisioned write capacity for an Amazon Keyspaces table.
kafka:broker-storage:VolumeSize - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.
elasticache:replication-group:NodeGroups - The number of node groups for an Amazon ElastiCache replication group.
elasticache:replication-group:Replicas - The number of replicas per node group for an Amazon ElastiCache replication group.
neptune:cluster:ReadReplicaCount - The count of read replicas in an Amazon Neptune DB cluster.
MinCapacity (integer) --
The minimum value that you plan to scale in to. When a scaling policy is in effect, Application Auto Scaling can scale in (contract) as needed to the minimum capacity limit in response to changing demand. This property is required when registering a new scalable target.
For certain resources, the minimum value allowed is 0. Even so, it's strongly recommended that you specify a value greater than 0. A value greater than 0 means that data points are continuously reported to CloudWatch that scaling policies can use to scale on a metric like average CPU utilization.
For all other resources, the minimum allowed value depends on the type of resource that you are using. If you provide a value that is lower than what a resource can accept, an error occurs; the error message provides the minimum value that the resource can accept.
MaxCapacity (integer) --
The maximum value that you plan to scale out to. When a scaling policy is in effect, Application Auto Scaling can scale out (expand) as needed to the maximum capacity limit in response to changing demand. This property is required when registering a new scalable target.
Although you can specify a large maximum capacity, note that service quotas may impose lower limits. Each service has its own default quotas for the maximum capacity of the resource. If you want to specify a higher limit, you can request an increase. For more information, consult the documentation for that service. For information about the default quotas for each service, see Service endpoints and quotas in the Amazon Web Services General Reference.
RoleARN (string) --
This parameter is required for services that do not support service-linked roles (such as Amazon EMR), and it must specify the ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf.
If the service supports service-linked roles, Application Auto Scaling uses a service-linked role, which it creates if it does not yet exist. For more information, see Application Auto Scaling IAM roles.
SuspendedState (dict) --
An embedded object that contains attributes and attribute values that are used to suspend and resume automatic scaling. Setting the value of an attribute to true suspends the specified scaling activities. Setting it to false (default) resumes the specified scaling activities.
Suspension Outcomes
For DynamicScalingInSuspended, while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended.
For DynamicScalingOutSuspended, while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended.
For ScheduledScalingSuspended, while a suspension is in effect, all scaling activities that involve scheduled actions are suspended.
For more information, see Suspending and resuming scaling in the Application Auto Scaling User Guide.
DynamicScalingInSuspended (boolean) --
Whether scale in by a target tracking scaling policy or a step scaling policy is suspended. Set the value to true if you don't want Application Auto Scaling to remove capacity when a scaling policy is triggered. The default is false.
DynamicScalingOutSuspended (boolean) --
Whether scale out by a target tracking scaling policy or a step scaling policy is suspended. Set the value to true if you don't want Application Auto Scaling to add capacity when a scaling policy is triggered. The default is false.
ScheduledScalingSuspended (boolean) --
Whether scheduled scaling is suspended. Set the value to true if you don't want Application Auto Scaling to add or remove capacity by initiating scheduled actions. The default is false.
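For illustration, a minimal sketch (same hypothetical ECS service as above) that suspends scale-in activities while leaving scale-out and scheduled scaling active; the target's other settings are not changed by this call:

# A minimal sketch, assuming the hypothetical ECS service "web-app" is already
# registered as a scalable target. Only scale-in activities are suspended;
# scale-out and scheduled scaling remain active.
response = client.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/default/web-app',
    ScalableDimension='ecs:service:DesiredCount',
    SuspendedState={
        'DynamicScalingInSuspended': True,
        'DynamicScalingOutSuspended': False,
        'ScheduledScalingSuspended': False,
    },
)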
Return type
dict
Returns
Response Syntax
{}
Response Structure
(dict) --
Exceptions
ApplicationAutoScaling.Client.exceptions.ValidationException
ApplicationAutoScaling.Client.exceptions.LimitExceededException
ApplicationAutoScaling.Client.exceptions.ConcurrentUpdateException
ApplicationAutoScaling.Client.exceptions.InternalServiceException
Examples
This example registers a scalable target for an Amazon ECS service called web-app that is running on the default cluster, with a minimum desired count of 1 task and a maximum desired count of 10 tasks.
response = client.register_scalable_target(
    MaxCapacity=10,
    MinCapacity=1,
    ResourceId='service/default/web-app',
    ScalableDimension='ecs:service:DesiredCount',
    ServiceNamespace='ecs',
)
print(response)
Expected Output:
{
    'ResponseMetadata': {
        '...': '...',
    },
}
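As a follow-up (not part of the official example), a minimal sketch that verifies the registration by describing the scalable targets for the ecs namespace:

# A minimal sketch: confirm the registration by describing scalable targets
# for the ecs namespace, filtered to the same resource ID.
response = client.describe_scalable_targets(
    ServiceNamespace='ecs',
    ResourceIds=['service/default/web-app'],
)

for target in response['ScalableTargets']:
    print(target['ResourceId'], target['MinCapacity'], target['MaxCapacity'])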