A low-level client representing AWS Database Migration Service:
import boto3
client = boto3.client('dms')
These are the available methods:
Adds metadata tags to an AWS DMS resource, including replication instance, endpoint, security group, and migration task. These tags can also be used with cost allocation reporting to track cost associated with DMS resources, or used in a Condition statement in an IAM policy for DMS.
See also: AWS API Documentation
Request Syntax
response = client.add_tags_to_resource(
ResourceArn='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
[REQUIRED]
Identifies the AWS DMS resource to which tags should be added. The value for this parameter is an Amazon Resource Name (ARN).
For AWS DMS, you can tag a replication instance, an endpoint, or a replication task.
[REQUIRED]
One or more tags to be assigned to the resource.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
dict
Response Syntax
{}
Response Structure
Exceptions
Examples
Adds metadata tags to an AWS DMS resource, including replication instance, endpoint, security group, and migration task. These tags can also be used with cost allocation reporting to track cost associated with AWS DMS resources, or used in a Condition statement in an IAM policy for AWS DMS.
response = client.add_tags_to_resource(
# Required. Use the ARN of the resource you want to tag.
ResourceArn='arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
# Required. Use the Key/Value pair format.
Tags=[
{
'Key': 'Account',
'Value': '1633456',
},
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
Applies a pending maintenance action to a resource (for example, to a replication instance).
See also: AWS API Documentation
Request Syntax
response = client.apply_pending_maintenance_action(
ReplicationInstanceArn='string',
ApplyAction='string',
OptInType='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS DMS resource that the pending maintenance action applies to.
[REQUIRED]
The pending maintenance action to apply to this resource.
[REQUIRED]
A value that specifies the type of opt-in request, or undoes an opt-in request. You can't undo an opt-in request of type immediate .
Valid values:
dict
Response Syntax
{
'ResourcePendingMaintenanceActions': {
'ResourceIdentifier': 'string',
'PendingMaintenanceActionDetails': [
{
'Action': 'string',
'AutoAppliedAfterDate': datetime(2015, 1, 1),
'ForcedApplyDate': datetime(2015, 1, 1),
'OptInStatus': 'string',
'CurrentApplyDate': datetime(2015, 1, 1),
'Description': 'string'
},
]
}
}
Response Structure
(dict) --
ResourcePendingMaintenanceActions (dict) --
The AWS DMS resource that the pending maintenance action will be applied to.
ResourceIdentifier (string) --
The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to. For information about creating an ARN, see Constructing an Amazon Resource Name (ARN) for AWS DMS in the DMS documentation.
PendingMaintenanceActionDetails (list) --
Detailed information about the pending maintenance action.
(dict) --
Describes a maintenance action pending for an AWS DMS resource, including when and how it will be applied. This data type is a response element to the DescribePendingMaintenanceActions operation.
Action (string) --
The type of pending maintenance action that is available for the resource.
AutoAppliedAfterDate (datetime) --
The date of the maintenance window when the action is to be applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any next-maintenance opt-in requests are ignored.
ForcedApplyDate (datetime) --
The date when the maintenance action will be automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any immediate opt-in requests are ignored.
OptInStatus (string) --
The type of opt-in request that has been received for the resource.
CurrentApplyDate (datetime) --
The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the ApplyPendingMaintenanceAction API operation, and also the AutoAppliedAfterDate and ForcedApplyDate parameter values. This value is blank if an opt-in request has not been received and nothing has been specified for AutoAppliedAfterDate or ForcedApplyDate .
Description (string) --
A description providing more detail about the maintenance action.
Exceptions
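Examples
A minimal usage sketch for applying a pending maintenance action. The replication instance ARN is a placeholder, and the ApplyAction and OptInType values below are assumed examples rather than values documented above.
response = client.apply_pending_maintenance_action(
    # Placeholder ARN of the replication instance that has the pending action.
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE',
    # Assumed maintenance action name; use the value returned by
    # describe_pending_maintenance_actions for your resource.
    ApplyAction='system-update',
    # Assumed opt-in type that applies the action during the next maintenance window.
    OptInType='next-maintenance'
)
print(response['ResourcePendingMaintenanceActions']['ResourceIdentifier'])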
Check if an operation can be paginated.
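For example, a quick sketch of checking pagination support before requesting a paginator (describe_endpoints is assumed here as a paginated DMS operation):
if client.can_paginate('describe_endpoints'):
    # get_paginator() returns a paginator object for the named operation.
    paginator = client.get_paginator('describe_endpoints')
    for page in paginator.paginate():
        print(page['Endpoints'])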
Creates an endpoint using the provided settings.
See also: AWS API Documentation
Request Syntax
response = client.create_endpoint(
EndpointIdentifier='string',
EndpointType='source'|'target',
EngineName='string',
Username='string',
Password='string',
ServerName='string',
Port=123,
DatabaseName='string',
ExtraConnectionAttributes='string',
KmsKeyId='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
CertificateArn='string',
SslMode='none'|'require'|'verify-ca'|'verify-full',
ServiceAccessRoleArn='string',
ExternalTableDefinition='string',
DynamoDbSettings={
'ServiceAccessRoleArn': 'string'
},
S3Settings={
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
DmsTransferSettings={
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
MongoDbSettings={
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
KinesisSettings={
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
KafkaSettings={
'Broker': 'string',
'Topic': 'string'
},
ElasticsearchSettings={
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
NeptuneSettings={
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
RedshiftSettings={
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
)
[REQUIRED]
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
[REQUIRED]
The type of endpoint. Valid values are source and target .
[REQUIRED]
The type of engine for the endpoint. Valid values, depending on the EndpointType value, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
One or more tags to be assigned to the endpoint.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.
The Amazon Resource Name (ARN) used by the service access IAM role.
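A minimal sketch of passing DynamoDbSettings when creating a DynamoDB target endpoint; the endpoint identifier and role ARN below are placeholders.
response = client.create_endpoint(
    EndpointIdentifier='dynamodb-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='dynamodb',
    DynamoDbSettings={
        # Placeholder ARN of the IAM role that grants DMS access to DynamoDB.
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-dynamodb-access'
    }
)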
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.
The Amazon Resource Name (ARN) used by the service access IAM role.
The external table definition.
The delimiter used to separate rows in the source files. The default is a newline (\n).
The delimiter used to separate columns in the source files. The default is a comma.
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``bucketFolder/schema_name/table_name/``. If this parameter isn't specified, then the path used is ``schema_name/table_name/``.
The name of the S3 bucket.
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value``
The format of the data that you want to use for output. You can choose one of the following:
The type of encoding you are using:
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
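To tie the S3 settings above together, here is a hedged sketch of creating a Parquet-format S3 target endpoint. The identifier, role ARN, bucket, folder, and timestamp column names are placeholders; only settings documented above are used.
response = client.create_endpoint(
    EndpointIdentifier='s3-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='s3',
    S3Settings={
        # Placeholder ARN of the IAM role that grants DMS access to the bucket.
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-s3-access',
        'BucketName': 'my-dms-target-bucket',  # placeholder bucket
        'BucketFolder': 'migration-data',      # placeholder folder
        'DataFormat': 'parquet',
        'CompressionType': 'gzip',
        'ParquetVersion': 'parquet-2-0',
        'EnableStatistics': True,
        # Add a commit-timestamp column with a hypothetical name.
        'TimestampColumnName': 'dms_commit_ts',
        'CdcInsertsAndUpdates': True
    }
)
print(response['Endpoint']['EndpointArn'])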
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
The IAM role that has permission to access the Amazon S3 bucket.
The name of the S3 bucket to use.
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see Using MongoDB as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
The user name you use to access the MongoDB source endpoint.
The password for the user account you use to access the MongoDB source endpoint.
The name of the server on the MongoDB source endpoint.
The port value for the MongoDB source endpoint.
The database name on the MongoDB source endpoint.
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when authType=No.
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
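As an illustration, a hedged sketch of a MongoDB source endpoint using table mode and password authentication; the host, credentials, and database names are placeholders.
response = client.create_endpoint(
    EndpointIdentifier='mongodb-source-endpoint',  # placeholder identifier
    EndpointType='source',
    EngineName='mongodb',
    MongoDbSettings={
        'ServerName': 'mongodb.example.com',  # placeholder host
        'Port': 27017,
        'DatabaseName': 'appdb',              # placeholder database
        'Username': 'dms_user',               # placeholder credentials
        'Password': 'dms_password',
        'AuthType': 'password',
        'AuthMechanism': 'scram_sha_1',       # for MongoDB 3.x, per the note above
        'AuthSource': 'admin',
        'NestingLevel': 'one',                # table mode
        'DocsToInvestigate': '1000'
    }
)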
Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration User Guide.
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
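A hedged sketch of a Kinesis Data Streams target endpoint; the stream ARN and role ARN are placeholders.
response = client.create_endpoint(
    EndpointIdentifier='kinesis-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='kinesis',
    KinesisSettings={
        'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream',  # placeholder stream
        'MessageFormat': 'json',
        # Placeholder ARN of the IAM role that allows DMS to write to the stream.
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-kinesis-access',
        'IncludeTransactionDetails': True,
        'PartitionIncludeSchemaTable': True
    }
)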
Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration User Guide.
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form ``broker-hostname-or-ip:port``. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
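A hedged sketch of a Kafka target endpoint; the broker address reuses the example above, and the topic name is a placeholder.
response = client.create_endpoint(
    EndpointIdentifier='kafka-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='kafka',
    KafkaSettings={
        'Broker': 'ec2-12-345-678-901.compute-1.amazonaws.com:2345',
        # Placeholder topic; DMS uses "kafka-default-topic" if this is omitted.
        'Topic': 'dms-migration-topic'
    }
)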
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration User Guide.
The Amazon Resource Name (ARN) used by the service to access the IAM role.
The endpoint for the Elasticsearch cluster.
The maximum percentage of records that can fail to be written before a full load operation stops.
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
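A hedged sketch of an Elasticsearch target endpoint; the endpoint URI and role ARN are placeholders.
response = client.create_endpoint(
    EndpointIdentifier='es-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='elasticsearch',
    ElasticsearchSettings={
        # Placeholder ARN of the IAM role DMS uses to reach the cluster.
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-es-access',
        'EndpointUri': 'search-my-domain.us-east-1.es.amazonaws.com',  # placeholder cluster endpoint
        'FullLoadErrorPercentage': 10,  # stop the full load if more than 10% of records fail
        'ErrorRetryDuration': 300       # retry failed requests for up to 300 seconds
    }
)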
Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettings in the AWS Database Migration Service User Guide.
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
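A hedged sketch of a Neptune target endpoint. The role ARN, bucket, and folder are placeholders, the numeric values simply restate the documented defaults, and the "neptune" engine name is an assumption, since it doesn't appear in the EngineName list above.
response = client.create_endpoint(
    EndpointIdentifier='neptune-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='neptune',  # assumed engine name; not in the EngineName list above
    NeptuneSettings={
        # Placeholder ARN of the service role created for the Neptune target.
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-neptune-access',
        'S3BucketName': 'dms-neptune-staging',  # placeholder staging bucket
        'S3BucketFolder': 'graph-data',         # placeholder folder
        'ErrorRetryDuration': 250,              # documented default (milliseconds)
        'MaxFileSize': 1048576,                 # documented default (KB)
        'MaxRetryCount': 5,                     # documented default
        'IamAuthEnabled': False
    }
)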
Provides information that defines an Amazon Redshift endpoint.
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
The name of the S3 bucket that you want to use.
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
The name of the Amazon Redshift data warehouse (service) that you are working with.
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
The password for the user named in the username property.
The port number for Amazon Redshift. The default value is 5439.
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
A list of characters that you want to replace. Use with ReplaceChars .
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".
The name of the Amazon Redshift cluster you are using.
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
An Amazon Redshift user name for a registered user.
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
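Finally, a hedged sketch of a Redshift target endpoint; the cluster address, credentials, staging bucket, and role ARN are placeholders.
response = client.create_endpoint(
    EndpointIdentifier='redshift-target-endpoint',  # placeholder identifier
    EndpointType='target',
    EngineName='redshift',
    ServerName='my-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com',  # placeholder cluster
    Port=5439,
    DatabaseName='dev',            # placeholder database
    Username='awsuser',            # placeholder credentials
    Password='example-password',
    RedshiftSettings={
        # Placeholder ARN of the IAM role with access to Amazon Redshift.
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-redshift-access',
        'BucketName': 'dms-redshift-staging',  # placeholder intermediate S3 bucket
        'FileTransferUploadStreams': 10,       # documented default
        'MaxFileSize': 32768,                  # documented default (KB)
        'TruncateColumns': True,
        'TrimBlanks': True
    }
)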
dict
Response Syntax
{
'Endpoint': {
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
'KafkaSettings': {
'Broker': 'string',
'Topic': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'NeptuneSettings': {
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
}
}
Response Structure
(dict) --
Endpoint (dict) --
The endpoint that was created.
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings (dict) --
The settings for the target DynamoDB database. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the source files. The default is a newline (\n).
CsvDelimiter (string) --
The delimiter used to separate columns in the source files. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``bucketFolder/schema_name/table_name/``. If this parameter isn't specified, then the path used is ``schema_name/table_name/``.
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value``
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
EncodingType (string) --
The type of encoding you are using:
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when authType=No.
NestingLevel (string) --
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
AuthSource (string) --
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form ``broker-hostname-or-ip:port``. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
BucketName (string) --
The name of the S3 bucket that you want to use.
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
FileTransferUploadStreams (integer) --
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
Exceptions
Examples
Creates an endpoint using the provided settings.
response = client.create_endpoint(
CertificateArn='',
DatabaseName='testdb',
EndpointIdentifier='test-endpoint-1',
EndpointType='source',
EngineName='mysql',
ExtraConnectionAttributes='',
KmsKeyId='arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd',
Password='password',
Port=3306,
ServerName='mydb.cx1llnox7iyx.us-west-2.rds.amazonaws.com',
SslMode='require',
Tags=[
{
'Key': 'Account',
'Value': '143327655',
},
],
Username='username',
)
print(response)
Expected Output:
{
'Endpoint': {
'EndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM',
'EndpointIdentifier': 'test-endpoint-1',
'EndpointType': 'source',
'EngineName': 'mysql',
'KmsKeyId': 'arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd',
'Port': 3306,
'ServerName': 'mydb.cx1llnox7iyx.us-west-2.rds.amazonaws.com',
'Status': 'active',
'Username': 'username',
},
'ResponseMetadata': {
'...': '...',
},
}
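The example above creates a MySQL source endpoint. As a further illustration of the RedshiftSettings parameter described earlier, the following is a minimal sketch of creating an Amazon Redshift target endpoint. The cluster host name, credentials, IAM role ARN, and bucket name are placeholder values that you would replace with your own resources.
import boto3

client = boto3.client('dms')

response = client.create_endpoint(
    EndpointIdentifier='redshift-target-1',
    EndpointType='target',
    EngineName='redshift',
    # Placeholder connection details for the Redshift cluster.
    ServerName='example-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com',
    Port=5439,
    DatabaseName='dev',
    Username='awsuser',
    Password='example-password',
    RedshiftSettings={
        # Role that allows DMS to stage .csv files in S3 before the COPY (placeholder ARN).
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-redshift-s3-role',
        'BucketName': 'example-dms-staging-bucket',
        'BucketFolder': 'redshift-staging',
        'AcceptAnyDate': True,
        'TimeFormat': 'auto',
        'TruncateColumns': True,
        'FileTransferUploadStreams': 10,
    },
)
print(response['Endpoint']['EndpointArn'])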
Creates an AWS DMS event notification subscription.
You can specify the type of source (SourceType ) you want to be notified of, provide a list of AWS DMS source IDs (SourceIds ) that trigger the events, and provide a list of event categories (EventCategories ) for events you want to be notified of. If you specify both the SourceType and SourceIds , such as SourceType = replication-instance and SourceIdentifier = my-replinstance , you will be notified of all the replication instance events for the specified source. If you specify a SourceType but don't specify a SourceIdentifier , you receive notice of the events for that source type for all your AWS DMS sources. If you specify neither SourceType nor SourceIdentifier , you will be notified of events generated from all AWS DMS sources belonging to your customer account.
For more information about AWS DMS events, see Working with Events and Notifications in the AWS Database Migration Service User Guide.
See also: AWS API Documentation
Request Syntax
response = client.create_event_subscription(
SubscriptionName='string',
SnsTopicArn='string',
SourceType='string',
EventCategories=[
'string',
],
SourceIds=[
'string',
],
Enabled=True|False,
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
[REQUIRED]
The name of the AWS DMS event notification subscription. This name must be less than 255 characters.
[REQUIRED]
The Amazon Resource Name (ARN) of the Amazon SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
The type of AWS DMS resource that generates the events. For example, if you want to be notified of events generated by a replication instance, you set this parameter to replication-instance . If this value isn't specified, all events are returned.
Valid values: replication-instance | replication-task
A list of event categories for a source type that you want to subscribe to. For more information, see Working with Events and Notifications in the AWS Database Migration Service User Guide.
A list of identifiers for which AWS DMS provides notification events.
If you don't specify a value, notifications are provided for all sources.
If you specify multiple values, they must be of the same type. For example, if you specify a database instance ID, then all of the other values must be database instance IDs.
One or more tags to be assigned to the event subscription.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
dict
Response Syntax
{
'EventSubscription': {
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
}
}
Response Structure
(dict) --
EventSubscription (dict) --
The event subscription that was created.
CustomerAwsId (string) --
The AWS customer account associated with the AWS DMS event notification subscription.
CustSubscriptionId (string) --
The AWS DMS event notification subscription Id.
SnsTopicArn (string) --
The topic ARN of the AWS DMS event notification subscription.
Status (string) --
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime (string) --
The time the AWS DMS event notification subscription was created.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList (list) --
A list of source Ids for the event subscription.
EventCategoriesList (list) --
A list of event categories.
Enabled (boolean) --
Boolean value that indicates if the event subscription is enabled.
Exceptions
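A minimal usage sketch for this operation follows. The subscription name, SNS topic ARN, and source identifier are hypothetical placeholders, and the event categories shown are examples; you can list the categories available for a source type with describe_event_categories.
import boto3

client = boto3.client('dms')

response = client.create_event_subscription(
    SubscriptionName='my-dms-events',
    # SNS topic that receives the notifications (placeholder ARN).
    SnsTopicArn='arn:aws:sns:us-east-1:123456789012:dms-notifications',
    SourceType='replication-instance',
    EventCategories=[
        'failure',
        'maintenance',
    ],
    SourceIds=[
        'test-rep-1',
    ],
    Enabled=True,
)
print(response['EventSubscription']['Status'])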
Creates the replication instance using the specified parameters.
AWS DMS requires that your account have certain roles with appropriate permissions before you can create a replication instance. For information on the required roles, see Creating the IAM Roles to Use With the AWS CLI and AWS DMS API . For information on the required permissions, see IAM Permissions Needed to Use AWS DMS .
See also: AWS API Documentation
Request Syntax
response = client.create_replication_instance(
ReplicationInstanceIdentifier='string',
AllocatedStorage=123,
ReplicationInstanceClass='string',
VpcSecurityGroupIds=[
'string',
],
AvailabilityZone='string',
ReplicationSubnetGroupIdentifier='string',
PreferredMaintenanceWindow='string',
MultiAZ=True|False,
EngineVersion='string',
AutoMinorVersionUpgrade=True|False,
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
KmsKeyId='string',
PubliclyAccessible=True|False,
DnsNameServers='string'
)
[REQUIRED]
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Example: myrepinstance
[REQUIRED]
The compute and memory capacity of the replication instance as specified by the replication instance class.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ddd:hh24:mi-ddd:hh24:mi
Default: A 30-minute window selected at random from an 8-hour block of time per AWS Region, occurring on a random day of the week.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
A value that indicates whether minor engine upgrades are applied automatically to the replication instance during the maintenance window. This parameter defaults to true .
Default: true
One or more tags to be assigned to the replication instance.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
dict
Response Syntax
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
Response Structure
(dict) --
ReplicationInstance (dict) --
The replication instance that was created.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Example: myrepinstance
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
ReplicationInstanceStatus (string) --
The status of the replication instance.
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime (datetime) --
The time the replication instance was created.
VpcSecurityGroups (list) --
The VPC security group for the instance.
(dict) --
Describes the status of a security group associated with the virtual private cloud hosting your replication and DB instances.
VpcSecurityGroupId (string) --
The VPC security group Id.
Status (string) --
The status of the VPC security group.
AvailabilityZone (string) --
The Availability Zone for the instance.
ReplicationSubnetGroup (dict) --
The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroups operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
PreferredMaintenanceWindow (string) --
The maintenance window times for the replication instance.
PendingModifiedValues (dict) --
The pending modification values.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
AutoMinorVersionUpgrade (boolean) --
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress (string) --
The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress (string) --
The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses (list) --
One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses (list) --
One or more private IP addresses for the replication instance.
PubliclyAccessible (boolean) --
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
SecondaryAvailabilityZone (string) --
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil (datetime) --
The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers (string) --
The DNS name servers for the replication instance.
Exceptions
Examples
Creates the replication instance using the specified parameters.
response = client.create_replication_instance(
AllocatedStorage=123,
AutoMinorVersionUpgrade=True,
AvailabilityZone='',
EngineVersion='',
KmsKeyId='',
MultiAZ=True,
PreferredMaintenanceWindow='',
PubliclyAccessible=True,
ReplicationInstanceClass='',
ReplicationInstanceIdentifier='',
ReplicationSubnetGroupIdentifier='',
Tags=[
{
'Key': 'string',
'Value': 'string',
},
],
VpcSecurityGroupIds=[
],
)
print(response)
Expected Output:
{
'ReplicationInstance': {
'AllocatedStorage': 5,
'AutoMinorVersionUpgrade': True,
'EngineVersion': '1.5.0',
'KmsKeyId': 'arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd',
'PendingModifiedValues': {
},
'PreferredMaintenanceWindow': 'sun:06:00-sun:14:00',
'PubliclyAccessible': True,
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationInstanceClass': 'dms.t2.micro',
'ReplicationInstanceIdentifier': 'test-rep-1',
'ReplicationInstanceStatus': 'creating',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupDescription': 'default',
'ReplicationSubnetGroupIdentifier': 'default',
'SubnetGroupStatus': 'Complete',
'Subnets': [
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1d',
},
'SubnetIdentifier': 'subnet-f6dd91af',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1b',
},
'SubnetIdentifier': 'subnet-3605751d',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1c',
},
'SubnetIdentifier': 'subnet-c2daefb5',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1e',
},
'SubnetIdentifier': 'subnet-85e90cb8',
'SubnetStatus': 'Active',
},
],
'VpcId': 'vpc-6741a603',
},
},
'ResponseMetadata': {
'...': '...',
},
}
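Because the instance is returned in the creating status, you typically wait for it to become available before creating endpoints or tasks that use it. The following sketch uses the replication_instance_available waiter that boto3 provides for this service; the identifier matches the hypothetical instance in the expected output above.
import boto3

client = boto3.client('dms')

# Poll DescribeReplicationInstances until the instance reaches "available".
waiter = client.get_waiter('replication_instance_available')
waiter.wait(
    Filters=[
        {
            'Name': 'replication-instance-id',
            'Values': ['test-rep-1'],
        },
    ],
)
print('Replication instance test-rep-1 is available.')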
Creates a replication subnet group given a list of the subnet IDs in a VPC.
See also: AWS API Documentation
Request Syntax
response = client.create_replication_subnet_group(
ReplicationSubnetGroupIdentifier='string',
ReplicationSubnetGroupDescription='string',
SubnetIds=[
'string',
],
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
[REQUIRED]
The name for the replication subnet group. This value is stored as a lowercase string.
Constraints: Must contain no more than 255 alphanumeric characters, periods, spaces, underscores, or hyphens. Must not be "default".
Example: mySubnetgroup
[REQUIRED]
The description for the subnet group.
[REQUIRED]
One or more subnet IDs to be assigned to the subnet group.
One or more tags to be assigned to the subnet group.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
dict
Response Syntax
{
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
}
}
Response Structure
(dict) --
ReplicationSubnetGroup (dict) --
The replication subnet group that was created.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroups operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
Exceptions
Examples
Creates a replication subnet group given a list of the subnet IDs in a VPC.
response = client.create_replication_subnet_group(
ReplicationSubnetGroupDescription='US West subnet group',
ReplicationSubnetGroupIdentifier='us-west-2ab-vpc-215ds366',
SubnetIds=[
'subnet-e145356n',
'subnet-58f79200',
],
Tags=[
{
'Key': 'Account',
'Value': '145235',
},
],
)
print(response)
Expected Output:
{
'ReplicationSubnetGroup': {
},
'ResponseMetadata': {
'...': '...',
},
}
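If you prefer to build the SubnetIds list dynamically rather than hard-coding it, one approach is to query Amazon EC2 for the subnets in your VPC first. The following is a sketch under that assumption; the VPC ID and group identifier are placeholders.
import boto3

ec2 = boto3.client('ec2')
dms = boto3.client('dms')

# Collect all subnet IDs in the target VPC (placeholder VPC ID).
subnets = ec2.describe_subnets(
    Filters=[
        {'Name': 'vpc-id', 'Values': ['vpc-0123456789abcdef0']},
    ],
)
subnet_ids = [subnet['SubnetId'] for subnet in subnets['Subnets']]

response = dms.create_replication_subnet_group(
    ReplicationSubnetGroupIdentifier='my-replication-subnet-group',
    ReplicationSubnetGroupDescription='Subnets available to DMS replication instances',
    SubnetIds=subnet_ids,
)
print(response['ReplicationSubnetGroup']['SubnetGroupStatus'])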
Creates a replication task using the specified parameters.
See also: AWS API Documentation
Request Syntax
response = client.create_replication_task(
ReplicationTaskIdentifier='string',
SourceEndpointArn='string',
TargetEndpointArn='string',
ReplicationInstanceArn='string',
MigrationType='full-load'|'cdc'|'full-load-and-cdc',
TableMappings='string',
ReplicationTaskSettings='string',
CdcStartTime=datetime(2015, 1, 1),
CdcStartPosition='string',
CdcStopPosition='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
TaskData='string'
)
[REQUIRED]
An identifier for the replication task.
Constraints:
[REQUIRED]
An Amazon Resource Name (ARN) that uniquely identifies the source endpoint.
[REQUIRED]
An Amazon Resource Name (ARN) that uniquely identifies the target endpoint.
[REQUIRED]
The Amazon Resource Name (ARN) of a replication instance.
[REQUIRED]
The migration type. Valid values: full-load | cdc | full-load-and-cdc
[REQUIRED]
The table mappings for the task, in JSON format. For more information, see Using Table Mapping to Specify Task Settings in the AWS Database Migration Service User Guide.
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time "2018-03-08T12:12:12"
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
Note
When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for AWS DMS .
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
One or more tags to be assigned to the replication task.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
dict
Response Syntax
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
}
}
Response Structure
(dict) --
ReplicationTask (dict) --
The replication task that was created.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication instance.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration Service User Guide.
Exceptions
Examples
Creates a replication task using the specified parameters.
response = client.create_replication_task(
CdcStartTime=datetime(2016, 12, 14, 18, 25, 43, 2, 349, 0),
MigrationType='full-load',
ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
ReplicationTaskIdentifier='task1',
ReplicationTaskSettings='',
SourceEndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE',
TableMappings='file://mappingfile.json',
Tags=[
{
'Key': 'Account',
'Value': '24352226',
},
],
TargetEndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
)
print(response)
Expected Output:
{
'ReplicationTask': {
'MigrationType': 'full-load',
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationTaskArn': 'arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM',
'ReplicationTaskCreationDate': datetime(2016, 12, 14, 18, 25, 43, 2, 349, 0),
'ReplicationTaskIdentifier': 'task1',
'ReplicationTaskSettings': '{"TargetMetadata":{"TargetSchema":"","SupportLobs":true,"FullLobMode":true,"LobChunkSize":64,"LimitedSizeLobMode":false,"LobMaxSize":0},"FullLoadSettings":{"FullLoadEnabled":true,"ApplyChangesEnabled":false,"TargetTablePrepMode":"DROP_AND_CREATE","CreatePkAfterFullLoad":false,"StopTaskCachedChangesApplied":false,"StopTaskCachedChangesNotApplied":false,"ResumeEnabled":false,"ResumeMinTableSize":100000,"ResumeOnlyClusteredPKTables":true,"MaxFullLoadSubTasks":8,"TransactionConsistencyTimeout":600,"CommitRate":10000},"Logging":{"EnableLogging":false}}',
'SourceEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE',
'Status': 'creating',
'TableMappings': 'file://mappingfile.json',
'TargetEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
},
'ResponseMetadata': {
'...': '...',
},
}
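Note that the API expects TableMappings to be the JSON document itself; the file://mappingfile.json value shown above comes from the equivalent AWS CLI example, where the CLI resolves the file reference before sending the request. The following sketch builds a simple selection rule in Python and passes the CDC start and stop positions as plain strings; all ARNs are placeholders.
import json
import boto3

client = boto3.client('dms')

# A single selection rule that includes every table in the "mydb" schema.
table_mappings = {
    'rules': [
        {
            'rule-type': 'selection',
            'rule-id': '1',
            'rule-name': '1',
            'object-locator': {'schema-name': 'mydb', 'table-name': '%'},
            'rule-action': 'include',
        },
    ],
}

response = client.create_replication_task(
    ReplicationTaskIdentifier='cdc-task-1',
    SourceEndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:SOURCEENDPOINTEXAMPLE',
    TargetEndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:TARGETENDPOINTEXAMPLE',
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:REPLICATIONINSTANCEEXAMPLE',
    MigrationType='cdc',
    TableMappings=json.dumps(table_mappings),
    CdcStartPosition='2018-03-08T12:12:12',
    CdcStopPosition='commit_time:2018-03-08T14:00:00',
)
print(response['ReplicationTask']['Status'])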
Deletes the specified certificate.
See also: AWS API Documentation
Request Syntax
response = client.delete_certificate(
CertificateArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the certificate to be deleted.
{
'Certificate': {
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
}
}
Response Structure
The Secure Sockets Layer (SSL) certificate.
A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
The date that the certificate was created.
The contents of a .pem file, which contains an X.509 certificate.
The location of an imported Oracle Wallet certificate for use with SSL.
The Amazon Resource Name (ARN) for the certificate.
The owner of the certificate.
The beginning date that the certificate is valid.
The final date that the certificate is valid.
The signing algorithm for the certificate.
The key length of the cryptographic algorithm being used.
Exceptions
Examples
Deletes the specified certificate.
response = client.delete_certificate(
CertificateArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUSM457DE6XFJCJQ',
)
print(response)
Expected Output:
{
'Certificate': {
},
'ResponseMetadata': {
'...': '...',
},
}
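If you know only the certificate identifier rather than its ARN, a sketch like the following looks it up with describe_certificates first; the identifier is a placeholder, and only the first page of results is checked.
import boto3

client = boto3.client('dms')

# Find the certificate whose identifier matches, then delete it by ARN.
certificates = client.describe_certificates()
for certificate in certificates['Certificates']:
    if certificate['CertificateIdentifier'] == 'my-ssl-cert':
        client.delete_certificate(CertificateArn=certificate['CertificateArn'])
        print('Deleted', certificate['CertificateArn'])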
Deletes the connection between a replication instance and an endpoint.
See also: AWS API Documentation
Request Syntax
response = client.delete_connection(
EndpointArn='string',
ReplicationInstanceArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance.
dict
Response Syntax
{
'Connection': {
'ReplicationInstanceArn': 'string',
'EndpointArn': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'EndpointIdentifier': 'string',
'ReplicationInstanceIdentifier': 'string'
}
}
Response Structure
(dict) --
Connection (dict) --
The connection that is being deleted.
ReplicationInstanceArn (string) --
The ARN of the replication instance.
EndpointArn (string) --
The ARN string that uniquely identifies the endpoint.
Status (string) --
The connection status.
LastFailureMessage (string) --
The error message when the connection last failed.
EndpointIdentifier (string) --
The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Exceptions
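A minimal sketch of calling this operation; both ARNs are placeholders for an existing endpoint and replication instance whose test connection you want to remove.
import boto3

client = boto3.client('dms')

response = client.delete_connection(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:ENDPOINTARNEXAMPLE',
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:REPLICATIONINSTANCEEXAMPLE',
)
print(response['Connection']['Status'])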
Deletes the specified endpoint.
Note
All tasks associated with the endpoint must be deleted before you can delete the endpoint.
See also: AWS API Documentation
Request Syntax
response = client.delete_endpoint(
EndpointArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
{
'Endpoint': {
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
'KafkaSettings': {
'Broker': 'string',
'Topic': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'NeptuneSettings': {
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
}
}
Response Structure
The endpoint that was deleted.
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
The type of endpoint. Valid values are source and target .
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
The user name used to connect to the endpoint.
The name of the server at the endpoint.
The port value used to access the endpoint.
The name of the database at the endpoint.
Additional connection attributes used to connect to the endpoint.
The status of the endpoint.
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
The SSL mode used to connect to the endpoint. The default value is none .
The Amazon Resource Name (ARN) used by the service access IAM role.
The external table definition.
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
The settings for the target DynamoDB database. For more information, see the DynamoDBSettings structure.
The Amazon Resource Name (ARN) used by the service access IAM role.
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
The Amazon Resource Name (ARN) used by the service access IAM role.
The external table definition.
The delimiter used to separate rows in the source files. The default is a carriage return (\n ).
The delimiter used to separate columns in the source files. The default is a comma.
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
The name of the S3 bucket.
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
The format of the data that you want to use for output. You can choose one of the following:
The type of encoding you are using:
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, the slower writes become. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
The IAM role that has permission to access the Amazon S3 bucket.
The name of the S3 bucket to use.
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
The user name you use to access the MongoDB source endpoint.
The password for the user account you use to access the MongoDB source endpoint.
The name of the server on the MongoDB source endpoint.
The port value for the MongoDB source endpoint.
The database name on the MongoDB source endpoint.
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when authType=No.
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
The settings for the Elasticsearch source endpoint. For more information, see the ElasticsearchSettings structure.
The Amazon Resource Name (ARN) used by the service to access the IAM role.
The endpoint for the Elasticsearch cluster.
The maximum percentage of records that can fail to be written before a full load operation stops.
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
Settings for the Amazon Redshift endpoint.
A value that indicates whether to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
The name of the S3 bucket that you want to use.
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
The name of the Amazon Redshift data warehouse (service) that you are working with.
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
The password for the user named in the username property.
The port number for Amazon Redshift. The default value is 5439.
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
A list of characters that you want to replace. Use with ReplaceChars .
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
The name of the Amazon Redshift cluster you are using.
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
An Amazon Redshift user name for a registered user.
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
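As an illustrative, non-authoritative sketch, the following shows how a RedshiftSettings dictionary with the shape described above might be supplied when creating a Redshift target endpoint with create_endpoint; an analogous dictionary applies to the Neptune settings. All ARNs, names, and credentials are placeholders.
import boto3

client = boto3.client('dms')

# Placeholder identifiers and credentials; adjust for your environment.
response = client.create_endpoint(
    EndpointIdentifier='redshift-target-1',
    EndpointType='target',
    EngineName='redshift',
    ServerName='my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com',
    Port=5439,
    DatabaseName='analytics',
    Username='dms_user',
    Password='example-password',
    RedshiftSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-redshift-role',
        'BucketName': 'my-dms-redshift-staging',   # intermediate .csv storage
        'ConnectionTimeout': 60000,                # milliseconds
        'MaxFileSize': 32768,                      # KB per intermediate .csv file
        'FileTransferUploadStreams': 10,
        'TruncateColumns': True,
        'EmptyAsNull': True
    }
)
print(response['Endpoint']['Status'])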
Exceptions
Examples
Deletes the specified endpoint. All tasks associated with the endpoint must be deleted before you can delete the endpoint.
response = client.delete_endpoint(
EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM',
)
print(response)
Expected Output:
{
'Endpoint': {
'EndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM',
'EndpointIdentifier': 'test-endpoint-1',
'EndpointType': 'source',
'EngineName': 'mysql',
'KmsKeyId': 'arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd',
'Port': 3306,
'ServerName': 'mydb.cx1llnox7iyx.us-west-2.rds.amazonaws.com',
'Status': 'active',
'Username': 'username',
},
'ResponseMetadata': {
'...': '...',
},
}
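Beyond the packaged example above, the deletion can be combined with a waiter so the caller blocks until the endpoint is actually gone. This is a minimal sketch assuming the endpoint_deleted waiter and the endpoint-arn filter available in recent SDK versions; the ARN is taken from the example.
import boto3

client = boto3.client('dms')

endpoint_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM'

client.delete_endpoint(EndpointArn=endpoint_arn)

# Poll describe_endpoints until the endpoint no longer appears.
waiter = client.get_waiter('endpoint_deleted')
waiter.wait(Filters=[{'Name': 'endpoint-arn', 'Values': [endpoint_arn]}])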
Deletes an AWS DMS event subscription.
See also: AWS API Documentation
Request Syntax
response = client.delete_event_subscription(
SubscriptionName='string'
)
[REQUIRED]
The name of the DMS event notification subscription to be deleted.
{
'EventSubscription': {
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
}
}
Response Structure
The event subscription that was deleted.
The AWS customer account associated with the AWS DMS event notification subscription.
The AWS DMS event notification subscription Id.
The topic ARN of the AWS DMS event notification subscription.
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
The time the RDS event notification subscription was created.
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
A list of source Ids for the event subscription.
A list of event categories.
Boolean value that indicates if the event subscription is enabled.
Exceptions
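No packaged example accompanies this operation; as a hedged sketch, a call with a hypothetical subscription name looks like the following.
import boto3

client = boto3.client('dms')

# 'my-dms-events' is a hypothetical subscription name.
response = client.delete_event_subscription(
    SubscriptionName='my-dms-events'
)

# The deleted subscription is echoed back in the response.
print(response['EventSubscription']['Status'])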
Deletes the specified replication instance.
Note
You must delete any migration tasks that are associated with the replication instance before you can delete it.
See also: AWS API Documentation
Request Syntax
response = client.delete_replication_instance(
ReplicationInstanceArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance to be deleted.
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
Response Structure
The replication instance that was deleted.
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Example: myrepinstance
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
The status of the replication instance.
The amount of storage (in gigabytes) that is allocated for the replication instance.
The time the replication instance was created.
The VPC security group for the instance.
Describes the status of a security group associated with the virtual private cloud hosting your replication and DB instances.
The VPC security group Id.
The status of the VPC security group.
The Availability Zone for the instance.
The subnet group for the replication instance.
The identifier of the replication instance subnet group.
A description for the replication subnet group.
The ID of the VPC.
The status of the subnet group.
The subnets that are in the subnet group.
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
The subnet identifier.
The Availability Zone of the subnet.
The name of the Availability Zone.
The status of the subnet.
The maintenance window times for the replication instance.
The pending modification values.
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
The amount of storage (in gigabytes) that is allocated for the replication instance.
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
The engine version number of the replication instance.
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
The engine version number of the replication instance.
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The Amazon Resource Name (ARN) of the replication instance.
The public IP address of the replication instance.
The private IP address of the replication instance.
One or more public IP addresses for the replication instance.
One or more private IP addresses for the replication instance.
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
The expiration date of the free replication instance that is part of the Free DMS program.
The DNS name servers for the replication instance.
Exceptions
Examples
Deletes the specified replication instance. You must delete any migration tasks that are associated with the replication instance before you can delete it.
response = client.delete_replication_instance(
ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
)
print(response)
Expected Output:
{
'ReplicationInstance': {
'AllocatedStorage': 5,
'AutoMinorVersionUpgrade': True,
'EngineVersion': '1.5.0',
'KmsKeyId': 'arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd',
'PendingModifiedValues': {
},
'PreferredMaintenanceWindow': 'sun:06:00-sun:14:00',
'PubliclyAccessible': True,
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationInstanceClass': 'dms.t2.micro',
'ReplicationInstanceIdentifier': 'test-rep-1',
'ReplicationInstanceStatus': 'creating',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupDescription': 'default',
'ReplicationSubnetGroupIdentifier': 'default',
'SubnetGroupStatus': 'Complete',
'Subnets': [
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1d',
},
'SubnetIdentifier': 'subnet-f6dd91af',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1b',
},
'SubnetIdentifier': 'subnet-3605751d',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1c',
},
'SubnetIdentifier': 'subnet-c2daefb5',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1e',
},
'SubnetIdentifier': 'subnet-85e90cb8',
'SubnetStatus': 'Active',
},
],
'VpcId': 'vpc-6741a603',
},
},
'ResponseMetadata': {
'...': '...',
},
}
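As a non-authoritative follow-up to the example above, the call can be paired with a waiter to block until the instance is fully deleted. The sketch assumes the replication_instance_deleted waiter and the replication-instance-arn filter exist in your SDK version.
import boto3

client = boto3.client('dms')

instance_arn = 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ'

client.delete_replication_instance(ReplicationInstanceArn=instance_arn)

# Block until describe_replication_instances no longer returns the instance.
waiter = client.get_waiter('replication_instance_deleted')
waiter.wait(
    Filters=[{'Name': 'replication-instance-arn', 'Values': [instance_arn]}]
)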
Deletes a subnet group.
See also: AWS API Documentation
Request Syntax
response = client.delete_replication_subnet_group(
ReplicationSubnetGroupIdentifier='string'
)
[REQUIRED]
The subnet group name of the replication instance.
{}
Response Structure
Exceptions
Examples
Deletes a replication subnet group.
response = client.delete_replication_subnet_group(
ReplicationSubnetGroupIdentifier='us-west-2ab-vpc-215ds366',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
Deletes the specified replication task.
See also: AWS API Documentation
Request Syntax
response = client.delete_replication_task(
ReplicationTaskArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task to be deleted.
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
}
}
Response Structure
The deleted replication task.
The user-assigned replication task identifier or name.
Constraints:
The Amazon Resource Name (ARN) string that uniquely identifies the source endpoint.
The Amazon Resource Name (ARN) string that uniquely identifies the target endpoint.
The Amazon Resource Name (ARN) of the replication instance.
The type of migration.
Table mappings specified in the task.
The settings for the replication task.
The status of the replication task.
The last error (failure) message generated for the replication instance.
The reason the replication task was stopped.
The date the replication task was created.
The date the replication task is scheduled to start.
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12”
Commit time example: --cdc-stop-position “commit_time:2018-02-09T12:12:12”
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
The Amazon Resource Name (ARN) of the replication task.
The statistics for the task, including elapsed time, tables loaded, and table errors.
The percent complete for the full load migration task.
The elapsed time of the task, in milliseconds.
The number of tables loaded for this task.
The number of tables currently loading for this task.
The number of tables queued for this task.
The number of errors that have occurred during this task.
The date the replication task was started either with a fresh start or a target reload.
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
The date the replication task was stopped.
The date the replication task full load was started.
The date the replication task full load was completed.
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration Service User Guide.
Exceptions
Examples
Deletes the specified replication task.
response = client.delete_replication_task(
ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
)
print(response)
Expected Output:
{
'ReplicationTask': {
'MigrationType': 'full-load',
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationTaskArn': 'arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM',
'ReplicationTaskCreationDate': datetime(2016, 12, 14, 18, 25, 43, 2, 349, 0),
'ReplicationTaskIdentifier': 'task1',
'ReplicationTaskSettings': '{"TargetMetadata":{"TargetSchema":"","SupportLobs":true,"FullLobMode":true,"LobChunkSize":64,"LimitedSizeLobMode":false,"LobMaxSize":0},"FullLoadSettings":{"FullLoadEnabled":true,"ApplyChangesEnabled":false,"TargetTablePrepMode":"DROP_AND_CREATE","CreatePkAfterFullLoad":false,"StopTaskCachedChangesApplied":false,"StopTaskCachedChangesNotApplied":false,"ResumeEnabled":false,"ResumeMinTableSize":100000,"ResumeOnlyClusteredPKTables":true,"MaxFullLoadSubTasks":8,"TransactionConsistencyTimeout":600,"CommitRate":10000},"Logging":{"EnableLogging":false}}',
'SourceEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE',
'Status': 'creating',
'TableMappings': 'file://mappingfile.json',
'TargetEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
},
'ResponseMetadata': {
'...': '...',
},
}
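A running task generally has to be stopped before it can be deleted. The following is a hedged sketch of that sequence, assuming the replication_task_stopped and replication_task_deleted waiters and the replication-task-arn filter available in recent SDK versions; the ARN is taken from the example above.
import boto3

client = boto3.client('dms')

task_arn = 'arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM'
task_filter = [{'Name': 'replication-task-arn', 'Values': [task_arn]}]

# Stop the task and wait for it to reach the stopped state.
client.stop_replication_task(ReplicationTaskArn=task_arn)
client.get_waiter('replication_task_stopped').wait(Filters=task_filter)

# Delete the task and wait until it disappears from describe_replication_tasks.
client.delete_replication_task(ReplicationTaskArn=task_arn)
client.get_waiter('replication_task_deleted').wait(Filters=task_filter)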
Lists all of the AWS DMS attributes for a customer account. These attributes include AWS DMS quotas for the account and a unique account identifier in a particular DMS region. DMS quotas include a list of resource quotas supported by the account, such as the number of replication instances allowed. The description for each resource quota includes the quota name, current usage toward that quota, and the quota's maximum value. DMS uses the unique account identifier to name each artifact used by DMS in the given region.
This command does not take any parameters.
See also: AWS API Documentation
Request Syntax
response = client.describe_account_attributes()
{
'AccountQuotas': [
{
'AccountQuotaName': 'string',
'Used': 123,
'Max': 123
},
],
'UniqueAccountIdentifier': 'string'
}
Response Structure
Account quota information.
Describes a quota for an AWS account, for example, the number of replication instances allowed.
The name of the AWS DMS quota for this AWS account.
The amount currently used toward the quota maximum.
The maximum allowed value for the quota.
A unique AWS DMS identifier for an account in a particular AWS Region. The value of this identifier has the following format: c99999999999 . DMS uses this identifier to name artifacts. For example, DMS uses this identifier to name the default Amazon S3 bucket for storing task assessment reports in a given AWS Region. The format of this S3 bucket name is the following: dms-AccountNumber-UniqueAccountIdentifier . Here is an example name for this default S3 bucket: dms-111122223333-c44445555666 .
Note
AWS DMS supports the UniqueAccountIdentifier parameter in versions 3.1.4 and later.
Examples
Lists all of the AWS DMS attributes for a customer account. The attributes include AWS DMS quotas for the account, such as the number of replication instances allowed. The description for a quota includes the quota name, current usage toward that quota, and the quota's maximum value. This operation does not take any parameters.
response = client.describe_account_attributes(
)
print(response)
Expected Output:
{
'AccountQuotas': [
{
'AccountQuotaName': 'ReplicationInstances',
'Max': 20,
'Used': 0,
},
{
'AccountQuotaName': 'AllocatedStorage',
'Max': 20,
'Used': 0,
},
{
'AccountQuotaName': 'Endpoints',
'Max': 20,
'Used': 0,
},
],
'ResponseMetadata': {
'...': '...',
},
}
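Using only the response fields documented above, a small sketch that reports how close the account is to each quota:
import boto3

client = boto3.client('dms')

response = client.describe_account_attributes()

for quota in response['AccountQuotas']:
    used, maximum = quota['Used'], quota['Max']
    pct = (used / maximum * 100) if maximum else 0
    print(f"{quota['AccountQuotaName']}: {used}/{maximum} ({pct:.0f}% used)")

# UniqueAccountIdentifier is only returned by AWS DMS versions 3.1.4 and later.
print(response.get('UniqueAccountIdentifier', 'n/a'))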
Provides a description of the certificate.
See also: AWS API Documentation
Request Syntax
response = client.describe_certificates(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
Filters applied to the certificates described in the form of key-value pairs.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 10
dict
Response Syntax
{
'Marker': 'string',
'Certificates': [
{
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
},
]
}
Response Structure
(dict) --
Marker (string) --
The pagination token.
Certificates (list) --
The Secure Sockets Layer (SSL) certificates associated with the replication instance.
(dict) --
The SSL certificate that can be used to encrypt connections between the endpoints and the replication instance.
CertificateIdentifier (string) --
A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificateCreationDate (datetime) --
The date that the certificate was created.
CertificatePem (string) --
The contents of a .pem file, which contains an X.509 certificate.
CertificateWallet (bytes) --
The location of an imported Oracle Wallet certificate for use with SSL.
CertificateArn (string) --
The Amazon Resource Name (ARN) for the certificate.
CertificateOwner (string) --
The owner of the certificate.
ValidFromDate (datetime) --
The beginning date that the certificate is valid.
ValidToDate (datetime) --
The final date that the certificate is valid.
SigningAlgorithm (string) --
The signing algorithm for the certificate.
KeyLength (integer) --
The key length of the cryptographic algorithm being used.
Exceptions
Examples
Provides a description of the certificate.
response = client.describe_certificates(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Certificates': [
],
'Marker': '',
'ResponseMetadata': {
'...': '...',
},
}
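Instead of handling MaxRecords and Marker by hand, recent versions of boto3 also expose a paginator for this operation (the paginator name describe_certificates is assumed here); a minimal sketch:
import boto3

client = boto3.client('dms')

# The paginator handles the Marker/MaxRecords bookkeeping.
paginator = client.get_paginator('describe_certificates')

for page in paginator.paginate():
    for cert in page['Certificates']:
        print(cert['CertificateIdentifier'], cert['ValidToDate'])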
Describes the status of the connections that have been made between the replication instance and an endpoint. Connections are created when you test an endpoint.
See also: AWS API Documentation
Request Syntax
response = client.describe_connections(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
The filters applied to the connection.
Valid filter names: endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'Connections': [
{
'ReplicationInstanceArn': 'string',
'EndpointArn': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'EndpointIdentifier': 'string',
'ReplicationInstanceIdentifier': 'string'
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Connections (list) --
A description of the connections.
(dict) --
Status of the connection between an endpoint and a replication instance, including Amazon Resource Names (ARNs) and the last error message issued.
ReplicationInstanceArn (string) --
The ARN of the replication instance.
EndpointArn (string) --
The ARN string that uniquely identifies the endpoint.
Status (string) --
The connection status.
LastFailureMessage (string) --
The error message when the connection last failed.
EndpointIdentifier (string) --
The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Exceptions
Examples
Describes the status of the connections that have been made between the replication instance and an endpoint. Connections are created when you test an endpoint.
response = client.describe_connections(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Connections': [
{
'EndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE',
'EndpointIdentifier': 'testsrc1',
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationInstanceIdentifier': 'test',
'Status': 'successful',
},
],
'Marker': '',
'ResponseMetadata': {
'...': '...',
},
}
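Building on the documented endpoint-arn filter, a hedged sketch that checks the latest connection-test status for a single endpoint (the ARN is hypothetical):
import boto3

client = boto3.client('dms')

endpoint_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE'

response = client.describe_connections(
    Filters=[{'Name': 'endpoint-arn', 'Values': [endpoint_arn]}]
)

for conn in response['Connections']:
    print(conn['ReplicationInstanceIdentifier'], conn['Status'],
          conn.get('LastFailureMessage', ''))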
Returns information about the type of endpoints available.
See also: AWS API Documentation
Request Syntax
response = client.describe_endpoint_types(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
Filters applied to the describe action.
Valid filter names: engine-name | endpoint-type
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'SupportedEndpointTypes': [
{
'EngineName': 'string',
'SupportsCDC': True|False,
'EndpointType': 'source'|'target',
'ReplicationInstanceEngineMinimumVersion': 'string',
'EngineDisplayName': 'string'
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
SupportedEndpointTypes (list) --
The types of endpoints that are supported.
(dict) --
Provides information about types of supported endpoints in response to a request by the DescribeEndpointTypes operation. This information includes the type of endpoint, the database engine name, and whether change data capture (CDC) is supported.
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
SupportsCDC (boolean) --
Indicates if Change Data Capture (CDC) is supported.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
ReplicationInstanceEngineMinimumVersion (string) --
The earliest AWS DMS engine version that supports this endpoint engine. Note that endpoint engines released with AWS DMS versions earlier than 3.1.1 do not return a value for this parameter.
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Examples
Returns information about the type of endpoints available.
response = client.describe_endpoint_types(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Marker': '',
'SupportedEndpointTypes': [
],
'ResponseMetadata': {
'...': '...',
},
}
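As an illustrative sketch using the documented engine-name filter, the following lists which endpoint types are available for PostgreSQL and whether each supports change data capture:
import boto3

client = boto3.client('dms')

response = client.describe_endpoint_types(
    Filters=[{'Name': 'engine-name', 'Values': ['postgres']}]
)

for endpoint_type in response['SupportedEndpointTypes']:
    cdc = 'supports CDC' if endpoint_type['SupportsCDC'] else 'no CDC'
    print(endpoint_type['EngineDisplayName'], endpoint_type['EndpointType'], cdc)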
Returns information about the endpoints for your account in the current region.
See also: AWS API Documentation
Request Syntax
response = client.describe_endpoints(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
Filters applied to the describe action.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'Endpoints': [
{
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
'KafkaSettings': {
'Broker': 'string',
'Topic': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'NeptuneSettings': {
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Endpoints (list) --
Endpoint description.
(dict) --
Describes an endpoint of a database instance in response to operations such as the following:
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint in a cross-account scenario.
DynamoDbSettings (dict) --
The settings for the target DynamoDB database. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the source files. The default is a newline (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the source files. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
EncodingType (string) --
The type of encoding you are using:
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when authType=No.
NestingLevel (string) --
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
AuthSource (string) --
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
BucketName (string) --
The name of the S3 bucket you want to use.
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
FileTransferUploadStreams (integer) --
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
Exceptions
Examples
Returns information about the endpoints for your account in the current region.
response = client.describe_endpoints(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Endpoints': [
],
'Marker': '',
'ResponseMetadata': {
'...': '...',
},
}
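When an account has more endpoints than one page can hold, the documented Marker and MaxRecords fields can be used to page through the results manually. A minimal sketch:
import boto3

client = boto3.client('dms')

endpoints = []
marker = None
while True:
    kwargs = {'MaxRecords': 100}
    if marker:
        kwargs['Marker'] = marker
    page = client.describe_endpoints(**kwargs)
    endpoints.extend(page['Endpoints'])
    marker = page.get('Marker')
    if not marker:
        break

for endpoint in endpoints:
    print(endpoint['EndpointIdentifier'], endpoint['EngineName'], endpoint['Status'])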
Lists categories for all event source types, or, if specified, for a specified source type. You can see a list of the event categories and source types in Working with Events and Notifications in the AWS Database Migration Service User Guide.
See also: AWS API Documentation
Request Syntax
response = client.describe_event_categories(
SourceType='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
]
)
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-task
Filters applied to the action.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
dict
Response Syntax
{
'EventCategoryGroupList': [
{
'SourceType': 'string',
'EventCategories': [
'string',
]
},
]
}
Response Structure
(dict) --
EventCategoryGroupList (list) --
A list of event categories.
(dict) --
Lists categories of events subscribed to, and generated by, the applicable AWS DMS resource type.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
EventCategories (list) --
A list of event categories from a source type that you've chosen.
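As an illustration, a short sketch that prints the event categories returned for replication instances; no filters are assumed.
import boto3

client = boto3.client('dms')

# List the event categories available for the replication-instance source type.
response = client.describe_event_categories(SourceType='replication-instance')
for group in response['EventCategoryGroupList']:
    print(group['SourceType'], group['EventCategories'])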
Lists all the event subscriptions for a customer account. The description of a subscription includes SubscriptionName , SNSTopicARN , CustomerID , SourceType , SourceID , CreationTime , and Status .
If you specify SubscriptionName , this action lists the description for that subscription.
See also: AWS API Documentation
Request Syntax
response = client.describe_event_subscriptions(
SubscriptionName='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
Filters applied to the action.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'EventSubscriptionsList': [
{
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
EventSubscriptionsList (list) --
A list of event subscriptions.
(dict) --
Describes an event notification subscription created by the CreateEventSubscription operation.
CustomerAwsId (string) --
The AWS customer account associated with the AWS DMS event notification subscription.
CustSubscriptionId (string) --
The AWS DMS event notification subscription Id.
SnsTopicArn (string) --
The topic ARN of the AWS DMS event notification subscription.
Status (string) --
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime (string) --
The time the AWS DMS event notification subscription was created.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList (list) --
A list of source Ids for the event subscription.
EventCategoriesList (list) --
A list of event categories.
Enabled (boolean) --
Boolean value that indicates if the event subscription is enabled.
Exceptions
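A minimal sketch that lists every event subscription and its status, paging with the Marker token; SubscriptionName is omitted so all subscriptions for the account are returned.
import boto3

client = boto3.client('dms')

kwargs = {'MaxRecords': 100}
while True:
    response = client.describe_event_subscriptions(**kwargs)
    for sub in response['EventSubscriptionsList']:
        print(sub['CustSubscriptionId'], sub['Status'], sub['Enabled'])
    marker = response.get('Marker')
    if not marker:
        break
    kwargs['Marker'] = marker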
Lists events for a given source identifier and source type. You can also specify a start and end time. For more information on AWS DMS events, see Working with Events and Notifications in the AWS Database Migration User Guide.
See also: AWS API Documentation
Request Syntax
response = client.describe_events(
SourceIdentifier='string',
SourceType='replication-instance',
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
Duration=123,
EventCategories=[
'string',
],
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-task
A list of event categories for the source type that you've chosen.
Filters applied to the action.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'Events': [
{
'SourceIdentifier': 'string',
'SourceType': 'replication-instance',
'Message': 'string',
'EventCategories': [
'string',
],
'Date': datetime(2015, 1, 1)
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Events (list) --
The events described.
(dict) --
Describes an identifiable significant activity that affects a replication instance or task. This object can provide the message, the available event categories, the date and source of the event, and the AWS DMS resource type.
SourceIdentifier (string) --
The identifier of an event source.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | endpoint | replication-task
Message (string) --
The event message.
EventCategories (list) --
The event categories available for the specified source type.
Date (datetime) --
The date of the event.
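For illustration, a sketch that retrieves the last 24 hours of events for one replication instance using an explicit time window; the source identifier is a placeholder.
import boto3
from datetime import datetime, timedelta

client = boto3.client('dms')

# Events from the last 24 hours for a single replication instance.
end = datetime.utcnow()
start = end - timedelta(hours=24)
response = client.describe_events(
    SourceIdentifier='my-replication-instance',  # placeholder identifier
    SourceType='replication-instance',
    StartTime=start,
    EndTime=end,
)
for event in response['Events']:
    print(event['Date'], event['Message'])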
Returns information about the replication instance types that can be created in the specified region.
See also: AWS API Documentation
Request Syntax
response = client.describe_orderable_replication_instances(
MaxRecords=123,
Marker='string'
)
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'OrderableReplicationInstances': [
{
'EngineVersion': 'string',
'ReplicationInstanceClass': 'string',
'StorageType': 'string',
'MinAllocatedStorage': 123,
'MaxAllocatedStorage': 123,
'DefaultAllocatedStorage': 123,
'IncludedAllocatedStorage': 123,
'AvailabilityZones': [
'string',
],
'ReleaseStatus': 'beta'
},
],
'Marker': 'string'
}
Response Structure
(dict) --
OrderableReplicationInstances (list) --
The orderable replication instances available.
(dict) --
In response to the DescribeOrderableReplicationInstances operation, this object describes an available replication instance. This description includes the replication instance's type, engine version, and allocated storage.
EngineVersion (string) --
The version of the replication engine.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
StorageType (string) --
The type of storage used by the replication instance.
MinAllocatedStorage (integer) --
The minimum amount of storage (in gigabytes) that can be allocated for the replication instance.
MaxAllocatedStorage (integer) --
The maximum amount of storage (in gigabytes) that can be allocated for the replication instance.
DefaultAllocatedStorage (integer) --
The default amount of storage (in gigabytes) that is allocated for the replication instance.
IncludedAllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
AvailabilityZones (list) --
List of Availability Zones for this replication instance.
ReleaseStatus (string) --
The value returned when the specified EngineVersion of the replication instance is in Beta or test mode. This indicates some features might not work as expected.
Note
AWS DMS supports the ReleaseStatus parameter in versions 3.1.4 and later.
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Examples
Returns information about the replication instance types that can be created in the specified region.
response = client.describe_orderable_replication_instances(
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Marker': '',
'OrderableReplicationInstances': [
],
'ResponseMetadata': {
'...': '...',
},
}
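The same call can be paged and filtered client-side, for example to see which instance classes are offered in a particular Availability Zone. A sketch under that assumption; the zone name is a placeholder.
import boto3

client = boto3.client('dms')

target_az = 'us-east-1a'  # placeholder Availability Zone
kwargs = {'MaxRecords': 100}
while True:
    response = client.describe_orderable_replication_instances(**kwargs)
    for instance in response['OrderableReplicationInstances']:
        if target_az in instance['AvailabilityZones']:
            print(instance['ReplicationInstanceClass'], instance['EngineVersion'])
    marker = response.get('Marker')
    if not marker:
        break
    kwargs['Marker'] = marker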
For internal use only
See also: AWS API Documentation
Request Syntax
response = client.describe_pending_maintenance_actions(
ReplicationInstanceArn='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
Marker='string',
MaxRecords=123
)
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'PendingMaintenanceActions': [
{
'ResourceIdentifier': 'string',
'PendingMaintenanceActionDetails': [
{
'Action': 'string',
'AutoAppliedAfterDate': datetime(2015, 1, 1),
'ForcedApplyDate': datetime(2015, 1, 1),
'OptInStatus': 'string',
'CurrentApplyDate': datetime(2015, 1, 1),
'Description': 'string'
},
]
},
],
'Marker': 'string'
}
Response Structure
(dict) --
PendingMaintenanceActions (list) --
The pending maintenance action.
(dict) --
Identifies an AWS DMS resource and any pending actions for it.
ResourceIdentifier (string) --
The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to. For information about creating an ARN, see Constructing an Amazon Resource Name (ARN) for AWS DMS in the DMS documentation.
PendingMaintenanceActionDetails (list) --
Detailed information about the pending maintenance action.
(dict) --
Describes a maintenance action pending for an AWS DMS resource, including when and how it will be applied. This data type is a response element to the DescribePendingMaintenanceActions operation.
Action (string) --
The type of pending maintenance action that is available for the resource.
AutoAppliedAfterDate (datetime) --
The date of the maintenance window when the action is to be applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any next-maintenance opt-in requests are ignored.
ForcedApplyDate (datetime) --
The date when the maintenance action will be automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any immediate opt-in requests are ignored.
OptInStatus (string) --
The type of opt-in request that has been received for the resource.
CurrentApplyDate (datetime) --
The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the ApplyPendingMaintenanceAction API operation, and also the AutoAppliedAfterDate and ForcedApplyDate parameter values. This value is blank if an opt-in request has not been received and nothing has been specified for AutoAppliedAfterDate or ForcedApplyDate .
Description (string) --
A description providing more detail about the maintenance action.
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Exceptions
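A minimal sketch that reports any pending maintenance actions for a single replication instance; the ARN is a placeholder.
import boto3

client = boto3.client('dms')

response = client.describe_pending_maintenance_actions(
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE',  # placeholder ARN
)
for resource in response['PendingMaintenanceActions']:
    for action in resource['PendingMaintenanceActionDetails']:
        print(resource['ResourceIdentifier'], action['Action'], action.get('CurrentApplyDate'))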
Returns the status of the RefreshSchemas operation.
See also: AWS API Documentation
Request Syntax
response = client.describe_refresh_schemas_status(
EndpointArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
dict
Response Syntax
{
'RefreshSchemasStatus': {
'EndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'Status': 'successful'|'failed'|'refreshing',
'LastRefreshDate': datetime(2015, 1, 1),
'LastFailureMessage': 'string'
}
}
Response Structure
(dict) --
RefreshSchemasStatus (dict) --
The status of the schema.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
Status (string) --
The status of the schema.
LastRefreshDate (datetime) --
The date the schema was last refreshed.
LastFailureMessage (string) --
The last failure message for the schema.
Exceptions
Examples
Returns the status of the refresh-schemas operation.
response = client.describe_refresh_schemas_status(
EndpointArn='',
)
print(response)
Expected Output:
{
'RefreshSchemasStatus': {
},
'ResponseMetadata': {
'...': '...',
},
}
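Because the schema refresh runs asynchronously, callers typically poll this status after starting a RefreshSchemas operation. A minimal polling sketch; the endpoint ARN is a placeholder, and the sleep interval and retry limit are arbitrary choices.
import time
import boto3

client = boto3.client('dms')

endpoint_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'  # placeholder ARN
for _ in range(30):
    status = client.describe_refresh_schemas_status(EndpointArn=endpoint_arn)['RefreshSchemasStatus']
    if status['Status'] != 'refreshing':
        print(status['Status'], status.get('LastFailureMessage'))
        break
    time.sleep(10)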
Returns information about the task logs for the specified task.
See also: AWS API Documentation
Request Syntax
response = client.describe_replication_instance_task_logs(
ReplicationInstanceArn='string',
MaxRecords=123,
Marker='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'ReplicationInstanceArn': 'string',
'ReplicationInstanceTaskLogs': [
{
'ReplicationTaskName': 'string',
'ReplicationTaskArn': 'string',
'ReplicationInstanceTaskLogSize': 123
},
],
'Marker': 'string'
}
Response Structure
(dict) --
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstanceTaskLogs (list) --
An array of replication task log metadata. Each member of the array contains the replication task name, ARN, and task log size (in bytes).
(dict) --
Contains metadata for a replication instance task log.
ReplicationTaskName (string) --
The name of the replication task.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationInstanceTaskLogSize (integer) --
The size, in bytes, of the replication task log.
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Exceptions
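For illustration, a sketch that totals the task log size on one replication instance; the ARN is a placeholder.
import boto3

client = boto3.client('dms')

response = client.describe_replication_instance_task_logs(
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE',  # placeholder ARN
)
total_bytes = sum(log['ReplicationInstanceTaskLogSize'] for log in response['ReplicationInstanceTaskLogs'])
print('Total task log size: %d bytes' % total_bytes)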
Returns information about replication instances for your account in the current region.
See also: AWS API Documentation
Request Syntax
response = client.describe_replication_instances(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
Filters applied to the describe action.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'ReplicationInstances': [
{
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
ReplicationInstances (list) --
The replication instances described.
(dict) --
Provides information that defines a replication instance.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Example: myrepinstance
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
ReplicationInstanceStatus (string) --
The status of the replication instance.
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime (datetime) --
The time the replication instance was created.
VpcSecurityGroups (list) --
The VPC security group for the instance.
(dict) --
Describes status of a security group associated with the virtual private cloud hosting your replication and DB instances.
VpcSecurityGroupId (string) --
The VPC security group Id.
Status (string) --
The status of the VPC security group.
AvailabilityZone (string) --
The Availability Zone for the instance.
ReplicationSubnetGroup (dict) --
The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
PreferredMaintenanceWindow (string) --
The maintenance window times for the replication instance.
PendingModifiedValues (dict) --
The pending modification values.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
AutoMinorVersionUpgrade (boolean) --
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress (string) --
The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress (string) --
The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses (list) --
One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses (list) --
One or more private IP addresses for the replication instance.
PubliclyAccessible (boolean) --
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
SecondaryAvailabilityZone (string) --
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil (datetime) --
The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers (string) --
The DNS name servers for the replication instance.
Exceptions
Examples
Returns information about replication instances for your account in the current region.
response = client.describe_replication_instances(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Marker': '',
'ReplicationInstances': [
],
'ResponseMetadata': {
'...': '...',
},
}
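A sketch that pages through all replication instances and prints their status and engine version; the class filter shown is only an illustration of the valid filter names listed above.
import boto3

client = boto3.client('dms')

kwargs = {
    'Filters': [{'Name': 'replication-instance-class', 'Values': ['dms.c4.large']}],  # illustrative filter
    'MaxRecords': 100,
}
while True:
    response = client.describe_replication_instances(**kwargs)
    for instance in response['ReplicationInstances']:
        print(instance['ReplicationInstanceIdentifier'],
              instance['ReplicationInstanceStatus'],
              instance['EngineVersion'])
    marker = response.get('Marker')
    if not marker:
        break
    kwargs['Marker'] = marker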
Returns information about the replication subnet groups.
See also: AWS API Documentation
Request Syntax
response = client.describe_replication_subnet_groups(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
Filters applied to the describe action.
Valid filter names: replication-subnet-group-id
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'ReplicationSubnetGroups': [
{
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
ReplicationSubnetGroups (list) --
A description of the replication subnet groups.
(dict) --
Describes a subnet group in response to a request by the DescribeReplicationSubnetGroup operation.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
Exceptions
Examples
Returns information about the replication subnet groups.
response = client.describe_replication_subnet_groups(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Marker': '',
'ReplicationSubnetGroups': [
],
'ResponseMetadata': {
'...': '...',
},
}
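For illustration, a sketch that prints one subnet group and its subnets using the replication-subnet-group-id filter; the group identifier is a placeholder.
import boto3

client = boto3.client('dms')

response = client.describe_replication_subnet_groups(
    Filters=[{'Name': 'replication-subnet-group-id', 'Values': ['my-subnet-group']}],  # placeholder id
)
for group in response['ReplicationSubnetGroups']:
    print(group['ReplicationSubnetGroupIdentifier'], group['VpcId'], group['SubnetGroupStatus'])
    for subnet in group['Subnets']:
        print(' ', subnet['SubnetIdentifier'], subnet['SubnetAvailabilityZone']['Name'])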
Returns the task assessment results from Amazon S3. This action always returns the latest results.
See also: AWS API Documentation
Request Syntax
response = client.describe_replication_task_assessment_results(
ReplicationTaskArn='string',
MaxRecords=123,
Marker='string'
)
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'BucketName': 'string',
'ReplicationTaskAssessmentResults': [
{
'ReplicationTaskIdentifier': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskLastAssessmentDate': datetime(2015, 1, 1),
'AssessmentStatus': 'string',
'AssessmentResultsFile': 'string',
'AssessmentResults': 'string',
'S3ObjectUrl': 'string'
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
BucketName (string) --
The Amazon S3 bucket where the task assessment report is located.
ReplicationTaskAssessmentResults (list) --
The task assessment report.
(dict) --
The task assessment report in JSON format.
ReplicationTaskIdentifier (string) --
The replication task identifier of the task on which the task assessment was run.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskLastAssessmentDate (datetime) --
The date the task assessment was completed.
AssessmentStatus (string) --
The status of the task assessment.
AssessmentResultsFile (string) --
The file containing the results of the task assessment.
AssessmentResults (string) --
The task assessment results in JSON format.
S3ObjectUrl (string) --
The URL of the S3 object containing the task assessment results.
Exceptions
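A minimal sketch that fetches the latest assessment results for one task and prints where the report lives in Amazon S3; the task ARN is a placeholder.
import boto3

client = boto3.client('dms')

response = client.describe_replication_task_assessment_results(
    ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:EXAMPLE',  # placeholder ARN
)
print('Bucket:', response.get('BucketName'))
for result in response['ReplicationTaskAssessmentResults']:
    print(result['ReplicationTaskIdentifier'], result['AssessmentStatus'], result.get('S3ObjectUrl'))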
Returns information about replication tasks for your account in the current region.
See also: AWS API Documentation
Request Syntax
response = client.describe_replication_tasks(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WithoutSettings=True|False
)
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'ReplicationTasks': [
{
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
},
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
ReplicationTasks (list) --
A description of the replication tasks.
(dict) --
Provides information that describes a replication task created by the CreateReplicationTask operation.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication instance.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time: 2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration User Guide.
Exceptions
Examples
Returns information about replication tasks for your account in the current region.
response = client.describe_replication_tasks(
Filters=[
{
'Name': 'string',
'Values': [
'string',
'string',
],
},
],
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Marker': '',
'ReplicationTasks': [
],
'ResponseMetadata': {
'...': '...',
},
}
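A sketch that pages through replication tasks and summarizes progress from ReplicationTaskStats; the migration-type filter value is illustrative.
import boto3

client = boto3.client('dms')

kwargs = {
    'Filters': [{'Name': 'migration-type', 'Values': ['full-load-and-cdc']}],  # illustrative filter
    'MaxRecords': 100,
}
while True:
    response = client.describe_replication_tasks(**kwargs)
    for task in response['ReplicationTasks']:
        stats = task.get('ReplicationTaskStats', {})
        print(task['ReplicationTaskIdentifier'],
              task['Status'],
              '%s%% loaded' % stats.get('FullLoadProgressPercent', 0),
              '%d tables errored' % stats.get('TablesErrored', 0))
    marker = response.get('Marker')
    if not marker:
        break
    kwargs['Marker'] = marker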
Returns information about the schema for the specified endpoint.
See also: AWS API Documentation
Request Syntax
response = client.describe_schemas(
EndpointArn='string',
MaxRecords=123,
Marker='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
dict
Response Syntax
{
'Marker': 'string',
'Schemas': [
'string',
]
}
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Schemas (list) --
The described schema.
Exceptions
Examples
Returns information about the schema for the specified endpoint.
response = client.describe_schemas(
EndpointArn='',
Marker='',
MaxRecords=123,
)
print(response)
Expected Output:
{
'Marker': '',
'Schemas': [
],
'ResponseMetadata': {
'...': '...',
},
}
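A sketch that collects every schema name on an endpoint by following the pagination marker; the endpoint ARN is a placeholder.
import boto3

client = boto3.client('dms')

endpoint_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE'  # placeholder ARN
schemas = []
kwargs = {'EndpointArn': endpoint_arn, 'MaxRecords': 100}
while True:
    response = client.describe_schemas(**kwargs)
    schemas.extend(response.get('Schemas', []))
    marker = response.get('Marker')
    if not marker:
        break
    kwargs['Marker'] = marker
print(schemas)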
Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.
Note that the "last updated" column in the DMS console only indicates the time that AWS DMS last updated the table statistics record for a table. It does not indicate the time of the last update to the table.
See also: AWS API Documentation
Request Syntax
response = client.describe_table_statistics(
ReplicationTaskArn='string',
MaxRecords=123,
Marker='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
]
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 500.
Filters applied to the describe table statistics action.
Valid filter names: schema-name | table-name | table-state
A combination of filters creates an AND condition where each record matches all specified filters.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
dict
Response Syntax
{
'ReplicationTaskArn': 'string',
'TableStatistics': [
{
'SchemaName': 'string',
'TableName': 'string',
'Inserts': 123,
'Deletes': 123,
'Updates': 123,
'Ddls': 123,
'FullLoadRows': 123,
'FullLoadCondtnlChkFailedRows': 123,
'FullLoadErrorRows': 123,
'FullLoadStartTime': datetime(2015, 1, 1),
'FullLoadEndTime': datetime(2015, 1, 1),
'FullLoadReloaded': True|False,
'LastUpdateTime': datetime(2015, 1, 1),
'TableState': 'string',
'ValidationPendingRecords': 123,
'ValidationFailedRecords': 123,
'ValidationSuspendedRecords': 123,
'ValidationState': 'string',
'ValidationStateDetails': 'string'
},
],
'Marker': 'string'
}
Response Structure
(dict) --
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
TableStatistics (list) --
The table statistics.
(dict) --
Provides a collection of table statistics in response to a request by the DescribeTableStatistics operation.
SchemaName (string) --
The schema name.
TableName (string) --
The name of the table.
Inserts (integer) --
The number of insert actions performed on a table.
Deletes (integer) --
The number of delete actions performed on a table.
Updates (integer) --
The number of update actions performed on a table.
Ddls (integer) --
The number of data definition language (DDL) statements used to build and modify the structure of your tables.
FullLoadRows (integer) --
The number of rows added during the full load operation.
FullLoadCondtnlChkFailedRows (integer) --
The number of rows that failed conditional checks during the full load operation (valid only for migrations where DynamoDB is the target).
FullLoadErrorRows (integer) --
The number of rows that failed to load during the full load operation (valid only for migrations where DynamoDB is the target).
FullLoadStartTime (datetime) --
The time when the full load operation started.
FullLoadEndTime (datetime) --
The time when the full load operation completed.
FullLoadReloaded (boolean) --
A value that indicates if the table was reloaded (true ) or loaded as part of a new full load operation (false ).
LastUpdateTime (datetime) --
The last time a table was updated.
TableState (string) --
The state of the tables described.
Valid states: Table does not exist | Before load | Full load | Table completed | Table cancelled | Table error | Table all | Table updates | Table is being reloaded
ValidationPendingRecords (integer) --
The number of records that have yet to be validated.
ValidationFailedRecords (integer) --
The number of records that failed validation.
ValidationSuspendedRecords (integer) --
The number of records that couldn't be validated.
ValidationState (string) --
The validation state of the table.
This parameter can have the following values:
ValidationStateDetails (string) --
Additional details about the state of validation.
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Exceptions
Examples
Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.
response = client.describe_table_statistics(
Marker='',
MaxRecords=123,
ReplicationTaskArn='',
)
print(response)
Expected Output:
{
'Marker': '',
'ReplicationTaskArn': '',
'TableStatistics': [
],
'ResponseMetadata': {
'...': '...',
},
}
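A sketch that restricts the statistics to one schema with the schema-name filter described above and prints per-table row counts; the task ARN and schema name are placeholders.
import boto3

client = boto3.client('dms')

response = client.describe_table_statistics(
    ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:EXAMPLE',  # placeholder ARN
    Filters=[{'Name': 'schema-name', 'Values': ['public']}],  # placeholder schema
)
for table in response['TableStatistics']:
    print(table['TableName'], table['TableState'],
          'inserts=%d updates=%d deletes=%d' % (table['Inserts'], table['Updates'], table['Deletes']))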
Generate a presigned URL given a client, its method, and arguments. Returns the presigned URL.
Create a paginator for an operation.
Returns an object that can wait for some condition.
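Where botocore provides a paginator or waiter for an operation, they remove the manual Marker handling shown in the sketches above. The following sketch assumes a paginator exists for describe_replication_tasks and that a waiter named replication_task_running is available in your botocore version; check client.can_paginate and client.waiter_names before relying on either.
import boto3

client = boto3.client('dms')

# Paginator: yields one page per service response, if the operation supports it.
if client.can_paginate('describe_replication_tasks'):
    paginator = client.get_paginator('describe_replication_tasks')
    for page in paginator.paginate():
        for task in page['ReplicationTasks']:
            print(task['ReplicationTaskIdentifier'], task['Status'])

# Waiter: blocks until a condition is met (waiter name assumed; see client.waiter_names).
if 'replication_task_running' in client.waiter_names:
    waiter = client.get_waiter('replication_task_running')
    waiter.wait(Filters=[{'Name': 'replication-task-arn',
                          'Values': ['arn:aws:dms:us-east-1:123456789012:task:EXAMPLE']}])  # placeholder ARN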
Uploads the specified certificate.
See also: AWS API Documentation
Request Syntax
response = client.import_certificate(
CertificateIdentifier='string',
CertificatePem='string',
CertificateWallet=b'bytes',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
[REQUIRED]
A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
The tags associated with the certificate.
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
dict
Response Syntax
{
'Certificate': {
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
}
}
Response Structure
(dict) --
Certificate (dict) --
The certificate to be uploaded.
CertificateIdentifier (string) --
A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificateCreationDate (datetime) --
The date that the certificate was created.
CertificatePem (string) --
The contents of a .pem file, which contains an X.509 certificate.
CertificateWallet (bytes) --
The location of an imported Oracle Wallet certificate for use with SSL.
CertificateArn (string) --
The Amazon Resource Name (ARN) for the certificate.
CertificateOwner (string) --
The owner of the certificate.
ValidFromDate (datetime) --
The beginning date that the certificate is valid.
ValidToDate (datetime) --
The final date that the certificate is valid.
SigningAlgorithm (string) --
The signing algorithm for the certificate.
KeyLength (integer) --
The key length of the cryptographic algorithm being used.
Exceptions
Examples
Uploads the specified certificate.
response = client.import_certificate(
CertificateIdentifier='',
CertificatePem='',
)
print(response)
Expected Output:
{
'Certificate': {
},
'ResponseMetadata': {
'...': '...',
},
}
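In practice the certificate body is read from a local file. A minimal sketch assuming a PEM file at the placeholder path shown; for an Oracle wallet you would pass the raw bytes as CertificateWallet instead.
import boto3

client = boto3.client('dms')

# Read the X.509 certificate body from a local .pem file (placeholder path).
with open('/path/to/certificate.pem') as pem_file:
    pem_body = pem_file.read()

response = client.import_certificate(
    CertificateIdentifier='my-source-db-cert',  # placeholder identifier
    CertificatePem=pem_body,
    Tags=[{'Key': 'Environment', 'Value': 'test'}],  # optional tags
)
print(response['Certificate']['CertificateArn'])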
Lists all tags for an AWS DMS resource.
See also: AWS API Documentation
Request Syntax
response = client.list_tags_for_resource(
ResourceArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the AWS DMS resource.
dict
Response Syntax
{
'TagList': [
{
'Key': 'string',
'Value': 'string'
},
]
}
Response Structure
(dict) --
TagList (list) --
A list of tags for the resource.
(dict) --
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
Key (string) --
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Value (string) --
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Exceptions
Examples
Lists all tags for an AWS DMS resource.
response = client.list_tags_for_resource(
ResourceArn='',
)
print(response)
Expected Output:
{
'TagList': [
],
'ResponseMetadata': {
'...': '...',
},
}
Modifies the specified endpoint.
See also: AWS API Documentation
Request Syntax
response = client.modify_endpoint(
EndpointArn='string',
EndpointIdentifier='string',
EndpointType='source'|'target',
EngineName='string',
Username='string',
Password='string',
ServerName='string',
Port=123,
DatabaseName='string',
ExtraConnectionAttributes='string',
CertificateArn='string',
SslMode='none'|'require'|'verify-ca'|'verify-full',
ServiceAccessRoleArn='string',
ExternalTableDefinition='string',
DynamoDbSettings={
'ServiceAccessRoleArn': 'string'
},
S3Settings={
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
DmsTransferSettings={
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
MongoDbSettings={
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
KinesisSettings={
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
KafkaSettings={
'Broker': 'string',
'Topic': 'string'
},
ElasticsearchSettings={
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
NeptuneSettings={
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
RedshiftSettings={
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.
The Amazon Resource Name (ARN) used by the service access IAM role.
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.
The Amazon Resource Name (ARN) used by the service access IAM role.
The external table definition.
The delimiter used to separate rows in the source files. The default is a newline (\n).
The delimiter used to separate columns in the source files. The default is a comma.
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path `` bucketFolder /schema_name /table_name /`` . If this parameter isn't specified, then the path used is `` schema_name /table_name /`` .
The name of the S3 bucket.
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=*value* ,BucketFolder=*value* ,BucketName=*value* ,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=*value* ``
The format of the data that you want to use for output. You can choose one of the following:
The type of encoding you are using:
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
The settings in JSON format for the DMS transfer type of source endpoint.
Attributes include the following:
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string ,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
The IAM role that has permission to access the Amazon S3 bucket.
The name of the S3 bucket to use.
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in Using MongoDB as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
The user name you use to access the MongoDB source endpoint.
The password for the user account you use to access the MongoDB source endpoint.
The name of the server on the MongoDB source endpoint.
The port value for the MongoDB source endpoint.
The database name on the MongoDB source endpoint.
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when authType=NO.
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
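The MongoDB settings above are passed together as the MongoDbSettings parameter. The following sketch (reusing the client created earlier) uses placeholder host, credential, and database values; the lowercase enumeration values match the response syntax shown later in this section.
response = client.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEMONGOSOURCE',  # placeholder
    MongoDbSettings={
        'ServerName': 'mongodb.example.com',  # placeholder host
        'Port': 27017,
        'DatabaseName': 'sales',              # placeholder database
        'AuthType': 'password',
        'AuthMechanism': 'scram_sha_1',       # for MongoDB 3.x
        'Username': 'dms_user',               # placeholder credentials
        'Password': 'dms_password',
        'AuthSource': 'admin',
        'NestingLevel': 'one',                # table mode
        'DocsToInvestigate': '1000'           # passed as a string
    }
)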
Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration User Guide.
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
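A minimal sketch of the corresponding KinesisSettings parameter; the stream and role ARNs are placeholders.
response = client.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEKINESISTARGET',  # placeholder
    KinesisSettings={
        'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/example-stream',        # placeholder
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/example-dms-kinesis-role',  # placeholder
        'MessageFormat': 'json',
        'IncludeTransactionDetails': True,   # add commit timestamp, log position, and transaction IDs
        'PartitionIncludeSchemaTable': True  # prefix schema and table names to partition values
    }
)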
Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration User Guide.
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
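A minimal sketch of the KafkaSettings parameter, using a placeholder broker address and topic name.
response = client.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEKAFKATARGET',  # placeholder
    KafkaSettings={
        'Broker': 'ec2-12-345-678-901.compute-1.amazonaws.com:2345',  # broker-hostname-or-ip:port
        'Topic': 'dms-migration-topic'  # omit to fall back to "kafka-default-topic"
    }
)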
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration User Guide.
The Amazon Resource Name (ARN) used by the service to access the IAM role.
The endpoint for the Elasticsearch cluster.
The maximum percentage of records that can fail to be written before a full load operation stops.
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
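A minimal sketch of the ElasticsearchSettings parameter; the role ARN and cluster endpoint are placeholders.
response = client.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEESTARGET',  # placeholder
    ElasticsearchSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/example-dms-es-role',  # placeholder
        'EndpointUri': 'https://search-example-domain.us-east-1.es.amazonaws.com',     # placeholder
        'FullLoadErrorPercentage': 10,  # stop the full load once 10 percent of records fail to write
        'ErrorRetryDuration': 300       # retry failed API requests for up to 300 seconds
    }
)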
Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.EndpointSettings in the AWS Database Migration Service User Guide.
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
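A minimal sketch of the NeptuneSettings parameter; the role ARN and staging bucket are placeholders.
response = client.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLENEPTUNETARGET',  # placeholder
    NeptuneSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/example-dms-neptune-role',  # placeholder
        'S3BucketName': 'example-neptune-staging-bucket',  # placeholder bucket for interim CSV files
        'S3BucketFolder': 'neptune-csv',
        'ErrorRetryDuration': 250,  # milliseconds between bulk-load retries
        'MaxRetryCount': 5,
        'IamAuthEnabled': True      # requires the matching policy on the service role
    }
)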
Provides information that defines an Amazon Redshift endpoint.
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
The name of the S3 bucket you want to use.
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
The name of the Amazon Redshift data warehouse (service) that you are working with.
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
The password for the user named in the username property.
The port number for Amazon Redshift. The default value is 5439.
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
A list of characters that you want to replace. Use with ReplaceChars .
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
The name of the Amazon Redshift cluster you are using.
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
An Amazon Redshift user name for a registered user.
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
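A minimal sketch of the RedshiftSettings parameter; the cluster address, credentials, role ARN, and bucket name are placeholders.
response = client.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEREDSHIFTTARGET',  # placeholder
    RedshiftSettings={
        'ServerName': 'example-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com',  # placeholder
        'Port': 5439,
        'DatabaseName': 'dev',    # placeholder warehouse name
        'Username': 'dms_user',   # placeholder credentials
        'Password': 'dms_password',
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/example-dms-redshift-role',  # placeholder
        'BucketName': 'example-redshift-staging-bucket',  # intermediate .csv storage
        'TruncateColumns': True,  # truncate oversized VARCHAR/CHAR values
        'EmptyAsNull': True       # load empty CHAR/VARCHAR fields as NULL
    }
)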
dict
Response Syntax
{
'Endpoint': {
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
'KafkaSettings': {
'Broker': 'string',
'Topic': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'NeptuneSettings': {
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
}
}
Response Structure
(dict) --
Endpoint (dict) --
The modified endpoint.
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings (dict) --
The settings for the target DynamoDB database. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the source files. The default is a newline character (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the source files. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/ . If this parameter isn't specified, then the path used is schema_name/table_name/ .
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
EncodingType (string) --
The type of encoding you are using:
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports this interaction between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when authType=NO.
NestingLevel (string) --
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
AuthSource (string) --
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
BucketName (string) --
The name of the S3 bucket you want to use.
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
FileTransferUploadStreams (integer) --
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
Exceptions
Examples
Modifies the specified endpoint.
response = client.modify_endpoint(
CertificateArn='',
DatabaseName='',
EndpointArn='',
EndpointIdentifier='',
EndpointType='source',
EngineName='',
ExtraConnectionAttributes='',
Password='',
Port=123,
ServerName='',
SslMode='require',
Username='',
)
print(response)
Expected Output:
{
'Endpoint': {
},
'ResponseMetadata': {
'...': '...',
},
}
Modifies an existing AWS DMS event notification subscription.
See also: AWS API Documentation
Request Syntax
response = client.modify_event_subscription(
SubscriptionName='string',
SnsTopicArn='string',
SourceType='string',
EventCategories=[
'string',
],
Enabled=True|False
)
[REQUIRED]
The name of the AWS DMS event notification subscription to be modified.
The type of AWS DMS resource that generates the events you want to subscribe to.
Valid values: replication-instance | replication-task
A list of event categories for a source type that you want to subscribe to. Use the DescribeEventCategories action to see a list of event categories.
dict
Response Syntax
{
'EventSubscription': {
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
}
}
Response Structure
(dict) --
EventSubscription (dict) --
The modified event subscription.
CustomerAwsId (string) --
The AWS customer account associated with the AWS DMS event notification subscription.
CustSubscriptionId (string) --
The AWS DMS event notification subscription Id.
SnsTopicArn (string) --
The topic ARN of the AWS DMS event notification subscription.
Status (string) --
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime (string) --
The time the RDS event notification subscription was created.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList (list) --
A list of source Ids for the event subscription.
EventCategoriesList (list) --
A list of event categories.
Enabled (boolean) --
Boolean value that indicates if the event subscription is enabled.
Exceptions
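This operation has no worked example in this reference, so the following is an illustrative sketch only; the subscription name and SNS topic ARN are placeholders, and the category names are examples (use describe_event_categories to list the valid values).
response = client.modify_event_subscription(
    SubscriptionName='example-dms-subscription',  # placeholder
    SnsTopicArn='arn:aws:sns:us-east-1:123456789012:example-dms-topic',  # placeholder
    SourceType='replication-task',
    EventCategories=[
        'failure',       # example category
        'state change',  # example category
    ],
    Enabled=True,
)
print(response['EventSubscription']['Status'])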
Modifies the replication instance to apply new settings. You can change one or more parameters by specifying these parameters and the new values in the request.
Some settings are applied during the maintenance window.
See also: AWS API Documentation
Request Syntax
response = client.modify_replication_instance(
ReplicationInstanceArn='string',
AllocatedStorage=123,
ApplyImmediately=True|False,
ReplicationInstanceClass='string',
VpcSecurityGroupIds=[
'string',
],
PreferredMaintenanceWindow='string',
MultiAZ=True|False,
EngineVersion='string',
AllowMajorVersionUpgrade=True|False,
AutoMinorVersionUpgrade=True|False,
ReplicationInstanceIdentifier='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance.
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
The weekly time range (in UTC) during which system maintenance can occur, which might result in an outage. Changing this parameter does not result in an outage, except in the following situation, and the change is asynchronously applied as soon as possible. If moving this window to the current time, there must be at least 30 minutes between the current time and end of the window to ensure pending changes are applied.
Default: Uses existing setting
Format: ddd:hh24:mi-ddd:hh24:mi
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Must be at least 30 minutes
Indicates that major version upgrades are allowed. Changing this parameter does not result in an outage, and the change is asynchronously applied as soon as possible.
This parameter must be set to true when specifying a value for the EngineVersion parameter that is a different major version than the replication instance's current version.
A value that indicates that minor version upgrades are applied automatically to the replication instance during the maintenance window. Changing this parameter doesn't result in an outage, except in the case described following. The change is asynchronously applied as soon as possible.
An outage does result if these factors apply:
dict
Response Syntax
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
Response Structure
(dict) --
ReplicationInstance (dict) --
The modified replication instance.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Example: myrepinstance
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
ReplicationInstanceStatus (string) --
The status of the replication instance.
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime (datetime) --
The time the replication instance was created.
VpcSecurityGroups (list) --
The VPC security group for the instance.
(dict) --
Describes status of a security group associated with the virtual private cloud hosting your replication and DB instances.
VpcSecurityGroupId (string) --
The VPC security group Id.
Status (string) --
The status of the VPC security group.
AvailabilityZone (string) --
The Availability Zone for the instance.
ReplicationSubnetGroup (dict) --
The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
PreferredMaintenanceWindow (string) --
The maintenance window times for the replication instance.
PendingModifiedValues (dict) --
The pending modification values.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
AutoMinorVersionUpgrade (boolean) --
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress (string) --
The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress (string) --
The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses (list) --
One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses (list) --
One or more private IP addresses for the replication instance.
PubliclyAccessible (boolean) --
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
SecondaryAvailabilityZone (string) --
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil (datetime) --
The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers (string) --
The DNS name servers for the replication instance.
Exceptions
Examples
Modifies the replication instance to apply new settings. You can change one or more parameters by specifying these parameters and the new values in the request. Some settings are applied during the maintenance window.
response = client.modify_replication_instance(
AllocatedStorage=123,
AllowMajorVersionUpgrade=True,
ApplyImmediately=True,
AutoMinorVersionUpgrade=True,
EngineVersion='1.5.0',
MultiAZ=True,
PreferredMaintenanceWindow='sun:06:00-sun:14:00',
ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
ReplicationInstanceClass='dms.t2.micro',
ReplicationInstanceIdentifier='test-rep-1',
VpcSecurityGroupIds=[
],
)
print(response)
Expected Output:
{
'ReplicationInstance': {
'AllocatedStorage': 5,
'AutoMinorVersionUpgrade': True,
'EngineVersion': '1.5.0',
'KmsKeyId': 'arn:aws:kms:us-east-1:123456789012:key/4c1731d6-5435-ed4d-be13-d53411a7cfbd',
'PendingModifiedValues': {
},
'PreferredMaintenanceWindow': 'sun:06:00-sun:14:00',
'PubliclyAccessible': True,
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationInstanceClass': 'dms.t2.micro',
'ReplicationInstanceIdentifier': 'test-rep-1',
'ReplicationInstanceStatus': 'available',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupDescription': 'default',
'ReplicationSubnetGroupIdentifier': 'default',
'SubnetGroupStatus': 'Complete',
'Subnets': [
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1d',
},
'SubnetIdentifier': 'subnet-f6dd91af',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1b',
},
'SubnetIdentifier': 'subnet-3605751d',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1c',
},
'SubnetIdentifier': 'subnet-c2daefb5',
'SubnetStatus': 'Active',
},
{
'SubnetAvailabilityZone': {
'Name': 'us-east-1e',
},
'SubnetIdentifier': 'subnet-85e90cb8',
'SubnetStatus': 'Active',
},
],
'VpcId': 'vpc-6741a603',
},
},
'ResponseMetadata': {
'...': '...',
},
}
Modifies the settings for the specified replication subnet group.
See also: AWS API Documentation
Request Syntax
response = client.modify_replication_subnet_group(
ReplicationSubnetGroupIdentifier='string',
ReplicationSubnetGroupDescription='string',
SubnetIds=[
'string',
]
)
[REQUIRED]
The name of the replication instance subnet group.
[REQUIRED]
A list of subnet IDs.
dict
Response Syntax
{
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
}
}
Response Structure
(dict) --
ReplicationSubnetGroup (dict) --
The modified replication subnet group.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
Exceptions
Examples
Modifies the settings for the specified replication subnet group.
response = client.modify_replication_subnet_group(
ReplicationSubnetGroupDescription='',
ReplicationSubnetGroupIdentifier='',
SubnetIds=[
],
)
print(response)
Expected Output:
{
'ReplicationSubnetGroup': {
},
'ResponseMetadata': {
'...': '...',
},
}
Modifies the specified replication task.
You can't modify the task endpoints. The task must be stopped before you can modify it.
For more information about AWS DMS tasks, see Working with Migration Tasks in the AWS Database Migration Service User Guide .
See also: AWS API Documentation
Request Syntax
response = client.modify_replication_task(
ReplicationTaskArn='string',
ReplicationTaskIdentifier='string',
MigrationType='full-load'|'cdc'|'full-load-and-cdc',
TableMappings='string',
ReplicationTaskSettings='string',
CdcStartTime=datetime(2015, 1, 1),
CdcStartPosition='string',
CdcStopPosition='string',
TaskData='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task.
The replication task identifier.
Constraints:
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time "2018-03-08T12:12:12"
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
Note
When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for AWS DMS .
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:3018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:3018-02-09T12:12:12"
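As an illustrative sketch only (the task must be stopped first), the call below changes the migration type, table mappings, and CDC stop position of a task. The task ARN is a placeholder, and the single selection rule shown is the simplest possible table mapping.
import json

# A minimal selection rule that includes every table in every schema.
table_mappings = {
    'rules': [
        {
            'rule-type': 'selection',
            'rule-id': '1',
            'rule-name': 'include-everything',
            'object-locator': {'schema-name': '%', 'table-name': '%'},
            'rule-action': 'include',
        }
    ]
}

response = client.modify_replication_task(
    ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK',  # placeholder
    MigrationType='full-load-and-cdc',
    TableMappings=json.dumps(table_mappings),            # TableMappings is a JSON string
    CdcStopPosition='server_time:2018-02-09T12:12:12',   # placeholder server-time stop position
)
print(response['ReplicationTask']['Status'])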
dict
Response Syntax
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
}
}
Response Structure
(dict) --
ReplicationTask (dict) --
The replication task that was modified.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication task.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration User Guide.
Exceptions
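A minimal illustrative sketch, assuming a hypothetical task ARN and a hypothetical selection rule (neither comes from this reference): the call below switches a stopped task to full-load-and-cdc and replaces its table mappings.
import json
import boto3

client = boto3.client('dms')

# Hypothetical table-mapping rule: include every table in a "sales" schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales",
            "object-locator": {"schema-name": "sales", "table-name": "%"},
            "rule-action": "include"
        }
    ]
}

response = client.modify_replication_task(
    # Placeholder ARN; substitute the ARN of your own (stopped) replication task.
    ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASKID',
    MigrationType='full-load-and-cdc',
    TableMappings=json.dumps(table_mappings),
)
print(response['ReplicationTask']['Status'])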
Reboots a replication instance. Rebooting results in a momentary outage, until the replication instance becomes available again.
See also: AWS API Documentation
Request Syntax
response = client.reboot_replication_instance(
ReplicationInstanceArn='string',
ForceFailover=True|False
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance.
dict
Response Syntax
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
Response Structure
(dict) --
ReplicationInstance (dict) --
The replication instance that is being rebooted.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Example: myrepinstance
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
ReplicationInstanceStatus (string) --
The status of the replication instance.
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime (datetime) --
The time the replication instance was created.
VpcSecurityGroups (list) --
The VPC security group for the instance.
(dict) --
Describes the status of a security group associated with the virtual private cloud (VPC) hosting your replication and DB instances.
VpcSecurityGroupId (string) --
The VPC security group Id.
Status (string) --
The status of the VPC security group.
AvailabilityZone (string) --
The Availability Zone for the instance.
ReplicationSubnetGroup (dict) --
The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
PreferredMaintenanceWindow (string) --
The maintenance window times for the replication instance.
PendingModifiedValues (dict) --
The pending modification values.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
AutoMinorVersionUpgrade (boolean) --
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress (string) --
The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress (string) --
The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses (list) --
One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses (list) --
One or more private IP addresses for the replication instance.
PubliclyAccessible (boolean) --
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
SecondaryAvailabilityZone (string) --
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil (datetime) --
The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers (string) --
The DNS name servers for the replication instance.
Exceptions
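A minimal sketch, assuming a placeholder replication instance ARN; ForceFailover=True generally applies only to Multi-AZ instances.
import boto3

client = boto3.client('dms')

response = client.reboot_replication_instance(
    # Placeholder ARN; substitute the ARN of your replication instance.
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCEID',
    # Optional; True generally reboots through a Multi-AZ failover (Multi-AZ instances only).
    ForceFailover=False,
)
print(response['ReplicationInstance']['ReplicationInstanceStatus'])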
Populates the schema for the specified endpoint. This is an asynchronous operation and can take several minutes. You can check the status of this operation by calling the DescribeRefreshSchemasStatus operation.
See also: AWS API Documentation
Request Syntax
response = client.refresh_schemas(
EndpointArn='string',
ReplicationInstanceArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance.
dict
Response Syntax
{
'RefreshSchemasStatus': {
'EndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'Status': 'successful'|'failed'|'refreshing',
'LastRefreshDate': datetime(2015, 1, 1),
'LastFailureMessage': 'string'
}
}
Response Structure
(dict) --
RefreshSchemasStatus (dict) --
The status of the refreshed schema.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
Status (string) --
The status of the schema.
LastRefreshDate (datetime) --
The date the schema was last refreshed.
LastFailureMessage (string) --
The last failure message for the schema.
Exceptions
Examples
Populates the schema for the specified endpoint. This is an asynchronous operation and can take several minutes. You can check the status of this operation by calling the describe-refresh-schemas-status operation.
response = client.refresh_schemas(
EndpointArn='',
ReplicationInstanceArn='',
)
print(response)
Expected Output:
{
'RefreshSchemasStatus': {
},
'ResponseMetadata': {
'...': '...',
},
}
Reloads the target database table with the source data.
See also: AWS API Documentation
Request Syntax
response = client.reload_tables(
ReplicationTaskArn='string',
TablesToReload=[
{
'SchemaName': 'string',
'TableName': 'string'
},
],
ReloadOption='data-reload'|'validate-only'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task.
[REQUIRED]
The name and schema of the table to be reloaded.
Provides the name of the schema and table to be reloaded.
The schema name of the table to be reloaded.
The table name of the table to be reloaded.
Options for reload. Specify data-reload to reload the data and re-validate it if validation is enabled. Specify validate-only to re-validate the table. This option applies only when validation is enabled for the task.
Valid values: data-reload, validate-only
Default value is data-reload.
dict
Response Syntax
{
'ReplicationTaskArn': 'string'
}
Response Structure
(dict) --
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
Exceptions
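A minimal sketch, assuming a placeholder task ARN and a hypothetical sales.orders table.
import boto3

client = boto3.client('dms')

response = client.reload_tables(
    # Placeholder ARN of the replication task that owns the table.
    ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASKID',
    TablesToReload=[
        {'SchemaName': 'sales', 'TableName': 'orders'},  # hypothetical schema/table
    ],
    ReloadOption='data-reload',  # or 'validate-only' when validation is enabled for the task
)
print(response['ReplicationTaskArn'])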
Removes metadata tags from a DMS resource.
See also: AWS API Documentation
Request Syntax
response = client.remove_tags_from_resource(
ResourceArn='string',
TagKeys=[
'string',
]
)
[REQUIRED]
An AWS DMS resource from which you want to remove tag(s). The value for this parameter is an Amazon Resource Name (ARN).
[REQUIRED]
The tag key (name) of the tag to be removed.
dict
Response Syntax
{}
Response Structure
Exceptions
Examples
Removes metadata tags from an AWS DMS resource.
response = client.remove_tags_from_resource(
ResourceArn='arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
TagKeys=[
],
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
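A minimal sketch with a single tag key, assuming a placeholder endpoint ARN and a hypothetical tag key.
import boto3

client = boto3.client('dms')

response = client.remove_tags_from_resource(
    # Placeholder endpoint ARN and hypothetical tag key.
    ResourceArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINTID',
    TagKeys=['Environment'],
)
print(response)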
Starts the replication task.
For more information about AWS DMS tasks, see Working with Migration Tasks in the AWS Database Migration Service User Guide.
See also: AWS API Documentation
Request Syntax
response = client.start_replication_task(
ReplicationTaskArn='string',
StartReplicationTaskType='start-replication'|'resume-processing'|'reload-target',
CdcStartTime=datetime(2015, 1, 1),
CdcStartPosition='string',
CdcStopPosition='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task to be started.
[REQUIRED]
The type of replication task.
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time “2018-03-08T12:12:12”
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
Note
When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for AWS DMS .
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
dict
Response Syntax
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
}
}
Response Structure
(dict) --
ReplicationTask (dict) --
The replication task started.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication task.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration User Guide.
Exceptions
Examples
Starts the replication task.
response = client.start_replication_task(
CdcStartTime=datetime(2016, 12, 14, 13, 33, 20),
ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
StartReplicationTaskType='start-replication',
)
print(response)
Expected Output:
{
'ReplicationTask': {
'MigrationType': 'full-load',
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationTaskArn': 'arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM',
'ReplicationTaskCreationDate': datetime(2016, 12, 14, 18, 25, 43),
'ReplicationTaskIdentifier': 'task1',
'ReplicationTaskSettings': '{"TargetMetadata":{"TargetSchema":"","SupportLobs":true,"FullLobMode":true,"LobChunkSize":64,"LimitedSizeLobMode":false,"LobMaxSize":0},"FullLoadSettings":{"FullLoadEnabled":true,"ApplyChangesEnabled":false,"TargetTablePrepMode":"DROP_AND_CREATE","CreatePkAfterFullLoad":false,"StopTaskCachedChangesApplied":false,"StopTaskCachedChangesNotApplied":false,"ResumeEnabled":false,"ResumeMinTableSize":100000,"ResumeOnlyClusteredPKTables":true,"MaxFullLoadSubTasks":8,"TransactionConsistencyTimeout":600,"CommitRate":10000},"Logging":{"EnableLogging":false}}',
'SourceEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE',
'Status': 'creating',
'TableMappings': 'file://mappingfile.json',
'TargetEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
},
'ResponseMetadata': {
'...': '...',
},
}
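A minimal sketch for resuming change processing from the task's last recorded checkpoint, assuming a placeholder task ARN and that a checkpoint has already been recorded: it reads RecoveryCheckpoint via describe_replication_tasks and passes it as CdcStartPosition (CdcStartTime and CdcStartPosition remain mutually exclusive).
import boto3

client = boto3.client('dms')

# Placeholder ARN; substitute your own replication task.
task_arn = 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASKID'

# Look up the last recovery checkpoint recorded for the task (assumes one exists).
task = client.describe_replication_tasks(
    Filters=[{'Name': 'replication-task-arn', 'Values': [task_arn]}]
)['ReplicationTasks'][0]

response = client.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType='resume-processing',
    CdcStartPosition=task['RecoveryCheckpoint'],  # checkpoint format shown above
)
print(response['ReplicationTask']['Status'])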
Starts the replication task assessment for unsupported data types in the source database.
See also: AWS API Documentation
Request Syntax
response = client.start_replication_task_assessment(
ReplicationTaskArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task.
dict
Response Syntax
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
}
}
Response Structure
(dict) --
ReplicationTask (dict) --
The assessed replication task.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication task.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration User Guide.
Exceptions
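A minimal sketch, assuming a placeholder task ARN.
import boto3

client = boto3.client('dms')

response = client.start_replication_task_assessment(
    # Placeholder ARN of the replication task to assess.
    ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASKID',
)
print(response['ReplicationTask']['Status'])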
Stops the replication task.
See also: AWS API Documentation
Request Syntax
response = client.stop_replication_task(
ReplicationTaskArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task to be stopped.
dict
Response Syntax
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
}
}
Response Structure
(dict) --
ReplicationTask (dict) --
The replication task stopped.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication task.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration User Guide.
Exceptions
Examples
Stops the replication task.
response = client.stop_replication_task(
ReplicationTaskArn='arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM',
)
print(response)
Expected Output:
{
'ReplicationTask': {
'MigrationType': 'full-load',
'ReplicationInstanceArn': 'arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
'ReplicationTaskArn': 'arn:aws:dms:us-east-1:123456789012:task:OEAMB3NXSTZ6LFYZFEPPBBXPYM',
'ReplicationTaskCreationDate': datetime(2016, 12, 14, 18, 25, 43),
'ReplicationTaskIdentifier': 'task1',
'ReplicationTaskSettings': '{"TargetMetadata":{"TargetSchema":"","SupportLobs":true,"FullLobMode":true,"LobChunkSize":64,"LimitedSizeLobMode":false,"LobMaxSize":0},"FullLoadSettings":{"FullLoadEnabled":true,"ApplyChangesEnabled":false,"TargetTablePrepMode":"DROP_AND_CREATE","CreatePkAfterFullLoad":false,"StopTaskCachedChangesApplied":false,"StopTaskCachedChangesNotApplied":false,"ResumeEnabled":false,"ResumeMinTableSize":100000,"ResumeOnlyClusteredPKTables":true,"MaxFullLoadSubTasks":8,"TransactionConsistencyTimeout":600,"CommitRate":10000},"Logging":{"EnableLogging":false}}',
'SourceEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ZW5UAN6P4E77EC7YWHK4RZZ3BE',
'Status': 'creating',
'TableMappings': 'file://mappingfile.json',
'TargetEndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:ASXWXJZLNWNT5HTWCGV2BUJQ7E',
},
'ResponseMetadata': {
'...': '...',
},
}
Tests the connection between the replication instance and the endpoint.
See also: AWS API Documentation
Request Syntax
response = client.test_connection(
ReplicationInstanceArn='string',
EndpointArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication instance.
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
dict
Response Syntax
{
'Connection': {
'ReplicationInstanceArn': 'string',
'EndpointArn': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'EndpointIdentifier': 'string',
'ReplicationInstanceIdentifier': 'string'
}
}
Response Structure
(dict) --
Connection (dict) --
The connection tested.
ReplicationInstanceArn (string) --
The ARN of the replication instance.
EndpointArn (string) --
The ARN string that uniquely identifies the endpoint.
Status (string) --
The connection status.
LastFailureMessage (string) --
The error message when the connection last failed.
EndpointIdentifier (string) --
The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Exceptions
Examples
Tests the connection between the replication instance and the endpoint.
response = client.test_connection(
EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:RAAR3R22XSH46S3PWLC3NJAWKM',
ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:6UTDJGBOUS3VI3SUWA66XFJCJQ',
)
print(response)
Expected Output:
{
'Connection': {
},
'ResponseMetadata': {
'...': '...',
},
}
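Because the connection test runs asynchronously, a caller typically polls for the result. A minimal sketch, assuming a placeholder endpoint ARN and that the connection status reads 'testing' until the test completes.
import time

import boto3

client = boto3.client('dms')

# Placeholder ARN of the endpoint whose connection test was started above.
endpoint_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINTID'

while True:
    connections = client.describe_connections(
        Filters=[{'Name': 'endpoint-arn', 'Values': [endpoint_arn]}]
    )['Connections']
    status = connections[0]['Status'] if connections else 'unknown'
    if status != 'testing':
        break
    time.sleep(5)

print(status)  # typically 'successful' or 'failed'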
The available paginators are:
paginator = client.get_paginator('describe_certificates')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_certificates().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the certificates described in the form of key-value pairs.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Certificates': [
{
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Certificates (list) --
The Secure Sockets Layer (SSL) certificates associated with the replication instance.
(dict) --
The SSL certificate that can be used to encrypt connections between the endpoints and the replication instance.
CertificateIdentifier (string) --
A customer-assigned name for the certificate. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
CertificateCreationDate (datetime) --
The date that the certificate was created.
CertificatePem (string) --
The contents of a .pem file, which contains an X.509 certificate.
CertificateWallet (bytes) --
The location of an imported Oracle Wallet certificate for use with SSL.
CertificateArn (string) --
The Amazon Resource Name (ARN) for the certificate.
CertificateOwner (string) --
The owner of the certificate.
ValidFromDate (datetime) --
The beginning date that the certificate is valid.
ValidToDate (datetime) --
The final date that the certificate is valid.
SigningAlgorithm (string) --
The signing algorithm for the certificate.
KeyLength (integer) --
The key length of the cryptographic algorithm being used.
NextToken (string) --
A token to resume pagination.
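A minimal usage sketch for this paginator; the page size is arbitrary.
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_certificates')

# Walk every page and print each certificate's identifier and expiration date.
for page in paginator.paginate(PaginationConfig={'PageSize': 20}):
    for certificate in page['Certificates']:
        print(certificate['CertificateIdentifier'], certificate['ValidToDate'])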
paginator = client.get_paginator('describe_connections')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_connections().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The filters applied to the connection.
Valid filter names: endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Connections': [
{
'ReplicationInstanceArn': 'string',
'EndpointArn': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'EndpointIdentifier': 'string',
'ReplicationInstanceIdentifier': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Connections (list) --
A description of the connections.
(dict) --
Status of the connection between an endpoint and a replication instance, including Amazon Resource Names (ARNs) and the last error message issued.
ReplicationInstanceArn (string) --
The ARN of the replication instance.
EndpointArn (string) --
The ARN string that uniquely identifies the endpoint.
Status (string) --
The connection status.
LastFailureMessage (string) --
The error message when the connection last failed.
EndpointIdentifier (string) --
The identifier of the endpoint. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
NextToken (string) --
A token to resume pagination.
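A minimal usage sketch, assuming a placeholder replication instance ARN for the replication-instance-arn filter.
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_connections')

pages = paginator.paginate(
    Filters=[{
        'Name': 'replication-instance-arn',
        # Placeholder ARN; substitute your own replication instance.
        'Values': ['arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCEID'],
    }],
)
# Print the status of every connection made from that instance.
for page in pages:
    for connection in page['Connections']:
        print(connection['EndpointIdentifier'], connection['Status'])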
paginator = client.get_paginator('describe_endpoint_types')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_endpoint_types().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the describe action.
Valid filter names: engine-name | endpoint-type
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'SupportedEndpointTypes': [
{
'EngineName': 'string',
'SupportsCDC': True|False,
'EndpointType': 'source'|'target',
'ReplicationInstanceEngineMinimumVersion': 'string',
'EngineDisplayName': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
SupportedEndpointTypes (list) --
The types of endpoints that are supported.
(dict) --
Provides information about types of supported endpoints in response to a request by the DescribeEndpointTypes operation. This information includes the type of endpoint, the database engine name, and whether change data capture (CDC) is supported.
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
SupportsCDC (boolean) --
Indicates if Change Data Capture (CDC) is supported.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
ReplicationInstanceEngineMinimumVersion (string) --
The earliest AWS DMS engine version that supports this endpoint engine. Note that endpoint engines released with AWS DMS versions earlier than 3.1.1 do not return a value for this parameter.
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
NextToken (string) --
A token to resume pagination.
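A minimal usage sketch; 'postgres' is just an example filter value.
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_endpoint_types')

# Print whether each matching endpoint type supports change data capture (CDC).
for page in paginator.paginate(
    Filters=[{'Name': 'engine-name', 'Values': ['postgres']}]  # example filter value
):
    for endpoint_type in page['SupportedEndpointTypes']:
        print(endpoint_type['EngineName'],
              endpoint_type['EndpointType'],
              endpoint_type['SupportsCDC'])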
paginator = client.get_paginator('describe_endpoints')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_endpoints().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the describe action.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Endpoints': [
{
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'IncludeOpForFullLoad': True|False,
'CdcInsertsOnly': True|False,
'TimestampColumnName': 'string',
'ParquetTimestampInMillisecond': True|False,
'CdcInsertsAndUpdates': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json'|'json-unformatted',
'ServiceAccessRoleArn': 'string',
'IncludeTransactionDetails': True|False,
'IncludePartitionValue': True|False,
'PartitionIncludeSchemaTable': True|False,
'IncludeTableAlterOperations': True|False,
'IncludeControlDetails': True|False
},
'KafkaSettings': {
'Broker': 'string',
'Topic': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'NeptuneSettings': {
'ServiceAccessRoleArn': 'string',
'S3BucketName': 'string',
'S3BucketFolder': 'string',
'ErrorRetryDuration': 123,
'MaxFileSize': 123,
'MaxRetryCount': 123,
'IamAuthEnabled': True|False
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Endpoints (list) --
Endpoint description.
(dict) --
Describes an endpoint of a database instance in response to operations such as CreateEndpoint, DescribeEndpoints, and ModifyEndpoint.
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , and "sqlserver" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings (dict) --
The settings for the target DynamoDB database. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the source files. The default is a newline (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the source files. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the required Amazon S3 actions.
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat (string) --
The format of the data that you want to use for output. You can choose csv (comma-separated values) or parquet (Apache Parquet columnar storage).
EncodingType (string) --
The type of encoding that you are using: plain, plain-dictionary, or rle-dictionary.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include ServiceAccessRoleArn, BucketName, and CompressionType.
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This setting isn't used when AuthType is set to NO.
NestingLevel (string) --
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to NONE.
Default value is false.
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
AuthSource (string) --
The MongoDB database name. This setting isn't used when authType=NO .
The default is admin.
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
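As a worked example of the MongoDbSettings fields above, here is a minimal, hypothetical sketch of creating a MongoDB source endpoint with create_endpoint; the server name, credentials, and database name are placeholders.
import boto3

client = boto3.client('dms')

# Hypothetical MongoDB source endpoint using document (NONE nesting) mode
# with SCRAM-SHA-1 authentication against the admin database.
response = client.create_endpoint(
    EndpointIdentifier='mongodb-source-example',  # placeholder
    EndpointType='source',
    EngineName='mongodb',
    MongoDbSettings={
        'ServerName': 'mongodb.example.com',      # placeholder
        'Port': 27017,
        'DatabaseName': 'sales',                  # placeholder
        'AuthType': 'password',
        'AuthMechanism': 'scram_sha_1',
        'AuthSource': 'admin',
        'Username': 'dms_user',                   # placeholder
        'Password': 'dms_password',               # placeholder
        'NestingLevel': 'none',
        'ExtractDocId': 'true'
    }
)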
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is False .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is False .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is False .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False .
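To illustrate how these Kinesis settings are used together, the following is a minimal, hypothetical sketch of creating a Kinesis Data Streams target endpoint; the stream ARN and role ARN are placeholders.
import boto3

client = boto3.client('dms')

# Hypothetical Kinesis target endpoint that writes JSON records and
# prefixes schema/table names to partition values.
response = client.create_endpoint(
    EndpointIdentifier='kinesis-target-example',  # placeholder
    EndpointType='target',
    EngineName='kinesis',
    KinesisSettings={
        'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream',      # placeholder
        'MessageFormat': 'json',
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-kinesis-role',    # placeholder
        'IncludePartitionValue': True,
        'PartitionIncludeSchemaTable': True
    }
)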
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The ARN of the service role you have created for the Neptune target endpoint. For more information, see https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the S3 bucket for AWS DMS to temporarily store migrated graph data in CSV files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these CSV files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in KB of migrated graph data stored in a CSV file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1048576 KB. If successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want IAM authorization enabled for this endpoint, set this parameter to true and attach the appropriate role policy document to your service role specified by ServiceAccessRoleArn . The default is false .
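Pulling the NeptuneSettings fields above together, the following is a minimal, hypothetical sketch of creating a Neptune target endpoint; the cluster endpoint, role ARN, and bucket name are placeholders.
import boto3

client = boto3.client('dms')

# Hypothetical Neptune target endpoint that stages graph data in S3 and
# uses IAM authorization for the bulk load.
response = client.create_endpoint(
    EndpointIdentifier='neptune-target-example',  # placeholder
    EndpointType='target',
    EngineName='neptune',
    ServerName='my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com',  # placeholder
    Port=8182,
    NeptuneSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-neptune-role',   # placeholder
        'S3BucketName': 'my-dms-neptune-staging',  # placeholder
        'S3BucketFolder': 'graph-data',
        'ErrorRetryDuration': 250,
        'MaxRetryCount': 5,
        'IamAuthEnabled': True
    }
)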
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.
BucketName (string) --
The name of the S3 bucket that you want to use.
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS . To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
FileTransferUploadStreams (integer) --
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
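The following is a minimal, hypothetical sketch of creating an Amazon Redshift target endpoint with the RedshiftSettings described above; the cluster endpoint, credentials, role ARN, and bucket name are placeholders.
import boto3

client = boto3.client('dms')

# Hypothetical Redshift target endpoint that stages .csv files in S3 and
# uses auto date/time parsing.
response = client.create_endpoint(
    EndpointIdentifier='redshift-target-example',  # placeholder
    EndpointType='target',
    EngineName='redshift',
    RedshiftSettings={
        'ServerName': 'example-cluster.abc123.us-east-1.redshift.amazonaws.com',   # placeholder
        'Port': 5439,
        'DatabaseName': 'dev',                     # placeholder
        'Username': 'dms_user',                    # placeholder
        'Password': 'dms_password',                # placeholder
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-redshift-role',  # placeholder
        'BucketName': 'my-dms-staging-bucket',     # placeholder
        'DateFormat': 'auto',
        'TimeFormat': 'auto',
        'EmptyAsNull': True,
        'TruncateColumns': True
    }
)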
NextToken (string) --
A token to resume pagination.
paginator = client.get_paginator('describe_event_subscriptions')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_event_subscriptions().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
SubscriptionName='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the action.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'EventSubscriptionsList': [
{
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
EventSubscriptionsList (list) --
A list of event subscriptions.
(dict) --
Describes an event notification subscription created by the CreateEventSubscription operation.
CustomerAwsId (string) --
The AWS customer account associated with the AWS DMS event notification subscription.
CustSubscriptionId (string) --
The AWS DMS event notification subscription Id.
SnsTopicArn (string) --
The topic ARN of the AWS DMS event notification subscription.
Status (string) --
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
SubscriptionCreationTime (string) --
The time the RDS event notification subscription was created.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | replication-task
SourceIdsList (list) --
A list of source Ids for the event subscription.
EventCategoriesList (list) --
A list of event categories.
Enabled (boolean) --
Boolean value that indicates if the event subscription is enabled.
NextToken (string) --
A token to resume pagination.
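For instance, a short sketch of iterating over every page returned by this paginator (no request parameters are required):
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_event_subscriptions')

# Print the identifier and status of every DMS event subscription.
for page in paginator.paginate():
    for subscription in page['EventSubscriptionsList']:
        print(subscription['CustSubscriptionId'], subscription['Status'])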
paginator = client.get_paginator('describe_events')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_events().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
SourceIdentifier='string',
SourceType='replication-instance',
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
Duration=123,
EventCategories=[
'string',
],
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-task
A list of event categories for the source type that you've chosen.
Filters applied to the action.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Events': [
{
'SourceIdentifier': 'string',
'SourceType': 'replication-instance',
'Message': 'string',
'EventCategories': [
'string',
],
'Date': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Events (list) --
The events described.
(dict) --
Describes an identifiable significant activity that affects a replication instance or task. This object can provide the message, the available event categories, the date and source of the event, and the AWS DMS resource type.
SourceIdentifier (string) --
The identifier of an event source.
SourceType (string) --
The type of AWS DMS resource that generates events.
Valid values: replication-instance | endpoint | replication-task
Message (string) --
The event message.
EventCategories (list) --
The event categories available for the specified source type.
Date (datetime) --
The date of the event.
NextToken (string) --
A token to resume pagination.
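As a usage sketch, the following hypothetical example pages through replication-instance events from the last 24 hours; the time window is arbitrary.
import boto3
from datetime import datetime, timedelta

client = boto3.client('dms')
paginator = client.get_paginator('describe_events')

# Query replication-instance events from the past day.
end = datetime.utcnow()
start = end - timedelta(hours=24)
for page in paginator.paginate(SourceType='replication-instance',
                               StartTime=start,
                               EndTime=end):
    for event in page['Events']:
        print(event['Date'], event['SourceIdentifier'], event['Message'])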
paginator = client.get_paginator('describe_orderable_replication_instances')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_orderable_replication_instances().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'OrderableReplicationInstances': [
{
'EngineVersion': 'string',
'ReplicationInstanceClass': 'string',
'StorageType': 'string',
'MinAllocatedStorage': 123,
'MaxAllocatedStorage': 123,
'DefaultAllocatedStorage': 123,
'IncludedAllocatedStorage': 123,
'AvailabilityZones': [
'string',
],
'ReleaseStatus': 'beta'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
OrderableReplicationInstances (list) --
The orderable replication instances available.
(dict) --
In response to the DescribeOrderableReplicationInstances operation, this object describes an available replication instance. This description includes the replication instance's type, engine version, and allocated storage.
EngineVersion (string) --
The version of the replication engine.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
StorageType (string) --
The type of storage used by the replication instance.
MinAllocatedStorage (integer) --
The minimum amount of storage (in gigabytes) that can be allocated for the replication instance.
MaxAllocatedStorage (integer) --
The maximum amount of storage (in gigabytes) that can be allocated for the replication instance.
DefaultAllocatedStorage (integer) --
The default amount of storage (in gigabytes) that is allocated for the replication instance.
IncludedAllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
AvailabilityZones (list) --
List of Availability Zones for this replication instance.
ReleaseStatus (string) --
The value returned when the specified EngineVersion of the replication instance is in Beta or test mode. This indicates some features might not work as expected.
Note
AWS DMS supports the ReleaseStatus parameter in versions 3.1.4 and later.
NextToken (string) --
A token to resume pagination.
paginator = client.get_paginator('describe_replication_instances')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_replication_instances().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the describe action.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'ReplicationInstances': [
{
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
ReplicationInstances (list) --
The replication instances described.
(dict) --
Provides information that defines a replication instance.
ReplicationInstanceIdentifier (string) --
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Must contain 1-63 alphanumeric characters or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: myrepinstance
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
ReplicationInstanceStatus (string) --
The status of the replication instance.
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime (datetime) --
The time the replication instance was created.
VpcSecurityGroups (list) --
The VPC security group for the instance.
(dict) --
Describes status of a security group associated with the virtual private cloud hosting your replication and DB instances.
VpcSecurityGroupId (string) --
The VPC security group Id.
Status (string) --
The status of the VPC security group.
AvailabilityZone (string) --
The Availability Zone for the instance.
ReplicationSubnetGroup (dict) --
The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
PreferredMaintenanceWindow (string) --
The maintenance window times for the replication instance.
PendingModifiedValues (dict) --
The pending modification values.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance.
Valid Values: dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
AutoMinorVersionUpgrade (boolean) --
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress (string) --
The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress (string) --
The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses (list) --
One or more public IP addresses for the replication instance.
ReplicationInstancePrivateIpAddresses (list) --
One or more private IP addresses for the replication instance.
PubliclyAccessible (boolean) --
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
SecondaryAvailabilityZone (string) --
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil (datetime) --
The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers (string) --
The DNS name servers for the replication instance.
NextToken (string) --
A token to resume pagination.
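For illustration, a hypothetical sketch that pages through replication instances matching an engine-version filter; the filter value and page size are placeholders.
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_replication_instances')

# List instances running a specific engine version, 20 records per page.
filters = [{'Name': 'engine-version', 'Values': ['3.3.1']}]  # placeholder value
for page in paginator.paginate(Filters=filters,
                               PaginationConfig={'PageSize': 20}):
    for instance in page['ReplicationInstances']:
        print(instance['ReplicationInstanceIdentifier'],
              instance['ReplicationInstanceStatus'])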
paginator = client.get_paginator('describe_replication_subnet_groups')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_replication_subnet_groups().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the describe action.
Valid filter names: replication-subnet-group-id
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'ReplicationSubnetGroups': [
{
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
ReplicationSubnetGroups (list) --
A description of the replication subnet groups.
(dict) --
Describes a subnet group in response to a request by the DescribeReplicationSubnetGroup operation.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroup operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
NextToken (string) --
A token to resume pagination.
paginator = client.get_paginator('describe_replication_task_assessment_results')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_replication_task_assessment_results().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
ReplicationTaskArn='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'BucketName': 'string',
'ReplicationTaskAssessmentResults': [
{
'ReplicationTaskIdentifier': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskLastAssessmentDate': datetime(2015, 1, 1),
'AssessmentStatus': 'string',
'AssessmentResultsFile': 'string',
'AssessmentResults': 'string',
'S3ObjectUrl': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
BucketName (string) --
The Amazon S3 bucket where the task assessment report is located.
ReplicationTaskAssessmentResults (list) --
The task assessment report.
(dict) --
The task assessment report in JSON format.
ReplicationTaskIdentifier (string) --
The replication task identifier of the task on which the task assessment was run.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskLastAssessmentDate (datetime) --
The date the task assessment was completed.
AssessmentStatus (string) --
The status of the task assessment.
AssessmentResultsFile (string) --
The file containing the results of the task assessment.
AssessmentResults (string) --
The task assessment results in JSON format.
S3ObjectUrl (string) --
The URL of the S3 object containing the task assessment results.
NextToken (string) --
A token to resume pagination.
paginator = client.get_paginator('describe_replication_tasks')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_replication_tasks().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
WithoutSettings=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'ReplicationTasks': [
{
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123,
'FreshStartDate': datetime(2015, 1, 1),
'StartDate': datetime(2015, 1, 1),
'StopDate': datetime(2015, 1, 1),
'FullLoadStartDate': datetime(2015, 1, 1),
'FullLoadFinishDate': datetime(2015, 1, 1)
},
'TaskData': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
ReplicationTasks (list) --
A description of the replication tasks.
(dict) --
Provides information that describes a replication task created by the CreateReplicationTask operation.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
Must contain 1-255 alphanumeric characters or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication instance.
StopReason (string) --
The reason the replication task was stopped.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:3018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:3018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration Service User Guide.
NextToken (string) --
A token to resume pagination.
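A short usage sketch for this paginator, omitting the (potentially large) task settings document from each record:
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_replication_tasks')

# Summarize every replication task without returning its settings JSON.
for page in paginator.paginate(WithoutSettings=True):
    for task in page['ReplicationTasks']:
        print(task['ReplicationTaskIdentifier'],
              task['MigrationType'],
              task['Status'])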
paginator = client.get_paginator('describe_schemas')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_schemas().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
EndpointArn='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Schemas': [
'string',
],
'NextToken': 'string'
}
Response Structure
(dict) --
Schemas (list) --
The described schema.
NextToken (string) --
A token to resume pagination.
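For example, a hypothetical sketch that lists every schema discovered on an endpoint; the endpoint ARN is a placeholder.
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_schemas')

endpoint_arn = 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT'  # placeholder
for page in paginator.paginate(EndpointArn=endpoint_arn):
    for schema in page['Schemas']:
        print(schema)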
paginator = client.get_paginator('describe_table_statistics')
Creates an iterator that will paginate through responses from DatabaseMigrationService.Client.describe_table_statistics().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
ReplicationTaskArn='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
The Amazon Resource Name (ARN) of the replication task.
Filters applied to the describe table statistics action.
Valid filter names: schema-name | table-name | table-state
A combination of filters creates an AND condition where each record matches all specified filters.
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'ReplicationTaskArn': 'string',
'TableStatistics': [
{
'SchemaName': 'string',
'TableName': 'string',
'Inserts': 123,
'Deletes': 123,
'Updates': 123,
'Ddls': 123,
'FullLoadRows': 123,
'FullLoadCondtnlChkFailedRows': 123,
'FullLoadErrorRows': 123,
'FullLoadStartTime': datetime(2015, 1, 1),
'FullLoadEndTime': datetime(2015, 1, 1),
'FullLoadReloaded': True|False,
'LastUpdateTime': datetime(2015, 1, 1),
'TableState': 'string',
'ValidationPendingRecords': 123,
'ValidationFailedRecords': 123,
'ValidationSuspendedRecords': 123,
'ValidationState': 'string',
'ValidationStateDetails': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
TableStatistics (list) --
The table statistics.
(dict) --
Provides a collection of table statistics in response to a request by the DescribeTableStatistics operation.
SchemaName (string) --
The schema name.
TableName (string) --
The name of the table.
Inserts (integer) --
The number of insert actions performed on a table.
Deletes (integer) --
The number of delete actions performed on a table.
Updates (integer) --
The number of update actions performed on a table.
Ddls (integer) --
The number of data definition language (DDL) statements used to build and modify the structure of your tables.
FullLoadRows (integer) --
The number of rows added during the full load operation.
FullLoadCondtnlChkFailedRows (integer) --
The number of rows that failed conditional checks during the full load operation (valid only for migrations where DynamoDB is the target).
FullLoadErrorRows (integer) --
The number of rows that failed to load during the full load operation (valid only for migrations where DynamoDB is the target).
FullLoadStartTime (datetime) --
The time when the full load operation started.
FullLoadEndTime (datetime) --
The time when the full load operation completed.
FullLoadReloaded (boolean) --
A value that indicates if the table was reloaded (true ) or loaded as part of a new full load operation (false ).
LastUpdateTime (datetime) --
The last time a table was updated.
TableState (string) --
The state of the tables described.
Valid states: Table does not exist | Before load | Full load | Table completed | Table cancelled | Table error | Table all | Table updates | Table is being reloaded
ValidationPendingRecords (integer) --
The number of records that have yet to be validated.
ValidationFailedRecords (integer) --
The number of records that failed validation.
ValidationSuspendedRecords (integer) --
The number of records that couldn't be validated.
ValidationState (string) --
The validation state of the table.
This parameter can have the following values:
ValidationStateDetails (string) --
Additional details about the state of validation.
NextToken (string) --
A token to resume pagination.
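As a usage sketch, the following hypothetical example pages through table statistics for one task, filtered to a single schema; the task ARN and schema name are placeholders.
import boto3

client = boto3.client('dms')
paginator = client.get_paginator('describe_table_statistics')

task_arn = 'arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK'  # placeholder
filters = [{'Name': 'schema-name', 'Values': ['sales']}]          # placeholder
for page in paginator.paginate(ReplicationTaskArn=task_arn, Filters=filters):
    for stats in page['TableStatistics']:
        print(stats['TableName'], stats['FullLoadRows'], stats['TableState'])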
The available waiters are:
waiter = client.get_waiter('endpoint_deleted')
Polls DatabaseMigrationService.Client.describe_endpoints() every 5 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 5
The maximum number of attempts to be made. Default: 60
None
waiter = client.get_waiter('replication_instance_available')
Polls DatabaseMigrationService.Client.describe_replication_instances() every 60 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 60
The maximum number of attempts to be made. Default: 60
None
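A short, hypothetical sketch of waiting for a specific replication instance to become available; the instance ARN is a placeholder, and WaiterConfig overrides the defaults shown above.
import boto3

client = boto3.client('dms')
waiter = client.get_waiter('replication_instance_available')

# Poll every 60 seconds, for up to 30 attempts, until the instance is available.
waiter.wait(
    Filters=[
        {
            'Name': 'replication-instance-arn',
            'Values': ['arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE']  # placeholder
        },
    ],
    WaiterConfig={
        'Delay': 60,
        'MaxAttempts': 30
    }
)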
waiter = client.get_waiter('replication_instance_deleted')
Polls DatabaseMigrationService.Client.describe_replication_instances() every 15 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 15
The maximum number of attempts to be made. Default: 60
None
waiter = client.get_waiter('replication_task_deleted')
Polls DatabaseMigrationService.Client.describe_replication_tasks() every 15 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WithoutSettings=True|False,
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 15
The maximum number of attempts to be made. Default: 60
None
waiter = client.get_waiter('replication_task_ready')
Polls DatabaseMigrationService.Client.describe_replication_tasks() every 15 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WithoutSettings=True|False,
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 15
The maximum number of attempts to be made. Default: 60
None
waiter = client.get_waiter('replication_task_running')
Polls DatabaseMigrationService.Client.describe_replication_tasks() every 15 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WithoutSettings=True|False,
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 15
The maximum number of attempts to be made. Default: 60
None
waiter = client.get_waiter('replication_task_stopped')
Polls DatabaseMigrationService.Client.describe_replication_tasks() every 15 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WithoutSettings=True|False,
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 15
The maximum number of attempts to be made. Default: 60
None
waiter = client.get_waiter('test_connection_succeeds')
Polls DatabaseMigrationService.Client.describe_connections() every 5 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
The filters applied to the connection.
Valid filter names: endpoint-arn | replication-instance-arn
Identifies the name and value of a source filter object used to limit the number and type of records transferred from your source to your target.
The name of the filter.
The filter value.
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 5
The maximum number of attempts to be made. Default: 60
None
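For example, after starting a connection test with test_connection, the following hypothetical sketch blocks until the test between a replication instance and an endpoint succeeds; both ARNs are placeholders.
import boto3

client = boto3.client('dms')
waiter = client.get_waiter('test_connection_succeeds')

# Wait for the connection test between this instance and endpoint to succeed.
waiter.wait(
    Filters=[
        {
            'Name': 'replication-instance-arn',
            'Values': ['arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE']      # placeholder
        },
        {
            'Name': 'endpoint-arn',
            'Values': ['arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT']  # placeholder
        },
    ]
)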