create_crawler(**kwargs)
Creates a new crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in the s3Targets field, the jdbcTargets field, or the DynamoDBTargets field.
See also: AWS API Documentation
Request Syntax
response = client.create_crawler(
    Name='string',
    Role='string',
    DatabaseName='string',
    Description='string',
    Targets={
        'S3Targets': [
            {
                'Path': 'string',
                'Exclusions': [
                    'string',
                ],
                'ConnectionName': 'string',
                'SampleSize': 123,
                'EventQueueArn': 'string',
                'DlqEventQueueArn': 'string'
            },
        ],
        'JdbcTargets': [
            {
                'ConnectionName': 'string',
                'Path': 'string',
                'Exclusions': [
                    'string',
                ],
                'EnableAdditionalMetadata': [
                    'COMMENTS'|'RAWTYPES',
                ]
            },
        ],
        'MongoDBTargets': [
            {
                'ConnectionName': 'string',
                'Path': 'string',
                'ScanAll': True|False
            },
        ],
        'DynamoDBTargets': [
            {
                'Path': 'string',
                'scanAll': True|False,
                'scanRate': 123.0
            },
        ],
        'CatalogTargets': [
            {
                'DatabaseName': 'string',
                'Tables': [
                    'string',
                ],
                'ConnectionName': 'string',
                'EventQueueArn': 'string',
                'DlqEventQueueArn': 'string'
            },
        ],
        'DeltaTargets': [
            {
                'DeltaTables': [
                    'string',
                ],
                'ConnectionName': 'string',
                'WriteManifest': True|False,
                'CreateNativeDeltaTable': True|False
            },
        ]
    },
    Schedule='string',
    Classifiers=[
        'string',
    ],
    TablePrefix='string',
    SchemaChangePolicy={
        'UpdateBehavior': 'LOG'|'UPDATE_IN_DATABASE',
        'DeleteBehavior': 'LOG'|'DELETE_FROM_DATABASE'|'DEPRECATE_IN_DATABASE'
    },
    RecrawlPolicy={
        'RecrawlBehavior': 'CRAWL_EVERYTHING'|'CRAWL_NEW_FOLDERS_ONLY'|'CRAWL_EVENT_MODE'
    },
    LineageConfiguration={
        'CrawlerLineageSettings': 'ENABLE'|'DISABLE'
    },
    LakeFormationConfiguration={
        'UseLakeFormationCredentials': True|False,
        'AccountId': 'string'
    },
    Configuration='string',
    CrawlerSecurityConfiguration='string',
    Tags={
        'string': 'string'
    }
)
Parameters
Name (string) -- [REQUIRED] Name of the new crawler.
Role (string) -- [REQUIRED] The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.
DatabaseName (string) -- The Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/*.
Targets (dict) -- [REQUIRED] A list of collections of targets to crawl.
S3Targets (list) -- Specifies Amazon Simple Storage Service (Amazon S3) targets.
(dict) -- Specifies a data store in Amazon Simple Storage Service (Amazon S3).
Path (string) -- The path to the Amazon S3 target.
Exclusions (list) -- A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
ConnectionName (string) -- The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).
SampleSize (integer) -- Sets the number of files in each leaf folder to be crawled when crawling sample files in a dataset. If not set, all the files are crawled. A valid value is an integer between 1 and 249.
EventQueueArn (string) -- A valid Amazon SQS ARN. For example, arn:aws:sqs:region:account:sqs.
DlqEventQueueArn (string) -- A valid Amazon dead-letter SQS ARN. For example, arn:aws:sqs:region:account:deadLetterQueue.
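For example, a minimal sketch that creates a crawler with a single S3 target. The crawler name, role ARN, database name, bucket path, and exclusion pattern are placeholders, not values from this documentation:

import boto3

glue = boto3.client('glue')

# Sketch only: every name below is a placeholder.
glue.create_crawler(
    Name='s3-sales-crawler',
    Role='arn:aws:iam::123456789012:role/GlueCrawlerRole',
    DatabaseName='sales_db',
    Targets={
        'S3Targets': [
            {
                'Path': 's3://example-bucket/sales/',
                'Exclusions': ['**/_temporary/**'],
                'SampleSize': 10,  # crawl at most 10 files per leaf folder
            },
        ],
    },
)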
JdbcTargets (list) -- Specifies JDBC targets.
(dict) -- Specifies a JDBC data store to crawl.
ConnectionName (string) -- The name of the connection to use to connect to the JDBC target.
Path (string) -- The path of the JDBC target.
Exclusions (list) -- A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
EnableAdditionalMetadata (list) -- Specify a value of RAWTYPES or COMMENTS to enable additional metadata in table responses. RAWTYPES provides the native-level datatype. COMMENTS provides comments associated with a column or table in the database. If you do not need additional metadata, keep the field empty.
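A sketch of a JdbcTargets entry; the connection name and path pattern are placeholders and assume a Glue connection has already been created:

jdbc_targets = [
    {
        'ConnectionName': 'mysql-conn',  # placeholder connection
        'Path': 'orders_db/%',           # placeholder database/schema/table pattern
        'EnableAdditionalMetadata': ['COMMENTS', 'RAWTYPES'],
    },
]
# Passed as Targets={'JdbcTargets': jdbc_targets} in create_crawler(...).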
MongoDBTargets (list) -- Specifies Amazon DocumentDB or MongoDB targets.
(dict) -- Specifies an Amazon DocumentDB or MongoDB data store to crawl.
ConnectionName (string) -- The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.
Path (string) -- The path of the Amazon DocumentDB or MongoDB target (database/collection).
ScanAll (boolean) -- Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table. A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true.
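A sketch of a MongoDBTargets entry that samples records instead of scanning the full collection; the connection and database/collection names are placeholders:

mongodb_targets = [
    {
        'ConnectionName': 'docdb-conn',  # placeholder connection
        'Path': 'events/clicks',         # database/collection
        'ScanAll': False,                # sample records instead of a full scan
    },
]
# Passed as Targets={'MongoDBTargets': mongodb_targets} in create_crawler(...).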
DynamoDBTargets (list) -- Specifies Amazon DynamoDB targets.
(dict) -- Specifies an Amazon DynamoDB table to crawl.
Path (string) -- The name of the DynamoDB table to crawl.
scanAll (boolean) -- Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table. A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true.
scanRate (float) -- The percentage of the configured read capacity units to use by the Glue crawler. Read capacity units is a term defined by DynamoDB, and is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second. The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured read capacity units (for provisioned tables), or 0.25 of the maximum configured read capacity units (for tables using on-demand mode).
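A sketch of a DynamoDBTargets entry that samples rows and caps the crawler at half of the table's configured read capacity; the table name is a placeholder:

dynamodb_targets = [
    {
        'Path': 'Users',   # DynamoDB table name (placeholder)
        'scanAll': False,  # sample rows instead of scanning every record
        'scanRate': 0.5,   # use 50% of the configured read capacity units
    },
]
# Passed as Targets={'DynamoDBTargets': dynamodb_targets} in create_crawler(...).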
CatalogTargets (list) -- Specifies Glue Data Catalog targets.
(dict) -- Specifies a Glue Data Catalog target.
DatabaseName (string) -- The name of the database to be synchronized.
Tables (list) -- A list of the tables to be synchronized.
ConnectionName (string) -- The name of the connection for an Amazon S3-backed Data Catalog table to be a target of the crawl when using a Catalog connection type paired with a NETWORK connection type.
EventQueueArn (string) -- A valid Amazon SQS ARN. For example, arn:aws:sqs:region:account:sqs.
DlqEventQueueArn (string) -- A valid Amazon dead-letter SQS ARN. For example, arn:aws:sqs:region:account:deadLetterQueue.
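A sketch of a CatalogTargets entry that keeps two existing catalog tables in sync with their underlying data; the database and table names are placeholders:

catalog_targets = [
    {
        'DatabaseName': 'sales_db',         # placeholder database
        'Tables': ['orders', 'customers'],  # existing catalog tables to sync
    },
]
# Passed as Targets={'CatalogTargets': catalog_targets} in create_crawler(...).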
DeltaTargets (list) -- Specifies Delta data store targets.
(dict) -- Specifies a Delta data store to crawl one or more Delta tables.
DeltaTables (list) -- A list of the Amazon S3 paths to the Delta tables.
ConnectionName (string) -- The name of the connection to use to connect to the Delta table target.
WriteManifest (boolean) -- Specifies whether to write the manifest files to the Delta table path.
CreateNativeDeltaTable (boolean) -- Specifies whether the crawler will create native tables, to allow integration with query engines that support querying of the Delta transaction log directly.
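A sketch of a DeltaTargets entry; the S3 path is a placeholder:

delta_targets = [
    {
        'DeltaTables': [
            's3://example-bucket/delta/trips/',  # placeholder table path
        ],
        'WriteManifest': False,
        'CreateNativeDeltaTable': True,  # register as a native Delta table
    },
]
# Passed as Targets={'DeltaTargets': delta_targets} in create_crawler(...).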
Schedule (string) -- A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
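The same example as a Python value:

# The six-field cron expression from the example above: daily at 12:15 UTC.
schedule = 'cron(15 12 * * ? *)'
# Passed as Schedule=schedule in create_crawler(...).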
Classifiers (list) -- A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.
SchemaChangePolicy (dict) -- The policy for the crawler's update and deletion behavior.
UpdateBehavior (string) -- The update behavior when the crawler finds a changed schema.
DeleteBehavior (string) -- The deletion behavior when the crawler finds a deleted object.
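For example, a sketch of a policy that applies schema updates to the catalog but only logs deletions rather than deleting or deprecating the affected tables:

schema_change_policy = {
    'UpdateBehavior': 'UPDATE_IN_DATABASE',
    'DeleteBehavior': 'LOG',
}
# Passed as SchemaChangePolicy=schema_change_policy in create_crawler(...).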
RecrawlPolicy (dict) -- A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
RecrawlBehavior (string) -- Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run. A value of CRAWL_EVERYTHING specifies crawling the entire dataset again. A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run. A value of CRAWL_EVENT_MODE specifies crawling only the changes identified by Amazon S3 events.
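A sketch of an incremental-crawl policy:

# After the initial full crawl, only visit newly added folders.
# CRAWL_EVENT_MODE would instead rely on the S3 event queue fields
# (EventQueueArn, DlqEventQueueArn) shown in the S3 target above.
recrawl_policy = {'RecrawlBehavior': 'CRAWL_NEW_FOLDERS_ONLY'}
# Passed as RecrawlPolicy=recrawl_policy in create_crawler(...).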
LineageConfiguration (dict) -- Specifies data lineage configuration settings for the crawler.
CrawlerLineageSettings (string) -- Specifies whether data lineage is enabled for the crawler. Valid values are ENABLE and DISABLE.
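For example:

# Sketch: turn data lineage collection on for this crawler.
lineage_configuration = {'CrawlerLineageSettings': 'ENABLE'}
# Passed as LineageConfiguration=lineage_configuration in create_crawler(...).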
LakeFormationConfiguration (dict) -- Specifies Lake Formation configuration settings for the crawler.
UseLakeFormationCredentials (boolean) -- Specifies whether to use Lake Formation credentials for the crawler instead of the IAM role credentials.
AccountId (string) -- Required for cross account crawls. For same account crawls as the target data, this can be left as null.
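A sketch of a cross-account configuration; the account ID is a placeholder for the account that owns the target data:

lake_formation_configuration = {
    'UseLakeFormationCredentials': True,
    'AccountId': '123456789012',  # placeholder account ID
}
# Passed as LakeFormationConfiguration=lake_formation_configuration in create_crawler(...).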
CrawlerSecurityConfiguration (string) -- The name of the SecurityConfiguration structure to be used by this crawler.
Tags (dict) -- The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
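Putting several of these options together, a sketch of a tagged, scheduled crawler that attaches an existing security configuration; all names, tags, and the path are placeholders:

import boto3

glue = boto3.client('glue')

glue.create_crawler(
    Name='tagged-crawler',
    Role='arn:aws:iam::123456789012:role/GlueCrawlerRole',
    Targets={'S3Targets': [{'Path': 's3://example-bucket/data/'}]},
    Schedule='cron(15 12 * * ? *)',
    CrawlerSecurityConfiguration='MySecurityConfiguration',  # must already exist
    Tags={'team': 'analytics', 'env': 'prod'},
)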
Return type
dict
Response Syntax
{}
Response Structure
Exceptions
Glue.Client.exceptions.InvalidInputException
Glue.Client.exceptions.AlreadyExistsException
Glue.Client.exceptions.OperationTimeoutException
Glue.Client.exceptions.ResourceNumberLimitExceededException
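Since create_crawler returns an empty dict on success, the main failure mode worth handling inline is a name collision. A sketch, with placeholder names:

import boto3

glue = boto3.client('glue')

try:
    glue.create_crawler(
        Name='s3-sales-crawler',
        Role='arn:aws:iam::123456789012:role/GlueCrawlerRole',
        Targets={'S3Targets': [{'Path': 's3://example-bucket/sales/'}]},
    )
except glue.exceptions.AlreadyExistsException:
    # A crawler with this name already exists; update or skip instead.
    print('Crawler already exists; skipping creation.')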