S3 Customization Reference

S3 Transfers

Note

All classes documented below are considered public and thus will not be subject to breaking changes. If a class from the boto3.s3.transfer module is not documented below, it is considered internal; be very cautious about using it directly, because breaking changes may be introduced from one version of the library to the next. It is recommended to use the variants of the transfer functions that are injected into the S3 client instead.

class boto3.s3.transfer.TransferConfig(multipart_threshold=8388608, max_concurrency=10, multipart_chunksize=8388608, num_download_attempts=5, max_io_queue=100, io_chunksize=262144, use_threads=True)[source]

Configuration object for managed S3 transfers.

Parameters
  • multipart_threshold -- The transfer size threshold for which multipart uploads, downloads, and copies will automatically be triggered.
  • max_concurrency -- The maximum number of threads that will be making requests to perform a transfer. If use_threads is set to False, the value provided is ignored as the transfer will only ever use the main thread.
  • multipart_chunksize -- The partition size of each part for a multipart transfer.
  • num_download_attempts -- The number of download attempts that will be made when errors occur while downloading an object from S3. These retries cover errors that occur while streaming the data down from S3 (i.e. socket errors and read timeouts that occur after receiving an OK response from S3). Other retryable exceptions, such as throttling errors and 5xx errors, are already retried by botocore (its default is 5 retries); those botocore-level retries are not counted toward this value.
  • max_io_queue -- The maximum number of read parts that can be queued in memory to be written for a download. The size of each of these read parts is at most the size of io_chunksize.
  • io_chunksize -- The max size of each chunk in the io queue. Currently, this is also the size used when read is called on the downloaded stream.
  • use_threads -- If True, threads will be used when performing S3 transfers. If False, no threads will be used in performing transfers: all logic will run in the main thread.
ALIAS = {'max_io_queue': 'max_io_queue_size', 'max_concurrency': 'max_request_concurrency'}
class boto3.s3.transfer.S3Transfer(client=None, config=None, osutil=None, manager=None)[source]
ALLOWED_DOWNLOAD_ARGS = ['VersionId', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'RequestPayer']
ALLOWED_UPLOAD_ARGS = ['ACL', 'CacheControl', 'ContentDisposition', 'ContentEncoding', 'ContentLanguage', 'ContentType', 'Expires', 'GrantFullControl', 'GrantRead', 'GrantReadACP', 'GrantWriteACP', 'Metadata', 'RequestPayer', 'ServerSideEncryption', 'StorageClass', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'SSEKMSKeyId', 'WebsiteRedirectLocation']
download_file(bucket, key, filename, extra_args=None, callback=None)[source]

Download an S3 object to a file.

Variants have also been injected into the S3 client, Bucket, and Object classes, so you don't have to use S3Transfer.download_file() directly.
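A minimal sketch of a download using the injected client variant of download_file, with a progress callback; the bucket, key, and file names are placeholders:

```python
# Sketch of a download via the injected client variant of
# download_file. Bucket/key/filename values are hypothetical.

def download_with_progress(s3_client, bucket, key, filename):
    """Download an object, summing the bytes reported via Callback."""
    transferred = []

    def progress(bytes_amount):
        # Invoked by the transfer machinery as chunks arrive.
        transferred.append(bytes_amount)

    s3_client.download_file(bucket, key, filename, Callback=progress)
    return sum(transferred)

# Typical usage (requires AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   total = download_with_progress(s3, "my-bucket", "my-key", "/tmp/obj")
```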

upload_file(filename, bucket, key, callback=None, extra_args=None)[source]

Upload a file to an S3 object.

Variants have also been injected into the S3 client, Bucket, and Object classes, so you don't have to use S3Transfer.upload_file() directly.
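A sketch of an upload through the injected upload_file variant, passing two of the keys from ALLOWED_UPLOAD_ARGS via ExtraArgs; the file, bucket, and key names are placeholders:

```python
# Sketch of an upload via the injected client variant of upload_file.
# Filename/bucket/key values are hypothetical.

def upload_public_text(s3_client, filename, bucket, key):
    """Upload a file, setting ACL and ContentType via ExtraArgs.

    Both keys appear in ALLOWED_UPLOAD_ARGS; keys outside that list
    are rejected by the transfer machinery.
    """
    extra_args = {"ACL": "public-read", "ContentType": "text/plain"}
    s3_client.upload_file(filename, bucket, key, ExtraArgs=extra_args)
    return extra_args

# Typical usage (requires AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   upload_public_text(s3, "notes.txt", "my-bucket", "notes.txt")
```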