S3 customization reference

S3 transfers


All classes documented below are considered public and thus will not be exposed to breaking changes. Any class from the boto3.s3.transfer module that is not documented below is considered internal; use such classes with caution, because breaking changes may be introduced from version to version of the library. It is recommended to use the variants of the transfer functions injected into the S3 client instead.

See also

  • S3.Client.upload_file()
  • S3.Client.upload_fileobj()
  • S3.Client.download_file()
  • S3.Client.download_fileobj()

class boto3.s3.transfer.TransferConfig(multipart_threshold=8388608, max_concurrency=10, multipart_chunksize=8388608, num_download_attempts=5, max_io_queue=100, io_chunksize=262144, use_threads=True, max_bandwidth=None)[source]

Configuration object for managed S3 transfers.

Parameters:

  • multipart_threshold -- The transfer size threshold for which multipart uploads, downloads, and copies will automatically be triggered.
  • max_concurrency -- The maximum number of threads that will be making requests to perform a transfer. If use_threads is set to False, the value provided is ignored as the transfer will only ever use the main thread.
  • multipart_chunksize -- The partition size of each part for a multipart transfer.
  • num_download_attempts -- The number of download attempts that will be made upon errors while downloading an object from S3 (the default is 5). These retries cover errors that occur while streaming the data down from S3 (i.e. socket errors and read timeouts that occur after receiving an OK response from S3). Other retryable exceptions, such as throttling errors and 5xx errors, are already retried by botocore; this value does not take into account the number of exceptions retried by botocore.
  • max_io_queue -- The maximum number of read parts that can be queued in memory to be written for a download. The size of each of these read parts is at most the size of io_chunksize.
  • io_chunksize -- The maximum size of each chunk in the io queue. Currently, this is also the size used when read is called on the downloaded stream.
  • use_threads -- If True, threads will be used when performing S3 transfers. If False, no threads will be used in performing transfers; all logic will be run in the main thread.
  • max_bandwidth -- The maximum bandwidth that will be consumed in uploading and downloading file content, as an integer number of bytes per second.
ALIAS = {'max_concurrency': 'max_request_concurrency', 'max_io_queue': 'max_io_queue_size'}
class boto3.s3.transfer.S3Transfer(client=None, config=None, osutil=None, manager=None)[source]
ALLOWED_DOWNLOAD_ARGS = ['ChecksumMode', 'VersionId', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'RequestPayer', 'ExpectedBucketOwner']
ALLOWED_UPLOAD_ARGS = ['ACL', 'CacheControl', 'ChecksumAlgorithm', 'ContentDisposition', 'ContentEncoding', 'ContentLanguage', 'ContentType', 'ExpectedBucketOwner', 'Expires', 'GrantFullControl', 'GrantRead', 'GrantReadACP', 'GrantWriteACP', 'Metadata', 'ObjectLockLegalHoldStatus', 'ObjectLockMode', 'ObjectLockRetainUntilDate', 'RequestPayer', 'ServerSideEncryption', 'StorageClass', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'SSEKMSKeyId', 'SSEKMSEncryptionContext', 'Tagging', 'WebsiteRedirectLocation']
download_file(bucket, key, filename, extra_args=None, callback=None)[source]

Download an S3 object to a file.

Variants of this function have also been injected into the S3 client, Bucket, and Object classes, so you don't have to use S3Transfer.download_file() directly.

See also

  • S3.Client.download_file()
  • S3.Client.download_fileobj()

upload_file(filename, bucket, key, callback=None, extra_args=None)[source]

Upload a file to an S3 object.

Variants of this function have also been injected into the S3 client, Bucket, and Object classes, so you don't have to use S3Transfer.upload_file() directly.

See also

  • S3.Client.upload_file()
  • S3.Client.upload_fileobj()