class Transfer.Client
A low-level client representing AWS Transfer Family
Amazon Web Services Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3). Amazon Web Services helps you seamlessly migrate your file transfer workflows to Amazon Web Services Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services services for processing, analytics, machine learning, and archiving. Getting started with Amazon Web Services Transfer Family is easy since there is no infrastructure to buy and set up.
import boto3
client = boto3.client('transfer')
These are the available methods:
can_paginate()
close()
create_access()
create_server()
create_user()
create_workflow()
delete_access()
delete_server()
delete_ssh_public_key()
delete_user()
delete_workflow()
describe_access()
describe_execution()
describe_security_policy()
describe_server()
describe_user()
describe_workflow()
get_paginator()
get_waiter()
import_ssh_public_key()
list_accesses()
list_executions()
list_security_policies()
list_servers()
list_tags_for_resource()
list_users()
list_workflows()
send_workflow_step_state()
start_server()
stop_server()
tag_resource()
test_identity_provider()
untag_resource()
update_access()
update_server()
update_user()
can_paginate(operation_name)
Check if an operation can be paginated.
Parameters
operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns
True if the operation can be paginated, False otherwise.
close()
Closes underlying endpoint connections.
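As an illustration of can_paginate and get_paginator used together, the following is a minimal sketch that assumes the list_servers operation (which this client paginates); the output handling is illustrative only.
import boto3

client = boto3.client('transfer')

# Check pagination support before requesting a paginator (illustrative usage).
if client.can_paginate('list_servers'):
    paginator = client.get_paginator('list_servers')
    for page in paginator.paginate():
        for server in page.get('Servers', []):
            print(server['ServerId'])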
create_access(**kwargs)
Used by administrators to choose which groups in the directory should have access to upload and download files over the enabled protocols using Amazon Web Services Transfer Family. For example, a Microsoft Active Directory might contain 50,000 users, but only a small fraction might need the ability to transfer files to the server. An administrator can use CreateAccess to limit the access to the correct set of users who need this ability.
See also: AWS API Documentation
Request Syntax
response = client.create_access(
HomeDirectory='string',
HomeDirectoryType='PATH'|'LOGICAL',
HomeDirectoryMappings=[
{
'Entry': 'string',
'Target': 'string'
},
],
Policy='string',
PosixProfile={
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
Role='string',
ServerId='string',
ExternalId='string'
)
HomeDirectory (string) -- The landing directory (folder) for a user when they log in to the server using the client. A HomeDirectory example is /bucket_name/home/mydirectory.
HomeDirectoryType (string) -- The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to PATH, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it to LOGICAL, you need to provide mappings in the HomeDirectoryMappings for how you want to make Amazon S3 or EFS paths visible to your users.
HomeDirectoryMappings (list) -- Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the Entry and Target pair, where Entry shows how the path is made visible and Target is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity and Access Management (IAM) role provides access to paths in Target. This value can only be set when HomeDirectoryType is set to LOGICAL.
The following is an Entry and Target pair example.
[ { "Entry": "/directory1", "Target": "/bucket_name/home/mydirectory" } ]
In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory ("chroot"). To do this, you can set Entry to / and set Target to the HomeDirectory parameter value.
The following is an Entry and Target pair example for chroot.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
(dict) -- Represents an object that contains entries and targets for HomeDirectoryMappings.
The following is an Entry and Target pair example for chroot.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Entry (string) -- Represents an entry for HomeDirectoryMappings.
Target (string) -- Represents the map target that is used in a HomeDirectoryMapEntry.
Policy (string) -- A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include ${Transfer:UserName}, ${Transfer:HomeDirectory}, and ${Transfer:HomeBucket}.
Note
This only applies when the domain of ServerId is S3. EFS does not use session policies.
For session policies, Amazon Web Services Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the Policy argument.
For an example of a session policy, see Example session policy.
For more information, see AssumeRole in the Amazon Web Services Security Token Service API Reference.
PosixProfile (dict) -- The full POSIX identity, including user ID (Uid), group ID (Gid), and any secondary group IDs (SecondaryGids), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
Uid (integer) -- The POSIX user ID used for all EFS operations by this user.
Gid (integer) -- The POSIX group ID used for all EFS operations by this user.
SecondaryGids (list) -- The secondary POSIX group IDs used for all EFS operations by this user.
Role (string) -- [REQUIRED]
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
ServerId (string) -- [REQUIRED]
A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.
ExternalId (string) -- [REQUIRED]
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "*YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
dict
Response Syntax
{
'ServerId': 'string',
'ExternalId': 'string'
}
Response Structure
(dict) --
ServerId (string) --
The ID of the server that the user is attached to.
ExternalId (string) --
The external ID of the group whose users have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ResourceNotFoundException
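The following is a minimal usage sketch; the server ID, directory SID, role ARN, and bucket path are placeholders, not values from this reference.
import boto3

client = boto3.client('transfer')

# Grant an Active Directory group access to an S3-backed server (illustrative values).
response = client.create_access(
    ServerId='s-1234567890abcdef0',
    ExternalId='S-1-1-12-1234567890-123456789-1234567890-1234',
    Role='arn:aws:iam::123456789012:role/example-transfer-access-role',
    HomeDirectoryType='LOGICAL',
    HomeDirectoryMappings=[
        {'Entry': '/', 'Target': '/example-bucket/home/mydirectory'},
    ],
)
print(response['ServerId'], response['ExternalId'])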
create_server(**kwargs)
Instantiates an auto-scaling virtual server based on the selected file transfer protocol in Amazon Web Services. When you make updates to your file transfer protocol-enabled server or when you work with users, use the service-generated ServerId property that is assigned to the newly created server.
See also: AWS API Documentation
Request Syntax
response = client.create_server(
Certificate='string',
Domain='S3'|'EFS',
EndpointDetails={
'AddressAllocationIds': [
'string',
],
'SubnetIds': [
'string',
],
'VpcEndpointId': 'string',
'VpcId': 'string',
'SecurityGroupIds': [
'string',
]
},
EndpointType='PUBLIC'|'VPC'|'VPC_ENDPOINT',
HostKey='string',
IdentityProviderDetails={
'Url': 'string',
'InvocationRole': 'string',
'DirectoryId': 'string',
'Function': 'string'
},
IdentityProviderType='SERVICE_MANAGED'|'API_GATEWAY'|'AWS_DIRECTORY_SERVICE'|'AWS_LAMBDA',
LoggingRole='string',
PostAuthenticationLoginBanner='string',
PreAuthenticationLoginBanner='string',
Protocols=[
'SFTP'|'FTP'|'FTPS',
],
ProtocolDetails={
'PassiveIp': 'string',
'TlsSessionResumptionMode': 'DISABLED'|'ENABLED'|'ENFORCED',
'SetStatOption': 'DEFAULT'|'ENABLE_NO_OP'
},
SecurityPolicyName='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
WorkflowDetails={
'OnUpload': [
{
'WorkflowId': 'string',
'ExecutionRole': 'string'
},
]
}
)
Certificate (string) -- The Amazon Resource Name (ARN) of the Amazon Web Services Certificate Manager (ACM) certificate. Required when Protocols is set to FTPS.
To request a new public certificate, see Request a public certificate in the Amazon Web Services Certificate Manager User Guide .
To import an existing certificate into ACM, see Importing certificates into ACM in the Amazon Web Services Certificate Manager User Guide .
To request a private certificate to use FTPS through private IP addresses, see Request a private certificate in the Amazon Web Services Certificate Manager User Guide .
Certificates with the following cryptographic algorithms and key sizes are supported:
Note
The certificate must be a valid SSL/TLS X.509 version 3 certificate with FQDN or IP address specified and information about the issuer.
The domain of the storage system that is used for file transfers. There are two domains available: Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS). The default value is S3.
Note
After the server is created, the domain cannot be changed.
EndpointDetails (dict) -- The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make it accessible only to resources within your VPC, or you can attach Elastic IP addresses and make it accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.
AddressAllocationIds (list) -- A list of address allocation IDs that are required to attach an Elastic IP address to your server's endpoint.
Note
This property can only be set when EndpointType is set to VPC and it is only valid in the UpdateServer API.
SubnetIds (list) -- A list of subnet IDs that are required to host your server endpoint in your VPC.
Note
This property can only be set when EndpointType is set to VPC.
VpcEndpointId (string) -- The ID of the VPC endpoint.
Note
This property can only be set when EndpointType is set to VPC_ENDPOINT.
For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.
VpcId (string) -- The VPC ID of the VPC in which a server's endpoint will be hosted.
Note
This property can only be set when EndpointType is set to VPC.
SecurityGroupIds (list) -- A list of security group IDs that are available to attach to your server's endpoint.
Note
This property can only be set when EndpointType is set to VPC.
You can edit the SecurityGroupIds property in the UpdateServer API only if you are changing the EndpointType from PUBLIC or VPC_ENDPOINT to VPC. To change security groups associated with your server's VPC endpoint after creation, use the Amazon EC2 ModifyVpcEndpoint API.
EndpointType (string) -- The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.
Note
After May 19, 2021, you won't be able to create a server using EndpointType=VPC_ENDPOINT in your Amazon Web Services account if your account hasn't already done so before May 19, 2021. If you have already created servers with EndpointType=VPC_ENDPOINT in your Amazon Web Services account on or before May 19, 2021, you will not be affected. After this date, use EndpointType=VPC.
For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.
It is recommended that you use VPC as the EndpointType. With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with EndpointType set to VPC_ENDPOINT.
HostKey (string) -- The RSA, ECDSA, or ED25519 private key to use for your server.
Use the following command to generate an RSA 2048 bit key with no passphrase:
ssh-keygen -t rsa -b 2048 -N "" -m PEM -f my-new-server-key
Use a minimum value of 2048 for the -b option. You can create a stronger key by using 3072 or 4096.
Use the following command to generate an ECDSA 256 bit key with no passphrase:
ssh-keygen -t ecdsa -b 256 -N "" -m PEM -f my-new-server-key
Valid values for the -b option for ECDSA are 256, 384, and 521.
Use the following command to generate an ED25519 key with no passphrase:
ssh-keygen -t ed25519 -N "" -f my-new-server-key
For all of these commands, you can replace my-new-server-key with a string of your choice.
Warning
If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.
For more information, see Change the host key for your SFTP-enabled server in the Amazon Web Services Transfer Family User Guide .
IdentityProviderDetails (dict) -- Required when IdentityProviderType is set to AWS_DIRECTORY_SERVICE or API_GATEWAY. Accepts an array containing all of the information required to use a directory in AWS_DIRECTORY_SERVICE or invoke a customer-supplied authentication API, including the API Gateway URL. Not required when IdentityProviderType is set to SERVICE_MANAGED.
Url (string) -- Provides the location of the service endpoint used to authenticate users.
InvocationRole (string) -- Provides the type of InvocationRole used to authenticate the user account.
DirectoryId (string) -- The identifier of the Amazon Web Services Directory Service directory that you want to stop sharing.
Function (string) -- The ARN for a Lambda function to use for the identity provider.
IdentityProviderType (string) -- Specifies the mode of authentication for a server. The default value is SERVICE_MANAGED, which allows you to store and access user credentials within the Amazon Web Services Transfer Family service.
Use AWS_DIRECTORY_SERVICE to provide access to Active Directory groups in Amazon Web Services Managed Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the IdentityProviderDetails parameter.
Use the API_GATEWAY value to integrate with an identity provider of your choosing. The API_GATEWAY setting requires you to provide an API Gateway endpoint URL to call for authentication by using the IdentityProviderDetails parameter.
Use the AWS_LAMBDA value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the Function parameter for the IdentityProviderDetails data type.
PostAuthenticationLoginBanner (string) -- Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.
Note
The SFTP protocol does not support post-authentication display banners.
PreAuthenticationLoginBanner (string) -- Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.
Protocols (list) -- Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS (File Transfer Protocol Secure): File transfer with TLS encryption
FTP (File Transfer Protocol): Unencrypted file transfer
Note
If you select FTPS, you must choose a certificate stored in Amazon Web Services Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol includes either FTP or FTPS, then the EndpointType must be VPC and the IdentityProviderType must be AWS_DIRECTORY_SERVICE or API_GATEWAY.
If Protocol includes FTP, then AddressAllocationIds cannot be associated.
If Protocol is set only to SFTP, the EndpointType can be set to PUBLIC and the IdentityProviderType can be set to SERVICE_MANAGED.
ProtocolDetails (dict) -- The protocol settings that are configured for your server.
Use the PassiveIp parameter to indicate passive mode (for FTP and FTPS protocols). Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer.
Use the SetStatOption to ignore the error that is generated when the client attempts to use SETSTAT on a file you are uploading to an S3 bucket. Set the value to ENABLE_NO_OP to have the Transfer Family server ignore the SETSTAT command, and upload files without needing to make any changes to your SFTP client. Note that with SetStatOption set to ENABLE_NO_OP, Transfer generates a log entry to CloudWatch Logs, so you can determine when the client is making a SETSTAT call.
Use the TlsSessionResumptionMode parameter to determine whether or not your Transfer server resumes recent, negotiated sessions through a unique session ID.
PassiveIp (string) -- Indicates passive mode, for FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For example:
aws transfer update-server --protocol-details PassiveIp=0.0.0.0
Replace 0.0.0.0 in the example above with the actual IP address you want to use.
Note
If you change the PassiveIp value, you must stop and then restart your Transfer Family server for the change to take effect. For details on using passive mode (PASV) in a NAT environment, see Configuring your FTPS server behind a firewall or NAT with Transfer Family.
TlsSessionResumptionMode (string) -- A property used with Transfer Family servers that use the FTPS protocol. TLS Session Resumption provides a mechanism to resume or share a negotiated secret key between the control and data connection for an FTPS session. TlsSessionResumptionMode determines whether or not the server resumes recent, negotiated sessions through a unique session ID. This property is available during CreateServer and UpdateServer calls. If a TlsSessionResumptionMode value is not specified during CreateServer, it is set to ENFORCED by default.
DISABLED: the server does not process TLS session resumption client requests and creates a new TLS session for each request.
ENABLED: the server processes and accepts clients that are performing TLS session resumption. The server doesn't reject client data connections that do not perform the TLS session resumption client processing.
ENFORCED: the server processes and accepts clients that are performing TLS session resumption. The server rejects client data connections that do not perform the TLS session resumption client processing. Before you set the value to ENFORCED, test your clients.
Note
Not all FTPS clients perform TLS session resumption. So, if you choose to enforce TLS session resumption, you prevent any connections from FTPS clients that don't perform the protocol negotiation. To determine whether or not you can use the ENFORCED value, you need to test your clients.
SetStatOption (string) -- Use the SetStatOption to ignore the error that is generated when the client attempts to use SETSTAT on a file you are uploading to an S3 bucket.
Some SFTP file transfer clients can attempt to change the attributes of remote files, including timestamp and permissions, using commands such as SETSTAT when uploading the file. However, these commands are not compatible with object storage systems, such as Amazon S3. Due to this incompatibility, file uploads from these clients can result in errors even when the file is otherwise successfully uploaded.
Set the value to ENABLE_NO_OP to have the Transfer Family server ignore the SETSTAT command, and upload files without needing to make any changes to your SFTP client. While the SetStatOption ENABLE_NO_OP setting ignores the error, it does generate a log entry in Amazon CloudWatch Logs, so you can determine when the client is making a SETSTAT call.
Note
If you want to preserve the original timestamp for your file, and modify other file attributes using SETSTAT, you can use Amazon EFS as backend storage with Transfer Family.
SecurityPolicyName (string) -- Specifies the name of the security policy that is attached to the server.
Tags (list) -- Key-value pairs that can be used to group and search for servers.
(dict) -- Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group and assign the values Research and Accounting to that group.
Key (string) -- The name assigned to the tag that you create.
Value (string) -- Contains one or more values that you assigned to the key name you create.
WorkflowDetails (dict) -- Specifies the workflow ID for the workflow to assign and the execution role used for executing the workflow.
OnUpload (list) -- A trigger that starts a workflow: the workflow begins to execute after a file is uploaded.
To remove an associated workflow from a server, you can provide an empty OnUpload object, as in the following example.
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{"OnUpload":[]}'
(dict) -- Specifies the workflow ID for the workflow to assign and the execution role used for executing the workflow.
WorkflowId (string) -- A unique identifier for the workflow.
ExecutionRole (string) -- Includes the necessary permissions for S3, EFS, and Lambda operations that Transfer can assume, so that all workflow steps can operate on the required resources.
dict
Response Syntax
{
'ServerId': 'string'
}
Response Structure
(dict) --
ServerId (string) --
The service-assigned ID of the server that is created.
Exceptions
Transfer.Client.exceptions.AccessDeniedException
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
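The following is a minimal sketch of creating a publicly accessible, S3-backed SFTP server with a service-managed identity provider; the tag values are placeholders.
import boto3

client = boto3.client('transfer')

# Create a basic SFTP server; all parameters shown here are optional in the API.
response = client.create_server(
    Domain='S3',
    EndpointType='PUBLIC',
    IdentityProviderType='SERVICE_MANAGED',
    Protocols=['SFTP'],
    Tags=[{'Key': 'Group', 'Value': 'Research'}],
)
print(response['ServerId'])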
create_user(**kwargs)
Creates a user and associates them with an existing file transfer protocol-enabled server. You can only create and associate users with servers that have the IdentityProviderType set to SERVICE_MANAGED. Using parameters for CreateUser, you can specify the user name, set the home directory, store the user's public key, and assign the user's Amazon Web Services Identity and Access Management (IAM) role. You can also optionally add a session policy, and assign metadata with tags that can be used to group and search for users.
See also: AWS API Documentation
Request Syntax
response = client.create_user(
HomeDirectory='string',
HomeDirectoryType='PATH'|'LOGICAL',
HomeDirectoryMappings=[
{
'Entry': 'string',
'Target': 'string'
},
],
Policy='string',
PosixProfile={
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
Role='string',
ServerId='string',
SshPublicKeyBody='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
UserName='string'
)
HomeDirectory (string) -- The landing directory (folder) for a user when they log in to the server using the client. A HomeDirectory example is /bucket_name/home/mydirectory.
HomeDirectoryType (string) -- The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to PATH, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it to LOGICAL, you need to provide mappings in the HomeDirectoryMappings for how you want to make Amazon S3 or EFS paths visible to your users.
HomeDirectoryMappings (list) -- Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the Entry and Target pair, where Entry shows how the path is made visible and Target is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity and Access Management (IAM) role provides access to paths in Target. This value can only be set when HomeDirectoryType is set to LOGICAL.
The following is an Entry and Target pair example.
[ { "Entry": "/directory1", "Target": "/bucket_name/home/mydirectory" } ]
In most cases, you can use this value instead of the session policy to lock your user down to the designated home directory ("chroot"). To do this, you can set Entry to / and set Target to the HomeDirectory parameter value.
The following is an Entry and Target pair example for chroot.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
(dict) -- Represents an object that contains entries and targets for HomeDirectoryMappings.
The following is an Entry and Target pair example for chroot.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Entry (string) -- Represents an entry for HomeDirectoryMappings.
Target (string) -- Represents the map target that is used in a HomeDirectoryMapEntry.
Policy (string) -- A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include ${Transfer:UserName}, ${Transfer:HomeDirectory}, and ${Transfer:HomeBucket}.
Note
This only applies when the domain of ServerId is S3. EFS does not use session policies.
For session policies, Amazon Web Services Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the Policy argument.
For an example of a session policy, see Example session policy.
For more information, see AssumeRole in the Amazon Web Services Security Token Service API Reference.
PosixProfile (dict) -- Specifies the full POSIX identity, including user ID (Uid), group ID (Gid), and any secondary group IDs (SecondaryGids), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in Amazon EFS determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
Uid (integer) -- The POSIX user ID used for all EFS operations by this user.
Gid (integer) -- The POSIX group ID used for all EFS operations by this user.
SecondaryGids (list) -- The secondary POSIX group IDs used for all EFS operations by this user.
Role (string) -- [REQUIRED]
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
ServerId (string) -- [REQUIRED]
A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.
SshPublicKeyBody (string) -- The public portion of the Secure Shell (SSH) key used to authenticate the user to the server.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
Tags (list) -- Key-value pairs that can be used to group and search for users. Tags are metadata attached to users for any purpose.
(dict) -- Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group and assign the values Research and Accounting to that group.
Key (string) -- The name assigned to the tag that you create.
Value (string) -- Contains one or more values that you assigned to the key name you create.
UserName (string) -- [REQUIRED]
A unique string that identifies a user and is associated with a ServerId. This user name must be a minimum of 3 and a maximum of 100 characters long. The following are valid characters: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The user name can't start with a hyphen, period, or at sign.
dict
Response Syntax
{
'ServerId': 'string',
'UserName': 'string'
}
Response Structure
(dict) --
ServerId (string) --
The ID of the server that the user is attached to.
UserName (string) --
A unique string that identifies a user account associated with a server.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ResourceNotFoundException
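The following is a minimal sketch of creating a service-managed user; the server ID, role ARN, bucket path, and key material are placeholders.
import boto3

client = boto3.client('transfer')

# Create a user on a service-managed server (illustrative values throughout).
response = client.create_user(
    ServerId='s-1234567890abcdef0',
    UserName='example-user',
    Role='arn:aws:iam::123456789012:role/example-transfer-user-role',
    HomeDirectory='/example-bucket/home/example-user',
    SshPublicKeyBody='ssh-ed25519 AAAA... user@example',  # placeholder public key
)
print(response['ServerId'], response['UserName'])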
create_workflow(**kwargs)
Allows you to create a workflow with specified steps and step details that the workflow invokes after file transfer completes. After creating a workflow, you can associate the workflow created with any transfer servers by specifying the workflow-details field in CreateServer and UpdateServer operations.
See also: AWS API Documentation
Request Syntax
response = client.create_workflow(
Description='string',
Steps=[
{
'Type': 'COPY'|'CUSTOM'|'TAG'|'DELETE',
'CopyStepDetails': {
'Name': 'string',
'DestinationFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'OverwriteExisting': 'TRUE'|'FALSE',
'SourceFileLocation': 'string'
},
'CustomStepDetails': {
'Name': 'string',
'Target': 'string',
'TimeoutSeconds': 123,
'SourceFileLocation': 'string'
},
'DeleteStepDetails': {
'Name': 'string',
'SourceFileLocation': 'string'
},
'TagStepDetails': {
'Name': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
],
'SourceFileLocation': 'string'
}
},
],
OnExceptionSteps=[
{
'Type': 'COPY'|'CUSTOM'|'TAG'|'DELETE',
'CopyStepDetails': {
'Name': 'string',
'DestinationFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'OverwriteExisting': 'TRUE'|'FALSE',
'SourceFileLocation': 'string'
},
'CustomStepDetails': {
'Name': 'string',
'Target': 'string',
'TimeoutSeconds': 123,
'SourceFileLocation': 'string'
},
'DeleteStepDetails': {
'Name': 'string',
'SourceFileLocation': 'string'
},
'TagStepDetails': {
'Name': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
],
'SourceFileLocation': 'string'
}
},
],
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
Steps (list) -- [REQUIRED]
Specifies the details for the steps that are in the specified workflow.
The TYPE specifies which of the following actions is being taken for this step.
Note
Currently, copying and tagging are supported only on S3.
For file location, you specify either the S3 bucket and key, or the EFS filesystem ID and path.
(dict) -- The basic building block of a workflow.
Type (string) -- Currently, the following step types are supported.
CopyStepDetails (dict) -- Details for a step that performs a file copy.
Consists of the following values: a description, an S3 location for the destination of the file copy, and a flag that indicates whether or not to overwrite an existing file of the same name (the default is FALSE).
Name (string) -- The name of the step, used as an identifier.
DestinationFileLocation (dict) -- Specifies the location for the file being copied. Only applicable for Copy type workflow steps. Use ${Transfer:username} in this field to parametrize the destination prefix by username.
S3FileLocation (dict) -- Specifies the details for the S3 file being copied.
Bucket (string) -- Specifies the S3 bucket for the customer input file.
Key (string) -- The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
EfsFileLocation (dict) -- Reserved for future use.
FileSystemId (string) -- The ID of the file system, assigned by Amazon EFS.
Path (string) -- The pathname for the folder being used by a workflow.
OverwriteExisting (string) -- A flag that indicates whether or not to overwrite an existing file of the same name. The default is FALSE.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
CustomStepDetails (dict) -- Details for a step that invokes a lambda function.
Consists of the lambda function name, target, and timeout (in seconds).
Name (string) -- The name of the step, used as an identifier.
Target (string) -- The ARN for the lambda function that is being called.
TimeoutSeconds (integer) -- Timeout, in seconds, for the step.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
DeleteStepDetails (dict) -- Details for a step that deletes the file.
Name (string) -- The name of the step, used as an identifier.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
TagStepDetails (dict) -- Details for a step that creates one or more tags.
You specify one or more tags: each tag contains a key/value pair.
Name (string) -- The name of the step, used as an identifier.
Tags (list) -- Array that contains from 1 to 10 key/value pairs.
(dict) -- Specifies the key-value pair that is assigned to a file during the execution of a Tagging step.
Key (string) -- The name assigned to the tag that you create.
Value (string) -- The value that corresponds to the key.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
OnExceptionSteps (list) -- Specifies the steps (actions) to take if errors are encountered during execution of the workflow.
Note
For custom steps, the lambda function needs to send FAILURE to the callback API to kick off the exception steps. Additionally, if the lambda does not send SUCCESS before it times out, the exception steps are executed.
(dict) -- The basic building block of a workflow.
Type (string) -- Currently, the following step types are supported.
CopyStepDetails (dict) -- Details for a step that performs a file copy.
Consists of the following values: a description, an S3 location for the destination of the file copy, and a flag that indicates whether or not to overwrite an existing file of the same name (the default is FALSE).
Name (string) -- The name of the step, used as an identifier.
DestinationFileLocation (dict) -- Specifies the location for the file being copied. Only applicable for Copy type workflow steps. Use ${Transfer:username} in this field to parametrize the destination prefix by username.
S3FileLocation (dict) -- Specifies the details for the S3 file being copied.
Bucket (string) -- Specifies the S3 bucket for the customer input file.
Key (string) -- The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
EfsFileLocation (dict) -- Reserved for future use.
FileSystemId (string) -- The ID of the file system, assigned by Amazon EFS.
Path (string) -- The pathname for the folder being used by a workflow.
OverwriteExisting (string) -- A flag that indicates whether or not to overwrite an existing file of the same name. The default is FALSE.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
CustomStepDetails (dict) -- Details for a step that invokes a lambda function.
Consists of the lambda function name, target, and timeout (in seconds).
Name (string) -- The name of the step, used as an identifier.
Target (string) -- The ARN for the lambda function that is being called.
TimeoutSeconds (integer) -- Timeout, in seconds, for the step.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
DeleteStepDetails (dict) -- Details for a step that deletes the file.
Name (string) -- The name of the step, used as an identifier.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
TagStepDetails (dict) -- Details for a step that creates one or more tags.
You specify one or more tags: each tag contains a key/value pair.
Name (string) -- The name of the step, used as an identifier.
Tags (list) -- Array that contains from 1 to 10 key/value pairs.
(dict) -- Specifies the key-value pair that is assigned to a file during the execution of a Tagging step.
Key (string) -- The name assigned to the tag that you create.
Value (string) -- The value that corresponds to the key.
SourceFileLocation (string) -- Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Tags (list) -- Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.
(dict) -- Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group and assign the values Research and Accounting to that group.
Key (string) -- The name assigned to the tag that you create.
Value (string) -- Contains one or more values that you assigned to the key name you create.
dict
Response Syntax
{
'WorkflowId': 'string'
}
Response Structure
(dict) --
WorkflowId (string) --
A unique identifier for the workflow.
Exceptions
Transfer.Client.exceptions.AccessDeniedException
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ThrottlingException
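The following is a minimal sketch of a one-step workflow that copies each uploaded file into another bucket; the bucket name, prefix, and tag values are placeholders.
import boto3

client = boto3.client('transfer')

# Define a single COPY step; the destination bucket and key prefix are illustrative.
response = client.create_workflow(
    Description='Copy uploads to an archive bucket',
    Steps=[
        {
            'Type': 'COPY',
            'CopyStepDetails': {
                'Name': 'copy-to-archive',
                'DestinationFileLocation': {
                    'S3FileLocation': {'Bucket': 'example-archive-bucket', 'Key': 'archive/'},
                },
                'OverwriteExisting': 'FALSE',
            },
        },
    ],
    Tags=[{'Key': 'Group', 'Value': 'Research'}],
)
print(response['WorkflowId'])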
delete_access(**kwargs)
Allows you to delete the access specified in the ServerId and ExternalId parameters.
See also: AWS API Documentation
Request Syntax
response = client.delete_access(
ServerId='string',
ExternalId='string'
)
[REQUIRED]
A system-assigned unique identifier for a server that has this user assigned.
[REQUIRED]
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "*YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
None
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
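The following is a minimal sketch; the server ID and directory SID are placeholders.
import boto3

client = boto3.client('transfer')
# Remove a group's access from a server (illustrative identifiers).
client.delete_access(
    ServerId='s-1234567890abcdef0',
    ExternalId='S-1-1-12-1234567890-123456789-1234567890-1234',
)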
delete_server(**kwargs)
Deletes the file transfer protocol-enabled server that you specify.
No response returns from this operation.
See also: AWS API Documentation
Request Syntax
response = client.delete_server(
ServerId='string'
)
[REQUIRED]
A unique system-assigned identifier for a server instance.
Exceptions
Transfer.Client.exceptions.AccessDeniedException
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
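The following is a minimal sketch; the server ID is a placeholder.
import boto3

client = boto3.client('transfer')
# Deleting a server removes it and all of its users.
client.delete_server(ServerId='s-1234567890abcdef0')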
delete_ssh_public_key(**kwargs)
Deletes a user's Secure Shell (SSH) public key.
See also: AWS API Documentation
Request Syntax
response = client.delete_ssh_public_key(
ServerId='string',
SshPublicKeyId='string',
UserName='string'
)
[REQUIRED]
A system-assigned unique identifier for a file transfer protocol-enabled server instance that has the user assigned to it.
[REQUIRED]
A unique identifier used to reference your user's specific SSH key.
[REQUIRED]
A unique string that identifies a user whose public key is being deleted.
None
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
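The following is a minimal sketch; all identifiers are placeholders.
import boto3

client = boto3.client('transfer')
# Remove one SSH public key from a user (illustrative identifiers).
client.delete_ssh_public_key(
    ServerId='s-1234567890abcdef0',
    SshPublicKeyId='key-1234567890abcdef0',
    UserName='example-user',
)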
delete_user(**kwargs)
Deletes the user belonging to a file transfer protocol-enabled server you specify.
No response returns from this operation.
Note
When you delete a user from a server, the user's information is lost.
See also: AWS API Documentation
Request Syntax
response = client.delete_user(
ServerId='string',
UserName='string'
)
[REQUIRED]
A system-assigned unique identifier for a server instance that has the user assigned to it.
[REQUIRED]
A unique string that identifies a user that is being deleted from a server.
None
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
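The following is a minimal sketch; the server ID and user name are placeholders.
import boto3

client = boto3.client('transfer')
client.delete_user(ServerId='s-1234567890abcdef0', UserName='example-user')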
delete_workflow(**kwargs)
Deletes the specified workflow.
See also: AWS API Documentation
Request Syntax
response = client.delete_workflow(
WorkflowId='string'
)
[REQUIRED]
A unique identifier for the workflow.
Exceptions
Transfer.Client.exceptions.AccessDeniedException
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
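The following is a minimal sketch; the workflow ID is a placeholder.
import boto3

client = boto3.client('transfer')
client.delete_workflow(WorkflowId='w-1234567890abcdef0')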
describe_access(**kwargs)
Describes the access that is assigned to the specific file transfer protocol-enabled server, as identified by its ServerId property and its ExternalId.
The response from this call returns the properties of the access that is associated with the ServerId value that was specified.
See also: AWS API Documentation
Request Syntax
response = client.describe_access(
ServerId='string',
ExternalId='string'
)
[REQUIRED]
A system-assigned unique identifier for a server that has this access assigned.
[REQUIRED]
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "*YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
dict
Response Syntax
{
'ServerId': 'string',
'Access': {
'HomeDirectory': 'string',
'HomeDirectoryMappings': [
{
'Entry': 'string',
'Target': 'string'
},
],
'HomeDirectoryType': 'PATH'|'LOGICAL',
'Policy': 'string',
'PosixProfile': {
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
'Role': 'string',
'ExternalId': 'string'
}
}
Response Structure
(dict) --
ServerId (string) --
A system-assigned unique identifier for a server that has this access assigned.
Access (dict) --
The external ID of the server that the access is attached to.
HomeDirectory (string) --
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory example is /bucket_name/home/mydirectory.
HomeDirectoryMappings (list) --
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the Entry and Target pair, where Entry shows how the path is made visible and Target is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity and Access Management (IAM) role provides access to paths in Target. This value can only be set when HomeDirectoryType is set to LOGICAL.
In most cases, you can use this value instead of the session policy to lock down the associated access to the designated home directory ("chroot"). To do this, you can set Entry to / and set Target to the HomeDirectory parameter value.
(dict) --
Represents an object that contains entries and targets for HomeDirectoryMappings.
The following is an Entry and Target pair example for chroot.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Entry (string) --
Represents an entry for HomeDirectoryMappings.
Target (string) --
Represents the map target that is used in a HomeDirectoryMapEntry.
HomeDirectoryType (string) --
The type of landing directory (folder) you want your users' home directory to be when they log into the server. If you set it to PATH, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it to LOGICAL, you need to provide mappings in the HomeDirectoryMappings for how you want to make Amazon S3 or EFS paths visible to your users.
Policy (string) --
A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include ${Transfer:UserName}, ${Transfer:HomeDirectory}, and ${Transfer:HomeBucket}.
PosixProfile (dict) --
The full POSIX identity, including user ID (Uid), group ID (Gid), and any secondary group IDs (SecondaryGids), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
Uid (integer) --
The POSIX user ID used for all EFS operations by this user.
Gid (integer) --
The POSIX group ID used for all EFS operations by this user.
SecondaryGids (list) --
The secondary POSIX group IDs used for all EFS operations by this user.
Role (string) --
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
ExternalId (string) --
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "*YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
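The following is a minimal sketch; the server ID and directory SID are placeholders.
import boto3

client = boto3.client('transfer')
# Retrieve the access record for a directory group (illustrative identifiers).
response = client.describe_access(
    ServerId='s-1234567890abcdef0',
    ExternalId='S-1-1-12-1234567890-123456789-1234567890-1234',
)
print(response['Access']['Role'])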
describe_execution(**kwargs)
You can use DescribeExecution to check the details of the execution of the specified workflow.
See also: AWS API Documentation
Request Syntax
response = client.describe_execution(
ExecutionId='string',
WorkflowId='string'
)
[REQUIRED]
A unique identifier for the execution of a workflow.
[REQUIRED]
A unique identifier for the workflow.
dict
Response Syntax
{
'WorkflowId': 'string',
'Execution': {
'ExecutionId': 'string',
'InitialFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string',
'VersionId': 'string',
'Etag': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'ServiceMetadata': {
'UserDetails': {
'UserName': 'string',
'ServerId': 'string',
'SessionId': 'string'
}
},
'ExecutionRole': 'string',
'LoggingConfiguration': {
'LoggingRole': 'string',
'LogGroupName': 'string'
},
'PosixProfile': {
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
'Status': 'IN_PROGRESS'|'COMPLETED'|'EXCEPTION'|'HANDLING_EXCEPTION',
'Results': {
'Steps': [
{
'StepType': 'COPY'|'CUSTOM'|'TAG'|'DELETE',
'Outputs': 'string',
'Error': {
'Type': 'PERMISSION_DENIED'|'CUSTOM_STEP_FAILED'|'THROTTLED'|'ALREADY_EXISTS'|'NOT_FOUND'|'BAD_REQUEST'|'TIMEOUT'|'INTERNAL_SERVER_ERROR',
'Message': 'string'
}
},
],
'OnExceptionSteps': [
{
'StepType': 'COPY'|'CUSTOM'|'TAG'|'DELETE',
'Outputs': 'string',
'Error': {
'Type': 'PERMISSION_DENIED'|'CUSTOM_STEP_FAILED'|'THROTTLED'|'ALREADY_EXISTS'|'NOT_FOUND'|'BAD_REQUEST'|'TIMEOUT'|'INTERNAL_SERVER_ERROR',
'Message': 'string'
}
},
]
}
}
}
Response Structure
(dict) --
WorkflowId (string) --
A unique identifier for the workflow.
Execution (dict) --
The structure that contains the details of the workflow's execution.
ExecutionId (string) --
A unique identifier for the execution of a workflow.
InitialFileLocation (dict) --
A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.
S3FileLocation (dict) --
Specifies the S3 details for the file being used, such as bucket, Etag, and so forth.
Bucket (string) --
Specifies the S3 bucket that contains the file being used.
Key (string) --
The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
VersionId (string) --
Specifies the file version.
Etag (string) --
The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata.
EfsFileLocation (dict) --
Specifies the Amazon EFS ID and the path for the file being used.
FileSystemId (string) --
The ID of the file system, assigned by Amazon EFS.
Path (string) --
The pathname for the folder being used by a workflow.
ServiceMetadata (dict) --
A container object for the session details associated with a workflow.
UserDetails (dict) --
The Server ID (ServerId), Session ID (SessionId) and user (UserName) make up the UserDetails.
UserName (string) --
A unique string that identifies a user account associated with a server.
ServerId (string) --
The system-assigned unique identifier for a Transfer server instance.
SessionId (string) --
The system-assigned unique identifier for a session that corresponds to the workflow.
ExecutionRole (string) --
The IAM role associated with the execution.
LoggingConfiguration (dict) --
The IAM logging role associated with the execution.
LoggingRole (string) --
Specifies the Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, user activity can be viewed in your CloudWatch logs.
LogGroupName (string) --
The name of the CloudWatch logging group for the Amazon Web Services Transfer server to which this workflow belongs.
PosixProfile (dict) --
The full POSIX identity, including user ID (Uid), group ID (Gid), and any secondary group IDs (SecondaryGids), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
Uid (integer) --
The POSIX user ID used for all EFS operations by this user.
Gid (integer) --
The POSIX group ID used for all EFS operations by this user.
SecondaryGids (list) --
The secondary POSIX group IDs used for all EFS operations by this user.
Status (string) --
The status of the execution. Can be in progress, completed, exception encountered, or handling the exception.
Results (dict) --
A structure that describes the execution results. This includes a list of the steps along with the details of each step, error type and message (if any), and the OnExceptionSteps
structure.
Steps (list) --
Specifies the details for the steps that are in the specified workflow.
(dict) --
Specifies the following details for the step: error (if any), outputs (if any), and the step type.
StepType (string) --
One of the available step types.
Outputs (string) --
The values for the key/value pair applied as a tag to the file. Only applicable if the step type is TAG.
Error (dict) --
Specifies the details for an error, if it occurred during execution of the specified workflow step.
Type (string) --
Specifies the error type.
ALREADY_EXISTS: occurs for a copy step, if the overwrite option is not selected and a file with the same name already exists in the target location.
BAD_REQUEST: a general bad request: for example, a step that attempts to tag an EFS file returns BAD_REQUEST, as only S3 files can be tagged.
CUSTOM_STEP_FAILED: occurs when the custom step provided a callback that indicates failure.
INTERNAL_SERVER_ERROR: a catch-all error that can occur for a variety of reasons.
NOT_FOUND: occurs when a requested entity, for example a source file for a copy step, does not exist.
PERMISSION_DENIED: occurs if your policy does not contain the correct permissions to complete one or more of the steps in the workflow.
TIMEOUT: occurs when the execution times out.
Note
You can set the TimeoutSeconds for a custom step, anywhere from 1 second to 1800 seconds (30 minutes).
THROTTLED: occurs if you exceed the new execution refill rate of one workflow per second.
Message (string) --
Specifies the descriptive message that corresponds to the ErrorType.
OnExceptionSteps (list) --
Specifies the steps (actions) to take if errors are encountered during execution of the workflow.
(dict) --
Specifies the following details for the step: error (if any), outputs (if any), and the step type.
StepType (string) --
One of the available step types.
Outputs (string) --
The values for the key/value pair applied as a tag to the file. Only applicable if the step type is TAG.
Error (dict) --
Specifies the details for an error, if it occurred during execution of the specified workflow step.
Type (string) --
Specifies the error type.
ALREADY_EXISTS: occurs for a copy step, if the overwrite option is not selected and a file with the same name already exists in the target location.
BAD_REQUEST: a general bad request: for example, a step that attempts to tag an EFS file returns BAD_REQUEST, as only S3 files can be tagged.
CUSTOM_STEP_FAILED: occurs when the custom step provided a callback that indicates failure.
INTERNAL_SERVER_ERROR: a catch-all error that can occur for a variety of reasons.
NOT_FOUND: occurs when a requested entity, for example a source file for a copy step, does not exist.
PERMISSION_DENIED: occurs if your policy does not contain the correct permissions to complete one or more of the steps in the workflow.
TIMEOUT: occurs when the execution times out.
Note
You can set the TimeoutSeconds for a custom step, anywhere from 1 second to 1800 seconds (30 minutes).
THROTTLED: occurs if you exceed the new execution refill rate of one workflow per second.
Message (string) --
Specifies the descriptive message that corresponds to the ErrorType.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
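As an illustration of how these fields fit together, here is a minimal sketch (not part of the API reference; the workflow and execution IDs are placeholders) that calls describe_execution and walks the Results.Steps list, printing any error type and message it finds:
import boto3

transfer = boto3.client('transfer')

# Placeholder identifiers; substitute your own workflow and execution IDs.
execution = transfer.describe_execution(
    WorkflowId='w-1234567890abcdef0',
    ExecutionId='00000000-0000-0000-0000-000000000000'
)['Execution']

print('Status:', execution['Status'])
for step in execution.get('Results', {}).get('Steps', []):
    error = step.get('Error')
    if error:
        print(step['StepType'], 'failed:', error['Type'], '-', error.get('Message', ''))
    else:
        print(step['StepType'], 'outputs:', step.get('Outputs'))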
describe_security_policy
(**kwargs)¶Describes the security policy that is attached to your file transfer protocol-enabled server. The response contains a description of the security policy's properties. For more information about security policies, see Working with security policies .
See also: AWS API Documentation
Request Syntax
response = client.describe_security_policy(
SecurityPolicyName='string'
)
[REQUIRED]
Specifies the name of the security policy that is attached to the server.
{
'SecurityPolicy': {
'Fips': True|False,
'SecurityPolicyName': 'string',
'SshCiphers': [
'string',
],
'SshKexs': [
'string',
],
'SshMacs': [
'string',
],
'TlsCiphers': [
'string',
]
}
}
Response Structure
An array containing the properties of the security policy.
Specifies whether this policy enables Federal Information Processing Standards (FIPS).
Specifies the name of the security policy that is attached to the server.
Specifies the enabled Secure Shell (SSH) cipher encryption algorithms in the security policy that is attached to the server.
Specifies the enabled SSH key exchange (KEX) encryption algorithms in the security policy that is attached to the server.
Specifies the enabled SSH message authentication code (MAC) encryption algorithms in the security policy that is attached to the server.
Specifies the enabled Transport Layer Security (TLS) cipher encryption algorithms in the security policy that is attached to the server.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
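For example, the following minimal sketch (assuming the default credentials chain and at least one available policy) pairs list_security_policies with describe_security_policy and prints the algorithms enabled by the first policy returned:
import boto3

transfer = boto3.client('transfer')

# Describe the first security policy name returned by list_security_policies.
names = transfer.list_security_policies()['SecurityPolicyNames']
policy = transfer.describe_security_policy(SecurityPolicyName=names[0])['SecurityPolicy']

print('Policy:', policy['SecurityPolicyName'])
print('FIPS enabled:', policy.get('Fips', False))
print('SSH ciphers:', ', '.join(policy['SshCiphers']))
print('SSH KEXs:', ', '.join(policy['SshKexs']))
print('SSH MACs:', ', '.join(policy['SshMacs']))
print('TLS ciphers:', ', '.join(policy['TlsCiphers']))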
describe_server
(**kwargs)¶Describes a file transfer protocol-enabled server that you specify by passing the ServerId
parameter.
The response contains a description of a server's properties. When you set EndpointType
to VPC, the response will contain the EndpointDetails
.
See also: AWS API Documentation
Request Syntax
response = client.describe_server(
ServerId='string'
)
[REQUIRED]
A system-assigned unique identifier for a server.
{
'Server': {
'Arn': 'string',
'Certificate': 'string',
'ProtocolDetails': {
'PassiveIp': 'string',
'TlsSessionResumptionMode': 'DISABLED'|'ENABLED'|'ENFORCED',
'SetStatOption': 'DEFAULT'|'ENABLE_NO_OP'
},
'Domain': 'S3'|'EFS',
'EndpointDetails': {
'AddressAllocationIds': [
'string',
],
'SubnetIds': [
'string',
],
'VpcEndpointId': 'string',
'VpcId': 'string',
'SecurityGroupIds': [
'string',
]
},
'EndpointType': 'PUBLIC'|'VPC'|'VPC_ENDPOINT',
'HostKeyFingerprint': 'string',
'IdentityProviderDetails': {
'Url': 'string',
'InvocationRole': 'string',
'DirectoryId': 'string',
'Function': 'string'
},
'IdentityProviderType': 'SERVICE_MANAGED'|'API_GATEWAY'|'AWS_DIRECTORY_SERVICE'|'AWS_LAMBDA',
'LoggingRole': 'string',
'PostAuthenticationLoginBanner': 'string',
'PreAuthenticationLoginBanner': 'string',
'Protocols': [
'SFTP'|'FTP'|'FTPS',
],
'SecurityPolicyName': 'string',
'ServerId': 'string',
'State': 'OFFLINE'|'ONLINE'|'STARTING'|'STOPPING'|'START_FAILED'|'STOP_FAILED',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
],
'UserCount': 123,
'WorkflowDetails': {
'OnUpload': [
{
'WorkflowId': 'string',
'ExecutionRole': 'string'
},
]
}
}
}
Response Structure
An array containing the properties of a server with the ServerID
you specified.
Specifies the unique Amazon Resource Name (ARN) of the server.
Specifies the ARN of the Amazon Web Services Certificate Manager (ACM) certificate. Required when Protocols
is set to FTPS
.
The protocol settings that are configured for your server.
Use the PassiveIp
parameter to indicate passive mode. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer.
Indicates passive mode, for FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For example:
aws transfer update-server --protocol-details PassiveIp=0.0.0.0
Replace 0.0.0.0 in the example above with the actual IP address you want to use.
Note
If you change the PassiveIp
value, you must stop and then restart your Transfer Family server for the change to take effect. For details on using passive mode (PASV) in a NAT environment, see Configuring your FTPS server behind a firewall or NAT with Transfer Family .
A property used with Transfer Family servers that use the FTPS protocol. TLS Session Resumption provides a mechanism to resume or share a negotiated secret key between the control and data connection for an FTPS session. TlsSessionResumptionMode
determines whether or not the server resumes recent, negotiated sessions through a unique session ID. This property is available during CreateServer
and UpdateServer
calls. If a TlsSessionResumptionMode
value is not specified during CreateServer
, it is set to ENFORCED
by default.
DISABLED: the server does not process TLS session resumption client requests and creates a new TLS session for each request.
ENABLED: the server processes and accepts clients that are performing TLS session resumption. The server doesn't reject client data connections that do not perform the TLS session resumption client processing.
ENFORCED: the server processes and accepts clients that are performing TLS session resumption. The server rejects client data connections that do not perform the TLS session resumption client processing. Before you set the value to ENFORCED, test your clients.
Note
Not all FTPS clients perform TLS session resumption. So, if you choose to enforce TLS session resumption, you prevent any connections from FTPS clients that don't perform the protocol negotiation. To determine whether or not you can use the ENFORCED value, you need to test your clients.
Use the SetStatOption
to ignore the error that is generated when the client attempts to use SETSTAT
on a file you are uploading to an S3 bucket.
Some SFTP file transfer clients can attempt to change the attributes of remote files, including timestamp and permissions, using commands, such as SETSTAT
when uploading the file. However, these commands are not compatible with object storage systems, such as Amazon S3. Due to this incompatibility, file uploads from these clients can result in errors even when the file is otherwise successfully uploaded.
Set the value to ENABLE_NO_OP
to have the Transfer Family server ignore the SETSTAT
command, and upload files without needing to make any changes to your SFTP client. While the SetStatOption
ENABLE_NO_OP
setting ignores the error, it does generate a log entry in Amazon CloudWatch Logs, so you can determine when the client is making a SETSTAT
call.
Note
If you want to preserve the original timestamp for your file, and modify other file attributes using SETSTAT
, you can use Amazon EFS as backend storage with Transfer Family.
Specifies the domain of the storage system that is used for file transfers.
The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make it accessible only to resources within your VPC, or you can attach Elastic IP addresses and make it accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.
A list of address allocation IDs that are required to attach an Elastic IP address to your server's endpoint.
Note
This property can only be set when EndpointType
is set to VPC
and it is only valid in the UpdateServer
API.
A list of subnet IDs that are required to host your server endpoint in your VPC.
Note
This property can only be set when EndpointType
is set to VPC
.
The ID of the VPC endpoint.
Note
This property can only be set when EndpointType
is set to VPC_ENDPOINT
.
For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.
The VPC ID of the VPC in which a server's endpoint will be hosted.
Note
This property can only be set when EndpointType
is set to VPC
.
A list of security group IDs that are available to attach to your server's endpoint.
Note
This property can only be set when EndpointType
is set to VPC
.
You can edit the SecurityGroupIds
property in the UpdateServer API only if you are changing the EndpointType
from PUBLIC
or VPC_ENDPOINT
to VPC
. To change security groups associated with your server's VPC endpoint after creation, use the Amazon EC2 ModifyVpcEndpoint API.
Defines the type of endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.
Specifies the Base64-encoded SHA256 fingerprint of the server's host key. This value is equivalent to the output of the ssh-keygen -l -f my-new-server-key
command.
Specifies information to call a customer-supplied authentication API. This field is not populated when the IdentityProviderType
of a server is AWS_DIRECTORY_SERVICE
or SERVICE_MANAGED
.
Provides the location of the service endpoint used to authenticate users.
Provides the type of InvocationRole
used to authenticate the user account.
The identifier of the Amazon Web Services Directory Service directory that you want to stop sharing.
The ARN for a Lambda function to use for the identity provider.
Specifies the mode of authentication for a server. The default value is SERVICE_MANAGED
, which allows you to store and access user credentials within the Amazon Web Services Transfer Family service.
Use AWS_DIRECTORY_SERVICE
to provide access to Active Directory groups in Amazon Web Services Managed Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connectors. This option also requires you to provide a Directory ID using the IdentityProviderDetails
parameter.
Use the API_GATEWAY
value to integrate with an identity provider of your choosing. The API_GATEWAY
setting requires you to provide an API Gateway endpoint URL to call for authentication using the IdentityProviderDetails
parameter.
Use the AWS_LAMBDA
value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the lambda function in the Function
parameter for the IdentityProviderDetails
data type.
Specifies the Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, user activity can be viewed in your CloudWatch logs.
Specify a string to display when users connect to a server. This string is displayed after the user authenticates.
Note
The SFTP protocol does not support post-authentication display banners.
Specify a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.
Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS (File Transfer Protocol Secure): File transfer with TLS encryption
FTP (File Transfer Protocol): Unencrypted file transfer
Specifies the name of the security policy that is attached to the server.
Specifies the unique system-assigned identifier for a server that you instantiate.
Specifies the condition of a server for the server that was described. A value of ONLINE
indicates that the server can accept jobs and transfer files. A State
value of OFFLINE
means that the server cannot perform file transfer operations.
The states of STARTING
and STOPPING
indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of START_FAILED
or STOP_FAILED
can indicate an error condition.
Specifies the key-value pairs that you can use to search for and group servers that were assigned to the server that was described.
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
The name assigned to the tag that you create.
Contains one or more values that you assigned to the key name you create.
Specifies the number of users that are assigned to a server you specified with the ServerId
.
Specifies the workflow ID for the workflow to assign and the execution role used for executing the workflow.
A trigger that starts a workflow: the workflow begins to execute after a file is uploaded.
To remove an associated workflow from a server, you can provide an empty OnUpload
object, as in the following example.
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{"OnUpload":[]}'
Specifies the workflow ID for the workflow to assign and the execution role used for executing the workflow.
A unique identifier for the workflow.
Includes the necessary permissions for S3, EFS, and Lambda operations that Transfer can assume, so that all workflow steps can operate on the required resources.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
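The following minimal sketch (the server ID is a placeholder) reads a few of the fields described above from the describe_server response:
import boto3

transfer = boto3.client('transfer')

server = transfer.describe_server(ServerId='s-01234567890abcdef')['Server']

print('State:', server['State'])
print('Endpoint type:', server['EndpointType'])
print('Protocols:', ', '.join(server.get('Protocols', [])))
print('Identity provider:', server['IdentityProviderType'])
# WorkflowDetails is only present when a workflow is attached to the server.
for workflow in server.get('WorkflowDetails', {}).get('OnUpload', []):
    print('On-upload workflow:', workflow['WorkflowId'], 'role:', workflow['ExecutionRole'])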
describe_user
(**kwargs)¶Describes the user assigned to the specific file transfer protocol-enabled server, as identified by its ServerId
property.
The response from this call returns the properties of the user associated with the ServerId
value that was specified.
See also: AWS API Documentation
Request Syntax
response = client.describe_user(
ServerId='string',
UserName='string'
)
[REQUIRED]
A system-assigned unique identifier for a server that has this user assigned.
[REQUIRED]
The name of the user assigned to one or more servers. User names are part of the sign-in credentials to use the Amazon Web Services Transfer Family service and perform file transfer tasks.
dict
Response Syntax
{
'ServerId': 'string',
'User': {
'Arn': 'string',
'HomeDirectory': 'string',
'HomeDirectoryMappings': [
{
'Entry': 'string',
'Target': 'string'
},
],
'HomeDirectoryType': 'PATH'|'LOGICAL',
'Policy': 'string',
'PosixProfile': {
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
'Role': 'string',
'SshPublicKeys': [
{
'DateImported': datetime(2015, 1, 1),
'SshPublicKeyBody': 'string',
'SshPublicKeyId': 'string'
},
],
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
],
'UserName': 'string'
}
}
Response Structure
(dict) --
ServerId (string) --
A system-assigned unique identifier for a server that has this user assigned.
User (dict) --
An array containing the properties of the user account for the ServerID
value that you specified.
Arn (string) --
Specifies the unique Amazon Resource Name (ARN) for the user that was requested to be described.
HomeDirectory (string) --
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
HomeDirectoryMappings (list) --
Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the Entry
and Target
pair, where Entry
shows how the path is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity and Access Management (IAM) role provides access to paths in Target
. This value can only be set when HomeDirectoryType
is set to LOGICAL .
In most cases, you can use this value instead of the session policy to lock your user down to the designated home directory ("chroot
"). To do this, you can set Entry
to '/' and set Target
to the HomeDirectory parameter value.
(dict) --
Represents an object that contains entries and targets for HomeDirectoryMappings
.
The following is an Entry
and Target
pair example for chroot
.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Entry (string) --
Represents an entry for HomeDirectoryMappings
.
Target (string) --
Represents the map target that is used in a HomeDirectorymapEntry
.
HomeDirectoryType (string) --
The type of landing directory (folder) you want your users' home directory to be when they log into the server. If you set it to PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.
Policy (string) --
A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include ${Transfer:UserName}
, ${Transfer:HomeDirectory}
, and ${Transfer:HomeBucket}
.
PosixProfile (dict) --
Specifies the full POSIX identity, including user ID (Uid
), group ID (Gid
), and any secondary group IDs (SecondaryGids
), that controls your users' access to your Amazon Elastic File System (Amazon EFS) file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
Uid (integer) --
The POSIX user ID used for all EFS operations by this user.
Gid (integer) --
The POSIX group ID used for all EFS operations by this user.
SecondaryGids (list) --
The secondary POSIX group IDs used for all EFS operations by this user.
Role (string) --
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
SshPublicKeys (list) --
Specifies the public key portion of the Secure Shell (SSH) keys stored for the described user.
(dict) --
Provides information about the public Secure Shell (SSH) key that is associated with a user account for the specific file transfer protocol-enabled server (as identified by ServerId
). The information returned includes the date the key was imported, the public key contents, and the public key ID. A user can store more than one SSH public key associated with their user name on a specific server.
DateImported (datetime) --
Specifies the date that the public key was added to the user account.
SshPublicKeyBody (string) --
Specifies the content of the SSH public key as specified by the PublicKeyId
.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
SshPublicKeyId (string) --
The SshPublicKeyId parameter contains the identifier of the public key.
Tags (list) --
Specifies the key-value pairs for the user requested. Tags can be used to search for and group users for a variety of purposes.
(dict) --
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
Key (string) --
The name assigned to the tag that you create.
Value (string) --
Contains one or more values that you assigned to the key name you create.
UserName (string) --
Specifies the name of the user that was requested to be described. User names are used for authentication purposes. This is the string that will be used by your user when they log in to your server.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
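A minimal sketch (server ID and user name are placeholders) that reads the home directory and SSH keys from the describe_user response:
import boto3

transfer = boto3.client('transfer')

user = transfer.describe_user(
    ServerId='s-01234567890abcdef',
    UserName='example-user'
)['User']

print('Home directory:', user.get('HomeDirectory'))
print('Home directory type:', user.get('HomeDirectoryType'))
print('Role:', user.get('Role'))
for key in user.get('SshPublicKeys', []):
    print('Key', key['SshPublicKeyId'], 'imported on', key['DateImported'].isoformat())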
describe_workflow
(**kwargs)¶Describes the specified workflow.
See also: AWS API Documentation
Request Syntax
response = client.describe_workflow(
WorkflowId='string'
)
[REQUIRED]
A unique identifier for the workflow.
{
'Workflow': {
'Arn': 'string',
'Description': 'string',
'Steps': [
{
'Type': 'COPY'|'CUSTOM'|'TAG'|'DELETE',
'CopyStepDetails': {
'Name': 'string',
'DestinationFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'OverwriteExisting': 'TRUE'|'FALSE',
'SourceFileLocation': 'string'
},
'CustomStepDetails': {
'Name': 'string',
'Target': 'string',
'TimeoutSeconds': 123,
'SourceFileLocation': 'string'
},
'DeleteStepDetails': {
'Name': 'string',
'SourceFileLocation': 'string'
},
'TagStepDetails': {
'Name': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
],
'SourceFileLocation': 'string'
}
},
],
'OnExceptionSteps': [
{
'Type': 'COPY'|'CUSTOM'|'TAG'|'DELETE',
'CopyStepDetails': {
'Name': 'string',
'DestinationFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'OverwriteExisting': 'TRUE'|'FALSE',
'SourceFileLocation': 'string'
},
'CustomStepDetails': {
'Name': 'string',
'Target': 'string',
'TimeoutSeconds': 123,
'SourceFileLocation': 'string'
},
'DeleteStepDetails': {
'Name': 'string',
'SourceFileLocation': 'string'
},
'TagStepDetails': {
'Name': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
],
'SourceFileLocation': 'string'
}
},
],
'WorkflowId': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
]
}
}
Response Structure
The structure that contains the details of the workflow.
Specifies the unique Amazon Resource Name (ARN) for the workflow.
Specifies the text description for the workflow.
Specifies the details for the steps that are in the specified workflow.
The basic building block of a workflow.
Currently, the following step types are supported: COPY (copy the file to another location), CUSTOM (custom step with a Lambda function target), DELETE (delete the file), and TAG (add a tag to the file).
Details for a step that performs a file copy.
Consists of the following values: a description, a destination location for the file copy, and a flag that indicates whether to overwrite an existing file of the same name (the default is FALSE).
The name of the step, used as an identifier.
Specifies the location for the file being copied. Only applicable for Copy type workflow steps. Use ${Transfer:username}
in this field to parametrize the destination prefix by username.
Specifies the details for the S3 file being copied.
Specifies the S3 bucket for the customer input file.
The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
Reserved for future use.
The ID of the file system, assigned by Amazon EFS.
The pathname for the folder being used by a workflow.
A flag that indicates whether or not to overwrite an existing file of the same name. The default is FALSE
.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Details for a step that invokes a lambda function.
Consists of the lambda function name, target, and timeout (in seconds).
The name of the step, used as an identifier.
The ARN for the lambda function that is being called.
Timeout, in seconds, for the step.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Details for a step that deletes the file.
The name of the step, used as an identifier.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Details for a step that creates one or more tags.
You specify one or more tags: each tag contains a key/value pair.
The name of the step, used as an identifier.
Array that contains from 1 to 10 key/value pairs.
Specifies the key-value pair that are assigned to a file during the execution of a Tagging step.
The name assigned to the tag that you create.
The value that corresponds to the key.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Specifies the steps (actions) to take if errors are encountered during execution of the workflow.
The basic building block of a workflow.
Currently, the following step types are supported: COPY (copy the file to another location), CUSTOM (custom step with a Lambda function target), DELETE (delete the file), and TAG (add a tag to the file).
Details for a step that performs a file copy.
Consists of the following values: a description, a destination location for the file copy, and a flag that indicates whether to overwrite an existing file of the same name (the default is FALSE).
The name of the step, used as an identifier.
Specifies the location for the file being copied. Only applicable for Copy type workflow steps. Use ${Transfer:username}
in this field to parametrize the destination prefix by username.
Specifies the details for the S3 file being copied.
Specifies the S3 bucket for the customer input file.
The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
Reserved for future use.
The ID of the file system, assigned by Amazon EFS.
The pathname for the folder being used by a workflow.
A flag that indicates whether or not to overwrite an existing file of the same name. The default is FALSE
.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Details for a step that invokes a lambda function.
Consists of the lambda function name, target, and timeout (in seconds).
The name of the step, used as an identifier.
The ARN for the lambda function that is being called.
Timeout, in seconds, for the step.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Details for a step that deletes the file.
The name of the step, used as an identifier.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
Details for a step that creates one or more tags.
You specify one or more tags: each tag contains a key/value pair.
The name of the step, used as an identifier.
Array that contains from 1 to 10 key/value pairs.
Specifies the key-value pair that are assigned to a file during the execution of a Tagging step.
The name assigned to the tag that you create.
The value that corresponds to the key.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file} to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file} to use the originally-uploaded file location as input for this step.
A unique identifier for the workflow.
Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
The name assigned to the tag that you create.
Contains one or more values that you assigned to the key name you create.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
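The following minimal sketch (the workflow ID is a placeholder) walks the Steps list returned by describe_workflow and prints each step's type and name:
import boto3

transfer = boto3.client('transfer')

workflow = transfer.describe_workflow(WorkflowId='w-1234567890abcdef0')['Workflow']

print('Workflow:', workflow['WorkflowId'], '-', workflow.get('Description', ''))
for step in workflow.get('Steps', []):
    # Only the details object matching the step's Type is populated.
    details = (step.get('CopyStepDetails') or step.get('CustomStepDetails')
               or step.get('DeleteStepDetails') or step.get('TagStepDetails') or {})
    print(step['Type'], '-', details.get('Name', '(unnamed)'))
print('Exception-handling steps:', len(workflow.get('OnExceptionSteps', [])))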
get_paginator
(operation_name)¶Create a paginator for an operation.
If the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), and the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo"). You can use the client.can_paginate method to check if an operation is pageable.
get_waiter
(waiter_name)¶Returns an object that can wait for some condition.
import_ssh_public_key
(**kwargs)¶Adds a Secure Shell (SSH) public key to a user account identified by a UserName
value assigned to the specific file transfer protocol-enabled server, identified by ServerId
.
The response returns the UserName
value, the ServerId
value, and the name of the SshPublicKeyId
.
See also: AWS API Documentation
Request Syntax
response = client.import_ssh_public_key(
ServerId='string',
SshPublicKeyBody='string',
UserName='string'
)
[REQUIRED]
A system-assigned unique identifier for a server.
[REQUIRED]
The public key portion of an SSH key pair.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
[REQUIRED]
The name of the user account that is assigned to one or more servers.
dict
Response Syntax
{
'ServerId': 'string',
'SshPublicKeyId': 'string',
'UserName': 'string'
}
Response Structure
(dict) --
Identifies the user, the server they belong to, and the identifier of the SSH public key associated with that user. A user can have more than one key on each server that they are associated with.
ServerId (string) --
A system-assigned unique identifier for a server.
SshPublicKeyId (string) --
The name given to a public key by the system that was imported.
UserName (string) --
A user name assigned to the ServerID
value that you specified.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
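A minimal sketch (the key path, server ID, and user name are placeholders) that reads a local public key file and imports it for a user:
import boto3

transfer = boto3.client('transfer')

# Placeholder path to an RSA, ECDSA, or ED25519 public key.
with open('/tmp/id_ed25519.pub') as key_file:
    public_key = key_file.read().strip()

result = transfer.import_ssh_public_key(
    ServerId='s-01234567890abcdef',
    UserName='example-user',
    SshPublicKeyBody=public_key
)
print('Imported key', result['SshPublicKeyId'], 'for', result['UserName'])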
list_accesses
(**kwargs)¶Lists the details for all the accesses you have on your server.
See also: AWS API Documentation
Request Syntax
response = client.list_accesses(
MaxResults=123,
NextToken='string',
ServerId='string'
)
When you can get additional results from the ListAccesses call, a NextToken parameter is returned in the output. You can then pass in a subsequent command to the NextToken parameter to continue listing additional accesses.
[REQUIRED]
A system-assigned unique identifier for a server that has users assigned to it.
dict
Response Syntax
{
'NextToken': 'string',
'ServerId': 'string',
'Accesses': [
{
'HomeDirectory': 'string',
'HomeDirectoryType': 'PATH'|'LOGICAL',
'Role': 'string',
'ExternalId': 'string'
},
]
}
Response Structure
(dict) --
NextToken (string) --
When you can get additional results from the ListAccesses
call, a NextToken
parameter is returned in the output. You can then pass in a subsequent command to the NextToken
parameter to continue listing additional accesses.
ServerId (string) --
A system-assigned unique identifier for a server that has users assigned to it.
Accesses (list) --
Returns the accesses and their properties for the ServerId
value that you specify.
(dict) --
Lists the properties for one or more specified associated accesses.
HomeDirectory (string) --
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
HomeDirectoryType (string) --
The type of landing directory (folder) you want your users' home directory to be when they log into the server. If you set it to PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.
Role (string) --
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
ExternalId (string) --
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
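A minimal sketch (the server ID is a placeholder) that pages through all accesses by passing NextToken back until it is no longer returned:
import boto3

transfer = boto3.client('transfer')

kwargs = {'ServerId': 's-01234567890abcdef', 'MaxResults': 50}
while True:
    page = transfer.list_accesses(**kwargs)
    for access in page['Accesses']:
        print(access['ExternalId'], '->', access.get('HomeDirectory'), access.get('Role'))
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token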
list_executions
(**kwargs)¶Lists all executions for the specified workflow.
See also: AWS API Documentation
Request Syntax
response = client.list_executions(
MaxResults=123,
NextToken='string',
WorkflowId='string'
)
ListExecutions
returns the NextToken parameter in the output. You can then pass the NextToken parameter in a subsequent command to continue listing additional executions.
This is useful for pagination, for instance. If you have 100 executions for a workflow, you might only want to list the first 10. If so, call the API by specifying the max-results
:
aws transfer list-executions --max-results 10
This returns details for the first 10 executions, as well as the pointer (NextToken
) to the eleventh execution. You can now call the API again, supplying the NextToken
value you received:
aws transfer list-executions --max-results 10 --next-token $somePointerReturnedFromPreviousListResult
This call returns the next 10 executions, the 11th through the 20th. You can then repeat the call until the details for all 100 executions have been returned.
[REQUIRED]
A unique identifier for the workflow.
dict
Response Syntax
{
'NextToken': 'string',
'WorkflowId': 'string',
'Executions': [
{
'ExecutionId': 'string',
'InitialFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string',
'VersionId': 'string',
'Etag': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'ServiceMetadata': {
'UserDetails': {
'UserName': 'string',
'ServerId': 'string',
'SessionId': 'string'
}
},
'Status': 'IN_PROGRESS'|'COMPLETED'|'EXCEPTION'|'HANDLING_EXCEPTION'
},
]
}
Response Structure
(dict) --
NextToken (string) --
ListExecutions
returns the NextToken
parameter in the output. You can then pass the NextToken
parameter in a subsequent command to continue listing additional executions.
WorkflowId (string) --
A unique identifier for the workflow.
Executions (list) --
Returns the details for each execution.
Status: one of the following values: IN_PROGRESS, COMPLETED, EXCEPTION, HANDLING_EXCEPTION.
(dict) --
Returns properties of the execution that is specified.
ExecutionId (string) --
A unique identifier for the execution of a workflow.
InitialFileLocation (dict) --
A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.
S3FileLocation (dict) --
Specifies the S3 details for the file being used, such as bucket, Etag, and so forth.
Bucket (string) --
Specifies the S3 bucket that contains the file being used.
Key (string) --
The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
VersionId (string) --
Specifies the file version.
Etag (string) --
The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata.
EfsFileLocation (dict) --
Specifies the Amazon EFS ID and the path for the file being used.
FileSystemId (string) --
The ID of the file system, assigned by Amazon EFS.
Path (string) --
The pathname for the folder being used by a workflow.
ServiceMetadata (dict) --
A container object for the session details associated with a workflow.
UserDetails (dict) --
The Server ID (ServerId
), Session ID (SessionId
) and user (UserName
) make up the UserDetails
.
UserName (string) --
A unique string that identifies a user account associated with a server.
ServerId (string) --
The system-assigned unique identifier for a Transfer server instance.
SessionId (string) --
The system-assigned unique identifier for a session that corresponds to the workflow.
Status (string) --
The status of the execution. It can be in progress, completed, exception encountered, or handling the exception.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
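The boto3 equivalent of the CLI pagination shown above is a minimal sketch like the following (the workflow ID is a placeholder), which keeps passing NextToken back until all executions have been listed:
import boto3

transfer = boto3.client('transfer')

# MaxResults mirrors the --max-results 10 CLI example above.
kwargs = {'WorkflowId': 'w-1234567890abcdef0', 'MaxResults': 10}
while True:
    page = transfer.list_executions(**kwargs)
    for execution in page['Executions']:
        print(execution['ExecutionId'], execution['Status'])
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token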
list_security_policies
(**kwargs)¶Lists the security policies that are attached to your file transfer protocol-enabled servers.
See also: AWS API Documentation
Request Syntax
response = client.list_security_policies(
MaxResults=123,
NextToken='string'
)
Specifies the number of security policies to return as a response to the ListSecurityPolicies query.
When additional results are obtained from the ListSecurityPolicies command, a NextToken parameter is returned in the output. You can then pass the NextToken parameter in a subsequent command to continue listing additional security policies.
dict
Response Syntax
{
'NextToken': 'string',
'SecurityPolicyNames': [
'string',
]
}
Response Structure
(dict) --
NextToken (string) --
When you can get additional results from the ListSecurityPolicies
operation, a NextToken
parameter is returned in the output. In a following command, you can pass in the NextToken
parameter to continue listing security policies.
SecurityPolicyNames (list) --
An array of security policies that were listed.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
list_servers
(**kwargs)¶Lists the file transfer protocol-enabled servers that are associated with your Amazon Web Services account.
See also: AWS API Documentation
Request Syntax
response = client.list_servers(
MaxResults=123,
NextToken='string'
)
Specifies the number of servers to return as a response to the ListServers query.
When additional results are obtained from the ListServers command, a NextToken parameter is returned in the output. You can then pass the NextToken parameter in a subsequent command to continue listing additional servers.
dict
Response Syntax
{
'NextToken': 'string',
'Servers': [
{
'Arn': 'string',
'Domain': 'S3'|'EFS',
'IdentityProviderType': 'SERVICE_MANAGED'|'API_GATEWAY'|'AWS_DIRECTORY_SERVICE'|'AWS_LAMBDA',
'EndpointType': 'PUBLIC'|'VPC'|'VPC_ENDPOINT',
'LoggingRole': 'string',
'ServerId': 'string',
'State': 'OFFLINE'|'ONLINE'|'STARTING'|'STOPPING'|'START_FAILED'|'STOP_FAILED',
'UserCount': 123
},
]
}
Response Structure
(dict) --
NextToken (string) --
When you can get additional results from the ListServers
operation, a NextToken
parameter is returned in the output. In a following command, you can pass in the NextToken
parameter to continue listing additional servers.
Servers (list) --
An array of servers that were listed.
(dict) --
Returns properties of a file transfer protocol-enabled server that was specified.
Arn (string) --
Specifies the unique Amazon Resource Name (ARN) for a server to be listed.
Domain (string) --
Specifies the domain of the storage system that is used for file transfers.
IdentityProviderType (string) --
Specifies the mode of authentication for a server. The default value is SERVICE_MANAGED
, which allows you to store and access user credentials within the Amazon Web Services Transfer Family service.
Use AWS_DIRECTORY_SERVICE
to provide access to Active Directory groups in Amazon Web Services Managed Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connectors. This option also requires you to provide a Directory ID using the IdentityProviderDetails
parameter.
Use the API_GATEWAY
value to integrate with an identity provider of your choosing. The API_GATEWAY
setting requires you to provide an API Gateway endpoint URL to call for authentication using the IdentityProviderDetails
parameter.
Use the AWS_LAMBDA
value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the lambda function in the Function
parameter for the IdentityProviderDetails
data type.
EndpointType (string) --
Specifies the type of VPC endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.
LoggingRole (string) --
Specifies the Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, user activity can be viewed in your CloudWatch logs.
ServerId (string) --
Specifies the unique system assigned identifier for the servers that were listed.
State (string) --
Specifies the condition of a server for the server that was described. A value of ONLINE
indicates that the server can accept jobs and transfer files. A State
value of OFFLINE
means that the server cannot perform file transfer operations.
The states of STARTING
and STOPPING
indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of START_FAILED
or STOP_FAILED
can indicate an error condition.
UserCount (integer) --
Specifies the number of users that are assigned to a server you specified with the ServerId
.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
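The following minimal sketch uses get_paginator, described earlier in this reference, so that NextToken is handled automatically; the can_paginate check confirms that list_servers supports pagination before relying on it:
import boto3

transfer = boto3.client('transfer')

if transfer.can_paginate('list_servers'):
    for page in transfer.get_paginator('list_servers').paginate():
        for server in page['Servers']:
            print(server['ServerId'], server['State'], server['EndpointType'])
else:
    # Fall back to a single call (up to MaxResults servers).
    for server in transfer.list_servers()['Servers']:
        print(server['ServerId'], server['State'], server['EndpointType'])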
list_tags_for_resource
(**kwargs)¶Lists all of the tags associated with the Amazon Resource Name (ARN) that you specify. The resource can be a user, server, or role.
See also: AWS API Documentation
Request Syntax
response = client.list_tags_for_resource(
Arn='string',
MaxResults=123,
NextToken='string'
)
[REQUIRED]
Requests the tags associated with a particular Amazon Resource Name (ARN). An ARN is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.
Specifies the number of tags to return as a response to the ListTagsForResource request.
When you request additional results from the ListTagsForResource operation, a NextToken parameter is returned in the output. You can then pass in a subsequent command to the NextToken parameter to continue listing additional tags.
dict
Response Syntax
{
'Arn': 'string',
'NextToken': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
]
}
Response Structure
(dict) --
Arn (string) --
The ARN you specified to list the tags of.
NextToken (string) --
When you can get additional results from the ListTagsForResource
call, a NextToken
parameter is returned in the output. You can then pass in a subsequent command to the NextToken
parameter to continue listing additional tags.
Tags (list) --
Key-value pairs that are assigned to a resource, usually for the purpose of grouping and searching for items. Tags are metadata that you define.
(dict) --
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
Key (string) --
The name assigned to the tag that you create.
Value (string) --
Contains one or more values that you assigned to the key name you create.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
list_users
(**kwargs)¶Lists the users for a file transfer protocol-enabled server that you specify by passing the ServerId
parameter.
See also: AWS API Documentation
Request Syntax
response = client.list_users(
MaxResults=123,
NextToken='string',
ServerId='string'
)
Specifies the number of users to return as a response to the ListUsers request.
When you can get additional results from the ListUsers call, a NextToken parameter is returned in the output. You can then pass in a subsequent command to the NextToken parameter to continue listing additional users.
[REQUIRED]
A system-assigned unique identifier for a server that has users assigned to it.
dict
Response Syntax
{
'NextToken': 'string',
'ServerId': 'string',
'Users': [
{
'Arn': 'string',
'HomeDirectory': 'string',
'HomeDirectoryType': 'PATH'|'LOGICAL',
'Role': 'string',
'SshPublicKeyCount': 123,
'UserName': 'string'
},
]
}
Response Structure
(dict) --
NextToken (string) --
When you can get additional results from the ListUsers
call, a NextToken
parameter is returned in the output. You can then pass in a subsequent command to the NextToken
parameter to continue listing additional users.
ServerId (string) --
A system-assigned unique identifier for a server that the users are assigned to.
Users (list) --
Returns the user accounts and their properties for the ServerId
value that you specify.
(dict) --
Returns properties of the user that you specify.
Arn (string) --
Provides the unique Amazon Resource Name (ARN) for the user that you want to learn about.
HomeDirectory (string) --
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
HomeDirectoryType (string) --
The type of landing directory (folder) you want your users' home directory to be when they log into the server. If you set it to PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.
Role (string) --
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
Note
The IAM role that controls your users' access to your Amazon S3 bucket for servers with Domain=S3
, or your EFS file system for servers with Domain=EFS
.
The policies attached to this role determine the level of access you want to provide your users when transferring files into and out of your S3 buckets or EFS file systems.
SshPublicKeyCount (integer) --
Specifies the number of SSH public keys stored for the user you specified.
UserName (string) --
Specifies the name of the user whose ARN was specified. User names are used for authentication purposes.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
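A minimal sketch (the server ID is a placeholder) that lists the users on a server along with their SSH key counts:
import boto3

transfer = boto3.client('transfer')

for user in transfer.list_users(ServerId='s-01234567890abcdef')['Users']:
    print(user['UserName'], '-', user['SshPublicKeyCount'], 'SSH key(s), home:',
          user.get('HomeDirectory', '(logical mappings)'))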
list_workflows
(**kwargs)¶Lists all of your workflows.
See also: AWS API Documentation
Request Syntax
response = client.list_workflows(
MaxResults=123,
NextToken='string'
)
ListWorkflows
returns the NextToken
parameter in the output. You can then pass the NextToken
parameter in a subsequent command to continue listing additional workflows.
dict
Response Syntax
{
'NextToken': 'string',
'Workflows': [
{
'WorkflowId': 'string',
'Description': 'string',
'Arn': 'string'
},
]
}
Response Structure
(dict) --
NextToken (string) --
ListWorkflows
returns the NextToken
parameter in the output. You can then pass the NextToken
parameter in a subsequent command to continue listing additional workflows.
Workflows (list) --
Returns the Arn
, WorkflowId
, and Description
for each workflow.
(dict) --
Contains the ID, text description, and Amazon Resource Name (ARN) for the workflow.
WorkflowId (string) --
A unique identifier for the workflow.
Description (string) --
Specifies the text description for the workflow.
Arn (string) --
Specifies the unique Amazon Resource Name (ARN) for the workflow.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidNextTokenException
Transfer.Client.exceptions.InvalidRequestException
send_workflow_step_state
(**kwargs)¶Sends a callback for asynchronous custom steps.
The ExecutionId
, WorkflowId
, and Token
are passed to the target resource during execution of a custom step of a workflow. You must include those with their callback as well as providing a status.
See also: AWS API Documentation
Request Syntax
response = client.send_workflow_step_state(
WorkflowId='string',
ExecutionId='string',
Token='string',
Status='SUCCESS'|'FAILURE'
)
[REQUIRED]
A unique identifier for the workflow.
[REQUIRED]
A unique identifier for the execution of a workflow.
[REQUIRED]
Used to distinguish between multiple callbacks for multiple Lambda steps within the same execution.
[REQUIRED]
Indicates whether the specified step succeeded or failed.
dict
Response Syntax
{}
Response Structure
Exceptions
Transfer.Client.exceptions.AccessDeniedException
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
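A custom workflow step that runs asynchronously typically reports back from inside its Lambda function. The sketch below is illustrative only: the event field names (token and serviceMetadata.executionDetails with workflowId and executionId) are assumptions about the invocation payload, not taken from this reference, so verify them against your own step's test events:
import boto3

transfer = boto3.client('transfer')

def lambda_handler(event, context):
    # Assumed payload shape; confirm the field names for your workflow.
    details = event['serviceMetadata']['executionDetails']
    try:
        # ... perform the custom work on the file here ...
        status = 'SUCCESS'
    except Exception:
        status = 'FAILURE'
    transfer.send_workflow_step_state(
        WorkflowId=details['workflowId'],
        ExecutionId=details['executionId'],
        Token=event['token'],
        Status=status
    )
    return {'status': status}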
start_server
(**kwargs)¶Changes the state of a file transfer protocol-enabled server from OFFLINE
to ONLINE
. It has no impact on a server that is already ONLINE
. An ONLINE
server can accept and process file transfer jobs.
The state of STARTING
indicates that the server is in an intermediate state, either not fully able to respond, or not fully online. The values of START_FAILED
can indicate an error condition.
No response is returned from this call.
See also: AWS API Documentation
Request Syntax
response = client.start_server(
ServerId='string'
)
[REQUIRED]
A system-assigned unique identifier for a server that you start.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
stop_server
(**kwargs)¶Changes the state of a file transfer protocol-enabled server from ONLINE
to OFFLINE
. An OFFLINE
server cannot accept and process file transfer jobs. Information tied to your server, such as server and user properties, are not affected by stopping your server.
Note
Stopping the server will not reduce or impact your file transfer protocol endpoint billing; you must delete the server to stop being billed.
The state of STOPPING
indicates that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of STOP_FAILED
can indicate an error condition.
No response is returned from this call.
See also: AWS API Documentation
Request Syntax
response = client.stop_server(
ServerId='string'
)
[REQUIRED]
A system-assigned unique identifier for a server that you stopped.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
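Because stop_server (like start_server) returns no payload, a minimal sketch (the server ID is a placeholder) can poll describe_server until the state transition finishes:
import time

import boto3

transfer = boto3.client('transfer')
server_id = 's-01234567890abcdef'

transfer.stop_server(ServerId=server_id)
# The call returns immediately; poll State until the server is fully offline.
while True:
    state = transfer.describe_server(ServerId=server_id)['Server']['State']
    print('State:', state)
    if state in ('OFFLINE', 'STOP_FAILED'):
        break
    time.sleep(10)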
tag_resource
(**kwargs)¶Attaches a key-value pair to a resource, as identified by its Amazon Resource Name (ARN). Resources are users, servers, roles, and other entities.
There is no response returned from this call.
See also: AWS API Documentation
Request Syntax
response = client.tag_resource(
Arn='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
[REQUIRED]
An Amazon Resource Name (ARN) for a specific Amazon Web Services resource, such as a server, user, or role.
[REQUIRED]
Key-value pairs assigned to ARNs that you can use to group and search for resources by type. You can attach this metadata to user accounts for any purpose.
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
The name assigned to the tag that you create.
Contains one or more values that you assigned to the key name you create.
None
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
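A minimal sketch (the ARN is a placeholder) that tags a server, reads the tags back with list_tags_for_resource, and then removes the tag again with untag_resource:
import boto3

transfer = boto3.client('transfer')
arn = 'arn:aws:transfer:us-east-1:111122223333:server/s-01234567890abcdef'

transfer.tag_resource(Arn=arn, Tags=[{'Key': 'Group', 'Value': 'Research'}])

for tag in transfer.list_tags_for_resource(Arn=arn)['Tags']:
    print(tag['Key'], '=', tag['Value'])

# Remove the tag by key.
transfer.untag_resource(Arn=arn, TagKeys=['Group'])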
test_identity_provider
(**kwargs)¶If the IdentityProviderType
of a file transfer protocol-enabled server is AWS_DIRECTORY_SERVICE
or API_Gateway
, tests whether your identity provider is set up successfully. We highly recommend that you call this operation to test your authentication method as soon as you create your server. By doing so, you can troubleshoot issues with the identity provider integration to ensure that your users can successfully use the service.
The ServerId
and UserName
parameters are required. The ServerProtocol
, SourceIp
, and UserPassword
are all optional.
Note
You cannot use TestIdentityProvider
if the IdentityProviderType
of your server is SERVICE_MANAGED
.
If you provide any incorrect values for any parameters, the Response field is empty.
If the server is not configured for an external identity provider, you get the following error: An error occurred (InvalidRequestException) when calling the TestIdentityProvider operation: s-server-ID not configured for external auth
If you enter a server ID for the --server-id parameter that does not identify an actual Transfer server, you receive the following error: An error occurred (ResourceNotFoundException) when calling the TestIdentityProvider operation: Unknown server
See also: AWS API Documentation
Request Syntax
response = client.test_identity_provider(
ServerId='string',
ServerProtocol='SFTP'|'FTP'|'FTPS',
SourceIp='string',
UserName='string',
UserPassword='string'
)
[REQUIRED]
A system-assigned identifier for a specific server. That server's user authentication method is tested with a user name and password.
The type of file transfer protocol to be tested.
The available protocols are:
Secure Shell (SSH) File Transfer Protocol (SFTP)
File Transfer Protocol Secure (FTPS)
File Transfer Protocol (FTP)
[REQUIRED]
The name of the user account to be tested.
dict
Response Syntax
{
'Response': 'string',
'StatusCode': 123,
'Message': 'string',
'Url': 'string'
}
Response Structure
(dict) --
Response (string) --
The response that is returned from your API Gateway.
StatusCode (integer) --
The HTTP status code that is the response from your API Gateway.
Message (string) --
A message that indicates whether the test was successful or not.
Note
If an empty string is returned, the most likely cause is that the authentication failed due to an incorrect username or password.
Url (string) --
The endpoint of the service used to authenticate a user.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
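A minimal sketch of testing a server's custom identity provider with a user name and password (server ID and credentials are hypothetical placeholders):
import boto3

client = boto3.client('transfer')

response = client.test_identity_provider(
    ServerId='s-1234567890abcdef0',
    ServerProtocol='SFTP',
    UserName='testuser',
    UserPassword='example-password'
)

# StatusCode and Message indicate whether the identity provider accepted the credentials.
print(response['StatusCode'], response['Message'])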
untag_resource
(**kwargs)¶Detaches a key-value pair from a resource, as identified by its Amazon Resource Name (ARN). Resources are users, servers, roles, and other entities.
No response is returned from this call.
See also: AWS API Documentation
Request Syntax
response = client.untag_resource(
Arn='string',
TagKeys=[
'string',
]
)
[REQUIRED]
The value of the resource that will have the tag removed. An Amazon Resource Name (ARN) is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.
[REQUIRED]
TagKeys are key-value pairs assigned to ARNs that can be used to group and search for resources by type. This metadata can be attached to resources for any purpose.
None
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
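A minimal sketch of removing a tag by key from a resource (the ARN is a hypothetical placeholder); only the tag keys are needed, not the values:
import boto3

client = boto3.client('transfer')

# untag_resource returns no data.
client.untag_resource(
    Arn='arn:aws:transfer:us-east-1:111122223333:server/s-1234567890abcdef0',
    TagKeys=['CostCenter']
)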
update_access
(**kwargs)¶Allows you to update parameters for the access specified in the ServerID
and ExternalID
parameters.
See also: AWS API Documentation
Request Syntax
response = client.update_access(
HomeDirectory='string',
HomeDirectoryType='PATH'|'LOGICAL',
HomeDirectoryMappings=[
{
'Entry': 'string',
'Target': 'string'
},
],
Policy='string',
PosixProfile={
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
Role='string',
ServerId='string',
ExternalId='string'
)
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the Entry
and Target
pair, where Entry
shows how the path is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity and Access Management (IAM) role provides access to paths in Target
. This value can only be set when HomeDirectoryType
is set to LOGICAL .
The following is an Entry
and Target
pair example.
[ { "Entry": "/directory1", "Target": "/bucket_name/home/mydirectory" } ]
In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory ("chroot
"). To do this, you can set Entry
to /
and set Target
to the HomeDirectory
parameter value.
The following is an Entry
and Target
pair example for chroot
.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Represents an object that contains entries and targets for HomeDirectoryMappings
.
The following is an Entry
and Target
pair example for chroot
.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Represents an entry for HomeDirectoryMappings
.
Represents the map target that is used in a HomeDirectoryMapEntry
.
A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include ${Transfer:UserName}
, ${Transfer:HomeDirectory}
, and ${Transfer:HomeBucket}
.
Note
This only applies when the domain of ServerId
is S3. EFS does not use session policies.
For session policies, Amazon Web Services Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the Policy
argument.
For an example of a session policy, see Example session policy .
For more information, see AssumeRole in the Amazon Web Services Security Token Service API Reference.
The full POSIX identity, including user ID (Uid
), group ID (Gid
), and any secondary group IDs (SecondaryGids
), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
The POSIX user ID used for all EFS operations by this user.
The POSIX group ID used for all EFS operations by this user.
The secondary POSIX group IDs used for all EFS operations by this user.
[REQUIRED]
A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.
[REQUIRED]
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
dict
Response Syntax
{
'ServerId': 'string',
'ExternalId': 'string'
}
Response Structure
(dict) --
ServerId (string) --
The ID of the server that the user is attached to.
ExternalId (string) --
The external ID of the group whose users have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ResourceNotFoundException
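A minimal sketch of updating an access to use a logical home directory that locks the group to a single prefix (server ID, external ID, role ARN, and bucket path are hypothetical placeholders):
import boto3

client = boto3.client('transfer')

response = client.update_access(
    ServerId='s-1234567890abcdef0',
    ExternalId='S-1-5-21-1234567890-123456789-123456789-1234',
    HomeDirectoryType='LOGICAL',
    # Mapping '/' to the target path effectively chroots the group to that directory.
    HomeDirectoryMappings=[
        {'Entry': '/', 'Target': '/bucket_name/home/mydirectory'}
    ],
    Role='arn:aws:iam::111122223333:role/transfer-access-role'
)
print(response['ServerId'], response['ExternalId'])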
update_server
(**kwargs)¶Updates the file transfer protocol-enabled server's properties after that server has been created.
The UpdateServer
call returns the ServerId
of the server you updated.
See also: AWS API Documentation
Request Syntax
response = client.update_server(
Certificate='string',
ProtocolDetails={
'PassiveIp': 'string',
'TlsSessionResumptionMode': 'DISABLED'|'ENABLED'|'ENFORCED',
'SetStatOption': 'DEFAULT'|'ENABLE_NO_OP'
},
EndpointDetails={
'AddressAllocationIds': [
'string',
],
'SubnetIds': [
'string',
],
'VpcEndpointId': 'string',
'VpcId': 'string',
'SecurityGroupIds': [
'string',
]
},
EndpointType='PUBLIC'|'VPC'|'VPC_ENDPOINT',
HostKey='string',
IdentityProviderDetails={
'Url': 'string',
'InvocationRole': 'string',
'DirectoryId': 'string',
'Function': 'string'
},
LoggingRole='string',
PostAuthenticationLoginBanner='string',
PreAuthenticationLoginBanner='string',
Protocols=[
'SFTP'|'FTP'|'FTPS',
],
SecurityPolicyName='string',
ServerId='string',
WorkflowDetails={
'OnUpload': [
{
'WorkflowId': 'string',
'ExecutionRole': 'string'
},
]
}
)
The Amazon Resource Name (ARN) of the Amazon Web Services Certificate Manager (ACM) certificate. Required when Protocols
is set to FTPS
.
To request a new public certificate, see Request a public certificate in the Amazon Web Services Certificate Manager User Guide.
To import an existing certificate into ACM, see Importing certificates into ACM in the Amazon Web Services Certificate Manager User Guide.
To request a private certificate to use FTPS through private IP addresses, see Request a private certificate in the Amazon Web Services Certificate Manager User Guide.
Certificates with the following cryptographic algorithms and key sizes are supported:
2048-bit RSA (RSA_2048)
4096-bit RSA (RSA_4096)
Elliptic Prime Curve 256 bit (EC_prime256v1)
Elliptic Prime Curve 384 bit (EC_secp384r1)
Elliptic Prime Curve 521 bit (EC_secp521r1)
Note
The certificate must be a valid SSL/TLS X.509 version 3 certificate with FQDN or IP address specified and information about the issuer.
The protocol settings that are configured for your server.
Use the PassiveIp parameter to indicate passive mode (for FTP and FTPS protocols). Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer.
Use the SetStatOption to ignore the error that is generated when the client attempts to use SETSTAT on a file you are uploading to an S3 bucket. Set the value to ENABLE_NO_OP to have the Transfer Family server ignore the SETSTAT command, and upload files without needing to make any changes to your SFTP client. Note that with SetStatOption set to ENABLE_NO_OP, Transfer generates a log entry to CloudWatch Logs, so you can determine when the client is making a SETSTAT call.
Use the TlsSessionResumptionMode parameter to determine whether or not your Transfer server resumes recent, negotiated sessions through a unique session ID.
Indicates passive mode, for FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For example:
aws transfer update-server --protocol-details PassiveIp=0.0.0.0
Replace 0.0.0.0 in the example above with the actual IP address you want to use.
Note
If you change the PassiveIp
value, you must stop and then restart your Transfer Family server for the change to take effect. For details on using passive mode (PASV) in a NAT environment, see Configuring your FTPS server behind a firewall or NAT with Transfer Family .
A property used with Transfer Family servers that use the FTPS protocol. TLS Session Resumption provides a mechanism to resume or share a negotiated secret key between the control and data connection for an FTPS session. TlsSessionResumptionMode
determines whether or not the server resumes recent, negotiated sessions through a unique session ID. This property is available during CreateServer
and UpdateServer
calls. If a TlsSessionResumptionMode
value is not specified during CreateServer
, it is set to ENFORCED
by default.
DISABLED: the server does not process TLS session resumption client requests and creates a new TLS session for each request.
ENABLED: the server processes and accepts clients that are performing TLS session resumption. The server doesn't reject client data connections that do not perform the TLS session resumption client processing.
ENFORCED: the server processes and accepts clients that are performing TLS session resumption. The server rejects client data connections that do not perform the TLS session resumption client processing. Before you set the value to ENFORCED, test your clients.
Note
Not all FTPS clients perform TLS session resumption. So, if you choose to enforce TLS session resumption, you prevent any connections from FTPS clients that don't perform the protocol negotiation. To determine whether or not you can use the ENFORCED value, you need to test your clients.
Use the SetStatOption
to ignore the error that is generated when the client attempts to use SETSTAT
on a file you are uploading to an S3 bucket.
Some SFTP file transfer clients can attempt to change the attributes of remote files, including timestamp and permissions, using commands, such as SETSTAT
when uploading the file. However, these commands are not compatible with object storage systems, such as Amazon S3. Due to this incompatibility, file uploads from these clients can result in errors even when the file is otherwise successfully uploaded.
Set the value to ENABLE_NO_OP
to have the Transfer Family server ignore the SETSTAT
command, and upload files without needing to make any changes to your SFTP client. While the SetStatOption
ENABLE_NO_OP
setting ignores the error, it does generate a log entry in Amazon CloudWatch Logs, so you can determine when the client is making a SETSTAT
call.
Note
If you want to preserve the original timestamp for your file, and modify other file attributes using SETSTAT
, you can use Amazon EFS as backend storage with Transfer Family.
The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make it accessible only to resources within your VPC, or you can attach Elastic IP addresses and make it accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.
A list of address allocation IDs that are required to attach an Elastic IP address to your server's endpoint.
Note
This property can only be set when EndpointType
is set to VPC
and it is only valid in the UpdateServer
API.
A list of subnet IDs that are required to host your server endpoint in your VPC.
Note
This property can only be set when EndpointType
is set to VPC
.
The ID of the VPC endpoint.
Note
This property can only be set when EndpointType
is set to VPC_ENDPOINT
.
For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.
The VPC ID of the VPC in which a server's endpoint will be hosted.
Note
This property can only be set when EndpointType
is set to VPC
.
A list of security group IDs that are available to attach to your server's endpoint.
Note
This property can only be set when EndpointType
is set to VPC
.
You can edit the SecurityGroupIds
property in the UpdateServer API only if you are changing the EndpointType
from PUBLIC
or VPC_ENDPOINT
to VPC
. To change security groups associated with your server's VPC endpoint after creation, use the Amazon EC2 ModifyVpcEndpoint API.
The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.
Note
After May 19, 2021, you won't be able to create a server using EndpointType=VPC_ENDPOINT
in your Amazon Web Services account if your account hasn't already done so before May 19, 2021. If you have already created servers with EndpointType=VPC_ENDPOINT
in your Amazon Web Services account on or before May 19, 2021, you will not be affected. After this date, use EndpointType=VPC.
For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.
It is recommended that you use VPC
as the EndpointType
. With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with EndpointType
set to VPC_ENDPOINT
.
The RSA, ECDSA, or ED25519 private key to use for your server.
Use the following command to generate an RSA 2048 bit key with no passphrase:
ssh-keygen -t rsa -b 2048 -N "" -m PEM -f my-new-server-key
.
Use a minimum value of 2048 for the -b
option: you can create a stronger key using 3072 or 4096.
Use the following command to generate an ECDSA 256 bit key with no passphrase:
ssh-keygen -t ecdsa -b 256 -N "" -m PEM -f my-new-server-key
.
Valid values for the -b
option for ECDSA are 256, 384, and 521.
Use the following command to generate an ED25519 key with no passphrase:
ssh-keygen -t ed25519 -N "" -f my-new-server-key
.
For all of these commands, you can replace my-new-server-key with a string of your choice.
Warning
If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.
For more information, see Change the host key for your SFTP-enabled server in the Amazon Web Services Transfer Family User Guide .
An array containing all of the information required to call a customer's authentication API method.
Provides the location of the service endpoint used to authenticate users.
Provides the type of InvocationRole
used to authenticate the user account.
The identifier of the Amazon Web Services Directory Service directory that you want to stop sharing.
The ARN for a Lambda function to use for the identity provider.
Specify a string to display when users connect to a server. This string is displayed after the user authenticates.
Note
The SFTP protocol does not support post-authentication display banners.
Specify a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.
Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS (File Transfer Protocol Secure): File transfer with TLS encryption
FTP (File Transfer Protocol): Unencrypted file transfer
Note
If you select FTPS
, you must choose a certificate stored in Amazon Web Services Certificate Manager (ACM) which will be used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be AWS_DIRECTORY_SERVICE
or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to SERVICE_MANAGED
.
[REQUIRED]
A system-assigned unique identifier for a server instance that the user account is assigned to.
Specifies the workflow ID for the workflow to assign and the execution role used for executing the workflow.
To remove an associated workflow from a server, you can provide an empty OnUpload
object, as in the following example.
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{"OnUpload":[]}'
A trigger that starts a workflow: the workflow begins to execute after a file is uploaded.
To remove an associated workflow from a server, you can provide an empty OnUpload
object, as in the following example.
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{"OnUpload":[]}'
Specifies the workflow ID for the workflow to assign and the execution role used for executing the workflow.
A unique identifier for the workflow.
Includes the necessary permissions for S3, EFS, and Lambda operations that Transfer can assume, so that all workflow steps can operate on the required resources.
dict
Response Syntax
{
'ServerId': 'string'
}
Response Structure
(dict) --
ServerId (string) --
A system-assigned unique identifier for a server that the user account is assigned to.
Exceptions
Transfer.Client.exceptions.AccessDeniedException
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.ConflictException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceExistsException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
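A minimal sketch of updating a server to detach its upload workflow and set a passive-mode IP address for FTP/FTPS (server ID and IP address are hypothetical placeholders):
import boto3

client = boto3.client('transfer')

response = client.update_server(
    ServerId='s-1234567890abcdef0',
    # An empty OnUpload list removes any workflow currently associated with the server.
    WorkflowDetails={'OnUpload': []},
    # PassiveIp applies to FTP and FTPS; stop and restart the server for the change to take effect.
    ProtocolDetails={'PassiveIp': '203.0.113.10'}
)
print(response['ServerId'])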
update_user
(**kwargs)¶Assigns new properties to a user. Parameters you pass modify any or all of the following: the home directory, role, and policy for the UserName
and ServerId
you specify.
The response returns the ServerId
and the UserName
for the updated user.
See also: AWS API Documentation
Request Syntax
response = client.update_user(
HomeDirectory='string',
HomeDirectoryType='PATH'|'LOGICAL',
HomeDirectoryMappings=[
{
'Entry': 'string',
'Target': 'string'
},
],
Policy='string',
PosixProfile={
'Uid': 123,
'Gid': 123,
'SecondaryGids': [
123,
]
},
Role='string',
ServerId='string',
UserName='string'
)
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the Entry
and Target
pair, where Entry
shows how the path is made visible and Target
is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Amazon Web Services Identity and Access Management (IAM) role provides access to paths in Target
. This value can only be set when HomeDirectoryType
is set to LOGICAL .
The following is an Entry
and Target
pair example.
[ { "Entry": "/directory1", "Target": "/bucket_name/home/mydirectory" } ]
In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory ("chroot
"). To do this, you can set Entry
to '/' and set Target
to the HomeDirectory parameter value.
The following is an Entry
and Target
pair example for chroot
.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Represents an object that contains entries and targets for HomeDirectoryMappings
.
The following is an Entry
and Target
pair example for chroot
.
[ { "Entry": "/", "Target": "/bucket_name/home/mydirectory" } ]
Represents an entry for HomeDirectoryMappings
.
Represents the map target that is used in a HomeDirectoryMapEntry
.
A session policy for your user so that you can use the same IAM role across multiple users. This policy scopes down user access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include ${Transfer:UserName}
, ${Transfer:HomeDirectory}
, and ${Transfer:HomeBucket}
.
Note
This only applies when the domain of ServerId
is S3. EFS does not use session policies.
For session policies, Amazon Web Services Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the Policy
argument.
For an example of a session policy, see Creating a session policy .
For more information, see AssumeRole in the Amazon Web Services Security Token Service API Reference .
Specifies the full POSIX identity, including user ID (Uid
), group ID (Gid
), and any secondary group IDs (SecondaryGids
), that controls your users' access to your Amazon Elastic File System (Amazon EFS) file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.
The POSIX user ID used for all EFS operations by this user.
The POSIX group ID used for all EFS operations by this user.
The secondary POSIX group IDs used for all EFS operations by this user.
[REQUIRED]
A system-assigned unique identifier for a server instance that the user account is assigned to.
[REQUIRED]
A unique string that identifies a user and is associated with a server as specified by the ServerId
. This user name must be a minimum of 3 and a maximum of 100 characters long. The following are valid characters: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The user name can't start with a hyphen, period, or at sign.
dict
Response Syntax
{
'ServerId': 'string',
'UserName': 'string'
}
Response Structure
(dict) --
UpdateUserResponse
returns the user name and identifier for the request to update a user's properties.
ServerId (string) --
A system-assigned unique identifier for a server instance that the user account is assigned to.
UserName (string) --
The unique identifier for a user that is assigned to a server instance that was specified in the request.
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
Transfer.Client.exceptions.ThrottlingException
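A minimal sketch of updating a user's home directory and role (server ID, user name, role ARN, and path are hypothetical placeholders):
import boto3

client = boto3.client('transfer')

response = client.update_user(
    ServerId='s-1234567890abcdef0',
    UserName='myuser',
    HomeDirectoryType='PATH',
    HomeDirectory='/bucket_name/home/myuser',
    Role='arn:aws:iam::111122223333:role/transfer-user-role'
)
print(response['ServerId'], response['UserName'])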
The available paginators are:
Transfer.Paginator.ListAccesses
Transfer.Paginator.ListExecutions
Transfer.Paginator.ListSecurityPolicies
Transfer.Paginator.ListServers
Transfer.Paginator.ListTagsForResource
Transfer.Paginator.ListUsers
Transfer.Paginator.ListWorkflows
Transfer.Paginator.
ListAccesses
¶paginator = client.get_paginator('list_accesses')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_accesses()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
ServerId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
A system-assigned unique identifier for a server that has users assigned to it.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
dict
Response Syntax
{
'ServerId': 'string',
'Accesses': [
{
'HomeDirectory': 'string',
'HomeDirectoryType': 'PATH'|'LOGICAL',
'Role': 'string',
'ExternalId': 'string'
},
]
}
Response Structure
(dict) --
ServerId (string) --
A system-assigned unique identifier for a server that has users assigned to it.
Accesses (list) --
Returns the accesses and their properties for the ServerId
value that you specify.
(dict) --
Lists the properties for one or more specified associated accesses.
HomeDirectory (string) --
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
HomeDirectoryType (string) --
The type of landing directory (folder) you want your users' home directory to be when they log into the server. If you set it to PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.
Role (string) --
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
ExternalId (string) --
A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web Services Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.
Get-ADGroup -Filter {samAccountName -like "YourGroupName*"} -Properties * | Select SamAccountName,ObjectSid
In that command, replace YourGroupName with the name of your Active Directory group.
The regex used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-
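A minimal sketch of iterating over every access for a server with this paginator (the server ID is a hypothetical placeholder); each page has the response shape shown above:
import boto3

client = boto3.client('transfer')
paginator = client.get_paginator('list_accesses')

for page in paginator.paginate(ServerId='s-1234567890abcdef0'):
    for access in page['Accesses']:
        print(access['ExternalId'], access['HomeDirectory'])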
Transfer.Paginator.
ListExecutions
¶paginator = client.get_paginator('list_executions')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_executions()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
WorkflowId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
A unique identifier for the workflow.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
dict
Response Syntax
{
'WorkflowId': 'string',
'Executions': [
{
'ExecutionId': 'string',
'InitialFileLocation': {
'S3FileLocation': {
'Bucket': 'string',
'Key': 'string',
'VersionId': 'string',
'Etag': 'string'
},
'EfsFileLocation': {
'FileSystemId': 'string',
'Path': 'string'
}
},
'ServiceMetadata': {
'UserDetails': {
'UserName': 'string',
'ServerId': 'string',
'SessionId': 'string'
}
},
'Status': 'IN_PROGRESS'|'COMPLETED'|'EXCEPTION'|'HANDLING_EXCEPTION'
},
]
}
Response Structure
(dict) --
WorkflowId (string) --
A unique identifier for the workflow.
Executions (list) --
Returns the details for each execution.
The Status for each execution is one of the following values: IN_PROGRESS, COMPLETED, EXCEPTION, HANDLING_EXCEPTION.
(dict) --
Returns properties of the execution that is specified.
ExecutionId (string) --
A unique identifier for the execution of a workflow.
InitialFileLocation (dict) --
A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.
S3FileLocation (dict) --
Specifies the S3 details for the file being used, such as bucket, Etag, and so forth.
Bucket (string) --
Specifies the S3 bucket that contains the file being used.
Key (string) --
The name assigned to the file when it was created in S3. You use the object key to retrieve the object.
VersionId (string) --
Specifies the file version.
Etag (string) --
The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata.
EfsFileLocation (dict) --
Specifies the Amazon EFS ID and the path for the file being used.
FileSystemId (string) --
The ID of the file system, assigned by Amazon EFS.
Path (string) --
The pathname for the folder being used by a workflow.
ServiceMetadata (dict) --
A container object for the session details associated with a workflow.
UserDetails (dict) --
The Server ID (ServerId
), Session ID (SessionId
) and user (UserName
) make up the UserDetails
.
UserName (string) --
A unique string that identifies a user account associated with a server.
ServerId (string) --
The system-assigned unique identifier for a Transfer server instance.
SessionId (string) --
The system-assigned unique identifier for a session that corresponds to the workflow.
Status (string) --
The status of the execution. It can be in progress, completed, exception encountered, or handling the exception.
Transfer.Paginator.
ListSecurityPolicies
¶paginator = client.get_paginator('list_security_policies')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_security_policies()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'SecurityPolicyNames': [
'string',
]
}
Response Structure
An array of security policies that were listed.
Transfer.Paginator.
ListServers
¶paginator = client.get_paginator('list_servers')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_servers()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'Servers': [
{
'Arn': 'string',
'Domain': 'S3'|'EFS',
'IdentityProviderType': 'SERVICE_MANAGED'|'API_GATEWAY'|'AWS_DIRECTORY_SERVICE'|'AWS_LAMBDA',
'EndpointType': 'PUBLIC'|'VPC'|'VPC_ENDPOINT',
'LoggingRole': 'string',
'ServerId': 'string',
'State': 'OFFLINE'|'ONLINE'|'STARTING'|'STOPPING'|'START_FAILED'|'STOP_FAILED',
'UserCount': 123
},
]
}
Response Structure
An array of servers that were listed.
Returns properties of a file transfer protocol-enabled server that was specified.
Specifies the unique Amazon Resource Name (ARN) for a server to be listed.
Specifies the domain of the storage system that is used for file transfers.
Specifies the mode of authentication for a server. The default value is SERVICE_MANAGED
, which allows you to store and access user credentials within the Amazon Web Services Transfer Family service.
Use AWS_DIRECTORY_SERVICE
to provide access to Active Directory groups in Amazon Web Services Managed Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connectors. This option also requires you to provide a Directory ID using the IdentityProviderDetails
parameter.
Use the API_GATEWAY
value to integrate with an identity provider of your choosing. The API_GATEWAY
setting requires you to provide an API Gateway endpoint URL to call for authentication using the IdentityProviderDetails
parameter.
Use the AWS_LAMBDA
value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the lambda function in the Function
parameter for the IdentityProviderDetails
data type.
Specifies the type of VPC endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.
Specifies the Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, user activity can be viewed in your CloudWatch logs.
Specifies the unique system assigned identifier for the servers that were listed.
Specifies the condition of a server for the server that was described. A value of ONLINE
indicates that the server can accept jobs and transfer files. A State
value of OFFLINE
means that the server cannot perform file transfer operations.
The states of STARTING
and STOPPING
indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of START_FAILED
or STOP_FAILED
can indicate an error condition.
Specifies the number of users that are assigned to a server you specified with the ServerId
.
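A minimal sketch of listing servers with explicit pagination controls, capping the total at 50 items in pages of 10 (the values are illustrative):
import boto3

client = boto3.client('transfer')
paginator = client.get_paginator('list_servers')

for page in paginator.paginate(PaginationConfig={'MaxItems': 50, 'PageSize': 10}):
    for server in page['Servers']:
        print(server['ServerId'], server['State'], server['UserCount'])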
Transfer.Paginator.
ListTagsForResource
¶paginator = client.get_paginator('list_tags_for_resource')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_tags_for_resource()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Arn='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
Requests the tags associated with a particular Amazon Resource Name (ARN). An ARN is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
dict
Response Syntax
{
'Arn': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
]
}
Response Structure
(dict) --
Arn (string) --
The ARN you specified to list the tags of.
Tags (list) --
Key-value pairs that are assigned to a resource, usually for the purpose of grouping and searching for items. Tags are metadata that you define.
(dict) --
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
Key (string) --
The name assigned to the tag that you create.
Value (string) --
Contains one or more values that you assigned to the key name you create.
Transfer.Paginator.
ListUsers
¶paginator = client.get_paginator('list_users')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_users()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
ServerId='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
A system-assigned unique identifier for a server that has users assigned to it.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
dict
Response Syntax
{
'ServerId': 'string',
'Users': [
{
'Arn': 'string',
'HomeDirectory': 'string',
'HomeDirectoryType': 'PATH'|'LOGICAL',
'Role': 'string',
'SshPublicKeyCount': 123,
'UserName': 'string'
},
]
}
Response Structure
(dict) --
ServerId (string) --
A system-assigned unique identifier for a server that the users are assigned to.
Users (list) --
Returns the user accounts and their properties for the ServerId
value that you specify.
(dict) --
Returns properties of the user that you specify.
Arn (string) --
Provides the unique Amazon Resource Name (ARN) for the user that you want to learn about.
HomeDirectory (string) --
The landing directory (folder) for a user when they log in to the server using the client.
A HomeDirectory
example is /bucket_name/home/mydirectory
.
HomeDirectoryType (string) --
The type of landing directory (folder) you want your users' home directory to be when they log into the server. If you set it to PATH
, the user will see the absolute Amazon S3 bucket or EFS paths as is in their file transfer protocol clients. If you set it LOGICAL
, you need to provide mappings in the HomeDirectoryMappings
for how you want to make Amazon S3 or EFS paths visible to your users.
Role (string) --
Specifies the Amazon Resource Name (ARN) of the IAM role that controls your users' access to your Amazon S3 bucket or EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.
Note
The IAM role that controls your users' access to your Amazon S3 bucket for servers with Domain=S3
, or your EFS file system for servers with Domain=EFS
.
The policies attached to this role determine the level of access you want to provide your users when transferring files into and out of your S3 buckets or EFS file systems.
SshPublicKeyCount (integer) --
Specifies the number of SSH public keys stored for the user you specified.
UserName (string) --
Specifies the name of the user whose ARN was specified. User names are used for authentication purposes.
Transfer.Paginator.
ListWorkflows
¶paginator = client.get_paginator('list_workflows')
paginate
(**kwargs)¶Creates an iterator that will paginate through responses from Transfer.Client.list_workflows()
.
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken
will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken
from a previous response.
{
'Workflows': [
{
'WorkflowId': 'string',
'Description': 'string',
'Arn': 'string'
},
]
}
Response Structure
Returns the Arn
, WorkflowId
, and Description
for each workflow.
Contains the ID, text description, and Amazon Resource Name (ARN) for the workflow.
A unique identifier for the workflow.
Specifies the text description for the workflow.
Specifies the unique Amazon Resource Name (ARN) for the workflow.
The available waiters are:
Transfer.Waiter.
ServerOffline
¶waiter = client.get_waiter('server_offline')
wait
(**kwargs)¶Polls Transfer.Client.describe_server()
every 30 seconds until a successful state is reached. An error is returned after 120 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
ServerId='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
[REQUIRED]
A system-assigned unique identifier for a server.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 30
The maximum number of attempts to be made. Default: 120
None
Transfer.Waiter.
ServerOnline
¶waiter = client.get_waiter('server_online')
wait
(**kwargs)¶Polls Transfer.Client.describe_server()
every 30 seconds until a successful state is reached. An error is returned after 120 failed checks.
See also: AWS API Documentation
Request Syntax
waiter.wait(
ServerId='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
[REQUIRED]
A system-assigned unique identifier for a server.
A dictionary that provides parameters to control waiting behavior.
The amount of time in seconds to wait between attempts. Default: 30
The maximum number of attempts to be made. Default: 120
None
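A minimal sketch of starting a server and waiting until it is online with explicit polling settings (the server ID is a hypothetical placeholder):
import boto3

client = boto3.client('transfer')
client.start_server(ServerId='s-1234567890abcdef0')

# Poll describe_server every 15 seconds for up to 40 attempts instead of the defaults of 30 and 120.
waiter = client.get_waiter('server_online')
waiter.wait(
    ServerId='s-1234567890abcdef0',
    WaiterConfig={'Delay': 15, 'MaxAttempts': 40}
)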