describe_workflow
- Transfer.Client.describe_workflow(**kwargs)
Describes the specified workflow.
See also: AWS API Documentation
Request Syntax
response = client.describe_workflow(
    WorkflowId='string'
)
- Parameters:
WorkflowId (string) –
[REQUIRED]
A unique identifier for the workflow.
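As an illustrative sketch (not part of the official reference), the following shows one way to call this operation with boto3; the Region and the workflow ID are placeholder values.

import boto3

# Create a Transfer Family client; the Region is a placeholder.
client = boto3.client('transfer', region_name='us-east-1')

# 'w-1234567890abcdef0' is a hypothetical workflow ID.
response = client.describe_workflow(WorkflowId='w-1234567890abcdef0')

workflow = response['Workflow']
print(workflow['Arn'])
print(workflow.get('Description', ''))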
- Return type:
dict
- Returns:
Response Syntax
{ 'Workflow': { 'Arn': 'string', 'Description': 'string', 'Steps': [ { 'Type': 'COPY'|'CUSTOM'|'TAG'|'DELETE'|'DECRYPT', 'CopyStepDetails': { 'Name': 'string', 'DestinationFileLocation': { 'S3FileLocation': { 'Bucket': 'string', 'Key': 'string' }, 'EfsFileLocation': { 'FileSystemId': 'string', 'Path': 'string' } }, 'OverwriteExisting': 'TRUE'|'FALSE', 'SourceFileLocation': 'string' }, 'CustomStepDetails': { 'Name': 'string', 'Target': 'string', 'TimeoutSeconds': 123, 'SourceFileLocation': 'string' }, 'DeleteStepDetails': { 'Name': 'string', 'SourceFileLocation': 'string' }, 'TagStepDetails': { 'Name': 'string', 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ], 'SourceFileLocation': 'string' }, 'DecryptStepDetails': { 'Name': 'string', 'Type': 'PGP', 'SourceFileLocation': 'string', 'OverwriteExisting': 'TRUE'|'FALSE', 'DestinationFileLocation': { 'S3FileLocation': { 'Bucket': 'string', 'Key': 'string' }, 'EfsFileLocation': { 'FileSystemId': 'string', 'Path': 'string' } } } }, ], 'OnExceptionSteps': [ { 'Type': 'COPY'|'CUSTOM'|'TAG'|'DELETE'|'DECRYPT', 'CopyStepDetails': { 'Name': 'string', 'DestinationFileLocation': { 'S3FileLocation': { 'Bucket': 'string', 'Key': 'string' }, 'EfsFileLocation': { 'FileSystemId': 'string', 'Path': 'string' } }, 'OverwriteExisting': 'TRUE'|'FALSE', 'SourceFileLocation': 'string' }, 'CustomStepDetails': { 'Name': 'string', 'Target': 'string', 'TimeoutSeconds': 123, 'SourceFileLocation': 'string' }, 'DeleteStepDetails': { 'Name': 'string', 'SourceFileLocation': 'string' }, 'TagStepDetails': { 'Name': 'string', 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ], 'SourceFileLocation': 'string' }, 'DecryptStepDetails': { 'Name': 'string', 'Type': 'PGP', 'SourceFileLocation': 'string', 'OverwriteExisting': 'TRUE'|'FALSE', 'DestinationFileLocation': { 'S3FileLocation': { 'Bucket': 'string', 'Key': 'string' }, 'EfsFileLocation': { 'FileSystemId': 'string', 'Path': 'string' } } } }, ], 'WorkflowId': 'string', 'Tags': [ { 'Key': 'string', 'Value': 'string' }, ] } }
Response Structure
(dict) –
Workflow (dict) –
The structure that contains the details of the workflow.
Arn (string) –
Specifies the unique Amazon Resource Name (ARN) for the workflow.
Description (string) –
Specifies the text description for the workflow.
Steps (list) –
Specifies the details for the steps that are in the specified workflow.
(dict) –
The basic building block of a workflow.
Type (string) –
Currently, the following step types are supported.
- COPY - Copy the file to another location.
- CUSTOM - Perform a custom step with a Lambda function target.
- DECRYPT - Decrypt a file that was encrypted before it was uploaded.
- DELETE - Delete the file.
- TAG - Add a tag to the file.
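To illustrate how these types appear in the response, the sketch below walks the returned steps and looks up the details dict that corresponds to each type. It assumes response holds the output of describe_workflow; the DETAILS_KEY mapping is our own helper, not part of the API.

# Map each step type to the response key that holds its details.
DETAILS_KEY = {
    'COPY': 'CopyStepDetails',
    'CUSTOM': 'CustomStepDetails',
    'TAG': 'TagStepDetails',
    'DELETE': 'DeleteStepDetails',
    'DECRYPT': 'DecryptStepDetails',
}

for step in response['Workflow']['Steps']:
    details = step.get(DETAILS_KEY[step['Type']], {})
    print(step['Type'], details.get('Name', ''))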
CopyStepDetails (dict) –
Details for a step that performs a file copy.
Consists of the following values:
- A description
- An Amazon S3 location for the destination of the file copy.
- A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
Name (string) –
The name of the step, used as an identifier.
DestinationFileLocation (dict) –
Specifies the location for the file being copied. Use ${Transfer:UserName} or ${Transfer:UploadDate} in this field to parametrize the destination prefix by username or uploaded date.
Set the value of DestinationFileLocation to ${Transfer:UserName} to copy uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.
Set the value of DestinationFileLocation to ${Transfer:UploadDate} to copy uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.
Note
The system resolves UploadDate to a date format of YYYY-MM-DD, based on the date the file is uploaded in UTC.
S3FileLocation (dict) –
Specifies the details for the Amazon S3 file that’s being copied or decrypted.
Bucket (string) –
Specifies the S3 bucket for the customer input file.
Key (string) –
The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.
EfsFileLocation (dict) –
Specifies the details for the Amazon Elastic File System (Amazon EFS) file that’s being decrypted.
FileSystemId (string) –
The identifier of the file system, assigned by Amazon EFS.
Path (string) –
The pathname for the folder being used by a workflow.
OverwriteExisting (string) –
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
If the workflow is processing a file that has the same name as an existing file, the behavior is as follows:
- If OverwriteExisting is TRUE, the existing file is replaced with the file being processed.
- If OverwriteExisting is FALSE, nothing happens, and the workflow processing stops.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
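As a hedged sketch of reading these copy-step fields from a described workflow, the following assumes step is one entry of response['Workflow']['Steps'] whose Type is 'COPY'.

copy_details = step['CopyStepDetails']
destination = copy_details.get('DestinationFileLocation', {})

if 'S3FileLocation' in destination:
    s3 = destination['S3FileLocation']
    # The key may still contain ${Transfer:UserName} or ${Transfer:UploadDate} placeholders.
    print('S3 destination:', s3.get('Bucket'), s3.get('Key'))
elif 'EfsFileLocation' in destination:
    efs = destination['EfsFileLocation']
    print('EFS destination:', efs.get('FileSystemId'), efs.get('Path'))

print('Overwrite existing:', copy_details.get('OverwriteExisting', 'FALSE'))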
CustomStepDetails (dict) –
Details for a step that invokes a Lambda function.
Consists of the Lambda function’s name, target, and timeout (in seconds).
Name (string) –
The name of the step, used as an identifier.
Target (string) –
The ARN for the Lambda function that is being called.
TimeoutSeconds (integer) –
Timeout, in seconds, for the step.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
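Similarly, for a custom step, here is a short sketch of reading the Lambda target and timeout from the response; step is assumed to be a Steps entry whose Type is 'CUSTOM'.

custom = step['CustomStepDetails']
print('Step name:', custom.get('Name'))
print('Lambda target ARN:', custom.get('Target'))
print('Timeout (seconds):', custom.get('TimeoutSeconds'))
print('Input file:', custom.get('SourceFileLocation', '${previous.file}'))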
DeleteStepDetails (dict) –
Details for a step that deletes the file.
Name (string) –
The name of the step, used as an identifier.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
TagStepDetails (dict) –
Details for a step that creates one or more tags.
You specify one or more tags. Each tag contains a key-value pair.
Name (string) –
The name of the step, used as an identifier.
Tags (list) –
Array that contains from 1 to 10 key/value pairs.
(dict) –
Specifies the key-value pairs that are assigned to a file during the execution of a Tagging step.
Key (string) –
The name assigned to the tag that you create.
Value (string) –
The value that corresponds to the key.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
DecryptStepDetails (dict) –
Details for a step that decrypts an encrypted file.
Consists of the following values:
- A descriptive name
- An Amazon S3 or Amazon Elastic File System (Amazon EFS) location for the source file to decrypt.
- An S3 or Amazon EFS location for the destination of the file decryption.
- A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
- The type of encryption that's used. Currently, only PGP encryption is supported.
Name (string) –
The name of the step, used as an identifier.
Type (string) –
The type of encryption used. Currently, this value must be PGP.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
OverwriteExisting (string) –
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
If the workflow is processing a file that has the same name as an existing file, the behavior is as follows:
- If OverwriteExisting is TRUE, the existing file is replaced with the file being processed.
- If OverwriteExisting is FALSE, nothing happens, and the workflow processing stops.
DestinationFileLocation (dict) –
Specifies the location for the file being decrypted. Use ${Transfer:UserName} or ${Transfer:UploadDate} in this field to parametrize the destination prefix by username or uploaded date.
Set the value of DestinationFileLocation to ${Transfer:UserName} to decrypt uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.
Set the value of DestinationFileLocation to ${Transfer:UploadDate} to decrypt uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.
Note
The system resolves UploadDate to a date format of YYYY-MM-DD, based on the date the file is uploaded in UTC.
S3FileLocation (dict) –
Specifies the details for the Amazon S3 file that’s being copied or decrypted.
Bucket (string) –
Specifies the S3 bucket for the customer input file.
Key (string) –
The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.
EfsFileLocation (dict) –
Specifies the details for the Amazon Elastic File System (Amazon EFS) file that’s being decrypted.
FileSystemId (string) –
The identifier of the file system, assigned by Amazon EFS.
Path (string) –
The pathname for the folder being used by a workflow.
OnExceptionSteps (list) –
Specifies the steps (actions) to take if errors are encountered during execution of the workflow.
(dict) –
The basic building block of a workflow.
Type (string) –
Currently, the following step types are supported.
- COPY - Copy the file to another location.
- CUSTOM - Perform a custom step with a Lambda function target.
- DECRYPT - Decrypt a file that was encrypted before it was uploaded.
- DELETE - Delete the file.
- TAG - Add a tag to the file.
CopyStepDetails (dict) –
Details for a step that performs a file copy.
Consists of the following values:
- A description
- An Amazon S3 location for the destination of the file copy.
- A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
Name (string) –
The name of the step, used as an identifier.
DestinationFileLocation (dict) –
Specifies the location for the file being copied. Use ${Transfer:UserName} or ${Transfer:UploadDate} in this field to parametrize the destination prefix by username or uploaded date.
Set the value of DestinationFileLocation to ${Transfer:UserName} to copy uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.
Set the value of DestinationFileLocation to ${Transfer:UploadDate} to copy uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.
Note
The system resolves UploadDate to a date format of YYYY-MM-DD, based on the date the file is uploaded in UTC.
S3FileLocation (dict) –
Specifies the details for the Amazon S3 file that’s being copied or decrypted.
Bucket (string) –
Specifies the S3 bucket for the customer input file.
Key (string) –
The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.
EfsFileLocation (dict) –
Specifies the details for the Amazon Elastic File System (Amazon EFS) file that’s being decrypted.
FileSystemId (string) –
The identifier of the file system, assigned by Amazon EFS.
Path (string) –
The pathname for the folder being used by a workflow.
OverwriteExisting (string) –
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
If the workflow is processing a file that has the same name as an existing file, the behavior is as follows:
- If OverwriteExisting is TRUE, the existing file is replaced with the file being processed.
- If OverwriteExisting is FALSE, nothing happens, and the workflow processing stops.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
CustomStepDetails (dict) –
Details for a step that invokes a Lambda function.
Consists of the Lambda function’s name, target, and timeout (in seconds).
Name (string) –
The name of the step, used as an identifier.
Target (string) –
The ARN for the Lambda function that is being called.
TimeoutSeconds (integer) –
Timeout, in seconds, for the step.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
DeleteStepDetails (dict) –
Details for a step that deletes the file.
Name (string) –
The name of the step, used as an identifier.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
TagStepDetails (dict) –
Details for a step that creates one or more tags.
You specify one or more tags. Each tag contains a key-value pair.
Name (string) –
The name of the step, used as an identifier.
Tags (list) –
Array that contains from 1 to 10 key/value pairs.
(dict) –
Specifies the key-value pairs that are assigned to a file during the execution of a Tagging step.
Key (string) –
The name assigned to the tag that you create.
Value (string) –
The value that corresponds to the key.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
DecryptStepDetails (dict) –
Details for a step that decrypts an encrypted file.
Consists of the following values:
- A descriptive name
- An Amazon S3 or Amazon Elastic File System (Amazon EFS) location for the source file to decrypt.
- An S3 or Amazon EFS location for the destination of the file decryption.
- A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
- The type of encryption that's used. Currently, only PGP encryption is supported.
Name (string) –
The name of the step, used as an identifier.
Type (string) –
The type of encryption used. Currently, this value must be PGP.
SourceFileLocation (string) –
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
- To use the previous file as the input, enter ${previous.file}. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
- To use the originally uploaded file location as input for this step, enter ${original.file}.
OverwriteExisting (string) –
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE.
If the workflow is processing a file that has the same name as an existing file, the behavior is as follows:
- If OverwriteExisting is TRUE, the existing file is replaced with the file being processed.
- If OverwriteExisting is FALSE, nothing happens, and the workflow processing stops.
DestinationFileLocation (dict) –
Specifies the location for the file being decrypted. Use ${Transfer:UserName} or ${Transfer:UploadDate} in this field to parametrize the destination prefix by username or uploaded date.
Set the value of DestinationFileLocation to ${Transfer:UserName} to decrypt uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.
Set the value of DestinationFileLocation to ${Transfer:UploadDate} to decrypt uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.
Note
The system resolves UploadDate to a date format of YYYY-MM-DD, based on the date the file is uploaded in UTC.
S3FileLocation (dict) –
Specifies the details for the Amazon S3 file that’s being copied or decrypted.
Bucket (string) –
Specifies the S3 bucket for the customer input file.
Key (string) –
The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.
EfsFileLocation (dict) –
Specifies the details for the Amazon Elastic File System (Amazon EFS) file that’s being decrypted.
FileSystemId (string) –
The identifier of the file system, assigned by Amazon EFS.
Path (string) –
The pathname for the folder being used by a workflow.
WorkflowId (string) –
A unique identifier for the workflow.
Tags (list) –
Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.
(dict) –
Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group and assign the values Research and Accounting to that group.
Key (string) –
The name assigned to the tag that you create.
Value (string) –
Contains one or more values that you assigned to the key name you create.
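Because the tags come back as a list of key/value dicts, a small sketch (assuming response holds the describe_workflow output) that folds them into a plain dictionary can make lookups easier; if a key appears more than once, the last value wins.

tags = {tag['Key']: tag['Value'] for tag in response['Workflow'].get('Tags', [])}
print(tags.get('Group'))  # e.g. 'Research', if such a tag exists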
Exceptions
Transfer.Client.exceptions.ServiceUnavailableException
Transfer.Client.exceptions.InternalServiceError
Transfer.Client.exceptions.InvalidRequestException
Transfer.Client.exceptions.ResourceNotFoundException
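A minimal sketch of handling these exceptions through the generated exception classes on the client; the workflow ID is a hypothetical placeholder.

try:
    response = client.describe_workflow(WorkflowId='w-1234567890abcdef0')
except client.exceptions.ResourceNotFoundException:
    print('No workflow with that ID was found.')
except client.exceptions.InvalidRequestException as err:
    print('The request was not valid:', err)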