A low-level client representing Amazon Polly:
import boto3
client = boto3.client('polly')
These are the available methods:
can_paginate()
delete_lexicon()
describe_voices()
generate_presigned_url()
get_lexicon()
get_paginator()
get_speech_synthesis_task()
get_waiter()
list_lexicons()
list_speech_synthesis_tasks()
put_lexicon()
start_speech_synthesis_task()
synthesize_speech()
can_paginate(operation_name): Check if an operation can be paginated.
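As a minimal sketch, you can check pagination support before deciding between a paginator and a plain call; the operation names are the snake_case method names used elsewhere in this document:
import boto3

client = boto3.client('polly')

# describe_voices and list_lexicons support pagination; synthesize_speech does not.
for operation in ['describe_voices', 'list_lexicons', 'synthesize_speech']:
    print(operation, client.can_paginate(operation))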
Deletes the specified pronunciation lexicon stored in an AWS Region. A lexicon which has been deleted is not available for speech synthesis, nor is it possible to retrieve it using either the GetLexicon or ListLexicons APIs.
For more information, see Managing Lexicons .
See also: AWS API Documentation
Request Syntax
response = client.delete_lexicon(
Name='string'
)
[REQUIRED]
The name of the lexicon to delete. Must be an existing lexicon in the region.
{}
Response Structure
Exceptions
Examples
Deletes a specified pronunciation lexicon stored in an AWS Region.
response = client.delete_lexicon(
Name='example',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
Returns the list of voices that are available for use when requesting speech synthesis. Each voice speaks a specified language, is either male or female, and is identified by an ID, which is the ASCII version of the voice name.
When synthesizing speech ( SynthesizeSpeech ), you provide the voice ID for the voice you want from the list of voices returned by DescribeVoices .
For example, you may want your news reader application to read news in a specific language while giving the user the option to choose the voice. Using the DescribeVoices operation, you can provide the user with a list of available voices to select from.
You can optionally specify a language code to filter the available voices. For example, if you specify en-US , the operation returns a list of all available US English voices.
This operation requires permissions to perform the polly:DescribeVoices action.
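As a sketch of the flow described above, the following snippet lists the US English voices and passes one of the returned IDs to SynthesizeSpeech; the language code and sample text are illustrative only:
import boto3

client = boto3.client('polly')

# Filter the catalogue down to US English voices.
voices = client.describe_voices(LanguageCode='en-US')['Voices']
for voice in voices:
    print(voice['Id'], voice['Gender'], voice['LanguageName'])

# Use one of the returned IDs when synthesizing speech.
audio = client.synthesize_speech(
    OutputFormat='mp3',
    Text='Hello from Amazon Polly.',
    VoiceId=voices[0]['Id'],
)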
See also: AWS API Documentation
Request Syntax
response = client.describe_voices(
Engine='standard'|'neural',
LanguageCode='arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
IncludeAdditionalLanguageCodes=True|False,
NextToken='string'
)
dict
Response Syntax
{
'Voices': [
{
'Gender': 'Female'|'Male',
'Id': 'Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
'LanguageName': 'string',
'Name': 'string',
'AdditionalLanguageCodes': [
'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
],
'SupportedEngines': [
'standard'|'neural',
]
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Voices (list) --
A list of voices with their properties.
(dict) --
Description of the voice.
Gender (string) --
Gender of the voice.
Id (string) --
Amazon Polly assigned voice ID. This is the ID that you specify when calling the SynthesizeSpeech operation.
LanguageCode (string) --
Language code of the voice.
LanguageName (string) --
Human readable name of the language in English.
Name (string) --
Name of the voice (for example, Salli, Kendra, etc.). This provides a human readable voice name that you might display in your application.
AdditionalLanguageCodes (list) --
Additional codes for languages available for the specified voice in addition to its default language.
For example, the default language for Aditi is Indian English (en-IN) because it was first used for that language. Since Aditi is bilingual and fluent in both Indian English and Hindi, this parameter would show the code hi-IN .
SupportedEngines (list) --
Specifies which engines (standard or neural) are supported by a given voice.
NextToken (string) --
The pagination token to use in the next request to continue the listing of voices. NextToken is returned only if the response is truncated.
Exceptions
Examples
Returns the list of voices that are available for use when requesting speech synthesis. Displayed languages are those within the specified language code. If no language code is specified, voices for all available languages are displayed.
response = client.describe_voices(
LanguageCode='en-GB',
)
print(response)
Expected Output:
{
'Voices': [
{
'Gender': 'Female',
'Id': 'Emma',
'LanguageCode': 'en-GB',
'LanguageName': 'British English',
'Name': 'Emma',
},
{
'Gender': 'Male',
'Id': 'Brian',
'LanguageCode': 'en-GB',
'LanguageName': 'British English',
'Name': 'Brian',
},
{
'Gender': 'Female',
'Id': 'Amy',
'LanguageCode': 'en-GB',
'LanguageName': 'British English',
'Name': 'Amy',
},
],
'ResponseMetadata': {
'...': '...',
},
}
generate_presigned_url(ClientMethod, Params=None, ExpiresIn=3600, HttpMethod=None): Generate a presigned URL given a client, its method, and arguments.
Returns the presigned URL as a string.
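A sketch of presigning a SynthesizeSpeech call so it can be fetched with a plain HTTP GET; the text, voice, and expiry shown here are illustrative assumptions:
import boto3

client = boto3.client('polly')

# Presign a synthesize_speech request.
url = client.generate_presigned_url(
    ClientMethod='synthesize_speech',
    Params={
        'OutputFormat': 'mp3',
        'Text': 'Hello from Amazon Polly.',
        'VoiceId': 'Joanna',
    },
    ExpiresIn=3600,  # seconds the URL stays valid
)
print(url)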
Returns the content of the specified pronunciation lexicon stored in an AWS Region. For more information, see Managing Lexicons .
See also: AWS API Documentation
Request Syntax
response = client.get_lexicon(
Name='string'
)
[REQUIRED]
Name of the lexicon.
{
'Lexicon': {
'Content': 'string',
'Name': 'string'
},
'LexiconAttributes': {
'Alphabet': 'string',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
'LastModified': datetime(2015, 1, 1),
'LexiconArn': 'string',
'LexemesCount': 123,
'Size': 123
}
}
Response Structure
(dict) --
Lexicon (dict) --
Lexicon object that provides name and the string content of the lexicon.
Content (string) --
Lexicon content in string format. The content of a lexicon must be in PLS format.
Name (string) --
Name of the lexicon.
LexiconAttributes (dict) --
Metadata of the lexicon, including phonetic alphabet used, language code, lexicon ARN, number of lexemes defined in the lexicon, and size of lexicon in bytes.
Alphabet (string) --
Phonetic alphabet used in the lexicon. Valid values are ipa and x-sampa.
LanguageCode (string) --
Language code that the lexicon applies to. A lexicon with a language code such as "en" would be applied to all English languages (en-GB, en-US, en-AUS, en-WLS, and so on).
LastModified (datetime) --
Date lexicon was last modified (a timestamp value).
LexiconArn (string) --
Amazon Resource Name (ARN) of the lexicon.
LexemesCount (integer) --
Number of lexemes in the lexicon.
Size (integer) --
Total size of the lexicon, in characters.
Exceptions
Examples
Returns the content of the specified pronunciation lexicon stored in an AWS Region.
response = client.get_lexicon(
Name='example',
)
print(response)
Expected Output:
{
'Lexicon': {
'Content': '<?xml version="1.0" encoding="UTF-8"?>\r\n<lexicon version="1.0" \r\n xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"\r\n xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" \r\n xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon \r\n http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"\r\n alphabet="ipa" \r\n xml:lang="en-US">\r\n <lexeme>\r\n <grapheme>W3C</grapheme>\r\n <alias>World Wide Web Consortium</alias>\r\n </lexeme>\r\n</lexicon>',
'Name': 'example',
},
'LexiconAttributes': {
'Alphabet': 'ipa',
'LanguageCode': 'en-US',
'LastModified': 1478542980.117,
'LexemesCount': 1,
'LexiconArn': 'arn:aws:polly:us-east-1:123456789012:lexicon/example',
'Size': 503,
},
'ResponseMetadata': {
'...': '...',
},
}
get_paginator(operation_name): Create a paginator for an operation.
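For example (the available paginators themselves are documented later in this section):
import boto3

client = boto3.client('polly')

# Paginator names match the snake_case operation names.
paginator = client.get_paginator('list_lexicons')
for page in paginator.paginate():
    for lexicon in page['Lexicons']:
        print(lexicon['Name'])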
Retrieves a specific SpeechSynthesisTask object based on its TaskID. This object contains information about the given speech synthesis task, including the status of the task, and a link to the S3 bucket containing the output of the task.
See also: AWS API Documentation
Request Syntax
response = client.get_speech_synthesis_task(
TaskId='string'
)
[REQUIRED]
The Amazon Polly generated identifier for a speech synthesis task.
{
'SynthesisTask': {
'Engine': 'standard'|'neural',
'TaskId': 'string',
'TaskStatus': 'scheduled'|'inProgress'|'completed'|'failed',
'TaskStatusReason': 'string',
'OutputUri': 'string',
'CreationTime': datetime(2015, 1, 1),
'RequestCharacters': 123,
'SnsTopicArn': 'string',
'LexiconNames': [
'string',
],
'OutputFormat': 'json'|'mp3'|'ogg_vorbis'|'pcm',
'SampleRate': 'string',
'SpeechMarkTypes': [
'sentence'|'ssml'|'viseme'|'word',
],
'TextType': 'ssml'|'text',
'VoiceId': 'Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR'
}
}
Response Structure
(dict) --
SynthesisTask (dict) --
SynthesisTask object that provides information from the requested task, including output format, creation time, task status, and so on.
Engine (string) --
Specifies the engine (standard or neural) for Amazon Polly to use when processing input text for speech synthesis. Using a voice that is not supported for the engine selected will result in an error.
TaskId (string) --
The Amazon Polly generated identifier for a speech synthesis task.
TaskStatus (string) --
Current status of the individual speech synthesis task.
TaskStatusReason (string) --
Reason for the current status of a specific speech synthesis task, including errors if the task has failed.
OutputUri (string) --
Pathway for the output speech file.
CreationTime (datetime) --
Timestamp for the time the synthesis task was started.
RequestCharacters (integer) --
Number of billable characters synthesized.
SnsTopicArn (string) --
ARN for the SNS topic optionally used for providing status notification for a speech synthesis task.
LexiconNames (list) --
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice.
OutputFormat (string) --
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
SampleRate (string) --
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
SpeechMarkTypes (list) --
The type of speech marks returned for the input text.
TextType (string) --
Specifies whether the input text is plain text or SSML. The default value is plain text.
VoiceId (string) --
Voice ID to use for the synthesis.
LanguageCode (string) --
Optional language code for a synthesis task. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
Exceptions
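This operation has no example in the reference above; a minimal polling sketch, assuming a task ID obtained from StartSpeechSynthesisTask, might look like this:
import time
import boto3

client = boto3.client('polly')

task_id = 'REPLACE_WITH_TASK_ID'  # placeholder: returned by start_speech_synthesis_task

# Poll until the asynchronous task leaves the scheduled/inProgress states.
while True:
    task = client.get_speech_synthesis_task(TaskId=task_id)['SynthesisTask']
    if task['TaskStatus'] in ('completed', 'failed'):
        break
    time.sleep(5)

print(task['TaskStatus'], task.get('OutputUri'))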
get_waiter(waiter_name): Returns an object that can wait for some condition.
Returns a list of pronunciation lexicons stored in an AWS Region. For more information, see Managing Lexicons .
See also: AWS API Documentation
Request Syntax
response = client.list_lexicons(
NextToken='string'
)
{
'Lexicons': [
{
'Name': 'string',
'Attributes': {
'Alphabet': 'string',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
'LastModified': datetime(2015, 1, 1),
'LexiconArn': 'string',
'LexemesCount': 123,
'Size': 123
}
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
Lexicons (list) --
A list of lexicon names and attributes.
(dict) --
Describes the content of the lexicon.
Name (string) --
Name of the lexicon.
Attributes (dict) --
Provides lexicon metadata.
Alphabet (string) --
Phonetic alphabet used in the lexicon. Valid values are ipa and x-sampa.
LanguageCode (string) --
Language code that the lexicon applies to. A lexicon with a language code such as "en" would be applied to all English languages (en-GB, en-US, en-AUS, en-WLS, and so on).
LastModified (datetime) --
Date lexicon was last modified (a timestamp value).
LexiconArn (string) --
Amazon Resource Name (ARN) of the lexicon.
LexemesCount (integer) --
Number of lexemes in the lexicon.
Size (integer) --
Total size of the lexicon, in characters.
NextToken (string) --
The pagination token to use in the next request to continue the listing of lexicons. NextToken is returned only if the response is truncated.
Exceptions
Examples
Returns a list of pronunciation lexicons stored in an AWS Region.
response = client.list_lexicons(
)
print(response)
Expected Output:
{
'Lexicons': [
{
'Attributes': {
'Alphabet': 'ipa',
'LanguageCode': 'en-US',
'LastModified': 1478542980.117,
'LexemesCount': 1,
'LexiconArn': 'arn:aws:polly:us-east-1:123456789012:lexicon/example',
'Size': 503,
},
'Name': 'example',
},
],
'ResponseMetadata': {
'...': '...',
},
}
Returns a list of SpeechSynthesisTask objects ordered by their creation date. This operation can filter the tasks by their status, for example, allowing users to list only tasks that are completed.
See also: AWS API Documentation
Request Syntax
response = client.list_speech_synthesis_tasks(
MaxResults=123,
NextToken='string',
Status='scheduled'|'inProgress'|'completed'|'failed'
)
dict
Response Syntax
{
'NextToken': 'string',
'SynthesisTasks': [
{
'Engine': 'standard'|'neural',
'TaskId': 'string',
'TaskStatus': 'scheduled'|'inProgress'|'completed'|'failed',
'TaskStatusReason': 'string',
'OutputUri': 'string',
'CreationTime': datetime(2015, 1, 1),
'RequestCharacters': 123,
'SnsTopicArn': 'string',
'LexiconNames': [
'string',
],
'OutputFormat': 'json'|'mp3'|'ogg_vorbis'|'pcm',
'SampleRate': 'string',
'SpeechMarkTypes': [
'sentence'|'ssml'|'viseme'|'word',
],
'TextType': 'ssml'|'text',
'VoiceId': 'Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR'
},
]
}
Response Structure
(dict) --
NextToken (string) --
An opaque pagination token returned from the previous List operation in this request. If present, this indicates where to continue the listing.
SynthesisTasks (list) --
List of SynthesisTask objects that provides information from the specified task in the list request, including output format, creation time, task status, and so on.
(dict) --
SynthesisTask object that provides information about a speech synthesis task.
Engine (string) --
Specifies the engine (standard or neural ) for Amazon Polly to use when processing input text for speech synthesis. Using a voice that is not supported for the engine selected will result in an error.
TaskId (string) --
The Amazon Polly generated identifier for a speech synthesis task.
TaskStatus (string) --
Current status of the individual speech synthesis task.
TaskStatusReason (string) --
Reason for the current status of a specific speech synthesis task, including errors if the task has failed.
OutputUri (string) --
Pathway for the output speech file.
CreationTime (datetime) --
Timestamp for the time the synthesis task was started.
RequestCharacters (integer) --
Number of billable characters synthesized.
SnsTopicArn (string) --
ARN for the SNS topic optionally used for providing status notification for a speech synthesis task.
LexiconNames (list) --
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice.
OutputFormat (string) --
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
SampleRate (string) --
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
SpeechMarkTypes (list) --
The type of speech marks returned for the input text.
TextType (string) --
Specifies whether the input text is plain text or SSML. The default value is plain text.
VoiceId (string) --
Voice ID to use for the synthesis.
LanguageCode (string) --
Optional language code for a synthesis task. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
Exceptions
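A short sketch, for illustration, listing only completed tasks:
import boto3

client = boto3.client('polly')

# List up to 10 completed tasks; pass NextToken to continue a truncated listing.
response = client.list_speech_synthesis_tasks(
    MaxResults=10,
    Status='completed',
)
for task in response['SynthesisTasks']:
    print(task['TaskId'], task['TaskStatus'], task.get('OutputUri'))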
Stores a pronunciation lexicon in an AWS Region. If a lexicon with the same name already exists in the region, it is overwritten by the new lexicon. Lexicon operations have eventual consistency, therefore, it might take some time before the lexicon is available to the SynthesizeSpeech operation.
For more information, see Managing Lexicons .
See also: AWS API Documentation
Request Syntax
response = client.put_lexicon(
Name='string',
Content='string'
)
[REQUIRED]
Name of the lexicon. The name must follow the regular expression format [0-9A-Za-z]{1,20}. That is, the name is a case-sensitive alphanumeric string up to 20 characters long.
[REQUIRED]
Content of the PLS lexicon as string data.
dict
Response Syntax
{}
Response Structure
Exceptions
Examples
Stores a pronunciation lexicon in an AWS Region.
response = client.put_lexicon(
Content='file://example.pls',
Name='W3C',
)
print(response)
Expected Output:
{
'ResponseMetadata': {
'...': '...',
},
}
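The Content value in the example above appears to be written in the AWS CLI's file:// style; with boto3 the Content parameter takes the PLS document itself as a string, so in practice you would typically read the file first. A sketch, assuming a local example.pls file:
import boto3

client = boto3.client('polly')

# Read the PLS document and store it under a lexicon name.
with open('example.pls', encoding='utf-8') as f:
    pls_content = f.read()

client.put_lexicon(Name='W3C', Content=pls_content)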
Allows the creation of an asynchronous synthesis task by starting a new SpeechSynthesisTask. This operation requires all the standard information needed for speech synthesis, plus the name of an Amazon S3 bucket for the service to store the output of the synthesis task, and two optional parameters (OutputS3KeyPrefix and SnsTopicArn). Once the synthesis task is created, this operation returns a SpeechSynthesisTask object, which includes an identifier of this task as well as the current status.
See also: AWS API Documentation
Request Syntax
response = client.start_speech_synthesis_task(
Engine='standard'|'neural',
LanguageCode='arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
LexiconNames=[
'string',
],
OutputFormat='json'|'mp3'|'ogg_vorbis'|'pcm',
OutputS3BucketName='string',
OutputS3KeyPrefix='string',
SampleRate='string',
SnsTopicArn='string',
SpeechMarkTypes=[
'sentence'|'ssml'|'viseme'|'word',
],
Text='string',
TextType='ssml'|'text',
VoiceId='Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu'
)
Optional language code for the Speech Synthesis request. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice.
[REQUIRED]
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
[REQUIRED]
Amazon S3 bucket name to which the output file will be saved.
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
The type of speech marks returned for the input text.
[REQUIRED]
The input text to synthesize. If you specify ssml as the TextType, follow the SSML format for the input text.
[REQUIRED]
Voice ID to use for the synthesis.
dict
Response Syntax
{
'SynthesisTask': {
'Engine': 'standard'|'neural',
'TaskId': 'string',
'TaskStatus': 'scheduled'|'inProgress'|'completed'|'failed',
'TaskStatusReason': 'string',
'OutputUri': 'string',
'CreationTime': datetime(2015, 1, 1),
'RequestCharacters': 123,
'SnsTopicArn': 'string',
'LexiconNames': [
'string',
],
'OutputFormat': 'json'|'mp3'|'ogg_vorbis'|'pcm',
'SampleRate': 'string',
'SpeechMarkTypes': [
'sentence'|'ssml'|'viseme'|'word',
],
'TextType': 'ssml'|'text',
'VoiceId': 'Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR'
}
}
Response Structure
(dict) --
SynthesisTask (dict) --
SynthesisTask object that provides information and attributes about a newly submitted speech synthesis task.
Engine (string) --
Specifies the engine (standard or neural ) for Amazon Polly to use when processing input text for speech synthesis. Using a voice that is not supported for the engine selected will result in an error.
TaskId (string) --
The Amazon Polly generated identifier for a speech synthesis task.
TaskStatus (string) --
Current status of the individual speech synthesis task.
TaskStatusReason (string) --
Reason for the current status of a specific speech synthesis task, including errors if the task has failed.
OutputUri (string) --
Pathway for the output speech file.
CreationTime (datetime) --
Timestamp for the time the synthesis task was started.
RequestCharacters (integer) --
Number of billable characters synthesized.
SnsTopicArn (string) --
ARN for the SNS topic optionally used for providing status notification for a speech synthesis task.
LexiconNames (list) --
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice.
OutputFormat (string) --
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
SampleRate (string) --
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
SpeechMarkTypes (list) --
The type of speech marks returned for the input text.
TextType (string) --
Specifies whether the input text is plain text or SSML. The default value is plain text.
VoiceId (string) --
Voice ID to use for the synthesis.
LanguageCode (string) --
Optional language code for a synthesis task. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
Exceptions
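There is no example in the reference above; a minimal sketch, assuming a bucket you own (the name my-polly-output is hypothetical), might look like this:
import boto3

client = boto3.client('polly')

# Start an asynchronous task; the audio is written to the S3 bucket.
response = client.start_speech_synthesis_task(
    OutputFormat='mp3',
    OutputS3BucketName='my-polly-output',  # hypothetical bucket name
    OutputS3KeyPrefix='news/',             # optional key prefix
    Text='All Gaul is divided into three parts.',
    VoiceId='Joanna',
)
task = response['SynthesisTask']
print(task['TaskId'], task['TaskStatus'], task['OutputUri'])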
Synthesizes UTF-8 input, plain text or SSML, to a stream of bytes. SSML input must be valid, well-formed SSML. Some alphabets might not be available with all the voices (for example, Cyrillic might not be read at all by English voices) unless phoneme mapping is used. For more information, see How it Works .
See also: AWS API Documentation
Request Syntax
response = client.synthesize_speech(
Engine='standard'|'neural',
LanguageCode='arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
LexiconNames=[
'string',
],
OutputFormat='json'|'mp3'|'ogg_vorbis'|'pcm',
SampleRate='string',
SpeechMarkTypes=[
'sentence'|'ssml'|'viseme'|'word',
],
Text='string',
TextType='ssml'|'text',
VoiceId='Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu'
)
Optional language code for the Synthesize Speech request. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice. For information about storing lexicons, see PutLexicon .
[REQUIRED]
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
When pcm is used, the content returned is audio/pcm in a signed 16-bit, 1 channel (mono), little-endian format.
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
The type of speech marks returned for the input text.
[REQUIRED]
Input text to synthesize. If you specify ssml as the TextType , follow the SSML format for the input text.
[REQUIRED]
Voice ID to use for the synthesis. You can get a list of available voice IDs by calling the DescribeVoices operation.
dict
Response Syntax
{
'AudioStream': StreamingBody(),
'ContentType': 'string',
'RequestCharacters': 123
}
Response Structure
(dict) --
AudioStream (StreamingBody) --
Stream containing the synthesized speech.
ContentType (string) --
Specifies the type of audio stream. This should reflect the OutputFormat parameter in your request.
RequestCharacters (integer) --
Number of characters synthesized.
Exceptions
Examples
Synthesizes plain text or SSML into a file of human-like speech.
response = client.synthesize_speech(
LexiconNames=[
'example',
],
OutputFormat='mp3',
SampleRate='8000',
Text='All Gaul is divided into three parts',
TextType='text',
VoiceId='Joanna',
)
print(response)
Expected Output:
{
'AudioStream': 'TEXT',
'ContentType': 'audio/mpeg',
'RequestCharacters': 37,
'ResponseMetadata': {
'...': '...',
},
}
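In a real response the AudioStream is a StreamingBody rather than the 'TEXT' placeholder shown above; a common follow-up is to write it to a file (the output filename here is an arbitrary choice):
import boto3

client = boto3.client('polly')

response = client.synthesize_speech(
    OutputFormat='mp3',
    Text='All Gaul is divided into three parts',
    VoiceId='Joanna',
)

# StreamingBody.read() returns the MP3 bytes.
with open('speech.mp3', 'wb') as f:
    f.write(response['AudioStream'].read())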
The available paginators are:
Polly.Paginator.DescribeVoices
Polly.Paginator.ListLexicons
Polly.Paginator.ListSpeechSynthesisTasks
paginator = client.get_paginator('describe_voices')
Creates an iterator that will paginate through responses from Polly.Client.describe_voices().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Engine='standard'|'neural',
LanguageCode='arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
IncludeAdditionalLanguageCodes=True|False,
PaginationConfig={
'MaxItems': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'Voices': [
{
'Gender': 'Female'|'Male',
'Id': 'Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
'LanguageName': 'string',
'Name': 'string',
'AdditionalLanguageCodes': [
'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
],
'SupportedEngines': [
'standard'|'neural',
]
},
],
}
Response Structure
(dict) --
Voices (list) --
A list of voices with their properties.
(dict) --
Description of the voice.
Gender (string) --
Gender of the voice.
Id (string) --
Amazon Polly assigned voice ID. This is the ID that you specify when calling the SynthesizeSpeech operation.
LanguageCode (string) --
Language code of the voice.
LanguageName (string) --
Human readable name of the language in English.
Name (string) --
Name of the voice (for example, Salli, Kendra, etc.). This provides a human readable voice name that you might display in your application.
AdditionalLanguageCodes (list) --
Additional codes for languages available for the specified voice in addition to its default language.
For example, the default language for Aditi is Indian English (en-IN) because it was first used for that language. Since Aditi is bilingual and fluent in both Indian English and Hindi, this parameter would show the code hi-IN .
SupportedEngines (list) --
Specifies which engines (standard or neural) are supported by a given voice.
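A short usage sketch for this paginator; the filter arguments are illustrative:
import boto3

client = boto3.client('polly')
paginator = client.get_paginator('describe_voices')

# The paginator handles NextToken internally and yields one page per response.
for page in paginator.paginate(IncludeAdditionalLanguageCodes=True):
    for voice in page['Voices']:
        print(voice['Id'], voice['LanguageCode'], voice.get('AdditionalLanguageCodes', []))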
paginator = client.get_paginator('list_lexicons')
Creates an iterator that will paginate through responses from Polly.Client.list_lexicons().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
A token to specify where to start paginating. This is the NextToken from a previous response.
{
'Lexicons': [
{
'Name': 'string',
'Attributes': {
'Alphabet': 'string',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR',
'LastModified': datetime(2015, 1, 1),
'LexiconArn': 'string',
'LexemesCount': 123,
'Size': 123
}
},
],
}
Response Structure
(dict) --
Lexicons (list) --
A list of lexicon names and attributes.
(dict) --
Describes the content of the lexicon.
Name (string) --
Name of the lexicon.
Attributes (dict) --
Provides lexicon metadata.
Alphabet (string) --
Phonetic alphabet used in the lexicon. Valid values are ipa and x-sampa.
LanguageCode (string) --
Language code that the lexicon applies to. A lexicon with a language code such as "en" would be applied to all English languages (en-GB, en-US, en-AUS, en-WLS, and so on).
LastModified (datetime) --
Date lexicon was last modified (a timestamp value).
LexiconArn (string) --
Amazon Resource Name (ARN) of the lexicon.
LexemesCount (integer) --
Number of lexemes in the lexicon.
Size (integer) --
Total size of the lexicon, in characters.
paginator = client.get_paginator('list_speech_synthesis_tasks')
Creates an iterator that will paginate through responses from Polly.Client.list_speech_synthesis_tasks().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Status='scheduled'|'inProgress'|'completed'|'failed',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'SynthesisTasks': [
{
'Engine': 'standard'|'neural',
'TaskId': 'string',
'TaskStatus': 'scheduled'|'inProgress'|'completed'|'failed',
'TaskStatusReason': 'string',
'OutputUri': 'string',
'CreationTime': datetime(2015, 1, 1),
'RequestCharacters': 123,
'SnsTopicArn': 'string',
'LexiconNames': [
'string',
],
'OutputFormat': 'json'|'mp3'|'ogg_vorbis'|'pcm',
'SampleRate': 'string',
'SpeechMarkTypes': [
'sentence'|'ssml'|'viseme'|'word',
],
'TextType': 'ssml'|'text',
'VoiceId': 'Aditi'|'Amy'|'Astrid'|'Bianca'|'Brian'|'Camila'|'Carla'|'Carmen'|'Celine'|'Chantal'|'Conchita'|'Cristiano'|'Dora'|'Emma'|'Enrique'|'Ewa'|'Filiz'|'Geraint'|'Giorgio'|'Gwyneth'|'Hans'|'Ines'|'Ivy'|'Jacek'|'Jan'|'Joanna'|'Joey'|'Justin'|'Karl'|'Kendra'|'Kimberly'|'Lea'|'Liv'|'Lotte'|'Lucia'|'Lupe'|'Mads'|'Maja'|'Marlene'|'Mathieu'|'Matthew'|'Maxim'|'Mia'|'Miguel'|'Mizuki'|'Naja'|'Nicole'|'Penelope'|'Raveena'|'Ricardo'|'Ruben'|'Russell'|'Salli'|'Seoyeon'|'Takumi'|'Tatyana'|'Vicki'|'Vitoria'|'Zeina'|'Zhiyu',
'LanguageCode': 'arb'|'cmn-CN'|'cy-GB'|'da-DK'|'de-DE'|'en-AU'|'en-GB'|'en-GB-WLS'|'en-IN'|'en-US'|'es-ES'|'es-MX'|'es-US'|'fr-CA'|'fr-FR'|'is-IS'|'it-IT'|'ja-JP'|'hi-IN'|'ko-KR'|'nb-NO'|'nl-NL'|'pl-PL'|'pt-BR'|'pt-PT'|'ro-RO'|'ru-RU'|'sv-SE'|'tr-TR'
},
]
}
Response Structure
(dict) --
SynthesisTasks (list) --
List of SynthesisTask objects that provides information from the specified task in the list request, including output format, creation time, task status, and so on.
(dict) --
SynthesisTask object that provides information about a speech synthesis task.
Engine (string) --
Specifies the engine (standard or neural ) for Amazon Polly to use when processing input text for speech synthesis. Using a voice that is not supported for the engine selected will result in an error.
TaskId (string) --
The Amazon Polly generated identifier for a speech synthesis task.
TaskStatus (string) --
Current status of the individual speech synthesis task.
TaskStatusReason (string) --
Reason for the current status of a specific speech synthesis task, including errors if the task has failed.
OutputUri (string) --
Pathway for the output speech file.
CreationTime (datetime) --
Timestamp for the time the synthesis task was started.
RequestCharacters (integer) --
Number of billable characters synthesized.
SnsTopicArn (string) --
ARN for the SNS topic optionally used for providing status notification for a speech synthesis task.
LexiconNames (list) --
List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice.
OutputFormat (string) --
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
SampleRate (string) --
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
SpeechMarkTypes (list) --
The type of speech marks returned for the input text.
TextType (string) --
Specifies whether the input text is plain text or SSML. The default value is plain text.
VoiceId (string) --
Voice ID to use for the synthesis.
LanguageCode (string) --
Optional language code for a synthesis task. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
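A short usage sketch for this paginator; the status filter and item cap are illustrative:
import boto3

client = boto3.client('polly')
paginator = client.get_paginator('list_speech_synthesis_tasks')

# Limit to completed tasks and cap the total number of items returned.
for page in paginator.paginate(Status='completed', PaginationConfig={'MaxItems': 50}):
    for task in page['SynthesisTasks']:
        print(task['TaskId'], task['OutputUri'])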