create_media_insights_pipeline_configuration

ChimeSDKMediaPipelines.Client.create_media_insights_pipeline_configuration(**kwargs)

Creates a media insights pipeline configuration: the structure that contains the static configurations for a media insights pipeline.

See also: AWS API Documentation

Request Syntax

response = client.create_media_insights_pipeline_configuration(
    MediaInsightsPipelineConfigurationName='string',
    ResourceAccessRoleArn='string',
    RealTimeAlertConfiguration={
        'Disabled': True|False,
        'Rules': [
            {
                'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                'KeywordMatchConfiguration': {
                    'RuleName': 'string',
                    'Keywords': [
                        'string',
                    ],
                    'Negate': True|False
                },
                'SentimentConfiguration': {
                    'RuleName': 'string',
                    'SentimentType': 'NEGATIVE',
                    'TimePeriod': 123
                },
                'IssueDetectionConfiguration': {
                    'RuleName': 'string'
                }
            },
        ]
    },
    Elements=[
        {
            'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
            'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyName': 'string',
                'VocabularyFilterName': 'string',
                'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                'LanguageModelName': 'string',
                'EnablePartialResultsStabilization': True|False,
                'PartialResultsStability': 'high'|'medium'|'low',
                'ContentIdentificationType': 'PII',
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'string',
                'FilterPartialResults': True|False,
                'PostCallAnalyticsSettings': {
                    'OutputLocation': 'string',
                    'DataAccessRoleArn': 'string',
                    'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                    'OutputEncryptionKMSKeyId': 'string'
                },
                'CallAnalyticsStreamCategories': [
                    'string',
                ]
            },
            'AmazonTranscribeProcessorConfiguration': {
                'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyName': 'string',
                'VocabularyFilterName': 'string',
                'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                'ShowSpeakerLabel': True|False,
                'EnablePartialResultsStabilization': True|False,
                'PartialResultsStability': 'high'|'medium'|'low',
                'ContentIdentificationType': 'PII',
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'string',
                'LanguageModelName': 'string',
                'FilterPartialResults': True|False,
                'IdentifyLanguage': True|False,
                'LanguageOptions': 'string',
                'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyNames': 'string',
                'VocabularyFilterNames': 'string'
            },
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'S3RecordingSinkConfiguration': {
                'Destination': 'string',
                'RecordingFileFormat': 'Wav'|'Opus'
            },
            'VoiceAnalyticsProcessorConfiguration': {
                'SpeakerSearchStatus': 'Enabled'|'Disabled',
                'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
            },
            'LambdaFunctionSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'SqsQueueSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'SnsTopicSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'VoiceEnhancementSinkConfiguration': {
                'Disabled': True|False
            }
        },
    ],
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    ClientRequestToken='string'
)
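
For example, the following minimal sketch (not part of the official reference; the configuration name, role ARN, and stream ARN are placeholders) creates a configuration with one Amazon Transcribe processor and one Kinesis Data Stream sink, then prints the resulting ARN.

import boto3

# Create the Chime SDK Media Pipelines client.
client = boto3.client('chime-sdk-media-pipelines')

response = client.create_media_insights_pipeline_configuration(
    MediaInsightsPipelineConfigurationName='MyTranscriptionConfiguration',           # placeholder name
    ResourceAccessRoleArn='arn:aws:iam::111122223333:role/ChimeMediaInsightsRole',   # placeholder role ARN
    Elements=[
        {
            # Processor: real-time transcription with Amazon Transcribe.
            'Type': 'AmazonTranscribeProcessor',
            'AmazonTranscribeProcessorConfiguration': {
                'LanguageCode': 'en-US',
                'ShowSpeakerLabel': True
            }
        },
        {
            # Sink: deliver the transcription insights to a Kinesis Data Stream.
            'Type': 'KinesisDataStreamSink',
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'arn:aws:kinesis:us-east-1:111122223333:stream/MyInsightsStream'  # placeholder stream ARN
            }
        }
    ]
)

print(response['MediaInsightsPipelineConfiguration']['MediaInsightsPipelineConfigurationArn'])
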
Parameters:
  • MediaInsightsPipelineConfigurationName (string) –

    [REQUIRED]

    The name of the media insights pipeline configuration.

  • ResourceAccessRoleArn (string) –

    [REQUIRED]

    The ARN of the role used by the service to access Amazon Web Services resources, including Transcribe and Transcribe Call Analytics, on the caller’s behalf.

  • RealTimeAlertConfiguration (dict) –

    The configuration settings for real-time alerts in a media insights pipeline configuration. An illustrative sketch that builds this structure appears after this parameter list.

    • Disabled (boolean) –

      Turns off real-time alerts.

    • Rules (list) –

      The rules in the alert. Rules specify the words or phrases that you want to be notified about.

      • (dict) –

        Specifies the words or phrases that trigger an alert.

        • Type (string) – [REQUIRED]

          The type of alert rule.

        • KeywordMatchConfiguration (dict) –

          Specifies the settings for matching the keywords in a real-time alert rule.

          • RuleName (string) – [REQUIRED]

            The name of the keyword match rule.

          • Keywords (list) – [REQUIRED]

            The keywords or phrases that you want to match.

            • (string) –

          • Negate (boolean) –

            Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

        • SentimentConfiguration (dict) –

          Specifies the settings for predicting sentiment in a real-time alert rule.

          • RuleName (string) – [REQUIRED]

            The name of the rule in the sentiment configuration.

          • SentimentType (string) – [REQUIRED]

            The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL. Note that the request syntax above lists only NEGATIVE as an accepted value.

          • TimePeriod (integer) – [REQUIRED]

            Specifies the analysis interval.

        • IssueDetectionConfiguration (dict) –

          Specifies the issue detection settings for a real-time alert rule.

          • RuleName (string) – [REQUIRED]

            The name of the issue detection rule.

  • Elements (list) –

    [REQUIRED]

    The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream.

    • (dict) –

      An element in a media insights pipeline configuration.

      • Type (string) – [REQUIRED]

        The element type.

      • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) –

        The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

        • LanguageCode (string) – [REQUIRED]

          The language code in the configuration.

        • VocabularyName (string) –

          Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

          If the language of the specified custom vocabulary doesn’t match the language identified in your media, the custom vocabulary is not applied to your transcription.

          For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterName (string) –

          Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

          If the language of the specified custom vocabulary filter doesn’t match the language identified in your media, the vocabulary filter is not applied to your transcription.

          For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterMethod (string) –

          Specifies how to apply a vocabulary filter to a transcript.

          To replace words with ***, choose mask.

          To delete words, choose remove.

          To flag words without changing them, choose tag.

        • LanguageModelName (string) –

          Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

          The language of the specified language model must match the language code specified in the transcription request. If the languages don’t match, the custom language model isn’t applied. Language mismatches don’t generate errors or warnings.

          For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        • EnablePartialResultsStabilization (boolean) –

          Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • PartialResultsStability (string) –

          Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

          Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • ContentIdentificationType (string) –

          Labels all personally identifiable information (PII) identified in your transcript.

          Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

          You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        • ContentRedactionType (string) –

          Redacts all personally identifiable information (PII) identified in your transcript.

          Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

          You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        • PiiEntityTypes (string) –

          Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you’d like, or you can select ALL.

          To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can’t include both.

          Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

          Length Constraints: Minimum length of 1. Maximum length of 300.

        • FilterPartialResults (boolean) –

          If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

        • PostCallAnalyticsSettings (dict) –

          The settings for a post-call analysis task in an analytics configuration.

          • OutputLocation (string) – [REQUIRED]

            The URL of the Amazon S3 bucket that contains the post-call data.

          • DataAccessRoleArn (string) – [REQUIRED]

            The ARN of the role used by Amazon Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

          • ContentRedactionOutput (string) –

            The content redaction output settings for a post-call analysis task.

          • OutputEncryptionKMSKeyId (string) –

            The ID of the KMS (Key Management Service) key used to encrypt the output.

        • CallAnalyticsStreamCategories (list) –

          By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

          • (string) –

      • AmazonTranscribeProcessorConfiguration (dict) –

        The transcription processor configuration settings in a media insights pipeline configuration element.

        • LanguageCode (string) –

          The language code that represents the language spoken in your audio.

          If you’re unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

          For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

        • VocabularyName (string) –

          The name of the custom vocabulary that you specified in your Call Analytics request.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterName (string) –

          The name of the custom vocabulary filter that you specified in your Call Analytics request.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterMethod (string) –

          The vocabulary filtering method used in your Call Analytics transcription.

        • ShowSpeakerLabel (boolean) –

          Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

          For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

        • EnablePartialResultsStabilization (boolean) –

          Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • PartialResultsStability (string) –

          The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

          Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • ContentIdentificationType (string) –

          Labels all personally identifiable information (PII) identified in your transcript.

          Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

          You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        • ContentRedactionType (string) –

          Redacts all personally identifiable information (PII) identified in your transcript.

          Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

          You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        • PiiEntityTypes (string) –

          The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you’d like, or you can select ALL.

          To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can’t include both.

          Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

          If you leave this parameter empty, the default behavior is equivalent to ALL.

        • LanguageModelName (string) –

          The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

          The language of the specified language model must match the language code you specify in your transcription request. If the languages don’t match, the custom language model isn’t applied. There are no errors or warnings associated with a language mismatch.

          For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        • FilterPartialResults (boolean) –

          If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

        • IdentifyLanguage (boolean) –

          Turns language identification on or off.

        • LanguageOptions (string) –

          The language options for the transcription, such as automatic language detection.

        • PreferredLanguage (string) –

          The preferred language for the transcription.

        • VocabularyNames (string) –

          The names of the custom vocabulary or vocabularies used during transcription.

        • VocabularyFilterNames (string) –

          The names of the custom vocabulary filter or filters used during transcription.

      • KinesisDataStreamSinkConfiguration (dict) –

        The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

        • InsightsTarget (string) –

          The ARN of the sink.

      • S3RecordingSinkConfiguration (dict) –

        The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

        • Destination (string) –

          The default URI of the Amazon S3 bucket used as the recording sink.

        • RecordingFileFormat (string) –

          The default file format for the media files sent to the Amazon S3 bucket.

      • VoiceAnalyticsProcessorConfiguration (dict) –

        The voice analytics configuration settings in a media insights pipeline configuration element.

        • SpeakerSearchStatus (string) –

          The status of the speaker search task.

        • VoiceToneAnalysisStatus (string) –

          The status of the voice tone analysis task.

      • LambdaFunctionSinkConfiguration (dict) –

        The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

        • InsightsTarget (string) –

          The ARN of the sink.

      • SqsQueueSinkConfiguration (dict) –

        The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

        • InsightsTarget (string) –

          The ARN of the SQS sink.

      • SnsTopicSinkConfiguration (dict) –

        The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

        • InsightsTarget (string) –

          The ARN of the SNS sink.

      • VoiceEnhancementSinkConfiguration (dict) –

        The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

        • Disabled (boolean) –

          Disables the VoiceEnhancementSinkConfiguration element.

  • Tags (list) –

    The tags assigned to the media insights pipeline configuration.

    • (dict) –

      A key/value pair that grants users access to meeting resources.

      • Key (string) – [REQUIRED]

        The key half of a tag.

      • Value (string) – [REQUIRED]

        The value half of a tag.

  • ClientRequestToken (string) –

    The unique identifier for the media insights pipeline configuration request.

    This field is autopopulated if not provided.
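
As referenced above, here is an illustrative sketch (with placeholder names and ARNs) that combines these parameters: a real-time alert configuration with a keyword match rule and a sentiment rule, attached to an Amazon Transcribe Call Analytics processor that redacts PII and a Kinesis Data Stream sink.

import boto3

client = boto3.client('chime-sdk-media-pipelines')

# Real-time alert rules: notify on specific phrases and on negative sentiment.
real_time_alerts = {
    'Disabled': False,
    'Rules': [
        {
            'Type': 'KeywordMatch',
            'KeywordMatchConfiguration': {
                'RuleName': 'EscalationKeywords',            # placeholder rule name
                'Keywords': ['cancel my account', 'speak to a supervisor'],
                'Negate': False
            }
        },
        {
            'Type': 'Sentiment',
            'SentimentConfiguration': {
                'RuleName': 'NegativeSentiment',             # placeholder rule name
                'SentimentType': 'NEGATIVE',
                'TimePeriod': 60                             # analysis interval
            }
        }
    ]
}

response = client.create_media_insights_pipeline_configuration(
    MediaInsightsPipelineConfigurationName='CallAnalyticsWithAlerts',                # placeholder name
    ResourceAccessRoleArn='arn:aws:iam::111122223333:role/ChimeMediaInsightsRole',   # placeholder role ARN
    RealTimeAlertConfiguration=real_time_alerts,
    Elements=[
        {
            'Type': 'AmazonTranscribeCallAnalyticsProcessor',
            'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                'LanguageCode': 'en-US',
                # ContentRedactionType and ContentIdentificationType are mutually exclusive.
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'SSN,CREDIT_DEBIT_NUMBER',
                'FilterPartialResults': True
            }
        },
        {
            'Type': 'KinesisDataStreamSink',
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'arn:aws:kinesis:us-east-1:111122223333:stream/MyInsightsStream'  # placeholder stream ARN
            }
        }
    ],
    Tags=[{'Key': 'Project', 'Value': 'ContactCenterDemo'}]   # optional, placeholder tag
)
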

Return type:

dict

Returns:

Response Syntax

{
    'MediaInsightsPipelineConfiguration': {
        'MediaInsightsPipelineConfigurationName': 'string',
        'MediaInsightsPipelineConfigurationArn': 'string',
        'ResourceAccessRoleArn': 'string',
        'RealTimeAlertConfiguration': {
            'Disabled': True|False,
            'Rules': [
                {
                    'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                    'KeywordMatchConfiguration': {
                        'RuleName': 'string',
                        'Keywords': [
                            'string',
                        ],
                        'Negate': True|False
                    },
                    'SentimentConfiguration': {
                        'RuleName': 'string',
                        'SentimentType': 'NEGATIVE',
                        'TimePeriod': 123
                    },
                    'IssueDetectionConfiguration': {
                        'RuleName': 'string'
                    }
                },
            ]
        },
        'Elements': [
            {
                'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'LanguageModelName': 'string',
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'FilterPartialResults': True|False,
                    'PostCallAnalyticsSettings': {
                        'OutputLocation': 'string',
                        'DataAccessRoleArn': 'string',
                        'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                        'OutputEncryptionKMSKeyId': 'string'
                    },
                    'CallAnalyticsStreamCategories': [
                        'string',
                    ]
                },
                'AmazonTranscribeProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'ShowSpeakerLabel': True|False,
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'LanguageModelName': 'string',
                    'FilterPartialResults': True|False,
                    'IdentifyLanguage': True|False,
                    'LanguageOptions': 'string',
                    'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyNames': 'string',
                    'VocabularyFilterNames': 'string'
                },
                'KinesisDataStreamSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'S3RecordingSinkConfiguration': {
                    'Destination': 'string',
                    'RecordingFileFormat': 'Wav'|'Opus'
                },
                'VoiceAnalyticsProcessorConfiguration': {
                    'SpeakerSearchStatus': 'Enabled'|'Disabled',
                    'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
                },
                'LambdaFunctionSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SqsQueueSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SnsTopicSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'VoiceEnhancementSinkConfiguration': {
                    'Disabled': True|False
                }
            },
        ],
        'MediaInsightsPipelineConfigurationId': 'string',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) –

    • MediaInsightsPipelineConfiguration (dict) –

      The configuration settings for the media insights pipeline.

      • MediaInsightsPipelineConfigurationName (string) –

        The name of the configuration.

      • MediaInsightsPipelineConfigurationArn (string) –

        The ARN of the configuration.

      • ResourceAccessRoleArn (string) –

        The ARN of the role used by the service to access Amazon Web Services resources.

      • RealTimeAlertConfiguration (dict) –

        Lists the rules that trigger a real-time alert.

        • Disabled (boolean) –

          Turns off real-time alerts.

        • Rules (list) –

          The rules in the alert. Rules specify the words or phrases that you want to be notified about.

          • (dict) –

            Specifies the words or phrases that trigger an alert.

            • Type (string) –

              The type of alert rule.

            • KeywordMatchConfiguration (dict) –

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName (string) –

                The name of the keyword match rule.

              • Keywords (list) –

                The keywords or phrases that you want to match.

                • (string) –

              • Negate (boolean) –

                Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

            • SentimentConfiguration (dict) –

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName (string) –

                The name of the rule in the sentiment configuration.

              • SentimentType (string) –

                The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL. Note that the response syntax above lists only NEGATIVE as a value.

              • TimePeriod (integer) –

                Specifies the analysis interval.

            • IssueDetectionConfiguration (dict) –

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName (string) –

                The name of the issue detection rule.

      • Elements (list) –

        The elements in the configuration.

        • (dict) –

          An element in a media insights pipeline configuration.

          • Type (string) –

            The element type.

          • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) –

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode (string) –

              The language code in the configuration.

            • VocabularyName (string) –

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn’t match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) –

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn’t match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) –

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with ***, choose mask.

              To delete words, choose remove.

              To flag words without changing them, choose tag.

            • LanguageModelName (string) –

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don’t match, the custom language model isn’t applied. Language mismatches don’t generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization (boolean) –

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability (string) –

              Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • ContentIdentificationType (string) –

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

            • ContentRedactionType (string) –

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

            • PiiEntityTypes (string) –

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you’d like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can’t include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults (boolean) –

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings (dict) –

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation (string) –

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn (string) –

                The ARN of the role used by Amazon Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

              • ContentRedactionOutput (string) –

                The content redaction output settings for a post-call analysis task.

              • OutputEncryptionKMSKeyId (string) –

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories (list) –

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

              • (string) –

          • AmazonTranscribeProcessorConfiguration (dict) –

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode (string) –

              The language code that represents the language spoken in your audio.

              If you’re unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

            • VocabularyName (string) –

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) –

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) –

              The vocabulary filtering method used in your Call Analytics transcription.

            • ShowSpeakerLabel (boolean) –

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization (boolean) –

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability (string) –

              The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • ContentIdentificationType (string) –

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

            • ContentRedactionType (string) –

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

            • PiiEntityTypes (string) –

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you’d like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can’t include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              If you leave this parameter empty, the default behavior is equivalent to ALL.

            • LanguageModelName (string) –

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don’t match, the custom language model isn’t applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • FilterPartialResults (boolean) –

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage (boolean) –

              Turns language identification on or off.

            • LanguageOptions (string) –

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage (string) –

              The preferred language for the transcription.

            • VocabularyNames (string) –

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames (string) –

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration (dict) –

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget (string) –

              The ARN of the sink.

          • S3RecordingSinkConfiguration (dict) –

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination (string) –

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat (string) –

              The default file format for the media files sent to the Amazon S3 bucket.

          • VoiceAnalyticsProcessorConfiguration (dict) –

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus (string) –

              The status of the speaker search task.

            • VoiceToneAnalysisStatus (string) –

              The status of the voice tone analysis task.

          • LambdaFunctionSinkConfiguration (dict) –

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget (string) –

              The ARN of the sink.

          • SqsQueueSinkConfiguration (dict) –

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget (string) –

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration (dict) –

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget (string) –

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration (dict) –

            The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

            • Disabled (boolean) –

              Disables the VoiceEnhancementSinkConfiguration element.

      • MediaInsightsPipelineConfigurationId (string) –

        The ID of the configuration.

      • CreatedTimestamp (datetime) –

        The time at which the configuration was created.

      • UpdatedTimestamp (datetime) –

        The time at which the configuration was last updated.
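
As a short illustration (assuming response holds the return value of the call shown earlier), the identifiers in the response can be read directly from the nested dictionary:

config = response['MediaInsightsPipelineConfiguration']

print('Name:   ', config['MediaInsightsPipelineConfigurationName'])
print('ARN:    ', config['MediaInsightsPipelineConfigurationArn'])
print('ID:     ', config['MediaInsightsPipelineConfigurationId'])
print('Created:', config['CreatedTimestamp'].isoformat())   # boto3 returns a datetime object
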

Exceptions

  • ChimeSDKMediaPipelines.Client.exceptions.BadRequestException

  • ChimeSDKMediaPipelines.Client.exceptions.ForbiddenException

  • ChimeSDKMediaPipelines.Client.exceptions.NotFoundException

  • ChimeSDKMediaPipelines.Client.exceptions.UnauthorizedClientException

  • ChimeSDKMediaPipelines.Client.exceptions.ThrottledClientException

  • ChimeSDKMediaPipelines.Client.exceptions.ResourceLimitExceededException

  • ChimeSDKMediaPipelines.Client.exceptions.ServiceUnavailableException

  • ChimeSDKMediaPipelines.Client.exceptions.ServiceFailureException
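
These modeled exceptions are available as attributes of client.exceptions, so the call can be wrapped as in the following sketch (argument values are placeholders; only a subset of the exceptions is handled here):

import boto3

client = boto3.client('chime-sdk-media-pipelines')

try:
    response = client.create_media_insights_pipeline_configuration(
        MediaInsightsPipelineConfigurationName='MyTranscriptionConfiguration',          # placeholder name
        ResourceAccessRoleArn='arn:aws:iam::111122223333:role/ChimeMediaInsightsRole',  # placeholder role ARN
        Elements=[
            {
                'Type': 'AmazonTranscribeProcessor',
                'AmazonTranscribeProcessorConfiguration': {'LanguageCode': 'en-US'}
            },
            {
                'Type': 'KinesisDataStreamSink',
                'KinesisDataStreamSinkConfiguration': {
                    'InsightsTarget': 'arn:aws:kinesis:us-east-1:111122223333:stream/MyInsightsStream'  # placeholder
                }
            }
        ]
    )
except client.exceptions.BadRequestException as err:
    # Malformed request, for example setting both ContentIdentificationType and ContentRedactionType.
    print('Bad request:', err.response['Error']['Message'])
except client.exceptions.ResourceLimitExceededException as err:
    # The request would exceed a resource limit for the account.
    print('Resource limit exceeded:', err.response['Error']['Message'])
except client.exceptions.ThrottledClientException as err:
    # The client exceeded its request rate limit; retry with backoff.
    print('Throttled:', err.response['Error']['Message'])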