list_utterance_analytics_data

LexModelsV2.Client.list_utterance_analytics_data(**kwargs)

Note

To use this API operation, your IAM role must have permissions to perform the ListAggregatedUtterances operation, which provides access to utterance-related analytics. See Viewing utterance statistics for the IAM policy to apply to the IAM role.

Retrieves a list of metadata for individual user utterances to your bot. The following fields are required:

  • startDateTime and endDateTime – Define a time range for which you want to retrieve results.

You can use the optional fields to organize the results in the following ways:

  • Use the filters field to filter the results and the sortBy field to specify the values by which to sort the results.

  • Use the maxResults field to limit the number of results to return in a single response and the nextToken field to return the next batch of results if the response does not return the full set of results.

See also: AWS API Documentation

Request Syntax

response = client.list_utterance_analytics_data(
    botId='string',
    startDateTime=datetime(2015, 1, 1),
    endDateTime=datetime(2015, 1, 1),
    sortBy={
        'name': 'UtteranceTimestamp',
        'order': 'Ascending'|'Descending'
    },
    filters=[
        {
            'name': 'BotAliasId'|'BotVersion'|'LocaleId'|'Modality'|'Channel'|'SessionId'|'OriginatingRequestId'|'UtteranceState'|'UtteranceText',
            'operator': 'EQ'|'GT'|'LT',
            'values': [
                'string',
            ]
        },
    ],
    maxResults=123,
    nextToken='string'
)
Parameters:
  • botId (string) –

    [REQUIRED]

    The identifier for the bot for which you want to retrieve utterance analytics.

  • startDateTime (datetime) –

    [REQUIRED]

    The date and time that marks the beginning of the range of time for which you want to see utterance analytics.

  • endDateTime (datetime) –

    [REQUIRED]

    The date and time that marks the end of the range of time for which you want to see utterance analytics.

  • sortBy (dict) –

    An object specifying the measure and method by which to sort the utterance analytics data.

    • name (string) – [REQUIRED]

      The measure by which to sort the utterance analytics data.

      • Count – The number of utterances.

      • UtteranceTimestamp – The date and time of the utterance.

    • order (string) – [REQUIRED]

      Specifies whether to sort the results in ascending or descending order.

  • filters (list) –

    A list of objects, each of which describes a condition by which you want to filter the results.

    • (dict) –

      Contains fields describing a condition by which to filter the utterances. The expression may be understood as name operator values. For example:

      • LocaleId EQ en – The locale is the string “en”.

      • UtteranceText CO help – The text of the utterance contains the string “help”.

      The operators that each filter supports are listed below:

      • BotAliasId – EQ.

      • BotVersion – EQ.

      • LocaleId – EQ.

      • Modality – EQ.

      • Channel – EQ.

      • SessionId – EQ.

      • OriginatingRequestId – EQ.

      • UtteranceState – EQ.

      • UtteranceText – EQ, CO.

      • name (string) – [REQUIRED]

        The category by which to filter the utterances. The descriptions for each option are as follows:

        • BotAliasId – The identifier of the bot alias.

        • BotVersion – The version of the bot.

        • LocaleId – The locale of the bot.

        • Modality – The modality of the session with the bot (audio, DTMF, or text).

        • Channel – The channel that the bot is integrated with.

        • SessionId – The identifier of the session with the bot.

        • OriginatingRequestId – The identifier of the first request in a session.

        • UtteranceState – The state of the utterance.

        • UtteranceText – The text in the utterance.

      • operator (string) – [REQUIRED]

        The operation by which to filter the category. The following operations are possible:

        • CO – Contains

        • EQ – Equals

        • GT – Greater than

        • LT – Less than

        The operators that each filter supports are listed below:

        • BotAliasId – EQ.

        • BotVersion – EQ.

        • LocaleId – EQ.

        • Modality – EQ.

        • Channel – EQ.

        • SessionId – EQ.

        • OriginatingRequestId – EQ.

        • UtteranceState – EQ.

        • UtteranceText – EQ, CO.

      • values (list) – [REQUIRED]

        An array containing the values of the category by which to apply the operator to filter the results. You can provide multiple values if the operator is EQ or CO. If you provide multiple values, you filter for results that equal or contain any of the values. For example, if the name, operator, and values fields are Modality, EQ, and [Speech, Text], the operation filters for results where the modality was either Speech or Text.

        • (string) –

  • maxResults (integer) – The maximum number of results to return in each page of results. If there are fewer results than the maximum page size, only the actual number of results are returned.

  • nextToken (string) –

    If the response from the ListUtteranceAnalyticsData operation contains more results than specified in the maxResults parameter, a token is returned in the response.

    Use the returned token in the nextToken parameter of a ListUtteranceAnalyticsData request to return the next page of results. For a complete set of results, call the ListUtteranceAnalyticsData operation until the nextToken returned in the response is null.
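As a minimal sketch of driving this pagination by hand, the helper below keeps passing the returned nextToken back into the request until it is absent. It assumes a configured boto3 lexv2-models client; the bot ID and date range in the commented invocation are placeholders, and the Modality filter mirrors the example given for the values field above.

```python
from datetime import datetime

def list_all_utterances(client, bot_id, start, end, filters=None):
    """Collect utterance records across every page via nextToken."""
    params = {
        'botId': bot_id,
        'startDateTime': start,
        'endDateTime': end,
        'sortBy': {'name': 'UtteranceTimestamp', 'order': 'Descending'},
    }
    if filters:
        params['filters'] = filters
    utterances = []
    while True:
        response = client.list_utterance_analytics_data(**params)
        utterances.extend(response.get('utterances', []))
        token = response.get('nextToken')
        if not token:  # no token means this was the last page
            break
        params['nextToken'] = token
    return utterances

# Hypothetical invocation (BOT_ID and the date range are placeholders):
# client = boto3.client('lexv2-models')
# rows = list_all_utterances(
#     client, 'BOT_ID',
#     datetime(2023, 1, 1), datetime(2023, 1, 31),
#     filters=[{'name': 'Modality', 'operator': 'EQ',
#               'values': ['Speech', 'Text']}],
# )
```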

Return type:

dict

Returns:

Response Syntax

{
    'botId': 'string',
    'nextToken': 'string',
    'utterances': [
        {
            'botAliasId': 'string',
            'botVersion': 'string',
            'localeId': 'string',
            'sessionId': 'string',
            'channel': 'string',
            'mode': 'Speech'|'Text'|'DTMF'|'MultiMode',
            'conversationStartTime': datetime(2015, 1, 1),
            'conversationEndTime': datetime(2015, 1, 1),
            'utterance': 'string',
            'utteranceTimestamp': datetime(2015, 1, 1),
            'audioVoiceDurationMillis': 123,
            'utteranceUnderstood': True|False,
            'inputType': 'string',
            'outputType': 'string',
            'associatedIntentName': 'string',
            'associatedSlotName': 'string',
            'intentState': 'Failed'|'Fulfilled'|'InProgress'|'ReadyForFulfillment'|'Waiting'|'FulfillmentInProgress',
            'dialogActionType': 'string',
            'botResponseAudioVoiceId': 'string',
            'slotsFilledInSession': 'string',
            'utteranceRequestId': 'string',
            'botResponses': [
                {
                    'content': 'string',
                    'contentType': 'PlainText'|'CustomPayload'|'SSML'|'ImageResponseCard',
                    'imageResponseCard': {
                        'title': 'string',
                        'subtitle': 'string',
                        'imageUrl': 'string',
                        'buttons': [
                            {
                                'text': 'string',
                                'value': 'string'
                            },
                        ]
                    }
                },
            ]
        },
    ]
}

Response Structure

  • (dict) –

    • botId (string) –

      The unique identifier of the bot that the utterances belong to.

    • nextToken (string) –

      If the response from the ListUtteranceAnalyticsData operation contains more results than specified in the maxResults parameter, a token is returned in the response.

      Use the returned token in the nextToken parameter of a ListUtteranceAnalyticsData request to return the next page of results. For a complete set of results, call the ListUtteranceAnalyticsData operation until the nextToken returned in the response is null.

    • utterances (list) –

      A list of objects, each of which contains information about an utterance in a user session with your bot.

      • (dict) –

        An object containing information about a specific utterance.

        • botAliasId (string) –

          The identifier of the alias of the bot that the utterance was made to.

        • botVersion (string) –

          The version of the bot that the utterance was made to.

        • localeId (string) –

          The locale of the bot that the utterance was made to.

        • sessionId (string) –

          The identifier of the session that the utterance was made in.

        • channel (string) –

          The channel that is integrated with the bot that the utterance was made to.

        • mode (string) –

          The mode of the session. The possible values are as follows:

          • Speech – The session consisted of spoken dialogue.

          • Text – The session consisted of written dialogue.

          • DTMF – The session consisted of touch-tone keypad (Dual Tone Multi-Frequency) key presses.

          • MultiMode – The session consisted of multiple modes.

        • conversationStartTime (datetime) –

          The date and time when the conversation in which the utterance took place began. A conversation is defined as a unique combination of a sessionId and an originatingRequestId.

        • conversationEndTime (datetime) –

          The date and time when the conversation in which the utterance took place ended. A conversation is defined as a unique combination of a sessionId and an originatingRequestId.

        • utterance (string) –

          The text of the utterance.

        • utteranceTimestamp (datetime) –

          The date and time when the utterance took place.

        • audioVoiceDurationMillis (integer) –

          The duration in milliseconds of the audio associated with the utterance.

        • utteranceUnderstood (boolean) –

          Specifies whether the bot understood the utterance or not.

        • inputType (string) –

          The input type of the utterance. The possible values are as follows:

          • PCM format: audio data must be in little-endian byte order.

            • audio/l16; rate=16000; channels=1

            • audio/x-l16; sample-rate=16000; channel-count=1

            • audio/lpcm; sample-rate=8000; sample-size-bits=16; channel-count=1; is-big-endian=false

          • Opus format

            • audio/x-cbr-opus-with-preamble;preamble-size=0;bit-rate=256000;frame-size-milliseconds=4

          • Text format

            • text/plain; charset=utf-8

        • outputType (string) –

          The output type of the utterance. The possible values are as follows:

          • audio/mpeg

          • audio/ogg

          • audio/pcm (16 kHz)

          • audio/ (defaults to mpeg)

          • text/plain; charset=utf-8

        • associatedIntentName (string) –

          The name of the intent that the utterance is associated to.

        • associatedSlotName (string) –

          The name of the slot that the utterance is associated to.

        • intentState (string) –

          The state of the intent that the utterance is associated to.

        • dialogActionType (string) –

          The type of dialog action that the utterance is associated to. See the type field in DialogAction for more information.

        • botResponseAudioVoiceId (string) –

          The identifier for the audio of the bot response.

        • slotsFilledInSession (string) –

          The slots that have been filled in the session by the time of the utterance.

        • utteranceRequestId (string) –

          The identifier of the request associated with the utterance.

        • botResponses (list) –

          A list of objects containing information about the bot response to the utterance.

          • (dict) –

            An object that contains a response to the utterance from the bot.

            • content (string) –

              The text of the response to the utterance from the bot.

            • contentType (string) –

              The type of the response. The following values are possible:

              • PlainText – A plain text string.

              • CustomPayload – A response string that you can customize to include data or metadata for your application.

              • SSML – A string that includes Speech Synthesis Markup Language to customize the audio response.

              • ImageResponseCard – An image with buttons that the customer can select. See ImageResponseCard for more information.

            • imageResponseCard (dict) –

              A card that is shown to the user by a messaging platform. You define the contents of the card; the platform displays it.

              When you use a response card, the response from the user is constrained to the text associated with a button on the card.

              • title (string) –

                The title to display on the response card. The format of the title is determined by the platform displaying the response card.

              • subtitle (string) –

                The subtitle to display on the response card. The format of the subtitle is determined by the platform displaying the response card.

              • imageUrl (string) –

                The URL of an image to display on the response card. The image URL must be publicly available so that the platform displaying the response card has access to the image.

              • buttons (list) –

                A list of buttons that should be displayed on the response card. The arrangement of the buttons is determined by the platform that displays the button.

                • (dict) –

                  Describes a button to use on a response card used to gather slot values from a user.

                  • text (string) –

                    The text that appears on the button. Use this to tell the user what value is returned when they choose this button.

                  • value (string) –

                    The value returned to Amazon Lex when the user chooses this button. This must be one of the slot values configured for the slot.
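As one illustration of consuming the response shape above (this helper is not part of the API), the sketch below tallies understood versus missed utterances and gathers the PlainText bot replies:

```python
def summarize_utterances(utterances):
    """Tally understood utterances and collect PlainText bot replies
    from the 'utterances' list of a ListUtteranceAnalyticsData response."""
    understood = sum(1 for u in utterances if u.get('utteranceUnderstood'))
    replies = [
        msg['content']
        for u in utterances
        for msg in u.get('botResponses', [])
        if msg.get('contentType') == 'PlainText' and 'content' in msg
    ]
    return {
        'total': len(utterances),
        'understood': understood,
        'missed': len(utterances) - understood,
        'plain_text_replies': replies,
    }
```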

Exceptions

  • LexModelsV2.Client.exceptions.ThrottlingException

  • LexModelsV2.Client.exceptions.ValidationException

  • LexModelsV2.Client.exceptions.PreconditionFailedException

  • LexModelsV2.Client.exceptions.ServiceQuotaExceededException

  • LexModelsV2.Client.exceptions.InternalServerException
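Since ThrottlingException is retryable, a hedged sketch of wrapping the call in exponential backoff is shown below; the attempt count and delay values are illustrative choices, not AWS guidance.

```python
import time

def call_with_backoff(client, max_attempts=4, base_delay=1.0, **kwargs):
    """Retry list_utterance_analytics_data on throttling with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return client.list_utterance_analytics_data(**kwargs)
        except client.exceptions.ThrottlingException:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

Note that boto3's built-in retry modes already retry throttling errors; an explicit loop like this is only needed when you want behavior the client configuration does not cover.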