
update_prompt

AgentsforBedrock.Client.update_prompt(**kwargs)

Modifies a prompt in your prompt library. Include both fields that you want to keep and fields that you want to replace. For more information, see Prompt management in Amazon Bedrock and Edit prompts in your prompt library in the Amazon Bedrock User Guide.
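Because the request replaces fields rather than patching them, a common pattern is to read the current prompt first and pass back the fields you want to keep alongside the ones you are changing. The following is a minimal sketch of that read-modify-write flow, assuming a prompt that already has a default variant and at least one variant; the prompt identifier and the new description are placeholders.

import boto3

client = boto3.client("bedrock-agent")

# Fetch the existing (DRAFT) prompt so unchanged fields can be passed back.
current = client.get_prompt(promptIdentifier="PROMPT1234")  # placeholder ID

response = client.update_prompt(
    promptIdentifier=current["id"],
    name=current["name"],                     # required; kept unchanged
    description="Updated description",        # the field being replaced
    defaultVariant=current["defaultVariant"], # kept unchanged
    variants=current["variants"],             # pass existing variants back to keep them
)
print(response["version"])                    # updates apply to the DRAFT version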

See also: AWS API Documentation

Request Syntax

response = client.update_prompt(
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    promptIdentifier='string',
    variants=[
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ]
)
Parameters:
  • customerEncryptionKeyArn (string) – The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.

  • defaultVariant (string) – The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

  • description (string) – A description for the prompt.

  • name (string) –

    [REQUIRED]

    A name for the prompt.

  • promptIdentifier (string) –

    [REQUIRED]

    The unique identifier of the prompt.

  • variants (list) –

    A list of objects, each containing details about a variant of the prompt.

    • (dict) –

      Contains details about a variant of the prompt.

      • inferenceConfiguration (dict) –

        Contains inference configurations for the prompt variant.

        Note

        This is a Tagged Union structure. Only one of the following top level keys can be set: text.

        • text (dict) –

          Contains inference configurations for a text prompt.

          • maxTokens (integer) –

            The maximum number of tokens to return in the response.

          • stopSequences (list) –

            A list of strings that define sequences after which the model will stop generating.

            • (string) –

          • temperature (float) –

            Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

          • topK (integer) –

            The number of most-likely candidates that the model considers for the next token during generation.

          • topP (float) –

            The percentage of most-likely candidates that the model considers for the next token.

      • modelId (string) –

        The unique identifier of the model with which to run inference on the prompt.

      • name (string) – [REQUIRED]

        The name of the prompt variant.

      • templateConfiguration (dict) –

        Contains configurations for the prompt template.

        Note

        This is a Tagged Union structure. Only one of the following top level keys can be set: text.

        • text (dict) –

          Contains configurations for the text in a message for a prompt.

          • inputVariables (list) –

            An array of the variables in the prompt template.

            • (dict) –

              Contains information about a variable in the prompt.

              • name (string) –

                The name of the variable.

          • text (string) – [REQUIRED]

            The message for the prompt.

      • templateType (string) – [REQUIRED]

        The type of prompt template to use.
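The sketch below shows one way to assemble the variants parameter described above and pass it to update_prompt. The prompt identifier, prompt name, variant name, model ID, and template text are illustrative placeholders, and the double-brace variable syntax is an assumption based on Bedrock prompt management conventions.

import boto3

client = boto3.client("bedrock-agent")

variant = {
    "name": "variantOne",
    "templateType": "TEXT",
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder model ID
    "inferenceConfiguration": {
        "text": {                              # tagged union: only "text" may be set
            "maxTokens": 512,
            "temperature": 0.2,                # lower value -> more predictable output
            "topP": 0.9,
            "stopSequences": ["\n\nHuman:"],
        }
    },
    "templateConfiguration": {
        "text": {                              # tagged union: only "text" may be set
            "text": "Summarize the following text: {{input_text}}",
            "inputVariables": [{"name": "input_text"}],
        }
    },
}

response = client.update_prompt(
    promptIdentifier="PROMPT1234",             # placeholder identifier
    name="MyPrompt",
    defaultVariant="variantOne",               # must match a variant name
    variants=[variant],
)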

Return type:

dict

Returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}

Response Structure

  • (dict) –

    • arn (string) –

      The Amazon Resource Name (ARN) of the prompt.

    • createdAt (datetime) –

      The time at which the prompt was created.

    • customerEncryptionKeyArn (string) –

      The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.

    • defaultVariant (string) –

      The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

    • description (string) –

      The description of the prompt.

    • id (string) –

      The unique identifier of the prompt.

    • name (string) –

      The name of the prompt.

    • updatedAt (datetime) –

      The time at which the prompt was last updated.

    • variants (list) –

      A list of objects, each containing details about a variant of the prompt.

      • (dict) –

        Contains details about a variant of the prompt.

        • inferenceConfiguration (dict) –

          Contains inference configurations for the prompt variant.

          Note

          This is a Tagged Union structure. Only one of the following top level keys will be set: text. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows:

          'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}
          
          • text (dict) –

            Contains inference configurations for a text prompt.

            • maxTokens (integer) –

              The maximum number of tokens to return in the response.

            • stopSequences (list) –

              A list of strings that define sequences after which the model will stop generating.

              • (string) –

            • temperature (float) –

              Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

            • topK (integer) –

              The number of most-likely candidates that the model considers for the next token during generation.

            • topP (float) –

              The percentage of most-likely candidates that the model considers for the next token.

        • modelId (string) –

          The unique identifier of the model with which to run inference on the prompt.

        • name (string) –

          The name of the prompt variant.

        • templateConfiguration (dict) –

          Contains configurations for the prompt template.

          Note

          This is a Tagged Union structure. Only one of the following top level keys will be set: text. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows:

          'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}
          
          • text (dict) –

            Contains configurations for the text in a message for a prompt.

            • inputVariables (list) –

              An array of the variables in the prompt template.

              • (dict) –

                Contains information about a variable in the prompt.

                • name (string) –

                  The name of the variable.

            • text (string) –

              The message for the prompt.

        • templateType (string) –

          The type of prompt template to use.

    • version (string) –

      The version of the prompt. When you update a prompt, the version updated is the DRAFT version.
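Because inferenceConfiguration and templateConfiguration are tagged unions, a response may contain SDK_UNKNOWN_MEMBER instead of text if the service returns a member that this SDK version does not model. A defensive way to read the returned variants is sketched below; the function is hypothetical and simply takes the dict returned by update_prompt.

def describe_variants(response):
    # response is the dict returned by update_prompt.
    for variant in response.get("variants", []):
        template = variant.get("templateConfiguration", {})
        if "text" in template:
            print(variant["name"], "->", template["text"]["text"])
        elif "SDK_UNKNOWN_MEMBER" in template:
            # The service returned a member this SDK version does not model yet.
            print(variant["name"], "-> unknown template type:",
                  template["SDK_UNKNOWN_MEMBER"]["name"])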

Exceptions

  • AgentsforBedrock.Client.exceptions.ThrottlingException

  • AgentsforBedrock.Client.exceptions.AccessDeniedException

  • AgentsforBedrock.Client.exceptions.ValidationException

  • AgentsforBedrock.Client.exceptions.InternalServerException

  • AgentsforBedrock.Client.exceptions.ResourceNotFoundException

  • AgentsforBedrock.Client.exceptions.ConflictException

  • AgentsforBedrock.Client.exceptions.ServiceQuotaExceededException
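These errors are raised as modeled exceptions on the client, so they can be caught through client.exceptions. A sketch of handling a few of them follows; the arguments passed to update_prompt are placeholders.

import boto3

client = boto3.client("bedrock-agent")

try:
    client.update_prompt(
        promptIdentifier="PROMPT1234",   # placeholder identifier
        name="MyPrompt",
    )
except client.exceptions.ResourceNotFoundException:
    print("No prompt with that identifier exists.")
except client.exceptions.ConflictException:
    print("The prompt changed concurrently; re-read it and retry with current fields.")
except client.exceptions.ThrottlingException:
    print("Request was throttled; back off and retry.")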