create_prompt

- AgentsforBedrock.Client.create_prompt(**kwargs)
- Creates a prompt in your prompt library that you can add to a flow. For more information, see Prompt management in Amazon Bedrock, Create a prompt using Prompt management, and Prompt flows in Amazon Bedrock in the Amazon Bedrock User Guide.
- See also: AWS API Documentation
- Request Syntax

```python
response = client.create_prompt(
    clientToken='string',
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    tags={
        'string': 'string'
    },
    variants=[
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ]
)
```

- Parameters:
- clientToken (string) – A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency. This field is autopopulated if not provided.
- customerEncryptionKeyArn (string) – The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt. 
- defaultVariant (string) – The name of the default variant for the prompt. This value must match the `name` field in the relevant PromptVariant object.
- description (string) – A description for the prompt. 
- name (string) – [REQUIRED] A name for the prompt.
- tags (dict) – Any tags that you want to attach to the prompt. Both keys and values are strings. For more information, see Tagging resources in Amazon Bedrock.
 
 
- variants (list) – A list of objects, each containing details about a variant of the prompt.
- (dict) – Contains details about a variant of the prompt.
- inferenceConfiguration (dict) – Contains inference configurations for the prompt variant. Note: this is a Tagged Union structure; only one of the following top-level keys can be set: `text`.
- text (dict) – Contains inference configurations for a text prompt.
- maxTokens (integer) – The maximum number of tokens to return in the response.
- stopSequences (list) – A list of strings that define sequences after which the model will stop generating.
 
- temperature (float) – Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
- topK (integer) – The number of most-likely candidates that the model considers for the next token during generation.
- topP (float) – The percentage of most-likely candidates that the model considers for the next token.
 
 
- modelId (string) – The unique identifier of the model with which to run inference on the prompt.
- name (string) – [REQUIRED] The name of the prompt variant.
- templateConfiguration (dict) – Contains configurations for the prompt template. Note: this is a Tagged Union structure; only one of the following top-level keys can be set: `text`.
- text (dict) – Contains configurations for the text in a message for a prompt.
- inputVariables (list) – An array of the variables in the prompt template.
- (dict) – Contains information about a variable in the prompt.
- name (string) – The name of the variable.
 
 
- text (string) – [REQUIRED] The message for the prompt.
 
 
- templateType (string) – [REQUIRED] The type of prompt template to use.
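As a sketch, a fully populated request for the structure above might look like the following. The prompt name, variant name, variable names, model ID, and the double-brace placeholder syntax are illustrative assumptions, not values mandated by this reference.

```python
# Hedged sketch of a populated create_prompt request. All concrete values
# below (names, model ID, template text) are illustrative assumptions.
request = {
    'name': 'MakePlaylist',                  # [REQUIRED]
    'description': 'Suggests songs for a playlist.',
    'defaultVariant': 'variantOne',          # must match a variant's 'name' below
    'variants': [
        {
            'name': 'variantOne',            # [REQUIRED]
            'templateType': 'TEXT',          # [REQUIRED]
            'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',  # illustrative
            'inferenceConfiguration': {
                'text': {                    # tagged union: only 'text' may be set
                    'temperature': 0.7,
                    'maxTokens': 512,
                },
            },
            'templateConfiguration': {
                'text': {                    # tagged union: only 'text' may be set
                    'text': 'Make a playlist of {{numberOfSongs}} {{genre}} songs.',
                    'inputVariables': [
                        {'name': 'genre'},
                        {'name': 'numberOfSongs'},
                    ],
                },
            },
        },
    ],
}

# With a client for this service, the call would then be:
#   response = client.create_prompt(**request)
```

Note that `defaultVariant` must match the `name` of one of the entries in `variants`, and each placeholder used in the template text should have a corresponding entry in `inputVariables`.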
 
 
 
- Return type:
- dict 
- Returns:
- Response Syntax

```python
{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}
```

- Response Structure
- (dict) –
- arn (string) – The Amazon Resource Name (ARN) of the prompt.
- createdAt (datetime) – The time at which the prompt was created.
- customerEncryptionKeyArn (string) – The Amazon Resource Name (ARN) of the KMS key that you encrypted the prompt with.
- defaultVariant (string) – The name of the default variant for your prompt.
- description (string) – The description of the prompt.
- id (string) – The unique identifier of the prompt.
- name (string) – The name of the prompt.
- updatedAt (datetime) – The time at which the prompt was last updated.
- variants (list) – A list of objects, each containing details about a variant of the prompt.
- (dict) – Contains details about a variant of the prompt.
- inferenceConfiguration (dict) – Contains inference configurations for the prompt variant. Note: this is a Tagged Union structure; only one of the following top-level keys will be set: `text`. If a client receives an unknown member, it will set `SDK_UNKNOWN_MEMBER` as the top-level key, which maps to the name or tag of the unknown member. The structure of `SDK_UNKNOWN_MEMBER` is as follows: `'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}`
- text (dict) – Contains inference configurations for a text prompt.
- maxTokens (integer) – The maximum number of tokens to return in the response.
- stopSequences (list) – A list of strings that define sequences after which the model will stop generating.
 
- temperature (float) – Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
- topK (integer) – The number of most-likely candidates that the model considers for the next token during generation.
- topP (float) – The percentage of most-likely candidates that the model considers for the next token.
 
 
- modelId (string) – The unique identifier of the model with which to run inference on the prompt.
- name (string) – The name of the prompt variant.
- templateConfiguration (dict) – Contains configurations for the prompt template. Note: this is a Tagged Union structure; only one of the following top-level keys will be set: `text`. If a client receives an unknown member, it will set `SDK_UNKNOWN_MEMBER` as the top-level key, which maps to the name or tag of the unknown member. The structure of `SDK_UNKNOWN_MEMBER` is as follows: `'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}`
- text (dict) – Contains configurations for the text in a message for a prompt.
- inputVariables (list) – An array of the variables in the prompt template.
- (dict) – Contains information about a variable in the prompt.
- name (string) – The name of the variable.
 
 
- text (string) – The message for the prompt.
 
 
- templateType (string) – The type of prompt template to use.
 
 
- version (string) – The version of the prompt. When you create a prompt, the version created is the `DRAFT` version.
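A brief sketch of consuming the response: the dict below is an illustrative stand-in shaped like the Response Syntax above (the ARN format, ID, and names are assumptions, not real values).

```python
from datetime import datetime

# Illustrative response stand-in, shaped like the Response Syntax above.
# All concrete values here are made-up placeholders.
response = {
    'arn': 'arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT1234',
    'id': 'PROMPT1234',
    'name': 'MakePlaylist',
    'defaultVariant': 'variantOne',
    'version': 'DRAFT',
    'createdAt': datetime(2015, 1, 1),
    'updatedAt': datetime(2015, 1, 1),
}

# Per the docs, a newly created prompt starts at the DRAFT version.
assert response['version'] == 'DRAFT'

# The id (or arn) is what a caller would hold on to in order to
# reference the prompt in later Get/Update/Delete-style calls.
prompt_id = response['id']
```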
 
 
- Exceptions
- AgentsforBedrock.Client.exceptions.ThrottlingException
- AgentsforBedrock.Client.exceptions.AccessDeniedException
- AgentsforBedrock.Client.exceptions.ValidationException
- AgentsforBedrock.Client.exceptions.InternalServerException
- AgentsforBedrock.Client.exceptions.ConflictException
- AgentsforBedrock.Client.exceptions.ServiceQuotaExceededException
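Of the exceptions above, ThrottlingException is typically transient, so callers often wrap the request in exponential backoff. The sketch below is a generic, hedged pattern: `make_call` stands in for a bound `client.create_prompt(**kwargs)` call, and the error-code check mirrors how botocore exposes the code on an exception's `response` attribute.

```python
import random
import time

# Hedged retry sketch: retry only on ThrottlingException, with exponential
# backoff and jitter; re-raise anything else (e.g. ValidationException).
def create_prompt_with_retry(make_call, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return make_call()
        except Exception as err:
            # botocore exceptions carry the service error code here.
            code = getattr(err, 'response', {}).get('Error', {}).get('Code', '')
            if code != 'ThrottlingException' or attempt == max_attempts - 1:
                raise  # non-retryable, or out of attempts
            # Back off before the next attempt: base * 2^attempt plus jitter.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```

Note that boto3 clients already apply a configurable retry policy of their own; a wrapper like this is only worthwhile when you need behavior beyond what the client's retry configuration provides.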