create_batch_inference_job

create_batch_inference_job(**kwargs)

Creates a batch inference job. The operation can handle up to 50 million records, and the input file must be in JSON format. For more information, see Creating a batch inference job.
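
The input is a JSON Lines file: one JSON object per record. The exact fields depend on the recipe behind the solution version; as a hedged illustration (the user IDs and file name are hypothetical), a user-personalization style input file could be produced like this:

import json

# One JSON object per line; for a user-personalization style solution each
# object names the user to generate recommendations for. Other recipe types
# use different fields (for example itemId for related-items solutions).
user_ids = ["4638", "663", "3384"]
with open("batch_input.json", "w") as f:
    for user_id in user_ids:
        f.write(json.dumps({"userId": user_id}) + "\n")

Upload the resulting file to the Amazon S3 path that you pass in jobInput.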

See also: AWS API Documentation

Request Syntax

response = client.create_batch_inference_job(
    jobName='string',
    solutionVersionArn='string',
    filterArn='string',
    numResults=123,
    jobInput={
        's3DataSource': {
            'path': 'string',
            'kmsKeyArn': 'string'
        }
    },
    jobOutput={
        's3DataDestination': {
            'path': 'string',
            'kmsKeyArn': 'string'
        }
    },
    roleArn='string',
    batchInferenceJobConfig={
        'itemExplorationConfig': {
            'string': 'string'
        }
    },
    tags=[
        {
            'tagKey': 'string',
            'tagValue': 'string'
        },
    ]
)
Parameters
  • jobName (string) --

    [REQUIRED]

    The name of the batch inference job to create.

  • solutionVersionArn (string) --

    [REQUIRED]

    The Amazon Resource Name (ARN) of the solution version that will be used to generate the batch inference recommendations.

  • filterArn (string) -- The ARN of the filter to apply to the batch inference job. For more information on using filters, see Filtering batch recommendations.
  • numResults (integer) -- The number of recommendations to retrieve.
  • jobInput (dict) --

    [REQUIRED]

    The Amazon S3 path to the input file that your recommendations are based on. The input file must be in JSON format (see the sample input file near the top of this entry).

    • s3DataSource (dict) -- [REQUIRED]

      The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling.

      • path (string) -- [REQUIRED]

        The file path of the Amazon S3 bucket.

      • kmsKeyArn (string) --

        The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.

  • jobOutput (dict) --

    [REQUIRED]

    The path to the Amazon S3 bucket where the job's output will be stored.

    • s3DataDestination (dict) -- [REQUIRED]

      Information on the Amazon S3 bucket in which the batch inference job's output is stored.

      • path (string) -- [REQUIRED]

        The file path of the Amazon S3 bucket.

      • kmsKeyArn (string) --

        The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.

  • roleArn (string) --

    [REQUIRED]

    The ARN of the AWS Identity and Access Management (IAM) role with permissions to read from your input Amazon S3 bucket and write to your output Amazon S3 bucket.

  • batchInferenceJobConfig (dict) --

    The configuration details of a batch inference job.

    • itemExplorationConfig (dict) --

      A string-to-string map of the exploration configuration hyperparameters, including explorationWeight and explorationItemAgeCutOff, that control how much item exploration Amazon Personalize uses when recommending items (see the example call after this parameter list). For more information, see User-Personalization.

      • (string) --
        • (string) --
  • tags (list) --

    A list of tags to apply to the batch inference job.

    • (dict) --

      The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information, see Tagging Amazon Personalize resources.

      • tagKey (string) -- [REQUIRED]

        One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

      • tagValue (string) -- [REQUIRED]

        The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).

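A minimal sketch of a complete call, assuming the solution version, filter, IAM role, and S3 locations already exist (every ARN, bucket name, and configuration value below is a hypothetical placeholder):

import boto3

personalize = boto3.client("personalize")

# All ARNs, bucket names, and configuration values are illustrative only.
response = personalize.create_batch_inference_job(
    jobName="movie-recs-batch-job",
    solutionVersionArn="arn:aws:personalize:us-east-1:111122223333:solution/movie-solution/aaaabbbb",
    filterArn="arn:aws:personalize:us-east-1:111122223333:filter/watched-filter",
    numResults=25,
    jobInput={
        "s3DataSource": {
            "path": "s3://amzn-s3-demo-bucket/batch_input.json"
        }
    },
    jobOutput={
        "s3DataDestination": {
            "path": "s3://amzn-s3-demo-bucket/batch-output/"
        }
    },
    roleArn="arn:aws:iam::111122223333:role/PersonalizeBatchRole",
    batchInferenceJobConfig={
        "itemExplorationConfig": {
            "explorationWeight": "0.3",
            "explorationItemAgeCutOff": "30"
        }
    },
    tags=[
        {"tagKey": "team", "tagValue": "recommendations"}
    ]
)
print(response["batchInferenceJobArn"])
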
Return type

dict

Returns

Response Syntax

{
    'batchInferenceJobArn': 'string'
}

Response Structure

  • (dict) --

    • batchInferenceJobArn (string) --

      The ARN of the batch inference job.

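Batch inference jobs run asynchronously, so callers typically feed the returned ARN into describe_batch_inference_job and poll until the job finishes. A minimal sketch (the job ARN and polling interval are placeholders; the terminal status values shown are the ones the service reports for batch inference jobs):

import time
import boto3

personalize = boto3.client("personalize")

# ARN returned by create_batch_inference_job (placeholder value here).
job_arn = "arn:aws:personalize:us-east-1:111122223333:batch-inference-job/movie-recs-batch-job"

# Poll until the job leaves its in-progress states. A production caller
# would add an overall timeout or drive this from a workflow service.
while True:
    description = personalize.describe_batch_inference_job(
        batchInferenceJobArn=job_arn
    )
    status = description["batchInferenceJob"]["status"]
    print("Batch inference job status:", status)
    if status in ("ACTIVE", "CREATE FAILED"):
        break
    time.sleep(60)
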
Exceptions

  • Personalize.Client.exceptions.InvalidInputException
  • Personalize.Client.exceptions.ResourceAlreadyExistsException
  • Personalize.Client.exceptions.LimitExceededException
  • Personalize.Client.exceptions.ResourceNotFoundException
  • Personalize.Client.exceptions.ResourceInUseException
  • Personalize.Client.exceptions.TooManyTagsException
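
These exceptions are exposed as classes on the client, so a caller can handle them individually. A hedged sketch of handling the most common failure modes (the request arguments are the same hypothetical placeholders used in the example call above):

import boto3

personalize = boto3.client("personalize")

try:
    personalize.create_batch_inference_job(
        jobName="movie-recs-batch-job",
        solutionVersionArn="arn:aws:personalize:us-east-1:111122223333:solution/movie-solution/aaaabbbb",
        jobInput={"s3DataSource": {"path": "s3://amzn-s3-demo-bucket/batch_input.json"}},
        jobOutput={"s3DataDestination": {"path": "s3://amzn-s3-demo-bucket/batch-output/"}},
        roleArn="arn:aws:iam::111122223333:role/PersonalizeBatchRole",
    )
except personalize.exceptions.ResourceAlreadyExistsException:
    # A batch inference job with this jobName already exists; choose a new
    # name or describe the existing job instead.
    print("A batch inference job with that name already exists.")
except personalize.exceptions.ResourceNotFoundException as err:
    # The solution version, filter, or another referenced resource could not be found.
    print("Referenced resource not found:", err)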