
put_record_batch

Firehose.Client.put_record_batch(**kwargs)

Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers.

For information about service quotas, see Amazon Kinesis Data Firehose Quota.

Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KiB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.

You must specify the name of the delivery stream and the data record when using PutRecordBatch. The data record consists of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it could be a segment from a log file, geographic location data, website clickstream data, and so on.

Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
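
For example, a producer might append the delimiter while building the batch. The sketch below is illustrative, not part of the API: the delivery stream name and the event payloads are assumptions.

import json

import boto3

firehose = boto3.client('firehose')

# Hypothetical clickstream events; any payload that serializes to bytes works.
events = [
    {'user': 'alice', 'action': 'login'},
    {'user': 'bob', 'action': 'click'},
]

# Append a newline to each record so the consumer can split individual items
# after Kinesis Data Firehose concatenates them at the destination.
records = [{'Data': (json.dumps(event) + '\n').encode('utf-8')} for event in events]

response = firehose.put_record_batch(
    DeliveryStreamName='example-stream',  # assumed stream name
    Records=records,
)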

The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Even if the PutRecordBatch call succeeds, the value of FailedPutCount may be greater than 0, indicating that there are records for which the operation didn’t succeed. Each entry in the RequestResponses array provides additional information about the processed record. It directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Kinesis Data Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.

A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailableException or InternalFailure. ErrorMessage provides more detailed information about the error.

If there is an internal server error or a timeout, the write might have completed or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.

If PutRecordBatch throws ServiceUnavailableException, back off and retry. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
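
A minimal retry sketch along these lines is shown below; the backoff schedule, attempt limit, and error handling are illustrative choices, not values mandated by the API.

import time

import boto3

firehose = boto3.client('firehose')


def put_record_batch_with_retries(stream_name, records, max_attempts=5):
    # Send a batch, resending only the records that failed processing.
    for attempt in range(1, max_attempts + 1):
        try:
            response = firehose.put_record_batch(
                DeliveryStreamName=stream_name,
                Records=records,
            )
        except firehose.exceptions.ServiceUnavailableException:
            # Back off and retry the whole remaining batch.
            time.sleep(2 ** attempt)
            continue
        if response['FailedPutCount'] == 0:
            return
        # RequestResponses uses the same ordering as the request array, so
        # keep only the records whose result entry carries an error code.
        records = [
            record
            for record, result in zip(records, response['RequestResponses'])
            if result.get('ErrorCode')
        ]
        time.sleep(2 ** attempt)
    raise RuntimeError('records still failed after %d attempts' % max_attempts)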

Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.

Warning

Don’t concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding.
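
In boto3 terms, pass raw bytes in Data and let the SDK perform the base64 encoding during request serialization; a minimal illustration (the payload is arbitrary):

# Correct: concatenate the raw bytes; boto3 base64-encodes the blob when the
# request is serialized.
record = {'Data': b'first chunk,' + b'second chunk\n'}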

See also: AWS API Documentation

Request Syntax

response = client.put_record_batch(
    DeliveryStreamName='string',
    Records=[
        {
            'Data': b'bytes'
        },
    ]
)
Parameters:
  • DeliveryStreamName (string) –

    [REQUIRED]

    The name of the delivery stream.

  • Records (list) –

    [REQUIRED]

    One or more records.

    • (dict) –

      The unit of data in a delivery stream.

      • Data (bytes) – [REQUIRED]

        The data blob, which is base64-encoded when the blob is serialized. The maximum size of the data blob, before base64-encoding, is 1,000 KiB.

Return type:

dict

Returns:

Response Syntax

{
    'FailedPutCount': 123,
    'Encrypted': True|False,
    'RequestResponses': [
        {
            'RecordId': 'string',
            'ErrorCode': 'string',
            'ErrorMessage': 'string'
        },
    ]
}

Response Structure

  • (dict) –

    • FailedPutCount (integer) –

      The number of records that might have failed processing. This number might be greater than 0 even if the PutRecordBatch call succeeds. Check FailedPutCount to determine whether there are records that you need to resend.

    • Encrypted (boolean) –

      Indicates whether server-side encryption (SSE) was enabled during this operation.

    • RequestResponses (list) –

      The results array. For each record, the index of the response element is the same as the index used in the request array.

      • (dict) –

        Contains the result for an individual record from a PutRecordBatch request. If the record is successfully added to your delivery stream, it receives a record ID. If the record fails to be added to your delivery stream, the result includes an error code and an error message.

        • RecordId (string) –

          The ID of the record.

        • ErrorCode (string) –

          The error code for an individual record result.

        • ErrorMessage (string) –

          The error message for an individual record result.

Exceptions

  • Firehose.Client.exceptions.ResourceNotFoundException

  • Firehose.Client.exceptions.InvalidArgumentException

  • Firehose.Client.exceptions.InvalidKMSResourceException

  • Firehose.Client.exceptions.ServiceUnavailableException
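
These modeled exceptions are exposed as classes on the client object. A minimal sketch of catching them individually follows; the stream name and payload are placeholders.

import boto3

firehose = boto3.client('firehose')

try:
    firehose.put_record_batch(
        DeliveryStreamName='example-stream',  # placeholder stream name
        Records=[{'Data': b'hello\n'}],
    )
except firehose.exceptions.ResourceNotFoundException:
    # The named delivery stream does not exist in this account and Region.
    raise
except firehose.exceptions.InvalidArgumentException:
    # A request parameter failed service-side validation.
    raise
except firehose.exceptions.InvalidKMSResourceException:
    # The KMS key configured for server-side encryption is unusable.
    raise
except firehose.exceptions.ServiceUnavailableException:
    # Throughput limits may have been exceeded; back off and retry.
    raise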