MachineLearning.Client.create_batch_prediction(**kwargs)
Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.
CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.
You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
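A rough sketch of this polling pattern follows; the client setup, the example BatchPredictionId, and the 30-second polling interval are assumptions for illustration, not part of this reference.
import time

import boto3

client = boto3.client('machinelearning')  # region/credentials assumed to be configured elsewhere

# Hypothetical ID of a BatchPrediction created earlier with create_batch_prediction.
batch_prediction_id = 'bp-example-0001'

# Poll GetBatchPrediction until the job reaches a terminal status.
while True:
    result = client.get_batch_prediction(BatchPredictionId=batch_prediction_id)
    status = result['Status']
    if status in ('COMPLETED', 'FAILED', 'DELETED'):
        break
    time.sleep(30)  # arbitrary polling interval

# Once the status is COMPLETED, the results are in the S3 location given by OutputUri.
print(status, result.get('OutputUri'))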
See also: AWS API Documentation
Request Syntax
response = client.create_batch_prediction(
    BatchPredictionId='string',
    BatchPredictionName='string',
    MLModelId='string',
    BatchPredictionDataSourceId='string',
    OutputUri='string'
)
Parameters
BatchPredictionId (string) -- [REQUIRED] A user-supplied ID that uniquely identifies the BatchPrediction.
BatchPredictionName (string) -- A user-supplied name or description of the BatchPrediction. BatchPredictionName can only use the UTF-8 character set.
MLModelId (string) -- [REQUIRED] The ID of the MLModel that will generate predictions for the group of observations.
BatchPredictionDataSourceId (string) -- [REQUIRED] The ID of the DataSource that points to the group of observations to predict.
OutputUri (string) -- [REQUIRED] The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'.
Amazon ML needs permissions to store and retrieve the logs on your behalf. For information about how to set permissions, see the Amazon Machine Learning Developer Guide.
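Putting the parameters together, a minimal request might look like the sketch below; the entity IDs and S3 location are placeholders, not values defined by this operation.
import boto3

client = boto3.client('machinelearning')

# All IDs and the bucket below are hypothetical placeholders.
response = client.create_batch_prediction(
    BatchPredictionId='bp-example-0001',                # user-supplied, must be unique
    BatchPredictionName='Example batch prediction',     # optional, UTF-8 only
    MLModelId='ml-example-model',
    BatchPredictionDataSourceId='ds-example-input',
    OutputUri='s3://example-bucket/batch-predictions/'  # S3 key must not contain ':', '//', '/./', '/../'
)
print(response['BatchPredictionId'])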
Return type
dict
Returns
Response Syntax
{
    'BatchPredictionId': 'string'
}
Response Structure
(dict) --
Represents the output of a CreateBatchPrediction operation, and is an acknowledgement that Amazon ML received the request.
The CreateBatchPrediction operation is asynchronous. You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result.
BatchPredictionId (string) --
A user-supplied ID that uniquely identifies the BatchPrediction. This value is identical to the value of the BatchPredictionId in the request.
Exceptions
MachineLearning.Client.exceptions.InvalidInputException
MachineLearning.Client.exceptions.InternalServerException
MachineLearning.Client.exceptions.IdempotentParameterMismatchException
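These exceptions are exposed on the client's exceptions attribute and can be caught individually; a hedged sketch follows (the IDs are placeholders, and the handling shown is one possible application choice, not guidance from the API).
import boto3

client = boto3.client('machinelearning')

try:
    response = client.create_batch_prediction(
        BatchPredictionId='bp-example-0001',             # hypothetical ID
        MLModelId='ml-example-model',
        BatchPredictionDataSourceId='ds-example-input',
        OutputUri='s3://example-bucket/batch-predictions/'
    )
except client.exceptions.InvalidInputException as err:
    # One or more request parameters failed validation.
    print('Invalid input:', err)
except client.exceptions.IdempotentParameterMismatchException as err:
    # The same BatchPredictionId was reused with different parameters.
    print('ID already used with different parameters:', err)
except client.exceptions.InternalServerException as err:
    # Service-side failure; the request can be retried.
    print('Internal server error:', err)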