describe_evaluations

describe_evaluations(**kwargs)

Returns a list of Evaluation objects that match the search criteria in the request.

See also: AWS API Documentation

Request Syntax

response = client.describe_evaluations(
    FilterVariable='CreatedAt'|'LastUpdatedAt'|'Status'|'Name'|'IAMUser'|'MLModelId'|'DataSourceId'|'DataURI',
    EQ='string',
    GT='string',
    LT='string',
    GE='string',
    LE='string',
    NE='string',
    Prefix='string',
    SortOrder='asc'|'dsc',
    NextToken='string',
    Limit=123
)
Parameters
  • FilterVariable (string) --

    Use one of the following variables to filter a list of Evaluation objects:

    • CreatedAt - Sets the search criteria to the Evaluation creation date.
    • Status - Sets the search criteria to the Evaluation status.
    • Name - Sets the search criteria to the contents of the Evaluation Name .
    • IAMUser - Sets the search criteria to the user account that invoked an Evaluation .
    • MLModelId - Sets the search criteria to the MLModel that was evaluated.
    • DataSourceId - Sets the search criteria to the DataSource used in Evaluation .
    • DataUri - Sets the search criteria to the data file(s) used in Evaluation . The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
  • EQ (string) -- The equal to operator. The Evaluation results will have FilterVariable values that exactly match the value specified with EQ .
  • GT (string) -- The greater than operator. The Evaluation results will have FilterVariable values that are greater than the value specified with GT .
  • LT (string) -- The less than operator. The Evaluation results will have FilterVariable values that are less than the value specified with LT .
  • GE (string) -- The greater than or equal to operator. The Evaluation results will have FilterVariable values that are greater than or equal to the value specified with GE .
  • LE (string) -- The less than or equal to operator. The Evaluation results will have FilterVariable values that are less than or equal to the value specified with LE .
  • NE (string) -- The not equal to operator. The Evaluation results will have FilterVariable values not equal to the value specified with NE .
  • Prefix (string) --

    A string that is found at the beginning of a variable, such as Name or Id .

    For example, an Evaluation could have the Name 2014-09-09-HolidayGiftMailer . To search for this Evaluation , select Name for the FilterVariable and any of the following strings for the Prefix (a short usage sketch follows this parameter list):

    • 2014-09
    • 2014-09-09
    • 2014-09-09-Holiday
  • SortOrder (string) --

    A two-value parameter that determines the sequence of the resulting list of Evaluation .

    • asc - Arranges the list in ascending order (A-Z, 0-9).
    • dsc - Arranges the list in descending order (Z-A, 9-0).

    Results are sorted by FilterVariable .

  • NextToken (string) -- The ID of the page in the paginated results.
  • Limit (integer) -- The maximum number of Evaluation objects to include in the result.
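
The sketch below combines several of these parameters: it filters by a Name prefix, sorts in ascending order, and pages through the results with NextToken. The region name, prefix value, and page size are illustrative assumptions, not values required by the API.

import boto3

# A minimal, hedged sketch of filtering and paginating describe_evaluations.
# The region and the prefix value are illustrative assumptions.
client = boto3.client('machinelearning', region_name='us-east-1')

kwargs = {
    'FilterVariable': 'Name',
    'Prefix': '2014-09',   # matches Names such as 2014-09-09-HolidayGiftMailer
    'SortOrder': 'asc',    # ascending by the FilterVariable (Name)
    'Limit': 100,
}

evaluations = []
while True:
    response = client.describe_evaluations(**kwargs)
    evaluations.extend(response.get('Results', []))
    token = response.get('NextToken')
    if not token:          # no NextToken means this was the last page
        break
    kwargs['NextToken'] = token

print('Found %d evaluations' % len(evaluations))

Depending on the botocore release, a DescribeEvaluations paginator may also be available via client.get_paginator('describe_evaluations'), which would replace the manual NextToken loop above.
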
Return type

dict

Returns

Response Syntax

{
    'Results': [
        {
            'EvaluationId': 'string',
            'MLModelId': 'string',
            'EvaluationDataSourceId': 'string',
            'InputDataLocationS3': 'string',
            'CreatedByIamUser': 'string',
            'CreatedAt': datetime(2015, 1, 1),
            'LastUpdatedAt': datetime(2015, 1, 1),
            'Name': 'string',
            'Status': 'PENDING'|'INPROGRESS'|'FAILED'|'COMPLETED'|'DELETED',
            'PerformanceMetrics': {
                'Properties': {
                    'string': 'string'
                }
            },
            'Message': 'string',
            'ComputeTime': 123,
            'FinishedAt': datetime(2015, 1, 1),
            'StartedAt': datetime(2015, 1, 1)
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    Represents the query results from a DescribeEvaluations operation. The content is essentially a list of Evaluation .

    • Results (list) --

      A list of Evaluation that meet the search criteria.

      • (dict) --

        Represents the output of a GetEvaluation operation.

        The content consists of the detailed metadata and data file information and the current status of the Evaluation .

        • EvaluationId (string) --

          The ID that is assigned to the Evaluation at creation.

        • MLModelId (string) --

          The ID of the MLModel that is the focus of the evaluation.

        • EvaluationDataSourceId (string) --

          The ID of the DataSource that is used to evaluate the MLModel .

        • InputDataLocationS3 (string) --

          The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.

        • CreatedByIamUser (string) --

          The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

        • CreatedAt (datetime) --

          The time that the Evaluation was created. The time is expressed in epoch time.

        • LastUpdatedAt (datetime) --

          The time of the most recent edit to the Evaluation . The time is expressed in epoch time.

        • Name (string) --

          A user-supplied name or description of the Evaluation .

        • Status (string) --

          The status of the evaluation. This element can have one of the following values:

          • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel .
          • INPROGRESS - The evaluation is underway.
          • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
          • COMPLETED - The evaluation process completed successfully.
          • DELETED - The Evaluation is marked as deleted. It is not usable.
        • PerformanceMetrics (dict) --

          Measurements of how well the MLModel performed, using observations referenced by the DataSource . One of the following metrics is returned, based on the type of the MLModel :

          • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
          • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
          • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

          For more information about performance metrics, please see the Amazon Machine Learning Developer Guide. A short sketch of reading these values follows the Response Structure section.

          • Properties (dict) --
            • (string) --
              • (string) --
        • Message (string) --

          A description of the most recent details about evaluating the MLModel .

        • ComputeTime (integer) --

          Long integer type that is a 64-bit signed number.

        • FinishedAt (datetime) --

          A timestamp represented in epoch time.

        • StartedAt (datetime) --

          A timestamp represented in epoch time.

    • NextToken (string) --

      The ID of the next page in the paginated results that indicates at least one more page follows.
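
As a hedged sketch of consuming this structure (assuming a response dict from a describe_evaluations call such as the one above), the loop below prints each Evaluation's status and any PerformanceMetrics properties that were returned:

for evaluation in response.get('Results', []):
    # Basic identity and lifecycle information for each Evaluation.
    print(evaluation['EvaluationId'], evaluation.get('Name', ''), evaluation.get('Status'))

    # PerformanceMetrics.Properties is a string-to-string map; which key appears
    # (BinaryAUC, RegressionRMSE, or MulticlassAvgFScore) depends on the MLModel type.
    metrics = evaluation.get('PerformanceMetrics', {}).get('Properties', {})
    for metric_name, metric_value in metrics.items():
        print('  %s = %s' % (metric_name, metric_value))
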

Exceptions

  • MachineLearning.Client.exceptions.InvalidInputException
  • MachineLearning.Client.exceptions.InternalServerException
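
A hedged sketch of handling the documented exceptions (assuming the client object from the earlier example); both can be caught through the client's exceptions attribute:

try:
    response = client.describe_evaluations(FilterVariable='Status', EQ='COMPLETED')
except client.exceptions.InvalidInputException as error:
    # One or more request parameters failed validation.
    print('Invalid input:', error)
except client.exceptions.InternalServerException as error:
    # Amazon ML had an internal failure; the call can be retried.
    print('Internal server error:', error)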