
update_monitoring_schedule

SageMaker.Client.update_monitoring_schedule(**kwargs)

Updates a previously created schedule.

See also: AWS API Documentation

Request Syntax

response = client.update_monitoring_schedule(
    MonitoringScheduleName='string',
    MonitoringScheduleConfig={
        'ScheduleConfig': {
            'ScheduleExpression': 'string',
            'DataAnalysisStartTime': 'string',
            'DataAnalysisEndTime': 'string'
        },
        'MonitoringJobDefinition': {
            'BaselineConfig': {
                'BaseliningJobName': 'string',
                'ConstraintsResource': {
                    'S3Uri': 'string'
                },
                'StatisticsResource': {
                    'S3Uri': 'string'
                }
            },
            'MonitoringInputs': [
                {
                    'EndpointInput': {
                        'EndpointName': 'string',
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string',
                        'ExcludeFeaturesAttribute': 'string'
                    },
                    'BatchTransformInput': {
                        'DataCapturedDestinationS3Uri': 'string',
                        'DatasetFormat': {
                            'Csv': {
                                'Header': True|False
                            },
                            'Json': {
                                'Line': True|False
                            },
                            'Parquet': {}
                        },
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string',
                        'ExcludeFeaturesAttribute': 'string'
                    }
                },
            ],
            'MonitoringOutputConfig': {
                'MonitoringOutputs': [
                    {
                        'S3Output': {
                            'S3Uri': 'string',
                            'LocalPath': 'string',
                            'S3UploadMode': 'Continuous'|'EndOfJob'
                        }
                    },
                ],
                'KmsKeyId': 'string'
            },
            'MonitoringResources': {
                'ClusterConfig': {
                    'InstanceCount': 123,
                    'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
                    'VolumeSizeInGB': 123,
                    'VolumeKmsKeyId': 'string'
                }
            },
            'MonitoringAppSpecification': {
                'ImageUri': 'string',
                'ContainerEntrypoint': [
                    'string',
                ],
                'ContainerArguments': [
                    'string',
                ],
                'RecordPreprocessorSourceUri': 'string',
                'PostAnalyticsProcessorSourceUri': 'string'
            },
            'StoppingCondition': {
                'MaxRuntimeInSeconds': 123
            },
            'Environment': {
                'string': 'string'
            },
            'NetworkConfig': {
                'EnableInterContainerTrafficEncryption': True|False,
                'EnableNetworkIsolation': True|False,
                'VpcConfig': {
                    'SecurityGroupIds': [
                        'string',
                    ],
                    'Subnets': [
                        'string',
                    ]
                }
            },
            'RoleArn': 'string'
        },
        'MonitoringJobDefinitionName': 'string',
        'MonitoringType': 'DataQuality'|'ModelQuality'|'ModelBias'|'ModelExplainability'
    }
)
Parameters:
  • MonitoringScheduleName (string) –

    [REQUIRED]

    The name of the monitoring schedule. The name must be unique within an Amazon Web Services Region within an Amazon Web Services account.

  • MonitoringScheduleConfig (dict) –

    [REQUIRED]

    The configuration object that specifies the monitoring schedule and defines the monitoring job.

    • ScheduleConfig (dict) –

      Configures the monitoring schedule.

      • ScheduleExpression (string) – [REQUIRED]

        A cron expression that describes details about the monitoring schedule.

        The supported cron expressions are:

        • If you want to set the job to start every hour, use the following: Hourly: cron(0 * ? * * *)

        • If you want to start the job daily: cron(0 [00-23] ? * * *)

        • If you want to run the job one time, immediately, use the following keyword: NOW

        For example, the following are valid cron expressions:

        • Daily at noon UTC: cron(0 12 ? * * *)

        • Daily at midnight UTC: cron(0 0 ? * * *)

        To run the job at a regular interval, such as every 6 or 12 hours, the following format is also supported:

        cron(0 [00-23]/[01-24] ? * * *)

        For example, the following are valid cron expressions:

        • Every 12 hours, starting at 5pm UTC: cron(0 17/12 ? * * *)

        • Every two hours starting at midnight: cron(0 0/2 ? * * *)

        Note

        • Even though the cron expression is set to start at 5PM UTC, there could be a delay of 0-20 minutes from the actual requested time to run the execution.

        • If you want a daily schedule, we recommend that you do not provide this parameter; Amazon SageMaker picks a time to run it every day.

        You can also specify the keyword NOW to run the monitoring job immediately, one time, without recurring.
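As a sketch, the supported cron formats above could be assembled with a small Python helper (the helper name is hypothetical; only the hour-of-day and interval fields vary — the other fields are fixed by the service):

```python
# Hypothetical helper that builds the supported ScheduleExpression
# cron formats: daily at a given UTC hour, or every N hours starting
# at a given UTC hour.
def hourly_cron(start_hour=0, interval_hours=None):
    if interval_hours is None:
        # Daily at start_hour UTC, e.g. cron(0 12 ? * * *)
        return f"cron(0 {start_hour} ? * * *)"
    # Every interval_hours hours starting at start_hour UTC
    return f"cron(0 {start_hour}/{interval_hours} ? * * *)"

daily_noon = hourly_cron(12)        # daily at noon UTC
every_12h = hourly_cron(17, 12)     # every 12 hours, starting 5pm UTC
every_2h = hourly_cron(0, 2)        # every two hours from midnight
```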

      • DataAnalysisStartTime (string) –

        Sets the start time for a monitoring job window. Express this time as an offset to the times that you schedule your monitoring jobs to run. You schedule monitoring jobs with the ScheduleExpression parameter. Specify this offset in ISO 8601 duration format. For example, if you want to monitor the five hours of data in your dataset that precede the start of each monitoring job, you would specify: "-PT5H".

        The start time that you specify must not precede the end time that you specify by more than 24 hours. You specify the end time with the DataAnalysisEndTime parameter.

        If you set ScheduleExpression to NOW, this parameter is required.

      • DataAnalysisEndTime (string) –

        Sets the end time for a monitoring job window. Express this time as an offset to the times that you schedule your monitoring jobs to run. You schedule monitoring jobs with the ScheduleExpression parameter. Specify this offset in ISO 8601 duration format. For example, if you want to end the window one hour before the start of each monitoring job, you would specify: "-PT1H".

        The end time that you specify must not follow the start time that you specify by more than 24 hours. You specify the start time with the DataAnalysisStartTime parameter.

        If you set ScheduleExpression to NOW, this parameter is required.
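For example, a one-time job over a 24-hour analysis window could be expressed as the following ScheduleConfig (a minimal sketch; with ScheduleExpression set to NOW, both offsets are required, and the window here spans exactly the permitted 24 hours):

```python
# Sketch of a ScheduleConfig for a one-time monitoring job that
# analyzes the 24 hours of data ending one hour before the run.
schedule_config = {
    "ScheduleExpression": "NOW",
    "DataAnalysisStartTime": "-PT25H",  # window starts 25 hours before the run
    "DataAnalysisEndTime": "-PT1H",     # window ends 1 hour before the run
}
```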

    • MonitoringJobDefinition (dict) –

      Defines the monitoring job.

      • BaselineConfig (dict) –

        Baseline configuration used to validate that the data conforms to the specified constraints and statistics.

        • BaseliningJobName (string) –

          The name of the job that performs baselining for the monitoring job.

        • ConstraintsResource (dict) –

          The baseline constraint file in Amazon S3 that the current monitoring job should be validated against.

          • S3Uri (string) –

            The Amazon S3 URI for the constraints resource.

        • StatisticsResource (dict) –

          The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.

          • S3Uri (string) –

            The Amazon S3 URI for the statistics resource.

      • MonitoringInputs (list) – [REQUIRED]

        The array of inputs for the monitoring job. Currently we support monitoring an Amazon SageMaker Endpoint.

        • (dict) –

          The inputs for a monitoring job.

          • EndpointInput (dict) –

            The endpoint for a monitoring job.

            • EndpointName (string) – [REQUIRED]

              An endpoint in the customer’s account that has DataCaptureConfig enabled.

            • LocalPath (string) – [REQUIRED]

              Path to the filesystem where the endpoint data is available to the container.

            • S3InputMode (string) –

              Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

            • S3DataDistributionType (string) –

              Whether input data distributed in Amazon S3 is fully replicated or sharded by an Amazon S3 key. Defaults to FullyReplicated.

            • FeaturesAttribute (string) –

              The attributes of the input data that are the input features.

            • InferenceAttribute (string) –

              The attribute of the input data that represents the ground truth label.

            • ProbabilityAttribute (string) –

              In a classification problem, the attribute that represents the class probability.

            • ProbabilityThresholdAttribute (float) –

              The threshold for the class probability to be evaluated as a positive result.

            • StartTimeOffset (string) –

              If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

            • EndTimeOffset (string) –

              If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

            • ExcludeFeaturesAttribute (string) –

              The attributes of the input data to exclude from the analysis.

          • BatchTransformInput (dict) –

            Input object for the batch transform job.

            • DataCapturedDestinationS3Uri (string) – [REQUIRED]

              The Amazon S3 location being used to capture the data.

            • DatasetFormat (dict) – [REQUIRED]

              The dataset format for your batch transform job.

              • Csv (dict) –

                The CSV dataset used in the monitoring job.

                • Header (boolean) –

                  Indicates if the CSV data has a header.

              • Json (dict) –

                The JSON dataset used in the monitoring job.

                • Line (boolean) –

                  Indicates if the file should be read as a JSON object per line.

              • Parquet (dict) –

                The Parquet dataset used in the monitoring job.

            • LocalPath (string) – [REQUIRED]

              Path to the filesystem where the batch transform data is available to the container.

            • S3InputMode (string) –

              Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

            • S3DataDistributionType (string) –

              Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

            • FeaturesAttribute (string) –

              The attributes of the input data that are the input features.

            • InferenceAttribute (string) –

              The attribute of the input data that represents the ground truth label.

            • ProbabilityAttribute (string) –

              In a classification problem, the attribute that represents the class probability.

            • ProbabilityThresholdAttribute (float) –

              The threshold for the class probability to be evaluated as a positive result.

            • StartTimeOffset (string) –

              If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

            • EndTimeOffset (string) –

              If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

            • ExcludeFeaturesAttribute (string) –

              The attributes of the input data to exclude from the analysis.
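A minimal BatchTransformInput for CSV data with a header row might look like the following sketch (the S3 URI and local path are placeholder values):

```python
# Sketch of a BatchTransformInput for captured batch transform data
# in CSV format with a header row. URI and path are placeholders.
batch_transform_input = {
    "DataCapturedDestinationS3Uri": "s3://amzn-s3-demo-bucket/data-capture",
    "DatasetFormat": {"Csv": {"Header": True}},
    "LocalPath": "/opt/ml/processing/input",
    "S3InputMode": "File",
    "S3DataDistributionType": "FullyReplicated",
}
```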

      • MonitoringOutputConfig (dict) – [REQUIRED]

        The array of outputs from the monitoring job to be uploaded to Amazon S3.

        • MonitoringOutputs (list) – [REQUIRED]

          Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

          • (dict) –

            The output object for a monitoring job.

            • S3Output (dict) – [REQUIRED]

              The Amazon S3 storage location where the results of a monitoring job are saved.

              • S3Uri (string) – [REQUIRED]

                A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

              • LocalPath (string) – [REQUIRED]

                The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

              • S3UploadMode (string) –

                Whether to upload the results of the monitoring job continuously or after the job completes.

        • KmsKeyId (string) –

          The Key Management Service (KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

      • MonitoringResources (dict) – [REQUIRED]

        Identifies the resources, ML compute instances, and ML storage volumes to deploy for a monitoring job. In distributed processing, you specify more than one instance.

        • ClusterConfig (dict) – [REQUIRED]

          The configuration for the cluster resources used to run the processing job.

          • InstanceCount (integer) – [REQUIRED]

            The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

          • InstanceType (string) – [REQUIRED]

            The ML compute instance type for the processing job.

          • VolumeSizeInGB (integer) – [REQUIRED]

            The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

          • VolumeKmsKeyId (string) –

            The Key Management Service (KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

      • MonitoringAppSpecification (dict) – [REQUIRED]

        Configures the monitoring job to run a specified Docker container image.

        • ImageUri (string) – [REQUIRED]

          The container image to be run by the monitoring job.

        • ContainerEntrypoint (list) –

          Specifies the entrypoint for a container used to run the monitoring job.

          • (string) –

        • ContainerArguments (list) –

          An array of arguments for the container used to run the monitoring job.

          • (string) –

        • RecordPreprocessorSourceUri (string) –

          An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into a flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

        • PostAnalyticsProcessorSourceUri (string) –

          An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

      • StoppingCondition (dict) –

        Specifies a time limit for how long the monitoring job is allowed to run.

        • MaxRuntimeInSeconds (integer) – [REQUIRED]

          The maximum runtime allowed in seconds.

          Note

          The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
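The limits in the note above could be checked client-side before building the request; this is a hypothetical helper for hourly schedules only, not part of the API:

```python
# Hypothetical pre-flight check for MaxRuntimeInSeconds on an hourly
# schedule: up to 3600s for DataQuality/ModelExplainability, up to
# 1800s for ModelBias/ModelQuality (limits from the note above).
HOURLY_MAX_RUNTIME = {
    "DataQuality": 3600,
    "ModelExplainability": 3600,
    "ModelBias": 1800,
    "ModelQuality": 1800,
}

def stopping_condition(monitoring_type, max_runtime_in_seconds):
    limit = HOURLY_MAX_RUNTIME[monitoring_type]
    if max_runtime_in_seconds > limit:
        raise ValueError(
            f"{monitoring_type}: MaxRuntimeInSeconds {max_runtime_in_seconds} "
            f"exceeds the hourly-schedule limit of {limit}"
        )
    return {"MaxRuntimeInSeconds": max_runtime_in_seconds}
```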

      • Environment (dict) –

        Sets the environment variables in the Docker container.

        • (string) –

          • (string) –

      • NetworkConfig (dict) –

        Specifies networking options for a monitoring job.

        • EnableInterContainerTrafficEncryption (boolean) –

          Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.

        • EnableNetworkIsolation (boolean) –

          Whether to allow inbound and outbound network calls to and from the containers used for the processing job.

        • VpcConfig (dict) –

          Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.

          • SecurityGroupIds (list) – [REQUIRED]

            The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

            • (string) –

          • Subnets (list) – [REQUIRED]

            The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

            • (string) –

      • RoleArn (string) – [REQUIRED]

        The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

    • MonitoringJobDefinitionName (string) –

      The name of the monitoring job definition to schedule.

    • MonitoringType (string) –

      The type of the monitoring job definition to schedule.

Return type:

dict

Returns:

Response Syntax

{
    'MonitoringScheduleArn': 'string'
}

Response Structure

  • (dict) –

    • MonitoringScheduleArn (string) –

      The Amazon Resource Name (ARN) of the monitoring schedule.

Exceptions

  • SageMaker.Client.exceptions.ResourceLimitExceeded

  • SageMaker.Client.exceptions.ResourceNotFound
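A minimal end-to-end sketch of the call, updating an existing schedule to run hourly against a previously created job definition. The schedule name and job definition name are placeholder values, and the boto3 call itself is shown commented out because it requires AWS credentials and an existing schedule:

```python
# Sketch of update_monitoring_schedule kwargs using a named job
# definition. Names below are placeholders, not real resources.
update_kwargs = {
    "MonitoringScheduleName": "my-data-quality-schedule",
    "MonitoringScheduleConfig": {
        "ScheduleConfig": {"ScheduleExpression": "cron(0 * ? * * *)"},
        "MonitoringJobDefinitionName": "my-data-quality-job-definition",
        "MonitoringType": "DataQuality",
    },
}

# import boto3
# from botocore.exceptions import ClientError
#
# sm = boto3.client("sagemaker")
# try:
#     response = sm.update_monitoring_schedule(**update_kwargs)
#     print(response["MonitoringScheduleArn"])
# except ClientError as err:
#     # e.g. ResourceNotFound if the schedule does not exist,
#     # ResourceLimitExceeded if an account limit is hit
#     print(err.response["Error"]["Code"])
```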