SageMaker / Client / describe_model
describe_model

- SageMaker.Client.describe_model(**kwargs)
- Describes a model that you created using the CreateModel API.

  See also: AWS API Documentation

  Request Syntax

  response = client.describe_model(
      ModelName='string'
  )

- Parameters:
- ModelName (string) – [REQUIRED] The name of the model.
- Return type:
- dict 
- Returns:
- Response Syntax

  {
      'ModelName': 'string',
      'PrimaryContainer': {
          'ContainerHostname': 'string',
          'Image': 'string',
          'ImageConfig': {
              'RepositoryAccessMode': 'Platform'|'Vpc',
              'RepositoryAuthConfig': {
                  'RepositoryCredentialsProviderArn': 'string'
              }
          },
          'Mode': 'SingleModel'|'MultiModel',
          'ModelDataUrl': 'string',
          'ModelDataSource': {
              'S3DataSource': {
                  'S3Uri': 'string',
                  'S3DataType': 'S3Prefix'|'S3Object',
                  'CompressionType': 'None'|'Gzip',
                  'ModelAccessConfig': {
                      'AcceptEula': True|False
                  },
                  'HubAccessConfig': {
                      'HubContentArn': 'string'
                  },
                  'ManifestS3Uri': 'string',
                  'ETag': 'string',
                  'ManifestEtag': 'string'
              }
          },
          'AdditionalModelDataSources': [
              {
                  'ChannelName': 'string',
                  'S3DataSource': {
                      'S3Uri': 'string',
                      'S3DataType': 'S3Prefix'|'S3Object',
                      'CompressionType': 'None'|'Gzip',
                      'ModelAccessConfig': {
                          'AcceptEula': True|False
                      },
                      'HubAccessConfig': {
                          'HubContentArn': 'string'
                      },
                      'ManifestS3Uri': 'string',
                      'ETag': 'string',
                      'ManifestEtag': 'string'
                  }
              },
          ],
          'Environment': {
              'string': 'string'
          },
          'ModelPackageName': 'string',
          'InferenceSpecificationName': 'string',
          'MultiModelConfig': {
              'ModelCacheSetting': 'Enabled'|'Disabled'
          }
      },
      'Containers': [
          {
              'ContainerHostname': 'string',
              'Image': 'string',
              'ImageConfig': {
                  'RepositoryAccessMode': 'Platform'|'Vpc',
                  'RepositoryAuthConfig': {
                      'RepositoryCredentialsProviderArn': 'string'
                  }
              },
              'Mode': 'SingleModel'|'MultiModel',
              'ModelDataUrl': 'string',
              'ModelDataSource': {
                  'S3DataSource': {
                      'S3Uri': 'string',
                      'S3DataType': 'S3Prefix'|'S3Object',
                      'CompressionType': 'None'|'Gzip',
                      'ModelAccessConfig': {
                          'AcceptEula': True|False
                      },
                      'HubAccessConfig': {
                          'HubContentArn': 'string'
                      },
                      'ManifestS3Uri': 'string',
                      'ETag': 'string',
                      'ManifestEtag': 'string'
                  }
              },
              'AdditionalModelDataSources': [
                  {
                      'ChannelName': 'string',
                      'S3DataSource': {
                          'S3Uri': 'string',
                          'S3DataType': 'S3Prefix'|'S3Object',
                          'CompressionType': 'None'|'Gzip',
                          'ModelAccessConfig': {
                              'AcceptEula': True|False
                          },
                          'HubAccessConfig': {
                              'HubContentArn': 'string'
                          },
                          'ManifestS3Uri': 'string',
                          'ETag': 'string',
                          'ManifestEtag': 'string'
                      }
                  },
              ],
              'Environment': {
                  'string': 'string'
              },
              'ModelPackageName': 'string',
              'InferenceSpecificationName': 'string',
              'MultiModelConfig': {
                  'ModelCacheSetting': 'Enabled'|'Disabled'
              }
          },
      ],
      'InferenceExecutionConfig': {
          'Mode': 'Serial'|'Direct'
      },
      'ExecutionRoleArn': 'string',
      'VpcConfig': {
          'SecurityGroupIds': [
              'string',
          ],
          'Subnets': [
              'string',
          ]
      },
      'CreationTime': datetime(2015, 1, 1),
      'ModelArn': 'string',
      'EnableNetworkIsolation': True|False,
      'DeploymentRecommendation': {
          'RecommendationStatus': 'IN_PROGRESS'|'COMPLETED'|'FAILED'|'NOT_APPLICABLE',
          'RealTimeInferenceRecommendations': [
              {
                  'RecommendationId': 'string',
                  'InstanceType': 'ml.t2.medium'|'ml.t2.large'|'ml.t2.xlarge'|'ml.t2.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.m5d.large'|'ml.m5d.xlarge'|'ml.m5d.2xlarge'|'ml.m5d.4xlarge'|'ml.m5d.12xlarge'|'ml.m5d.24xlarge'|'ml.c4.large'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.large'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.c5d.large'|'ml.c5d.xlarge'|'ml.c5d.2xlarge'|'ml.c5d.4xlarge'|'ml.c5d.9xlarge'|'ml.c5d.18xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.12xlarge'|'ml.r5.24xlarge'|'ml.r5d.large'|'ml.r5d.xlarge'|'ml.r5d.2xlarge'|'ml.r5d.4xlarge'|'ml.r5d.12xlarge'|'ml.r5d.24xlarge'|'ml.inf1.xlarge'|'ml.inf1.2xlarge'|'ml.inf1.6xlarge'|'ml.inf1.24xlarge'|'ml.dl1.24xlarge'|'ml.c6i.large'|'ml.c6i.xlarge'|'ml.c6i.2xlarge'|'ml.c6i.4xlarge'|'ml.c6i.8xlarge'|'ml.c6i.12xlarge'|'ml.c6i.16xlarge'|'ml.c6i.24xlarge'|'ml.c6i.32xlarge'|'ml.m6i.large'|'ml.m6i.xlarge'|'ml.m6i.2xlarge'|'ml.m6i.4xlarge'|'ml.m6i.8xlarge'|'ml.m6i.12xlarge'|'ml.m6i.16xlarge'|'ml.m6i.24xlarge'|'ml.m6i.32xlarge'|'ml.r6i.large'|'ml.r6i.xlarge'|'ml.r6i.2xlarge'|'ml.r6i.4xlarge'|'ml.r6i.8xlarge'|'ml.r6i.12xlarge'|'ml.r6i.16xlarge'|'ml.r6i.24xlarge'|'ml.r6i.32xlarge'|'ml.g5.xlarge'|'ml.g5.2xlarge'|'ml.g5.4xlarge'|'ml.g5.8xlarge'|'ml.g5.12xlarge'|'ml.g5.16xlarge'|'ml.g5.24xlarge'|'ml.g5.48xlarge'|'ml.g6.xlarge'|'ml.g6.2xlarge'|'ml.g6.4xlarge'|'ml.g6.8xlarge'|'ml.g6.12xlarge'|'ml.g6.16xlarge'|'ml.g6.24xlarge'|'ml.g6.48xlarge'|'ml.g6e.xlarge'|'ml.g6e.2xlarge'|'ml.g6e.4xlarge'|'ml.g6e.8xlarge'|'ml.g6e.12xlarge'|'ml.g6e.16xlarge'|'ml.g6e.24xlarge'|'ml.g6e.48xlarge'|'ml.p4d.24xlarge'|'ml.c7g.large'|'ml.c7g.xlarge'|'ml.c7g.2xlarge'|'ml.c7g.4xlarge'|'ml.c7g.8xlarge'|'ml.c7g.12xlarge'|'ml.c7g.16xlarge'|'ml.m6g.large'|'ml.m6g.xlarge'|'ml.m6g.2xlarge'|'ml.m6g.4xlarge'|'ml.m6g.8xlarge'|'ml.m6g.12xlarge'|'ml.m6g.16xlarge'|'ml.m6gd.large'|'ml.m6gd.xlarge'|'ml.m6gd.2xlarge'|'ml.m6gd.4xlarge'|'ml.m6gd.8xlarge'|'ml.m6gd.12xlarge'|'ml.m6gd.16xlarge'|'ml.c6g.large'|'ml.c6g.xlarge'|'ml.c6g.2xlarge'|'ml.c6g.4xlarge'|'ml.c6g.8xlarge'|'ml.c6g.12xlarge'|'ml.c6g.16xlarge'|'ml.c6gd.large'|'ml.c6gd.xlarge'|'ml.c6gd.2xlarge'|'ml.c6gd.4xlarge'|'ml.c6gd.8xlarge'|'ml.c6gd.12xlarge'|'ml.c6gd.16xlarge'|'ml.c6gn.large'|'ml.c6gn.xlarge'|'ml.c6gn.2xlarge'|'ml.c6gn.4xlarge'|'ml.c6gn.8xlarge'|'ml.c6gn.12xlarge'|'ml.c6gn.16xlarge'|'ml.r6g.large'|'ml.r6g.xlarge'|'ml.r6g.2xlarge'|'ml.r6g.4xlarge'|'ml.r6g.8xlarge'|'ml.r6g.12xlarge'|'ml.r6g.16xlarge'|'ml.r6gd.large'|'ml.r6gd.xlarge'|'ml.r6gd.2xlarge'|'ml.r6gd.4xlarge'|'ml.r6gd.8xlarge'|'ml.r6gd.12xlarge'|'ml.r6gd.16xlarge'|'ml.p4de.24xlarge'|'ml.trn1.2xlarge'|'ml.trn1.32xlarge'|'ml.trn1n.32xlarge'|'ml.trn2.48xlarge'|'ml.inf2.xlarge'|'ml.inf2.8xlarge'|'ml.inf2.24xlarge'|'ml.inf2.48xlarge'|'ml.p5.48xlarge'|'ml.p5e.48xlarge'|'ml.m7i.large'|'ml.m7i.xlarge'|'ml.m7i.2xlarge'|'ml.m7i.4xlarge'|'ml.m7i.8xlarge'|'ml.m7i.12xlarge'|'ml.m7i.16xlarge'|'ml.m7i.24xlarge'|'ml.m7i.48xlarge'|'ml.c7i.large'|'ml.c7i.xlarge'|'ml.c7i.2xlarge'|'ml.c7i.4xlarge'|'ml.c7i.8xlarge'|'ml.c7i.12xlarge'|'ml.c7i.16xlarge'|'ml.c7i.24xlarge'|'ml.c7i.48xlarge'|'ml.r7i.large'|'ml.r7i.xlarge'|'ml.r7i.2xlarge'|'ml.r7i.4xlarge'|'ml.r7i.8xlarge'|'ml.r7i.12xlarge'|'ml.r7i.16xlarge'|'ml.r7i.24xlarge'|'ml.r7i.48xlarge',
                  'Environment': {
                      'string': 'string'
                  }
              },
          ]
      }
  }

- Response Structure

- (dict) –
- ModelName (string) – Name of the SageMaker model.
- PrimaryContainer (dict) – The location of the primary inference code, associated artifacts, and custom environment map that the inference code uses when it is deployed in production.
- ContainerHostname (string) – This parameter is ignored for models that contain only a PrimaryContainer.
  When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
- Image (string) – The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
  Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- ImageConfig (dict) – Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
  Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- RepositoryAccessMode (string) – Set this to one of the following values:
  - Platform – The model image is hosted in Amazon ECR.
  - Vpc – The model image is hosted in a private Docker registry in your VPC.
 
- RepositoryAuthConfig (dict) – (Optional) Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
- RepositoryCredentialsProviderArn (string) – The Amazon Resource Name (ARN) of an Amazon Web Services Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an Amazon Web Services Lambda function, see Create a Lambda function with the console in the Amazon Web Services Lambda Developer Guide.
 
 
- Mode (string) – Whether the container hosts a single model or multiple models.
- ModelDataUrl (string) – The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
  Note: The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
  If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.
  Warning: If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
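The single-archive requirement above can be satisfied with the standard library. The sketch below packages a stand-in artifact into a flat model.tar.gz; the file names and bucket path in the comments are hypothetical, and only the .tar.gz structure reflects the documented requirement.

```python
import os
import tarfile
import tempfile

# Hypothetical artifact; SageMaker only requires that ModelDataUrl point at
# a single gzip-compressed tar archive (.tar.gz suffix).
workdir = tempfile.mkdtemp()
artifact = os.path.join(workdir, "model.bin")
with open(artifact, "wb") as f:
    f.write(b"\x00" * 16)  # stand-in for real model weights

archive_path = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive_path, "w:gz") as tar:
    # arcname keeps the archive flat, so the artifact lands directly
    # under /opt/ml/model when SageMaker extracts it.
    tar.add(artifact, arcname="model.bin")

# The archive would then be uploaded to S3 and referenced via ModelDataUrl,
# e.g. s3://my-bucket/model.tar.gz (bucket name hypothetical).
with tarfile.open(archive_path, "r:gz") as tar:
    names = tar.getnames()
print(names)  # ['model.bin']
```

Keeping the archive flat (via arcname) avoids an extra directory level inside /opt/ml/model after extraction.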
- ModelDataSource (dict) – Specifies the location of ML model data to deploy.
  Note: Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
- S3DataSource (dict) – Specifies the S3 location of ML model data to deploy.
- S3Uri (string) – Specifies the S3 path of ML model data to deploy.
- S3DataType (string) – Specifies the type of ML model data to deploy.
  If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix as part of the ML model data to deploy. A valid key name prefix identified by S3Uri always ends with a forward slash (/).
  If you choose S3Object, S3Uri identifies an object that is the ML model data to deploy.
- CompressionType (string) – Specifies how the ML model data is prepared.
  If you choose Gzip and choose S3Object as the value of S3DataType, S3Uri identifies an object that is a gzip-compressed TAR archive. SageMaker will attempt to decompress and untar the object during model deployment.
  If you choose None and choose S3Object as the value of S3DataType, S3Uri identifies an object that represents an uncompressed ML model to deploy.
  If you choose None and choose S3Prefix as the value of S3DataType, S3Uri identifies a key name prefix, under which all objects represent the uncompressed ML model to deploy.
  If you choose None, then SageMaker follows the rules below when creating model data files under the /opt/ml/model directory for use by your inference code:
  - If you choose S3Object as the value of S3DataType, then SageMaker splits the key of the S3 object referenced by S3Uri by slash (/) and uses the last part as the filename of the file holding the content of the S3 object.
  - If you choose S3Prefix as the value of S3DataType, then for each S3 object under the key name prefix referenced by S3Uri, SageMaker trims its key by the prefix and uses the remainder as the path (relative to /opt/ml/model) of the file holding the content of the S3 object. SageMaker splits the remainder by slash (/), using intermediate parts as directory names and the last part as the filename of the file holding the content of the S3 object.
- Do not use any of the following as file names or directory names:
  - An empty or blank string
  - A string which contains null bytes
  - A string longer than 255 bytes
  - A single dot (.)
  - A double dot (..)
 
- Ambiguous file names will result in model deployment failure. For example, if your uncompressed ML model consists of two S3 objects s3://mybucket/model/weights and s3://mybucket/model/weights/part1, and you specify s3://mybucket/model/ as the value of S3Uri and S3Prefix as the value of S3DataType, then it will result in a name clash between /opt/ml/model/weights (a regular file) and /opt/ml/model/weights/ (a directory).
- Do not organize the model artifacts in the S3 console using folders. When you create a folder in the S3 console, S3 creates a 0-byte object with a key set to the folder name you provide. The key of the 0-byte object ends with a slash (/), which violates SageMaker restrictions on model artifact file names, leading to model deployment failure.
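The S3Prefix trimming and naming rules above can be sketched in plain Python. This is an illustration of the documented mapping, not SageMaker's actual implementation; the key names are hypothetical.

```python
import posixpath

def is_valid_part(part: str) -> bool:
    """Check one path component against the documented name restrictions."""
    if part.strip() == "" or part in {".", ".."}:
        return False
    if "\x00" in part:  # null bytes are forbidden
        return False
    if len(part.encode("utf-8")) > 255:  # longer than 255 bytes
        return False
    return True

def map_keys_to_paths(prefix: str, keys: list[str]) -> dict[str, str]:
    """Map S3 keys under `prefix` to paths relative to /opt/ml/model,
    mimicking the trimming rule described above (illustration only)."""
    paths = {}
    for key in keys:
        rel = key[len(prefix):]  # trim the key name prefix
        if not all(is_valid_part(p) for p in rel.split("/")):
            raise ValueError(f"invalid component in {key!r}")
        paths[key] = posixpath.join("/opt/ml/model", rel)
    return paths

# Hypothetical keys under the prefix 'model/'
mapping = map_keys_to_paths("model/", ["model/weights/part1", "model/config.json"])

# Detect the documented file-vs-directory name clash:
files = set(mapping.values())
for p in files:
    if any(other.startswith(p + "/") for other in files):
        raise ValueError(f"name clash: {p} is both a file and a directory")
```

With the clash check, a key set like model/weights plus model/weights/part1 would raise, matching the failure mode described in the example above.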
 
- ModelAccessConfig (dict) – Specifies the access configuration file for the ML model. You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
- AcceptEula (boolean) – Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
 
- HubAccessConfig (dict) – Configuration information for hub access.
- HubContentArn (string) – The ARN of the hub content for which deployment access is allowed.
 
- ManifestS3Uri (string) – The Amazon S3 URI of the manifest file. The manifest file is a CSV file that stores the artifact locations.
- ETag (string) – The ETag associated with the S3 URI.
- ManifestEtag (string) – The ETag associated with the manifest S3 URI.
 
 
- AdditionalModelDataSources (list) – Data sources that are available to your model in addition to the one that you specify for ModelDataSource when you use the CreateModel action.
- (dict) – Data sources that are available to your model in addition to the one that you specify for ModelDataSource when you use the CreateModel action.
- ChannelName (string) – A custom name for this AdditionalModelDataSource object.
- S3DataSource (dict) – Specifies the S3 location of ML model data to deploy.
- S3Uri (string) – Specifies the S3 path of ML model data to deploy.
- S3DataType (string) – Specifies the type of ML model data to deploy.
  If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix as part of the ML model data to deploy. A valid key name prefix identified by S3Uri always ends with a forward slash (/).
  If you choose S3Object, S3Uri identifies an object that is the ML model data to deploy.
- CompressionType (string) – Specifies how the ML model data is prepared.
  If you choose Gzip and choose S3Object as the value of S3DataType, S3Uri identifies an object that is a gzip-compressed TAR archive. SageMaker will attempt to decompress and untar the object during model deployment.
  If you choose None and choose S3Object as the value of S3DataType, S3Uri identifies an object that represents an uncompressed ML model to deploy.
  If you choose None and choose S3Prefix as the value of S3DataType, S3Uri identifies a key name prefix, under which all objects represent the uncompressed ML model to deploy.
  If you choose None, then SageMaker follows the rules below when creating model data files under the /opt/ml/model directory for use by your inference code:
  - If you choose S3Object as the value of S3DataType, then SageMaker splits the key of the S3 object referenced by S3Uri by slash (/) and uses the last part as the filename of the file holding the content of the S3 object.
  - If you choose S3Prefix as the value of S3DataType, then for each S3 object under the key name prefix referenced by S3Uri, SageMaker trims its key by the prefix and uses the remainder as the path (relative to /opt/ml/model) of the file holding the content of the S3 object. SageMaker splits the remainder by slash (/), using intermediate parts as directory names and the last part as the filename of the file holding the content of the S3 object.
- Do not use any of the following as file names or directory names:
  - An empty or blank string
  - A string which contains null bytes
  - A string longer than 255 bytes
  - A single dot (.)
  - A double dot (..)
- Ambiguous file names will result in model deployment failure. For example, if your uncompressed ML model consists of two S3 objects s3://mybucket/model/weights and s3://mybucket/model/weights/part1, and you specify s3://mybucket/model/ as the value of S3Uri and S3Prefix as the value of S3DataType, then it will result in a name clash between /opt/ml/model/weights (a regular file) and /opt/ml/model/weights/ (a directory).
- Do not organize the model artifacts in the S3 console using folders. When you create a folder in the S3 console, S3 creates a 0-byte object with a key set to the folder name you provide. The key of the 0-byte object ends with a slash (/), which violates SageMaker restrictions on model artifact file names, leading to model deployment failure.
 
- ModelAccessConfig (dict) – Specifies the access configuration file for the ML model. You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
- AcceptEula (boolean) – Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
 
- HubAccessConfig (dict) – Configuration information for hub access.
- HubContentArn (string) – The ARN of the hub content for which deployment access is allowed.
 
- ManifestS3Uri (string) – The Amazon S3 URI of the manifest file. The manifest file is a CSV file that stores the artifact locations.
- ETag (string) – The ETag associated with the S3 URI.
- ManifestEtag (string) – The ETag associated with the manifest S3 URI.
 
 
 
- Environment (dict) – The environment variables to set in the Docker container. Don’t include any sensitive data in your environment variables.
  The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
  - (string) –
  - (string) –
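The size limits above (1024 bytes per key and per value, 32 KB combined across all containers' maps) can be checked client-side before calling CreateModel. This is a hypothetical helper, not part of the SageMaker API; the environment variable names are placeholders.

```python
def check_environment(env: dict) -> int:
    """Validate one container's Environment map against the documented
    limits and return the number of bytes it contributes (illustration only)."""
    total = 0
    for key, value in env.items():
        kb = len(key.encode("utf-8"))
        vb = len(value.encode("utf-8"))
        if kb > 1024 or vb > 1024:
            raise ValueError(f"entry {key!r} exceeds the 1024-byte limit")
        total += kb + vb
    return total

# Combined size across all containers in one CreateModel request
# must stay under 32 KB.
envs = [{"MODEL_SERVER_WORKERS": "2"}, {"LOG_LEVEL": "info"}]
combined = sum(check_environment(e) for e in envs)
assert combined <= 32 * 1024
```

Byte lengths are measured on the UTF-8 encoding, since the documented limits are in bytes rather than characters.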
 
 
- ModelPackageName (string) – The name or Amazon Resource Name (ARN) of the model package to use to create the model.
- InferenceSpecificationName (string) – The inference specification name in the model package version.
- MultiModelConfig (dict) – Specifies additional configuration for multi-model endpoints.
- ModelCacheSetting (string) – Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
 
 
- Containers (list) – The containers in the inference pipeline.
- (dict) – Describes the container, as part of model definition.
- ContainerHostname (string) – This parameter is ignored for models that contain only a PrimaryContainer.
  When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
- Image (string) – The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
  Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- ImageConfig (dict) – Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
  Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- RepositoryAccessMode (string) – Set this to one of the following values:
  - Platform – The model image is hosted in Amazon ECR.
  - Vpc – The model image is hosted in a private Docker registry in your VPC.
- RepositoryAuthConfig (dict) – (Optional) Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
- RepositoryCredentialsProviderArn (string) – The Amazon Resource Name (ARN) of an Amazon Web Services Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an Amazon Web Services Lambda function, see Create a Lambda function with the console in the Amazon Web Services Lambda Developer Guide.
 
 
- Mode (string) – Whether the container hosts a single model or multiple models.
- ModelDataUrl (string) – The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
  Note: The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
  If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the Amazon Web Services Identity and Access Management User Guide.
  Warning: If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
- ModelDataSource (dict) – Specifies the location of ML model data to deploy.
  Note: Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
- S3DataSource (dict) – Specifies the S3 location of ML model data to deploy.
- S3Uri (string) – Specifies the S3 path of ML model data to deploy.
- S3DataType (string) – Specifies the type of ML model data to deploy.
  If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix as part of the ML model data to deploy. A valid key name prefix identified by S3Uri always ends with a forward slash (/).
  If you choose S3Object, S3Uri identifies an object that is the ML model data to deploy.
- CompressionType (string) – Specifies how the ML model data is prepared.
  If you choose Gzip and choose S3Object as the value of S3DataType, S3Uri identifies an object that is a gzip-compressed TAR archive. SageMaker will attempt to decompress and untar the object during model deployment.
  If you choose None and choose S3Object as the value of S3DataType, S3Uri identifies an object that represents an uncompressed ML model to deploy.
  If you choose None and choose S3Prefix as the value of S3DataType, S3Uri identifies a key name prefix, under which all objects represent the uncompressed ML model to deploy.
  If you choose None, then SageMaker follows the rules below when creating model data files under the /opt/ml/model directory for use by your inference code:
  - If you choose S3Object as the value of S3DataType, then SageMaker splits the key of the S3 object referenced by S3Uri by slash (/) and uses the last part as the filename of the file holding the content of the S3 object.
  - If you choose S3Prefix as the value of S3DataType, then for each S3 object under the key name prefix referenced by S3Uri, SageMaker trims its key by the prefix and uses the remainder as the path (relative to /opt/ml/model) of the file holding the content of the S3 object. SageMaker splits the remainder by slash (/), using intermediate parts as directory names and the last part as the filename of the file holding the content of the S3 object.
- Do not use any of the following as file names or directory names:
  - An empty or blank string
  - A string which contains null bytes
  - A string longer than 255 bytes
  - A single dot (.)
  - A double dot (..)
- Ambiguous file names will result in model deployment failure. For example, if your uncompressed ML model consists of two S3 objects s3://mybucket/model/weights and s3://mybucket/model/weights/part1, and you specify s3://mybucket/model/ as the value of S3Uri and S3Prefix as the value of S3DataType, then it will result in a name clash between /opt/ml/model/weights (a regular file) and /opt/ml/model/weights/ (a directory).
- Do not organize the model artifacts in the S3 console using folders. When you create a folder in the S3 console, S3 creates a 0-byte object with a key set to the folder name you provide. The key of the 0-byte object ends with a slash (/), which violates SageMaker restrictions on model artifact file names, leading to model deployment failure.
 
- ModelAccessConfig (dict) – Specifies the access configuration file for the ML model. You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
- AcceptEula (boolean) – Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
 
- HubAccessConfig (dict) – Configuration information for hub access.
- HubContentArn (string) – The ARN of the hub content for which deployment access is allowed.
 
- ManifestS3Uri (string) – The Amazon S3 URI of the manifest file. The manifest file is a CSV file that stores the artifact locations.
- ETag (string) – The ETag associated with the S3 URI.
- ManifestEtag (string) – The ETag associated with the manifest S3 URI.
 
 
- AdditionalModelDataSources (list) – - Data sources that are available to your model in addition to the one that you specify for - ModelDataSourcewhen you use the- CreateModelaction.- (dict) – - Data sources that are available to your model in addition to the one that you specify for - ModelDataSourcewhen you use the- CreateModelaction.- ChannelName (string) – - A custom name for this - AdditionalModelDataSourceobject.
- S3DataSource (dict) – - Specifies the S3 location of ML model data to deploy. - S3Uri (string) – - Specifies the S3 path of ML model data to deploy. 
- S3DataType (string) – - Specifies the type of ML model data to deploy. - If you choose - S3Prefix,- S3Uriidentifies a key name prefix. SageMaker uses all objects that match the specified key name prefix as part of the ML model data to deploy. A valid key name prefix identified by- S3Urialways ends with a forward slash (/).- If you choose - S3Object,- S3Uriidentifies an object that is the ML model data to deploy.
- CompressionType (string) – - Specifies how the ML model data is prepared. - If you choose - Gzipand choose- S3Objectas the value of- S3DataType,- S3Uriidentifies an object that is a gzip-compressed TAR archive. SageMaker will attempt to decompress and untar the object during model deployment.- If you choose - Noneand chooose- S3Objectas the value of- S3DataType,- S3Uriidentifies an object that represents an uncompressed ML model to deploy.- If you choose None and choose - S3Prefixas the value of- S3DataType,- S3Uriidentifies a key name prefix, under which all objects represents the uncompressed ML model to deploy.- If you choose None, then SageMaker will follow rules below when creating model data files under /opt/ml/model directory for use by your inference code: - If you choose - S3Objectas the value of- S3DataType, then SageMaker will split the key of the S3 object referenced by- S3Uriby slash (/), and use the last part as the filename of the file holding the content of the S3 object.
- If you choose - S3Prefix as the value of - S3DataType, then for each S3 object under the key name prefix referenced by - S3Uri, SageMaker will trim its key by the prefix, and use the remainder as the path (relative to - /opt/ml/model) of the file holding the content of the S3 object. SageMaker will split the remainder by slash (/), using intermediate parts as directory names and the last part as the filename of the file holding the content of the S3 object.
- Do not use any of the following as file names or directory names: - An empty or blank string 
- A string which contains null bytes 
- A string longer than 255 bytes 
- A single dot ( - .)
- A double dot ( - ..)
 
- Ambiguous file names will result in model deployment failure. For example, if your uncompressed ML model consists of two S3 objects - s3://mybucket/model/weights and - s3://mybucket/model/weights/part1 and you specify - s3://mybucket/model/ as the value of - S3Uri and - S3Prefix as the value of - S3DataType, then it will result in a name clash between - /opt/ml/model/weights (a regular file) and - /opt/ml/model/weights/ (a directory).
- Do not organize the model artifacts in the S3 console using folders. When you create a folder in the S3 console, S3 creates a 0-byte object with a key set to the folder name you provide. The key of the 0-byte object ends with a slash (/), which violates SageMaker restrictions on model artifact file names, leading to model deployment failure. 
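The key-to-path mapping described above for - S3Prefix can be sketched roughly as follows. This is an illustrative helper, not part of any SageMaker SDK; the function name and signature are hypothetical:

```python
# Illustrative sketch (not part of the SageMaker SDK) of how objects under an
# S3Prefix are laid out under /opt/ml/model, following the rules above:
# trim the key name prefix, then use the remainder as the relative path.
def map_prefix_keys_to_paths(prefix, object_keys, root="/opt/ml/model"):
    paths = []
    for key in object_keys:
        relative = key[len(prefix):]  # trim the key name prefix
        # Intermediate slash-separated parts become directories; the last
        # part is the filename. (Empty parts, ".", "..", names with null
        # bytes, or names over 255 bytes would cause deployment failure.)
        paths.append(f"{root}/{relative}")
    return paths

print(map_prefix_keys_to_paths("model/", ["model/weights/part1", "model/config.json"]))
# ['/opt/ml/model/weights/part1', '/opt/ml/model/config.json']
```

Note how the example's first key produces a regular file at /opt/ml/model/weights/part1, which is exactly why a sibling object keyed model/weights would clash with the weights/ directory.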
 
- ModelAccessConfig (dict) – - Specifies the access configuration file for the ML model. You can explicitly accept the model end-user license agreement (EULA) within the - ModelAccessConfig. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model. - AcceptEula (boolean) – - Specifies agreement to the model end-user license agreement (EULA). The - AcceptEula value must be explicitly defined as - True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
 
- HubAccessConfig (dict) – - Configuration information for hub access. - HubContentArn (string) – - The ARN of the hub content for which deployment access is allowed. 
 
- ManifestS3Uri (string) – - The Amazon S3 URI of the manifest file. The manifest file is a CSV file that stores the artifact locations. 
- ETag (string) – - The ETag associated with the S3 URI. 
- ManifestEtag (string) – - The ETag associated with the manifest S3 URI. 
 
 
 
- Environment (dict) – - The environment variables to set in the Docker container. Don’t include any sensitive data in your environment variables. - The maximum length of each key and value in the - Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a - CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB. - (string) – - (string) – 
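The size limits on the - Environment map can be checked client-side before calling - CreateModel. A minimal sketch, assuming UTF-8 encoding; check_environment_limits is a hypothetical helper, not a boto3 API:

```python
# Hypothetical helper (not part of boto3): check an Environment map against
# the documented limits -- each key and each value at most 1024 bytes, and
# all keys and values combined at most 32 KB.
def check_environment_limits(env):
    total = 0
    for key, value in env.items():
        k, v = key.encode("utf-8"), value.encode("utf-8")
        if len(k) > 1024 or len(v) > 1024:
            return False  # a single key or value exceeds 1024 bytes
        total += len(k) + len(v)
    return total <= 32 * 1024  # combined size must fit in 32 KB

print(check_environment_limits({"SAGEMAKER_PROGRAM": "inference.py"}))  # True
```

If a model has multiple containers, the same 32 KB cap applies to all of their maps combined, so the totals would need to be summed across containers.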
 
 
- ModelPackageName (string) – - The name or Amazon Resource Name (ARN) of the model package to use to create the model. 
- InferenceSpecificationName (string) – - The inference specification name in the model package version. 
- MultiModelConfig (dict) – - Specifies additional configuration for multi-model endpoints. - ModelCacheSetting (string) – - Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to - Disabled.
 
 
 
- InferenceExecutionConfig (dict) – - Specifies details of how containers in a multi-container endpoint are called. - Mode (string) – - How containers in a multi-container endpoint are run. The following values are valid. - SERIAL - Containers run as a serial pipeline.
- DIRECT - Only the individual container that you specify is run.
 
 
- ExecutionRoleArn (string) – - The Amazon Resource Name (ARN) of the IAM role that you specified for the model. 
- VpcConfig (dict) – - A VpcConfig object that specifies the VPC that this model has access to. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud. - SecurityGroupIds (list) – - The VPC security group IDs, in the form - sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the - Subnets field. - (string) – 
 
- Subnets (list) – - The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones. - (string) – 
 
 
- CreationTime (datetime) – - A timestamp that shows when the model was created. 
- ModelArn (string) – - The Amazon Resource Name (ARN) of the model. 
- EnableNetworkIsolation (boolean) – - If - True, no inbound or outbound network calls can be made to or from the model container.
- DeploymentRecommendation (dict) – - A set of recommended deployment configurations for the model. - RecommendationStatus (string) – - Status of the deployment recommendation. The status - NOT_APPLICABLE means that SageMaker is unable to provide a default recommendation for the model using the information provided. If the deployment status is - IN_PROGRESS, retry your API call after a few seconds to get a - COMPLETED deployment recommendation.
- RealTimeInferenceRecommendations (list) – - A list of RealTimeInferenceRecommendation items. - (dict) – - The recommended configuration to use for Real-Time Inference. - RecommendationId (string) – - The recommendation ID which uniquely identifies each recommendation. 
- InstanceType (string) – - The recommended instance type for Real-Time Inference. 
- Environment (dict) – - The recommended environment variables to set in the model container for Real-Time Inference. - (string) – - (string) –