describe_auto_ml_job_v2
- SageMaker.Client.describe_auto_ml_job_v2(**kwargs)
Returns information about an Amazon SageMaker AutoML V2 job.
Note
This API action is callable through SageMaker Canvas only. Calling it directly from the CLI or an SDK results in an error.
See also: AWS API Documentation
Request Syntax
response = client.describe_auto_ml_job_v2(
    AutoMLJobName='string'
)
- Parameters:
AutoMLJobName (string) –
[REQUIRED]
Requests information about an AutoML V2 job using its unique name.
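For example, a minimal sketch of calling this action with boto3 and reading the top-level status fields. The job name my-automl-v2-job is hypothetical, and the call is subject to the note above about where this action can be invoked:
import boto3

# Create a SageMaker client; region and credentials come from your environment.
sm = boto3.client('sagemaker')

# Describe a V2 AutoML job by its unique name (hypothetical name shown here).
response = sm.describe_auto_ml_job_v2(AutoMLJobName='my-automl-v2-job')

# The response is a plain dict; the top-level status fields summarize the job.
print(response['AutoMLJobStatus'], response['AutoMLJobSecondaryStatus'])
print(response['AutoMLJobArn'])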
- Return type:
dict
- Returns:
Response Syntax
{
    'AutoMLJobName': 'string',
    'AutoMLJobArn': 'string',
    'AutoMLJobInputDataConfig': [
        {
            'ChannelType': 'training'|'validation',
            'ContentType': 'string',
            'CompressionType': 'None'|'Gzip',
            'DataSource': {
                'S3DataSource': {
                    'S3DataType': 'ManifestFile'|'S3Prefix'|'AugmentedManifestFile',
                    'S3Uri': 'string'
                }
            }
        },
    ],
    'OutputDataConfig': {
        'KmsKeyId': 'string',
        'S3OutputPath': 'string'
    },
    'RoleArn': 'string',
    'AutoMLJobObjective': {
        'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'|'RMSE'|'MAE'|'R2'|'BalancedAccuracy'|'Precision'|'PrecisionMacro'|'Recall'|'RecallMacro'
    },
    'AutoMLProblemTypeConfig': {
        'ImageClassificationJobConfig': {
            'CompletionCriteria': {
                'MaxCandidates': 123,
                'MaxRuntimePerTrainingJobInSeconds': 123,
                'MaxAutoMLJobRuntimeInSeconds': 123
            }
        },
        'TextClassificationJobConfig': {
            'CompletionCriteria': {
                'MaxCandidates': 123,
                'MaxRuntimePerTrainingJobInSeconds': 123,
                'MaxAutoMLJobRuntimeInSeconds': 123
            },
            'ContentColumn': 'string',
            'TargetLabelColumn': 'string'
        }
    },
    'CreationTime': datetime(2015, 1, 1),
    'EndTime': datetime(2015, 1, 1),
    'LastModifiedTime': datetime(2015, 1, 1),
    'FailureReason': 'string',
    'PartialFailureReasons': [
        {
            'PartialFailureMessage': 'string'
        },
    ],
    'BestCandidate': {
        'CandidateName': 'string',
        'FinalAutoMLJobObjectiveMetric': {
            'Type': 'Maximize'|'Minimize',
            'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'|'RMSE'|'MAE'|'R2'|'BalancedAccuracy'|'Precision'|'PrecisionMacro'|'Recall'|'RecallMacro',
            'Value': ...,
            'StandardMetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'|'RMSE'|'MAE'|'R2'|'BalancedAccuracy'|'Precision'|'PrecisionMacro'|'Recall'|'RecallMacro'
        },
        'ObjectiveStatus': 'Succeeded'|'Pending'|'Failed',
        'CandidateSteps': [
            {
                'CandidateStepType': 'AWS::SageMaker::TrainingJob'|'AWS::SageMaker::TransformJob'|'AWS::SageMaker::ProcessingJob',
                'CandidateStepArn': 'string',
                'CandidateStepName': 'string'
            },
        ],
        'CandidateStatus': 'Completed'|'InProgress'|'Failed'|'Stopped'|'Stopping',
        'InferenceContainers': [
            {
                'Image': 'string',
                'ModelDataUrl': 'string',
                'Environment': {
                    'string': 'string'
                }
            },
        ],
        'CreationTime': datetime(2015, 1, 1),
        'EndTime': datetime(2015, 1, 1),
        'LastModifiedTime': datetime(2015, 1, 1),
        'FailureReason': 'string',
        'CandidateProperties': {
            'CandidateArtifactLocations': {
                'Explainability': 'string',
                'ModelInsights': 'string'
            },
            'CandidateMetrics': [
                {
                    'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'|'RMSE'|'MAE'|'R2'|'BalancedAccuracy'|'Precision'|'PrecisionMacro'|'Recall'|'RecallMacro',
                    'Value': ...,
                    'Set': 'Train'|'Validation'|'Test',
                    'StandardMetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'|'RMSE'|'MAE'|'R2'|'BalancedAccuracy'|'Precision'|'PrecisionMacro'|'Recall'|'RecallMacro'|'LogLoss'|'InferenceLatency'
                },
            ]
        },
        'InferenceContainerDefinitions': {
            'string': [
                {
                    'Image': 'string',
                    'ModelDataUrl': 'string',
                    'Environment': {
                        'string': 'string'
                    }
                },
            ]
        }
    },
    'AutoMLJobStatus': 'Completed'|'InProgress'|'Failed'|'Stopped'|'Stopping',
    'AutoMLJobSecondaryStatus': 'Starting'|'AnalyzingData'|'FeatureEngineering'|'ModelTuning'|'MaxCandidatesReached'|'Failed'|'Stopped'|'MaxAutoMLJobRuntimeReached'|'Stopping'|'CandidateDefinitionsGenerated'|'GeneratingExplainabilityReport'|'Completed'|'ExplainabilityError'|'DeployingModel'|'ModelDeploymentError'|'GeneratingModelInsightsReport'|'ModelInsightsError'|'TrainingModels',
    'ModelDeployConfig': {
        'AutoGenerateEndpointName': True|False,
        'EndpointName': 'string'
    },
    'ModelDeployResult': {
        'EndpointName': 'string'
    },
    'DataSplitConfig': {
        'ValidationFraction': ...
    },
    'SecurityConfig': {
        'VolumeKmsKeyId': 'string',
        'EnableInterContainerTrafficEncryption': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    }
}
Response Structure
(dict) –
AutoMLJobName (string) –
Returns the name of the AutoML V2 job.
AutoMLJobArn (string) –
Returns the Amazon Resource Name (ARN) of the AutoML V2 job.
AutoMLJobInputDataConfig (list) –
Returns an array of channel objects describing the input data and their location.
(dict) –
A channel is a named input source that training algorithms can consume. This channel is used for the non-tabular training data of an AutoML job using the V2 API.
ChannelType (string) –
The type of channel. Defines whether the data are used for training or validation. The default value is training. Channels for training and validation must share the same ContentType.
ContentType (string) –
The content type of the data from the input source. The following are the allowed content types for different problems:
ImageClassification: image/png, image/jpeg, or image/*
TextClassification: text/csv;header=present
CompressionType (string) –
The allowed compression types depend on the input format. We allow the compression type Gzip for S3Prefix inputs only. For all other inputs, the compression type should be None. If no compression type is provided, we default to None.
DataSource (dict) –
The data source for an AutoML channel.
S3DataSource (dict) –
The Amazon S3 location of the input data.
S3DataType (string) –
The data type.
If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. The S3Prefix should have the following format: s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-FOLDER-OR-FILE
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. A ManifestFile should have the format shown below:
[ {"prefix": "s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-FOLDER/DOC-EXAMPLE-PREFIX/"},
"DOC-EXAMPLE-RELATIVE-PATH/DOC-EXAMPLE-FOLDER/DATA-1",
"DOC-EXAMPLE-RELATIVE-PATH/DOC-EXAMPLE-FOLDER/DATA-2",
... "DOC-EXAMPLE-RELATIVE-PATH/DOC-EXAMPLE-FOLDER/DATA-N" ]
If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile is available for V2 API jobs only (for example, for jobs created by calling CreateAutoMLJobV2). Here is a minimal, single-record example of an AugmentedManifestFile:
{"source-ref": "s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-FOLDER/cats/cat.jpg", "label-metadata": {"class-name": "cat"} }
For more information on AugmentedManifestFile, see Provide Dataset Metadata to Training Jobs with an Augmented Manifest File.
S3Uri (string) –
The URL to the Amazon S3 data source. The Uri refers to the Amazon S3 prefix or ManifestFile depending on the data type.
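As an illustration, a short sketch of walking the AutoMLJobInputDataConfig list in a response dict, assuming response was returned by the describe_auto_ml_job_v2 call in the earlier sketch:
# Iterate over the input channels reported for the job.
for channel in response.get('AutoMLJobInputDataConfig', []):
    s3 = channel['DataSource']['S3DataSource']
    print(
        f"channel={channel.get('ChannelType', 'training')} "
        f"content_type={channel.get('ContentType')} "
        f"compression={channel.get('CompressionType', 'None')} "
        f"{s3['S3DataType']} -> {s3['S3Uri']}"
    )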
OutputDataConfig (dict) –
Returns the job’s output data config.
KmsKeyId (string) –
The Key Management Service (KMS) encryption key ID.
S3OutputPath (string) –
The Amazon S3 output path. Must be 128 characters or less.
RoleArn (string) –
The ARN of the Identity and Access Management role that has read permission to the input data location and write permission to the output data location in Amazon S3.
AutoMLJobObjective (dict) –
Returns the job’s objective.
MetricName (string) –
The name of the objective metric used to measure the predictive quality of a machine learning system. This metric is optimized during training to provide the best estimate for model parameter values from data.
Here are the options:
Accuracy
The ratio of the number of correctly classified items to the total number of (correctly and incorrectly) classified items. It is used for both binary and multiclass classification. Accuracy measures how close the predicted class values are to the actual values. Values for accuracy metrics vary between zero (0) and one (1). A value of 1 indicates perfect accuracy, and 0 indicates perfect inaccuracy.
AUC
The area under the curve (AUC) metric is used to compare and evaluate binary classification by algorithms that return probabilities, such as logistic regression. To map the probabilities into classifications, these are compared against a threshold value.
The relevant curve is the receiver operating characteristic curve (ROC curve). The ROC curve plots the true positive rate (TPR) of predictions (or recall) against the false positive rate (FPR) as a function of the threshold value, above which a prediction is considered positive. Increasing the threshold results in fewer false positives, but more false negatives.
AUC is the area under this ROC curve. Therefore, AUC provides an aggregated measure of the model performance across all possible classification thresholds. AUC scores vary between 0 and 1. A score of 1 indicates perfect accuracy, and a score of one half (0.5) indicates that the prediction is not better than a random classifier.
BalancedAccuracy
BalancedAccuracy is a metric that measures the ratio of accurate predictions to all predictions. This ratio is calculated after normalizing true positives (TP) and true negatives (TN) by the total number of positive (P) and negative (N) values. It is used in both binary and multiclass classification and is defined as follows: 0.5*((TP/P)+(TN/N)), with values ranging from 0 to 1. BalancedAccuracy gives a better measure of accuracy when the number of positives or negatives differ greatly from each other in an imbalanced dataset. For example, when only 1% of email is spam.
F1
The F1 score is the harmonic mean of the precision and recall, defined as follows: F1 = 2 * (precision * recall) / (precision + recall). It is used for binary classification into classes traditionally referred to as positive and negative. Predictions are said to be true when they match their actual (correct) class, and false when they do not.
Precision is the ratio of the true positive predictions to all positive predictions, and it includes the false positives in a dataset. Precision measures the quality of the prediction when it predicts the positive class.
Recall (or sensitivity) is the ratio of the true positive predictions to all actual positive instances. Recall measures how completely a model predicts the actual class members in a dataset.
F1 scores vary between 0 and 1. A score of 1 indicates the best possible performance, and 0 indicates the worst.
F1macro
The F1macro score applies F1 scoring to multiclass classification problems. It does this by calculating the precision and recall, and then taking their harmonic mean to calculate the F1 score for each class. Lastly, the F1macro averages the individual scores to obtain the F1macro score. F1macro scores vary between 0 and 1. A score of 1 indicates the best possible performance, and 0 indicates the worst.
MAE
The mean absolute error (MAE) is a measure of how different the predicted and actual values are, when they’re averaged over all values. MAE is commonly used in regression analysis to understand model prediction error. If there is linear regression, MAE represents the average distance from a predicted line to the actual value. MAE is defined as the sum of absolute errors divided by the number of observations. Values range from 0 to infinity, with smaller numbers indicating a better model fit to the data.
MSE
The mean squared error (MSE) is the average of the squared differences between the predicted and actual values. It is used for regression. MSE values are always positive. The better a model is at predicting the actual values, the smaller the MSE value is.
Precision
Precision measures how well an algorithm predicts the true positives (TP) out of all of the positives that it identifies. It is defined as follows: Precision = TP/(TP+FP), with values ranging from zero (0) to one (1), and is used in binary classification. Precision is an important metric when the cost of a false positive is high. For example, the cost of a false positive is very high if an airplane safety system is falsely deemed safe to fly. A false positive (FP) reflects a positive prediction that is actually negative in the data.
PrecisionMacro
The precision macro computes precision for multiclass classification problems. It does this by calculating precision for each class and averaging scores to obtain precision for several classes.
PrecisionMacro scores range from zero (0) to one (1). Higher scores reflect the model’s ability to predict true positives (TP) out of all of the positives that it identifies, averaged across multiple classes.
R2
R2, also known as the coefficient of determination, is used in regression to quantify how much a model can explain the variance of a dependent variable. Values range from one (1) to negative one (-1). Higher numbers indicate a higher fraction of explained variability. R2 values close to zero (0) indicate that very little of the dependent variable can be explained by the model. Negative values indicate a poor fit and that the model is outperformed by a constant function. For linear regression, this is a horizontal line.
Recall
Recall measures how well an algorithm correctly predicts all of the true positives (TP) in a dataset. A true positive is a positive prediction that is also an actual positive value in the data. Recall is defined as follows: Recall = TP/(TP+FN), with values ranging from 0 to 1. Higher scores reflect a better ability of the model to predict true positives (TP) in the data, and is used in binary classification.
Recall is important when testing for cancer because it’s used to find all of the true positives. A false positive (FP) reflects a positive prediction that is actually negative in the data. It is often insufficient to measure only recall, because predicting every output as a true positive yields a perfect recall score.
RecallMacro
The RecallMacro computes recall for multiclass classification problems by calculating recall for each class and averaging scores to obtain recall for several classes. RecallMacro scores range from 0 to 1. Higher scores reflect the model’s ability to predict true positives (TP) in a dataset, where a true positive reflects a positive prediction that is also an actual positive value in the data. It is often insufficient to measure only recall, because predicting every output as a true positive yields a perfect recall score.
RMSE
Root mean squared error (RMSE) measures the square root of the squared difference between predicted and actual values, and it’s averaged over all values. It is used in regression analysis to understand model prediction error. It’s an important metric to indicate the presence of large model errors and outliers. Values range from zero (0) to infinity, with smaller numbers indicating a better model fit to the data. RMSE is dependent on scale, and should not be used to compare datasets of different sizes.
If you do not specify a metric explicitly, the default behavior is to automatically use:
MSE: for regression.
F1: for binary classification.
Accuracy: for multiclass classification.
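The defaults described above can be expressed as a small lookup. This is only an illustrative sketch of the documented defaults, not an API call; it reads response from the earlier sketch:
# Default objective metric per problem type when none is specified explicitly.
DEFAULT_OBJECTIVE = {
    'regression': 'MSE',
    'binary classification': 'F1',
    'multiclass classification': 'Accuracy',
}

objective = response.get('AutoMLJobObjective', {}).get('MetricName')
print('objective metric:', objective or 'not set (a default from the table above applies)')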
AutoMLProblemTypeConfig (dict) –
Returns the configuration settings of the problem type set for the AutoML V2 job.
Note
This is a Tagged Union structure. Only one of the following top level keys will be set:
ImageClassificationJobConfig, TextClassificationJobConfig. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows:
'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}
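A sketch of handling this tagged union, including the SDK_UNKNOWN_MEMBER case that newer, unrecognized members can produce; it assumes response from the earlier sketch:
problem_config = response['AutoMLProblemTypeConfig']

if 'ImageClassificationJobConfig' in problem_config:
    criteria = problem_config['ImageClassificationJobConfig'].get('CompletionCriteria', {})
    print('image classification job, completion criteria:', criteria)
elif 'TextClassificationJobConfig' in problem_config:
    cfg = problem_config['TextClassificationJobConfig']
    print('text classification job, content column:', cfg.get('ContentColumn'))
elif 'SDK_UNKNOWN_MEMBER' in problem_config:
    # The SDK did not recognize the member; only its name is available.
    print('unknown problem type:', problem_config['SDK_UNKNOWN_MEMBER']['name'])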
ImageClassificationJobConfig (dict) –
Settings used to configure an AutoML job using the V2 API for the image classification problem type.
CompletionCriteria (dict) –
How long a job is allowed to run, or how many candidates a job is allowed to generate.
MaxCandidates (integer) –
The maximum number of times a training job is allowed to run.
For V2 jobs (jobs created by calling CreateAutoMLJobV2), the supported value is 1.
MaxRuntimePerTrainingJobInSeconds (integer) –
The maximum time, in seconds, that each training job executed inside hyperparameter tuning is allowed to run as part of a hyperparameter tuning job. For more information, see the StoppingCondition used by the CreateHyperParameterTuningJob action.
For V2 jobs (jobs created by calling CreateAutoMLJobV2), this field controls the runtime of the job candidate.
MaxAutoMLJobRuntimeInSeconds (integer) –
The maximum runtime, in seconds, an AutoML job has to complete.
If an AutoML job exceeds the maximum runtime, the job is stopped automatically and its processing is ended gracefully. The AutoML job identifies the best model whose training was completed and marks it as the best-performing model. Any unfinished steps of the job, such as automatic one-click Autopilot model deployment, are not completed.
TextClassificationJobConfig (dict) –
Settings used to configure an AutoML job using the V2 API for the text classification problem type.
CompletionCriteria (dict) –
How long a job is allowed to run, or how many candidates a job is allowed to generate.
MaxCandidates (integer) –
The maximum number of times a training job is allowed to run.
For V2 jobs (jobs created by calling CreateAutoMLJobV2), the supported value is 1.
MaxRuntimePerTrainingJobInSeconds (integer) –
The maximum time, in seconds, that each training job executed inside hyperparameter tuning is allowed to run as part of a hyperparameter tuning job. For more information, see the StoppingCondition used by the CreateHyperParameterTuningJob action.
For V2 jobs (jobs created by calling CreateAutoMLJobV2), this field controls the runtime of the job candidate.
MaxAutoMLJobRuntimeInSeconds (integer) –
The maximum runtime, in seconds, an AutoML job has to complete.
If an AutoML job exceeds the maximum runtime, the job is stopped automatically and its processing is ended gracefully. The AutoML job identifies the best model whose training was completed and marks it as the best-performing model. Any unfinished steps of the job, such as automatic one-click Autopilot model deployment, are not completed.
ContentColumn (string) –
The name of the column used to provide the sentences to be classified. It should not be the same as the target column.
TargetLabelColumn (string) –
The name of the column used to provide the class labels. It should not be the same as the content column.
CreationTime (datetime) –
Returns the creation time of the AutoML V2 job.
EndTime (datetime) –
Returns the end time of the AutoML V2 job.
LastModifiedTime (datetime) –
Returns the job’s last modified time.
FailureReason (string) –
Returns the reason for the failure of the AutoML V2 job, when applicable.
PartialFailureReasons (list) –
Returns a list of reasons for partial failures within an AutoML V2 job.
(dict) –
The reason for a partial failure of an AutoML job.
PartialFailureMessage (string) –
The message containing the reason for a partial failure of an AutoML job.
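For example, a small sketch printing the failure and partial-failure information when present, again reading the response dict from the earlier sketch:
# FailureReason and PartialFailureReasons are only present when applicable.
if response.get('FailureReason'):
    print('job failed:', response['FailureReason'])
for partial in response.get('PartialFailureReasons', []):
    print('partial failure:', partial.get('PartialFailureMessage'))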
BestCandidate (dict) –
Information about the candidate produced by an AutoML training job V2, including its status, steps, and other properties.
CandidateName (string) –
The name of the candidate.
FinalAutoMLJobObjectiveMetric (dict) –
The best candidate result from an AutoML training job.
Type (string) –
The type of metric with the best result.
MetricName (string) –
The name of the metric with the best result. For a description of the possible objective metrics, see AutoMLJobObjective$MetricName.
Value (float) –
The value of the metric with the best result.
StandardMetricName (string) –
The name of the standard metric. For a description of the standard metrics, see Autopilot candidate metrics.
ObjectiveStatus (string) –
The objective’s status.
CandidateSteps (list) –
Information about the candidate’s steps.
(dict) –
Information about the steps for a candidate and what step it is working on.
CandidateStepType (string) –
Whether the candidate is at the transform, training, or processing step.
CandidateStepArn (string) –
The ARN for the candidate’s step.
CandidateStepName (string) –
The name for the candidate’s step.
CandidateStatus (string) –
The candidate’s status.
InferenceContainers (list) –
Information about the recommended inference container definitions.
(dict) –
A list of container definitions that describe the different containers that make up an AutoML candidate.
Image (string) –
The Amazon Elastic Container Registry (Amazon ECR) path of the container.
ModelDataUrl (string) –
The location of the model artifacts.
Environment (dict) –
The environment variables to set in the container.
(string) –
(string) –
CreationTime (datetime) –
The creation time.
EndTime (datetime) –
The end time.
LastModifiedTime (datetime) –
The last modified time.
FailureReason (string) –
The failure reason.
CandidateProperties (dict) –
The properties of an AutoML candidate job.
CandidateArtifactLocations (dict) –
The Amazon S3 prefix to the artifacts generated for an AutoML candidate.
Explainability (string) –
The Amazon S3 prefix to the explainability artifacts generated for the AutoML candidate.
ModelInsights (string) –
The Amazon S3 prefix to the model insight artifacts generated for the AutoML candidate.
CandidateMetrics (list) –
Information about the candidate metrics for an AutoML job.
(dict) –
Information about the metric for a candidate produced by an AutoML job.
MetricName (string) –
The name of the metric.
Value (float) –
The value of the metric.
Set (string) –
The dataset split from which the AutoML job produced the metric.
StandardMetricName (string) –
The name of the standard metric.
Note
For definitions of the standard metrics, see Autopilot candidate metrics.
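A short sketch of reading the best candidate's objective value and per-split metrics from the response. BestCandidate may be absent before any candidate completes, so the code checks for it first:
best = response.get('BestCandidate')
if best:
    final = best['FinalAutoMLJobObjectiveMetric']
    print(f"best candidate {best['CandidateName']}: "
          f"{final['MetricName']}={final['Value']} ({final['Type']})")

    # Per-metric values for each dataset split (Train/Validation/Test).
    for metric in best.get('CandidateProperties', {}).get('CandidateMetrics', []):
        print(f"  {metric['Set']:10s} {metric['StandardMetricName']}={metric['Value']}")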
InferenceContainerDefinitions (dict) –
The mapping of all supported processing units (CPU, GPU, etc.) to inference container definitions for the candidate. This field is populated for the V2 API only (for example, for jobs created by calling CreateAutoMLJobV2).
(string) –
Processing unit for an inference container. Currently Autopilot only supports CPU or GPU.
(list) –
Information about the recommended inference container definitions.
(dict) –
A list of container definitions that describe the different containers that make up an AutoML candidate. For more information, see .
Image (string) –
The Amazon Elastic Container Registry (Amazon ECR) path of the container. For more information, see .
ModelDataUrl (string) –
The location of the model artifacts. For more information, see .
Environment (dict) –
The environment variables to set in the container. For more information, see .
(string) –
(string) –
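As an illustration, the container definitions keyed by processing unit could be passed to the separate CreateModel action to host the best candidate. The model name is hypothetical, the job's RoleArn is reused only for illustration, and whether a 'CPU' or 'GPU' key is present depends on the job:
# Build a model from the CPU container definitions of the best candidate, if any.
containers = (response.get('BestCandidate', {})
              .get('InferenceContainerDefinitions', {})
              .get('CPU'))
if containers:
    sm.create_model(
        ModelName='my-automl-v2-best-model',   # hypothetical name
        ExecutionRoleArn=response['RoleArn'],  # illustrative; use a role with the needed permissions
        Containers=[
            {
                'Image': c['Image'],
                'ModelDataUrl': c['ModelDataUrl'],
                'Environment': c.get('Environment', {}),
            }
            for c in containers
        ],
    )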
AutoMLJobStatus (string) –
Returns the status of the AutoML V2 job.
AutoMLJobSecondaryStatus (string) –
Returns the secondary status of the AutoML V2 job.
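For example, a simple polling sketch that waits for a terminal primary status; the polling interval and the job name are arbitrary, and sm is the client from the first sketch:
import time

TERMINAL_STATUSES = {'Completed', 'Failed', 'Stopped'}

status = response['AutoMLJobStatus']
while status not in TERMINAL_STATUSES:
    time.sleep(60)  # arbitrary polling interval
    response = sm.describe_auto_ml_job_v2(AutoMLJobName='my-automl-v2-job')
    status = response['AutoMLJobStatus']
    print(status, '-', response['AutoMLJobSecondaryStatus'])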
ModelDeployConfig (dict) –
Indicates whether the model was deployed automatically to an endpoint and the name of that endpoint if deployed automatically.
AutoGenerateEndpointName (boolean) –
Set to True to automatically generate an endpoint name for a one-click Autopilot model deployment; set to False otherwise. The default value is False.
Note
If you set AutoGenerateEndpointName to True, do not specify the EndpointName; otherwise a 400 error is thrown.
EndpointName (string) –
Specifies the endpoint name to use for a one-click Autopilot model deployment if the endpoint name is not generated automatically.
Note
Specify the EndpointName if and only if you set AutoGenerateEndpointName to False; otherwise a 400 error is thrown.
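To summarize the two notes above, only two shapes of this structure are valid when a job is created; the endpoint name below is hypothetical and this is an illustrative sketch, not an API call:
# The two valid shapes for ModelDeployConfig, per the notes above.
auto_named = {'AutoGenerateEndpointName': True}            # endpoint name is generated
explicitly_named = {'AutoGenerateEndpointName': False,
                    'EndpointName': 'my-automl-endpoint'}  # endpoint name is supplied
# Supplying EndpointName together with AutoGenerateEndpointName=True results in a 400 error.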
ModelDeployResult (dict) –
Provides information about the endpoint for the model deployment.
EndpointName (string) –
The name of the endpoint to which the model has been deployed.
Note
If model deployment fails, this field is omitted from the response.
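A sketch of checking whether a deployment happened and, if so, describing the resulting endpoint with the separate DescribeEndpoint action; sm and response come from the earlier sketches:
endpoint_name = response.get('ModelDeployResult', {}).get('EndpointName')
if endpoint_name:
    endpoint = sm.describe_endpoint(EndpointName=endpoint_name)
    print(endpoint_name, endpoint['EndpointStatus'])
else:
    print('no endpoint was deployed for this job')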
DataSplitConfig (dict) –
Returns the configuration settings of how the data are split into train and validation datasets.
ValidationFraction (float) –
The validation fraction (optional) is a float that specifies the portion of the training dataset to be used for validation. The default value is 0.2, and values must be greater than 0 and less than 1. We recommend setting this value to be less than 0.5.
SecurityConfig (dict) –
Returns the security configuration for traffic encryption or Amazon VPC settings.
VolumeKmsKeyId (string) –
The key used to encrypt stored data.
EnableInterContainerTrafficEncryption (boolean) –
Whether to use traffic encryption between the container layers.
VpcConfig (dict) –
The VPC configuration.
SecurityGroupIds (list) –
The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
(string) –
Subnets (list) –
The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
(string) –
Exceptions
SageMaker.Client.exceptions.ResourceNotFound
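For example, a sketch of handling the documented exception when no job with the given (hypothetical) name exists; sm is the client from the first sketch:
try:
    response = sm.describe_auto_ml_job_v2(AutoMLJobName='my-automl-v2-job')
except sm.exceptions.ResourceNotFound as err:
    # Raised when no AutoML V2 job with the given name is found.
    print('job not found:', err)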