The process of measuring the quality of a model's predictions is called model evaluation. To do this, AutoML evaluates the performance of the trained model on a test split of the dataset; 10% of the data is held out for evaluation (refer to Figure 2.13).
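For reference, the split can also be controlled when the training job is launched through the Vertex AI Python SDK. The following is a minimal sketch, assuming the google-cloud-aiplatform package and an existing tabular dataset; the project ID, dataset resource name, and target column are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Placeholder resource name; copy the real one from the Datasets section.
dataset = aiplatform.TabularDataset("projects/.../locations/.../datasets/...")

job = aiplatform.AutoMLTabularTrainingJob(
    display_name="automl-classification",
    optimization_prediction_type="classification",
)

# The fraction arguments control how AutoML splits the data; the 80/10/10
# split below mirrors the default behaviour described above.
model = job.run(
    dataset=dataset,
    target_column="target",          # hypothetical column name
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,         # the 10% held out for evaluation
)
```

If the fraction arguments are omitted, Vertex AI falls back to its default 80/10/10 split.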

Step 1: Checking model training

The status of training can be seen in Figure 2.17:

Figure 2.17: Training complete

  1. Once model training is complete, the job is highlighted with a green tick.
  2. The training job will also be listed in the Training section of Vertex AI (click on it); the same status can be checked with the SDK, as sketched below.
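A minimal sketch, assuming the google-cloud-aiplatform SDK; the display name in the filter is a placeholder for whatever name was given to the training job:

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# List AutoML tabular training jobs and inspect their state;
# a finished job reports PIPELINE_STATE_SUCCEEDED.
for job in aiplatform.AutoMLTabularTrainingJob.list(
    filter='display_name="automl-classification"'   # hypothetical name
):
    print(job.display_name, job.state)
```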

Step 2: Model training results

Training jobs are listed under the Training section, as shown in Figure 2.18:

Figure 2.18: Training jobs listed under the Training section

  1. Training pipelines lists the training jobs of AutoML models.
  2. Custom jobs lists the training jobs of custom models.
  3. Hyperparameter tuning jobs lists the training jobs of custom models with hyperparameter tuning.
  4. The AutoML model trained in the previous steps (click on it). The trained model is also listed under the Models section of Vertex AI, and can be listed with the SDK as sketched below.
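A minimal sketch, assuming the google-cloud-aiplatform SDK; it lists the same models that appear in the Models section of the console:

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Each entry corresponds to a model in the Models section.
for model in aiplatform.Model.list():
    print(model.display_name, model.resource_name)
```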

Step 3: Model evaluation

Clicking on the model navigates to the Models section, as shown in Figure 2.19:

Figure 2.19: Trained model evaluation

  1. A newly trained model becomes the first version of that model. When multiple versions have been trained, this option lets you select a specific version.
  2. The trained model can be exported as a TensorFlow package to Cloud Storage.
  3. Evaluate shows the classes of the target variable. Users can select any specific class to check the evaluation metrics for each class separately.
  4. Various evaluation metrics are listed, along with visualizations for some of them. The confidence threshold can be varied to inspect the graphs at different levels of confidence; the same metrics can be fetched with the SDK, as sketched after this list.
  5. Graphical representation of the evaluation metrics (these vary with the confidence threshold).
  6. Batch predict provides options for batch prediction.
  7. Deploy and test provides options for online prediction.
  8. Version details provides various details of the model version (click on Version details).
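The Evaluate, Batch predict, and Deploy and test actions have SDK counterparts. Below is a minimal sketch, assuming the google-cloud-aiplatform package; the model resource name, bucket paths, feature names, and machine type are placeholders, and the exact metric keys depend on the model type:

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Placeholder resource name; copy the real one from the Models section.
model = aiplatform.Model("projects/.../locations/.../models/...")

# Fetch the evaluation computed on the held-out test split.
for evaluation in model.list_model_evaluations():
    print(evaluation.metrics)  # e.g. auPrc, auRoc, logLoss for classification

# Batch prediction from and to Cloud Storage (paths are placeholders).
model.batch_predict(
    job_display_name="batch-prediction",
    instances_format="csv",
    gcs_source="gs://your-bucket/input.csv",
    gcs_destination_prefix="gs://your-bucket/output/",
)

# Online prediction: deploy to an endpoint, then send instances.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[{"feature_1": "value"}]))  # hypothetical features
```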

Step 4: Confusion matrix and feature importance

Scrolling down shows information about other metrics, as shown in Figure 2.20:

Figure 2.20: Trained model evaluation

  1. Confusion matrix; click on the item counts to get the matrix with actual counts.
  2. Variable importance information (variable importance is calculated using the sampled Shapley method, illustrated in the sketch below).
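Vertex AI computes these attributions internally, but the idea behind the sampled Shapley method can be illustrated in a few lines: a feature's importance is its average marginal contribution to the prediction over randomly sampled feature orderings. The following is a toy, self-contained sketch; the model and feature names are hypothetical:

```python
import random

def sampled_shapley(predict, instance, baseline, num_samples=100):
    """Approximate Shapley values by sampling random feature orderings.

    predict  -- function mapping a feature dict to a scalar score
    instance -- the example being explained
    baseline -- reference values representing "absent" features
    """
    features = list(instance)
    attributions = {f: 0.0 for f in features}

    for _ in range(num_samples):
        order = random.sample(features, len(features))
        current = dict(baseline)          # start from the baseline
        prev_score = predict(current)
        for f in order:                   # switch features on one at a time
            current[f] = instance[f]
            score = predict(current)
            attributions[f] += score - prev_score  # marginal contribution
            prev_score = score

    return {f: total / num_samples for f, total in attributions.items()}

# Toy usage: a linear "model" whose attributions should match its weights.
weights = {"age": 2.0, "income": 0.5}
predict = lambda x: sum(w * x[f] for f, w in weights.items())
print(sampled_shapley(predict, {"age": 3, "income": 10}, {"age": 0, "income": 0}))
```

For the linear toy model above, the attributions equal weight x (instance value - baseline value), which matches the exact Shapley values.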

Step 5: Model version details

More details about the model are displayed, as shown in Figure 2.21:

Figure 2.21: Trained model details

  • Various details of the model version are listed; a few of them can also be read with the SDK, as sketched below.
  • Detailed log messages are available under Models and trials.
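A minimal sketch, assuming the google-cloud-aiplatform SDK (a recent version, since version details come from the Model Registry); the model resource name is a placeholder:

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Placeholder resource name; copy the real one from the Models section.
model = aiplatform.Model("projects/.../locations/.../models/...")

print(model.display_name)   # model name shown in the console
print(model.version_id)     # version selected in Figure 2.19
print(model.create_time)    # when this version was created
```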
