Manage AutoML Models

This page describes how to review, report on, and delete your AutoML experiments.

Manage AutoML Experiments

The AutoML list page displays all created experiments and provides actions for each run. Once an experiment finishes, you can view its run report; experiments can be deleted regardless of status.

Status indicators

  • Completed (Success): shown with a green indicator

  • Failed: shown with a red indicator


Available Actions

View Report

  • Where: AutoML → Experiments list → Actions column

  • When available: Completed or Failed

  • Purpose: Open a detailed report of the run (summary, recommended model, logs).

Delete

  • Where: AutoML → Experiments list → Actions column

  • When available: Any status

  • Purpose: Remove the experiment from the list.


View Report for a Successfully Completed Experiment

The page highlights the winning model with full metrics and training time, while presenting comparable metrics for other strong candidates to support informed selection and promotion.

The key information displayed for the recommended and other leading models may differ depending on the selected algorithm.

Note: The report shown here presents key metrics for evaluating the performance and efficiency of a Classification AutoML experiment.

Recommended Model

Lists the best model selected by the AutoML framework based on the objective metric.

Fields shown

  • Model Name – Name of the recommended AutoML model.

  • Best_Score(Accuracy) – The highest accuracy score of the recommended model.

  • Best_Score(Balanced_Accuracy) – The highest balanced accuracy score achieved during training. Balanced accuracy averages recall across classes, making it especially important for classification models on imbalanced data.

  • Best_Score(Log_Loss) – Penalizes confident, incorrect predictions; a lower log loss indicates a better-calibrated model.

  • Best_Score(MCC) – The highest Matthews Correlation Coefficient (MCC) achieved by the model during the AutoML experiment.

  • Fit_Time – Total training time of the model, e.g., 0.111.

  • Created On – Timestamp of the model artifact, e.g., Sep 19, 2025.

  • Note – Explains that the Recommended Model is the one with the highest metric score among all trained models. A minimal sketch illustrating these metrics follows this list.
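The platform computes these scores for you; the sketch below is only meant to illustrate what each metric measures. It assumes scikit-learn's standard implementations, which is not necessarily how the platform computes them internally, and the label/probability values are invented for the example.

```python
# Illustrative only: what the four report metrics measure.
# Assumes scikit-learn; the platform may compute these differently.
from sklearn.metrics import (
    accuracy_score,
    balanced_accuracy_score,
    log_loss,
    matthews_corrcoef,
)

y_true = [0, 0, 1, 1, 1]           # ground-truth class labels
y_pred = [0, 1, 1, 1, 1]           # hard predictions from a candidate model
y_prob = [[0.9, 0.1],              # predicted class probabilities,
          [0.4, 0.6],              # needed for log loss
          [0.2, 0.8],
          [0.3, 0.7],
          [0.1, 0.9]]

print(accuracy_score(y_true, y_pred))           # fraction of correct predictions
print(balanced_accuracy_score(y_true, y_pred))  # mean recall across classes
print(log_loss(y_true, y_prob))                 # penalizes confident wrong predictions
print(matthews_corrcoef(y_true, y_pred))        # correlation coefficient in [-1, 1]
```

Balanced accuracy and MCC matter most when classes are imbalanced, which is why the report surfaces them alongside plain accuracy.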

Other Models (ranked alternatives)

Shows additional top candidates with the same metric breakdown for comparison; a sketch of the ranking rule follows the field list below.

Fields shown for each model

  • Model Name – Names of the other two leading models.

  • Best_Score(Accuracy) – The highest accuracy score of each listed model.

  • Best_Score(Balanced_Accuracy) – The balanced accuracy score of each listed model.

  • Best_Score(Log_Loss) – Penalizes confident, incorrect predictions; a lower log loss indicates a better-calibrated model. It applies specifically to classification models.

  • Best_Score(MCC) – The highest MCC achieved by the model during the AutoML experiment.

  • Fit_Time – The time it took to train (fit) the model on the provided training data.

  • Created On – Displays the date of experiment creation.
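As a hypothetical sketch only, the snippet below restates the selection rule from the note above: the Recommended Model is the candidate with the highest objective-metric score, and the Other Models are the remaining candidates ranked for comparison. The model names and field keys are invented to mirror the report columns; they are not a documented API.

```python
# Hypothetical sketch of the selection rule described in the report:
# the highest objective-metric score wins. Field names mirror the
# report columns; they are not a documented API.
candidates = [
    {"model_name": "LightGBM",           "best_score_accuracy": 0.91, "fit_time": 0.111},
    {"model_name": "RandomForest",       "best_score_accuracy": 0.89, "fit_time": 0.245},
    {"model_name": "LogisticRegression", "best_score_accuracy": 0.86, "fit_time": 0.032},
]

# Recommended Model: argmax over the objective metric.
recommended = max(candidates, key=lambda m: m["best_score_accuracy"])

# Other Models: the remaining candidates, ranked for comparison.
others = sorted(
    (m for m in candidates if m is not recommended),
    key=lambda m: m["best_score_accuracy"],
    reverse=True,
)

print(recommended["model_name"])          # the winning model
print([m["model_name"] for m in others])  # ranked alternatives
```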

Run Summary (right side of the report)

This section outlines the run summary of the selected AutoML experiment.

  • Task Type (e.g., Classification)

  • Experiment Status (e.g., Completed)

  • Created By (e.g., user name)

  • Dataset (e.g., abalone)

  • Target Column (e.g., lower)

View Report (Failed Experiment)

For failed runs, the report focuses on diagnostics.

Steps

  1. Go to Data Science Lab > AutoML.

  2. In the Experiments list, select a failed experiment.

  3. In the Actions column, click View Report.

  4. The Logs tab opens, showing Model Logs with the reason for failure (build, data, environment, or training error).

Delete an AutoML Experiment

Use Delete to remove any experiment (regardless of status) from the list.

Steps

  1. Go to Data Science Lab > AutoML.

  2. In the Experiments list, locate the experiment to remove (any status).

  3. Click the Delete icon in the Actions column.

  4. In the confirmation dialog, click Yes.

  5. A success message confirms the experiment was removed.

Note: Deletion removes the experiment record from the list. Verify that any artifacts you still need (models, notebooks, exported logs) are saved elsewhere before deleting.

Tips & Best Practices

  • Use View Report first: Confirm the Recommended Model and review metrics before promoting to Models.

  • Investigate failures via Logs: Check environment, schema, and data validation errors; retry with corrected settings.

  • Explainability: Use View Explanation to validate model behavior (feature influence, what-if outcomes) before registration or deployment.

  • Lifecycle hygiene: Periodically delete stale or exploratory experiments to keep the list manageable.