Creating an AutoML Experiment
Learn how to create an AutoML Experiment and review the options available once training completes.
The AutoML feature in the Data Science Lab allows data scientists to quickly create and run supervised machine learning experiments on top of prepared datasets. Each experiment runs as a Job and is automatically terminated once training is complete.
Supported experiment types:
Classification – Predict discrete categories (e.g., churn vs. no churn).
Regression – Predict continuous numeric values (e.g., sales forecast, price).
Forecasting – Predict future values based on historical data (e.g., demand forecasting).
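The three experiment types correspond to standard supervised-learning tasks. The sketch below is not the Data Science Lab API; it is a minimal scikit-learn illustration on synthetic data (all variable names are hypothetical) showing what each type predicts.

```python
# Conceptual illustration of the three experiment types using scikit-learn.
# This is NOT the Data Science Lab API -- just a sketch of what each task predicts.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # three feature columns

# Classification: target is a discrete category (e.g., churn vs. no churn).
y_class = (X[:, 0] + rng.normal(size=100) > 0).astype(int)
clf = LogisticRegression().fit(X, y_class)

# Regression: target is a continuous numeric value (e.g., price).
y_reg = 3.0 * X[:, 1] + rng.normal(size=100)
reg = LinearRegression().fit(X, y_reg)

# Forecasting: target is a future value predicted from lagged history.
series = np.cumsum(rng.normal(size=120))           # synthetic time series
lags = np.column_stack([series[2:-1], series[1:-2], series[:-3]])
future = series[3:]
forecaster = LinearRegression().fit(lags, future)  # predict t from t-1, t-2, t-3
```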
Where to Create an Experiment
Navigate to Data Science Lab > AutoML and click Create Experiment, OR
From the Dataset List page (under the Dataset tab of a Repo Sync Data Science Project), use the Create Experiment icon.
Step 1: Configure Experiment
Navigate to the AutoML page.
Click Create Experiment.
The Configure tab opens by default.
Provide experiment details:
Experiment Name – A unique name for the experiment.
Description – (Optional) Notes about the experiment.
Target Column – Select the dependent variable to predict.
Data Preparation – Choose a data preparation from the drop-down and use the checkbox to confirm your selection.
Exclude Columns – Use the checkboxes to select any fields that should be excluded from training; a minimal sketch of this split appears after this step.
Click Next to proceed.
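Selecting a target column and excluding columns amounts to splitting the prepared dataset into a label and a feature set. The following pandas sketch is only a conceptual illustration with hypothetical column names; it does not reflect how the product stores this configuration.

```python
# Sketch of what "Target Column" and "Exclude Columns" mean in terms of the data.
# Column names here are hypothetical; this is not how the product stores the config.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 3],        # identifier -- typically excluded from training
    "tenure_months": [12, 3, 40],
    "monthly_spend": [70.5, 20.0, 99.9],
    "churned": [0, 1, 0],            # the dependent variable to predict
})

target_column = "churned"
exclude_columns = ["customer_id"]

y = df[target_column]                                   # label the models are trained to predict
X = df.drop(columns=[target_column] + exclude_columns)  # remaining columns become features
```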
Step 2: Select Experiment Type
In the Select Experiment Type tab, choose a prediction type:
Classification
Regression
Forecasting
Confirm the selection.
A validation message appears for the chosen type.
Click Done to finalize.
Experiment Status Lifecycle
The Status column tracks each phase of experiment execution:
Started – Experiment is created, resources allocated.
Running – Models are being trained.
A notification appears: “Model training started.”
Completed – Training is finished, models are available.
A notification confirms: “Model trained.”
Failed – Experiment was unsuccessful.
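The Status column can be read as a simple state machine. The enum below is purely illustrative and only mirrors the states listed above; it is not an API exposed by the product.

```python
# Illustrative state machine for the experiment lifecycle described above.
# These names mirror the Status column; they are not an exposed API.
from enum import Enum

class ExperimentStatus(Enum):
    STARTED = "Started"      # experiment created, resources allocated
    RUNNING = "Running"      # models are being trained
    COMPLETED = "Completed"  # training finished, models available
    FAILED = "Failed"        # experiment was unsuccessful

# Normal progression: STARTED -> RUNNING -> COMPLETED (or FAILED at any point).
```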
Next Steps
Open the completed experiment to review candidate models.
View the Report for the completed experiment to evaluate performance metrics.
Access the best-performing AutoML model from the Models page for registration, deployment, or explainer generation.