Creating AutoML Experiment

Learn how to create an AutoML Experiment and explore your options for the next phase.

The AutoML feature in the Data Science Lab allows data scientists to quickly create and run supervised machine learning experiments on prepared datasets.

Supported experiment types:

  • Classification – Predict discrete categories (e.g., churn vs. no churn).

  • Regression – Predict continuous numeric values (e.g., sales forecast, price).

  • Forecasting – Predict future values based on historical data (e.g., demand forecasting).

Note: Every AutoML experiment runs as a Job. A new job is allocated for each experiment, and it is terminated automatically once training is complete and the models are ready.
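The three experiment types correspond to standard supervised-learning tasks. As a conceptual illustration only (this is not the product's API, and forecasting is essentially regression over time-ordered data), the classification and regression cases can be sketched with scikit-learn:

```python
# Conceptual sketch, NOT the product's API: the AutoML experiment types
# map to standard supervised-learning tasks, shown here with scikit-learn
# on synthetic data.
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: predict a discrete category (e.g., churn vs. no churn).
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xc, yc)

# Regression: predict a continuous numeric value (e.g., price).
Xr, yr = make_regression(n_samples=200, n_features=5, random_state=0)
reg = LinearRegression().fit(Xr, yr)

print(clf.predict(Xc[:1]), reg.predict(Xr[:1]))
```

An AutoML run automates exactly this kind of model fitting, trying many candidate algorithms and hyperparameters instead of a single estimator.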

Where to Create an Experiment

  • Navigate to Data Science Lab > AutoML and click Create Experiment, OR

  • From the Dataset List page (under the Dataset tab of a Repo Sync Data Science Project), use the Create Experiment icon.

Step 1: Configure Experiment

  1. Navigate to the AutoML page.

  2. Click Create Experiment.

    • The Configure tab opens by default.

  3. Provide experiment details:

    • Experiment Name – A unique name for the experiment.

    • Description – (Optional) Notes about the experiment.

    • Target Column – Select the dependent variable to predict.

    • Data Preparation – Select a prepared dataset from the drop-down.

      • Use the checkbox to confirm your selection.

    • Exclude Columns – Use the checkboxes to select any fields to exclude from training.

Note: Excluded fields will not be considered while training the AutoML model.
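In data-preparation terms, the Target Column becomes the dependent variable and the excluded columns are dropped before training. A minimal sketch with pandas (illustrative only; the column names here are hypothetical, and the product performs this separation internally):

```python
# Illustrative sketch, NOT the product's API: what "Target Column" and
# "Exclude Columns" mean for the training data, using pandas.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 102, 103],   # identifier: a typical column to exclude
    "tenure": [12, 3, 40],
    "monthly_spend": [50.0, 20.0, 80.0],
    "churn": [0, 1, 0],               # target column (dependent variable)
})

target_column = "churn"
exclude_columns = ["customer_id"]     # excluded fields are ignored during training

y = df[target_column]                              # what the model predicts
X = df.drop(columns=[target_column] + exclude_columns)  # features used to train

print(list(X.columns))  # ['tenure', 'monthly_spend']
```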

  4. Click Next to proceed.

Step 2: Select Experiment Type

  1. In the Select Experiment Type tab, choose a prediction type:

    • Classification

    • Regression

    • Forecasting

  2. Confirm the selection.

    • A validation message appears for the chosen type.

  3. Click Done to finalize.

Experiment Status Lifecycle

The Status column tracks each phase of experiment execution:

  • Started – Experiment created and resources allocated.

  • Running – Models are being trained.

    • A notification appears: “Model training started.”

  • Completed – Training finished and models available.

    • A notification confirms: “Model trained.”

  • Failed – Experiment was unsuccessful.

Note: Status updates occur automatically as the experiment progresses.
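The lifecycle above can be modeled as a small state machine. The sketch below is purely illustrative (the names `ExperimentStatus`, `TRANSITIONS`, and `is_terminal` are hypothetical, not part of the product's API), but it captures which transitions the Status column can make:

```python
# Hypothetical sketch of the experiment status lifecycle as a state machine.
# Names here are illustrative only, NOT the product's API.
from enum import Enum

class ExperimentStatus(Enum):
    STARTED = "Started"      # experiment created, resources allocated
    RUNNING = "Running"      # models are being trained
    COMPLETED = "Completed"  # training finished, models available
    FAILED = "Failed"        # experiment was unsuccessful

# Valid transitions as the experiment progresses.
TRANSITIONS = {
    ExperimentStatus.STARTED: {ExperimentStatus.RUNNING},
    ExperimentStatus.RUNNING: {ExperimentStatus.COMPLETED,
                               ExperimentStatus.FAILED},
}

def is_terminal(status: ExperimentStatus) -> bool:
    """A terminal status has no outgoing transitions; the job is torn down."""
    return status not in TRANSITIONS

print(is_terminal(ExperimentStatus.COMPLETED))  # True
```

Completed and Failed are the terminal states: once either is reached, the job backing the experiment is terminated and no further status changes occur.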

Next Steps

  • Open the completed experiment to review candidate models.

  • View the Report for the completed experiment to evaluate performance metrics.

  • Access the best-performing AutoML model from the Models page for registration, deployment, or explainer generation.