PyTorch Model
The PyTorch Model interface in the Data Science module provides a flexible and developer-friendly environment for building, training, and deploying deep learning models within the platform. It supports seamless integration with PyTorch’s dynamic computation graph, allowing data scientists to iterate quickly, debug intuitively, and experiment with complex neural network architectures. Users can import datasets directly from connected data sources, preprocess them using built-in DataPrep pipelines, and train models using GPU-accelerated compute environments. The interface also simplifies model tracking, parameter tuning, and version management, ensuring reproducible workflows from experimentation to production. Once trained, models can be exported, visualized, or deployed as APIs for downstream applications and AI Agents, enabling powerful end-to-end machine learning lifecycle management within the platform.
Installing Required Libraries
Installing the required libraries is a prerequisite for running this model. Use the following command to install the torch library.

```
!pip install torch
```

The following output will be displayed after executing the cell.

Step-1: Import Libraries
We import PyTorch for building neural networks, sklearn for splitting datasets, pandas and numpy for data handling, and matplotlib for plotting.
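A minimal import cell matching this description might look like:

```python
import torch                      # core PyTorch
import torch.nn as nn             # neural network layers and loss functions
import numpy as np                # numeric arrays
import pandas as pd               # tabular data handling
import matplotlib.pyplot as plt   # plotting the training loss curve
from sklearn.model_selection import train_test_split  # dataset splitting
```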
Step-2: Load Dataset
Load the Iris dataset from a CSV file. Assign column names and map species names to numeric labels for easier training.
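A sketch of this step is shown below. In the notebook, the CSV would come from a connected data source; here a small inline sample stands in for the file so the cell is self-contained, and the column and species names follow the common Iris conventions (assumptions, not taken from this page):

```python
import io
import pandas as pd

# Inline stand-in for the platform-hosted Iris CSV (one row per class shown)
csv_data = io.StringIO(
    "5.1,3.5,1.4,0.2,Iris-setosa\n"
    "7.0,3.2,4.7,1.4,Iris-versicolor\n"
    "6.3,3.3,6.0,2.5,Iris-virginica\n"
)
columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]
df = pd.read_csv(csv_data, header=None, names=columns)

# Map species names to numeric labels for easier training
label_map = {"Iris-setosa": 0, "Iris-versicolor": 1, "Iris-virginica": 2}
df["species"] = df["species"].map(label_map)
print(df["species"].tolist())  # → [0, 1, 2]
```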
Step-3: Split Features and Target
Separate input features (X) and target labels (y). Split the dataset into training (80%) and testing (20%) sets. Convert arrays to PyTorch tensors for model training.
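The split and tensor conversion can be sketched as follows. Synthetic arrays with the Iris shape (150 rows, 4 features, 3 classes) stand in for the DataFrame loaded in Step-2 so the cell runs on its own:

```python
import numpy as np
import torch
from sklearn.model_selection import train_test_split

# Stand-in data with the Iris shape; in the notebook, X and y come from df
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4)).astype(np.float32)
y = rng.integers(0, 3, size=150)

# 80% training / 20% testing split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Convert arrays to PyTorch tensors for model training
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.long)
y_test = torch.tensor(y_test, dtype=torch.long)
print(X_train.shape, X_test.shape)  # torch.Size([120, 4]) torch.Size([30, 4])
```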
Step-4: Inspect Features
Display the feature columns for verification.
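A quick inspection cell could look like this (a one-row stand-in frame with hypothetical Iris column names replaces the platform-hosted data):

```python
import pandas as pd

# Stand-in frame with the Iris column layout (column names are assumptions)
df = pd.DataFrame(
    [[5.1, 3.5, 1.4, 0.2, 0]],
    columns=["sepal_length", "sepal_width", "petal_length", "petal_width", "species"],
)

feature_cols = df.columns[:-1].tolist()  # all columns except the target
print(feature_cols)  # → ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
print(df[feature_cols].head())
```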
After executing the code, the dataset preview will be displayed below.

Step-5: Define Neural Network
Create a fully connected neural network with two hidden layers. ReLU activation is applied to hidden layers. Output layer returns raw logits suitable for CrossEntropyLoss.
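A network matching this description can be sketched as below; the class name and hidden-layer sizes are illustrative assumptions, not taken from this page:

```python
import torch
import torch.nn as nn

class IrisNet(nn.Module):
    """Fully connected network: two hidden layers with ReLU, raw logits out."""
    def __init__(self, in_features=4, hidden1=16, hidden2=8, num_classes=3):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden1)
        self.fc2 = nn.Linear(hidden1, hidden2)
        self.out = nn.Linear(hidden2, num_classes)

    def forward(self, x):
        x = torch.relu(self.fc1(x))   # ReLU on first hidden layer
        x = torch.relu(self.fc2(x))   # ReLU on second hidden layer
        return self.out(x)            # raw logits, suitable for CrossEntropyLoss

logits = IrisNet()(torch.randn(5, 4))
print(logits.shape)  # torch.Size([5, 3])
```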
Step-6: Initialize Model, Loss, and Optimizer
Initialize the neural network model. Use CrossEntropyLoss for multi-class classification. The Adam optimizer is used for training.
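A sketch of this step, with the Step-5 network expressed compactly as nn.Sequential (layer sizes and learning rate are assumptions):

```python
import torch
import torch.nn as nn

# Two hidden layers with ReLU; output layer returns raw logits
model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 3),
)
criterion = nn.CrossEntropyLoss()                          # multi-class classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # learning rate is an assumption
```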
Step-7: Train the Model
Train the model for 100 epochs using an iterative optimization process. Each epoch executes the following steps:
Forward Pass: Input data is passed through the model to generate predictions.
Loss Computation: A loss function quantifies the discrepancy between the model's predictions and the true target values.
Backpropagation and Weight Update: The calculated loss is backpropagated through the model, and the optimizer computes and applies updates to the model's weights and biases.
Loss Tracking: The loss value from each epoch is recorded to monitor training progress and enable subsequent visualization of the learning curve.
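The loop described above can be sketched as follows; synthetic stand-ins replace the Step-3 tensors and Step-6 objects so the cell is self-contained:

```python
import torch
import torch.nn as nn

# Stand-ins for the Step-3 tensors and the Step-6 model/loss/optimizer
X_train = torch.randn(120, 4)
y_train = torch.randint(0, 3, (120,))
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

losses = []
for epoch in range(100):
    optimizer.zero_grad()               # reset gradients from the previous step
    logits = model(X_train)             # forward pass
    loss = criterion(logits, y_train)   # loss computation
    loss.backward()                     # backpropagation
    optimizer.step()                    # weight update
    losses.append(loss.item())          # track loss for later visualization
print(len(losses))  # 100
```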
Step-8: Save the Trained Model
Use NotebookExecutor to save the model for future predictions.
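The NotebookExecutor save API is platform-specific and its exact signature is not shown on this page. For reference, the standard PyTorch equivalent saves the model's learned parameters (the filename here is an assumption):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))

# Save only the state_dict (learned weights), the recommended PyTorch practice
torch.save(model.state_dict(), "iris_model.pt")  # filename is an assumption
```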
Step-9: Load the Saved Model
Load the previously saved model using its ID.
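The platform loads by model ID through NotebookExecutor; the standard PyTorch equivalent rebuilds the architecture and loads the saved parameters. A save call is included here only so the cell runs on its own, standing in for the Step-8 artifact:

```python
import torch
import torch.nn as nn

def build_model():
    # Must match the architecture used at save time
    return nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))

# Stand-in for the Step-8 artifact so this cell is self-contained
torch.save(build_model().state_dict(), "iris_model.pt")

model = build_model()
model.load_state_dict(torch.load("iris_model.pt"))
model.eval()  # switch to inference mode before predicting
```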
Step-10: Convert Test Set to DataFrame
The PyTorch test tensor must be converted into a Pandas DataFrame, the input format required by the NotebookExecutor.predict function for batch inference. The conversion involves two steps:
Convert to NumPy: The PyTorch tensor is converted into a NumPy array (e.g., using .numpy()).
Create DataFrame: The resulting NumPy array is used to instantiate a Pandas DataFrame, with column names and index aligned to the expected input structure of NotebookExecutor.predict.
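The two steps above can be sketched as follows; a random tensor and hypothetical Iris column names stand in for the Step-3 test set:

```python
import torch
import pandas as pd

X_test = torch.randn(30, 4)  # stand-in for the Step-3 test tensor
feature_cols = ["sepal_length", "sepal_width", "petal_length", "petal_width"]  # assumed names

# Convert to NumPy, then instantiate the DataFrame with the expected column names
X_test_df = pd.DataFrame(X_test.numpy(), columns=feature_cols)
print(X_test_df.shape)  # (30, 4)
```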
Step-11: Predict on Test Set
The modeltype parameter is explicitly set to 'ml' (machine learning) to ensure the prediction mechanism is configured for traditional machine learning models, as opposed to deep learning or other model types.
The function returns a set of predictions representing the model's output for each instance in the test set.
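The exact NotebookExecutor.predict signature is platform-specific and not documented on this page; the call below is a hypothetical sketch of the pattern this step describes, with all argument names assumed:

```python
# Hypothetical sketch — parameter names depend on the platform's API
predictions = NotebookExecutor.predict(
    model_id=saved_model_id,  # ID obtained when the model was saved in Step-8
    data=X_test_df,           # Pandas DataFrame prepared in Step-10
    modeltype="ml",           # configure the traditional machine learning prediction path
)
print(predictions)            # one prediction per row of the test set
```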
