Build and Deploy a Sentiment Analysis Model as an API in DS Lab
Build a Sentiment Analysis model in DS Lab, register it as an API through the Admin Module, and validate its response via API requests.
Purpose
This guide outlines how to create a Sentiment Analysis model using BDB Data Science Lab (DS Lab), register it as an API through the Admin Module, and validate the API response via requests directly from the notebook.
This workflow demonstrates the end-to-end lifecycle of AI development within the BDB Platform—enabling model creation, deployment, and real-time consumption through APIs.
Business Context
Sentiment Analysis is a key Natural Language Processing (NLP) application that helps organizations analyze textual data such as product reviews, customer feedback, or social media posts to determine the sentiment—positive, neutral, or negative.
By integrating DS Lab for model creation and the Admin Module for API registration, businesses can automate feedback analysis and improve customer experience at scale.
Workflow Overview
This workflow covers the following stages:
Create a DS Lab Project
Import and Execute the Sentiment Analysis Notebook
Train and Register the Model
Register the Model as an API through the Admin Module
Validate Model Response via API Request
Step 1 – Create a DS Lab Project
Procedure
From the BDB Homepage, click the Apps icon and select DS Lab.
Click Create + to start a new project.
Fill in the required details as shown below:
Field | Example | Description
Project Name | Sentiment_Analysis_Model | Unique project name
Description | “Build and deploy sentiment classifier” | Optional
Algorithm Type | Classification | For supervised text classification
Environment | Python | Default environment
Resource Allocation | Medium | Based on dataset size
Idle Shutdown | 1 hour | Recommended to free idle compute
External Libraries | spaCy, tqdm | Add here to avoid manual notebook installations
Click Save to create the project.

Step 2 – Import the Sentiment Analysis Notebook
Activate the Project
Click Activate on the project tile.
Once activated, click View to open it.
Wait for the kernel to start.
Import the Notebook
In the Repo section, click the three dots (⋮) and select Import.
Enter a Name (e.g., Sentiment Analysis Notebook) and a Description.
Choose the .ipynb file from your local system and upload it.
Step 3 – Load and Prepare the Dataset
Click the Data icon on the left navigation panel.
Click the + Add Data icon → select Data Sandbox Files as the source.
Click Upload, provide:
Name: Sentiment_Data
Description: “Customer review dataset”
File: Upload your CSV file.
Once the upload completes, a message appears: “File is uploaded successfully.”
Check the dataset’s checkbox and click Add.
In the notebook, open the Data panel, select the dataset’s checkbox, and a code snippet is auto-generated to load it (an equivalent manual load is sketched below).
Run the cell using the Run Cell icon.
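The exact loading snippet is generated by the platform and depends on your sandbox configuration; as a minimal sketch of an equivalent manual load (the file name below is hypothetical, and pandas is assumed), it amounts to:

import pandas as pd

# Hypothetical path; the auto-generated snippet supplies the actual sandbox location
df = pd.read_csv('Sentiment_Data.csv')

# Quick look at the loaded reviews
print(df.shape)
print(df.head())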
Step 4 – Build the Sentiment Analysis Model
4.1 Data Preprocessing
# Keep only relevant columns
df = df[['reviewText', 'overall']]

# Map ratings to sentiment labels
df['sentiment'] = df['overall'].map({
    5: 'positive',
    4: 'positive',
    3: 'neutral',
    2: 'negative',
    1: 'negative'
})

# Drop original rating column
df = df.drop(columns='overall')

This prepares the dataset with two columns: reviewText and sentiment.
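As an optional sanity check (assuming df is a pandas DataFrame, as produced by the loading step above), you can confirm the two columns and inspect the class balance before training:

# Confirm the prepared columns and the sentiment label distribution
print(df.columns.tolist())            # expected: ['reviewText', 'sentiment']
print(df['sentiment'].value_counts())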
4.2 Optional: NLP Library Setup
%%bash
pip install spacy tqdm
python -m spacy download en_core_web_sm
4.3 Subset Data for Experimentation
data = df.iloc[:200].copy()
text_col = 'reviewText'

This limits the data to 200 samples for faster model experimentation.
4.4 Select the Algorithm
Navigate to the Algorithm tab on the left panel.
Choose Classification → Logistic Regression.
Ensure you click on the target cell before selecting the algorithm.
4.5 TF-IDF Vectorization
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(data[text_col])
4.6 Train/Test Split
from sklearn.model_selection import train_test_split
X, y = X_tfidf, data['sentiment']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
4.7 Train Logistic Regression Model
from sklearn.linear_model import LogisticRegression
sentiment = LogisticRegression()
sentiment.fit(X_train, y_train)
4.8 Evaluate Model Performance
from sklearn.metrics import classification_report
y_pred_train = sentiment.predict(X_train)
y_pred_test = sentiment.predict(X_test)
print(classification_report(y_train, y_pred_train))
print(classification_report(y_test, y_pred_test))

The reports display precision, recall, and F1-score for each sentiment class.
4.9 Prepare Sample Input for API Testing
api_test_input = X_tfidf[0].todense().tolist()
Step 5 – Save and Register the Model
Click the three-dot (⋮) menu on the right side of the notebook cell → select Save Model.
The platform auto-generates a code snippet to save the trained model (a rough equivalent is sketched after these steps).
Run the cell to execute the save operation.
Open the Models tab (left navigation) → click All.
Locate the saved model and click Register (↑) to register it.
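The save snippet itself is platform-generated; as a rough, hypothetical sketch of what the save step amounts to, assuming dill serialization (suggested by the .dill endpoint used in Step 8) and a local output file:

import dill

# Hypothetical equivalent of the platform-generated save step:
# serialize the trained classifier to a .dill file
with open('sentiment_clf_test.dill', 'wb') as f:
    dill.dump(sentiment, f)

In practice, run the generated snippet as-is; the sketch above only illustrates the serialization step.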
Step 6 – Register the Model as an API
This step enables real-time prediction via an API endpoint using the trained model.
6.1 Register Model as API
Navigate to the Models tab in DS Lab.
Select your trained model.
Click Register as API.
Enter required details:
Instance Name
Resource Configuration
Description (optional)
Click Save.

Step 7 – Register API Client in the Admin Module
Procedure
From the Apps menu, open the Admin Module.
Under API Client Registration, click Create.
Select the Internal registration type.
Enter:
Client Name and Email Address (for credentials delivery)
App Name
Request Limits: Requests per Hour / Day
Associated Model: Select the model registered as API
Click Save.

To share credentials:
Click the Secret Key icon to email credentials (Client ID, Secret).
Alternatively, use the Edit icon to view and copy them manually.
Step 8 – Validate Model Response via API
Now, validate the deployed API from your notebook.
import requests, json
url = "https://app.bdb.ai/services/api/sentiment_clf_test.dill"
payload = json.dumps(api_test_input)
headers = {
    'clientid': 'MAVQXGKARSVUUIPFUCIH@5980',
    'clientsecret': 'YSFQSGQSDILQLVHQGVXB1689039162745',
    'appname': 'test1',
    'Content-Type': 'application/json'
}
response = requests.post(url, headers=headers, data=payload)
print([i['predictions'] for i in response.json()])
Explanation
payload: Input data serialized as JSON.
headers: Include your unique clientid, clientsecret, and appname.
requests.post: Sends a POST request to the model API.
The output prints the predicted sentiment for the input text.
Best Practice: Store credentials securely and avoid hardcoding them in notebooks.
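One common pattern, shown here as a minimal sketch (the environment variable names are assumptions, not platform requirements), is to read the credentials from environment variables instead of embedding them in the notebook:

import os
import requests, json

# url and api_test_input are defined in the earlier cells
headers = {
    'clientid': os.environ['BDB_CLIENT_ID'],          # assumed variable name
    'clientsecret': os.environ['BDB_CLIENT_SECRET'],  # assumed variable name
    'appname': os.environ.get('BDB_APP_NAME', 'test1'),
    'Content-Type': 'application/json'
}
response = requests.post(url, headers=headers, data=json.dumps(api_test_input))
print(response.json())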
Outcome
You have completed Workflow 4: Sentiment Analysis Model Development and API Deployment. You created a DS Lab project, trained and registered a sentiment classification model, exposed it as an API through the Admin Module, and validated its predictions with an API request from the notebook.
Business Impact
This workflow enables data scientists and developers to move seamlessly from model creation to deployment within the BDB Platform, accelerating production-readiness. Organizations can now integrate sentiment insights into customer experience workflows, marketing analytics, and service quality monitoring.