Build and Deploy a Sentiment Analysis Model as an API in DS Lab

Build a Sentiment Analysis model in DS Lab, register it as an API through the Admin Module, and validate its response via API requests.

Purpose

This guide outlines how to create a Sentiment Analysis model using BDB Data Science Lab (DS Lab), register it as an API through the Admin Module, and validate the API response via requests directly from the notebook.

This workflow demonstrates the end-to-end lifecycle of AI development within the BDB Platform—enabling model creation, deployment, and real-time consumption through APIs.


Business Context

Sentiment Analysis is a key Natural Language Processing (NLP) application that helps organizations analyze textual data such as product reviews, customer feedback, or social media posts to determine the sentiment—positive, neutral, or negative.

By integrating DS Lab for model creation and the Admin Module for API registration, businesses can automate feedback analysis and improve customer experience at scale.

Workflow Overview

This workflow covers the following stages:

  1. Create a DS Lab Project

  2. Import and Execute the Sentiment Analysis Notebook

  3. Train and Register the Model

  4. Register the Model as an API through the Admin Module

  5. Validate Model Response via API Request

Step 1 – Create a DS Lab Project

Procedure

  1. From the BDB Homepage, click the Apps icon and select DS Lab.

  2. Click Create + to start a new project.

  3. Fill in the required details as shown below:

    Field               | Example                                 | Description
    --------------------|-----------------------------------------|------------------------------------------------
    Project Name        | Sentiment_Analysis_Model                | Unique project name
    Description         | “Build and deploy sentiment classifier” | Optional
    Algorithm Type      | Classification                          | For supervised text classification
    Environment         | Python                                  | Default environment
    Resource Allocation | Medium                                  | Based on dataset size
    Idle Shutdown       | 1 hour                                  | Recommended to free idle compute
    External Libraries  | spaCy, tqdm                             | Add here to avoid manual notebook installations
  4. Click Save to create the project.

Step 2 – Import the Sentiment Analysis Notebook

Activate the Project

  • Click Activate on the project tile.

  • Once activated, click View to open it.

  • Wait for the kernel to start.

Import the Notebook

  1. In the Repo section, click the three dots (⋮) and select Import.

  2. Enter a Name (e.g., Sentiment Analysis Notebook) and a Description.

  3. Choose the .ipynb file from your local system and upload it.

Step 3 – Load and Prepare the Dataset

  1. Click the Data icon on the left navigation panel.

  2. Click the + Add Data icon → select Data Sandbox Files as the source.

  3. Click Upload, provide:

    • Name: Sentiment_Data

    • Description: “Customer review dataset”

    • File: Upload your CSV file.

  4. Once the upload completes, a message appears: “File is uploaded successfully.”

  5. Check the dataset’s checkbox and click Add.

  6. In the notebook, select the dataset’s checkbox under Data; a code snippet is auto-generated to load it.

  7. Run the cell using the Run Cell icon.
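The auto-generated snippet typically loads the sandbox file into a pandas DataFrame. A minimal stand-in (the in-memory CSV below substitutes for the real sandbox path, which DS Lab fills in for you) looks like this:

```python
import io
import pandas as pd

# Stand-in for the uploaded sandbox CSV; in DS Lab the auto-generated
# cell reads the actual sandbox path for Sentiment_Data instead.
sample_csv = io.StringIO(
    "reviewText,overall\n"
    "Great product!,5\n"
    "Terrible quality.,1\n"
)
df = pd.read_csv(sample_csv)
print(df.shape)  # (2, 2)
```

Running the generated cell should leave you with a `df` containing at least the `reviewText` and `overall` columns used in the next step.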

Step 4 – Build the Sentiment Analysis Model

4.1 Data Preprocessing

# Keep only relevant columns
df = df[['reviewText', 'overall']]

# Map ratings to sentiment labels
df['sentiment'] = df['overall'].map({
    5 : 'positive',
    4 : 'positive',
    3 : 'neutral',
    2 : 'negative',
    1 : 'negative'
})

# Drop original rating column
df = df.drop(columns='overall')

This prepares the dataset with two columns: reviewText and sentiment.
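Before moving on, it can help to confirm the mapping produced the labels you expect. A quick sanity check on a toy frame (same column names and mapping as above) is:

```python
import pandas as pd

# Toy frame standing in for the review dataset
df = pd.DataFrame({
    'reviewText': ['Loved it', 'It was okay', 'Awful'],
    'overall': [5, 3, 1],
})

# Same rating-to-label mapping as above
df['sentiment'] = df['overall'].map(
    {5: 'positive', 4: 'positive', 3: 'neutral', 2: 'negative', 1: 'negative'}
)
df = df.drop(columns='overall')

# Label distribution; unmapped ratings would show up as NaN here
print(df['sentiment'].value_counts())
```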

4.2 Optional: NLP Library Setup

%%bash
pip install spacy tqdm
python -m spacy download en_core_web_sm

Note: spaCy installation is only needed for advanced preprocessing.

4.3 Subset Data for Experimentation

data = df.iloc[:200].copy()
text_col = 'reviewText'

This limits data to 200 samples for faster model experimentation.

4.4 Select the Algorithm

  1. Navigate to the Algorithm tab on the left panel.

  2. Choose Classification → Logistic Regression.

  3. Click the target notebook cell before selecting the algorithm, so the generated code is inserted into it.

Note: The code for model creation is auto-generated.

4.5 TF-IDF Vectorization

from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(data[text_col])

4.6 Train/Test Split

from sklearn.model_selection import train_test_split
X, y = X_tfidf, data['sentiment']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
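With only 200 samples and three classes, a plain random split can leave a class under-represented in the test set; passing `stratify=y` is a common safeguard. A sketch with synthetic stand-ins for `X_tfidf` and the labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the TF-IDF matrix and sentiment labels
rng = np.random.default_rng(42)
X = rng.random((60, 5))
y = np.array(['positive', 'neutral', 'negative'] * 20)

# stratify=y keeps class proportions similar in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y
)
print(sorted(set(y_test)))  # ['negative', 'neutral', 'positive']
```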

4.7 Train Logistic Regression Model

from sklearn.linear_model import LogisticRegression
sentiment = LogisticRegression()
sentiment.fit(X_train, y_train)

4.8 Evaluate Model Performance

from sklearn.metrics import classification_report
y_pred_train = sentiment.predict(X_train)
y_pred_test = sentiment.predict(X_test)
print(classification_report(y_train, y_pred_train))
print(classification_report(y_test, y_pred_test))

The reports display precision, recall, and F1-score for each sentiment class.
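Alongside the per-class report, a single summary number and a confusion matrix are often useful. A small illustration with toy labels standing in for `y_test` and `y_pred_test`:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Toy true/predicted labels standing in for y_test / y_pred_test
y_true = ['positive', 'negative', 'neutral', 'positive']
y_pred = ['positive', 'negative', 'negative', 'positive']

acc = accuracy_score(y_true, y_pred)
print(acc)  # 0.75

# Rows are true labels, columns are predicted labels
print(confusion_matrix(y_true, y_pred, labels=['negative', 'neutral', 'positive']))
```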

4.9 Prepare Sample Input for API Testing

api_test_input = X_tfidf[0].todense().tolist()
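The registered API expects a dense, JSON-serializable vector, which is why the sparse TF-IDF row is converted with `.todense().tolist()`. A self-contained sketch of the same conversion (with a tiny corpus in place of the review data):

```python
import json
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny corpus standing in for the review text column
docs = ['great phone', 'bad battery', 'great battery life']
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(docs)

# Sparse row -> dense nested list -> JSON string (what the API receives)
api_test_input = X_tfidf[0].todense().tolist()
payload = json.dumps(api_test_input)

# One row, with one entry per vocabulary term
print(len(api_test_input), len(api_test_input[0]))
```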

Step 5 – Save and Register the Model

  1. Click the three-dot (⋮) menu on the right side of the notebook cell → select Save Model.

  2. The platform auto-generates a code snippet to save the trained model.

  3. Run the cell to execute the save operation.

  4. Open the Models tab (left navigation) → click All.

  5. Locate the saved model and click Register (↑) to register it.

Step 6 – Register the Model as an API

This step enables real-time prediction via an API endpoint using the trained model.

6.1 Register Model as API

  1. Navigate to the Models tab in DS Lab.

  2. Select your trained model.

  3. Click Register as API.

  4. Enter required details:

    • Instance Name

    • Resource Configuration

    • Description (optional)

  5. Click Save.

Step 7 – Register API Client in the Admin Module

Note: Only administrators have access to the API Client Registration feature.

Procedure

  1. From the Apps menu, open the Admin Module.

  2. Under API Client Registration, click Create.

  3. Select the Internal registration type.

  4. Enter:

    • Client Name and Email Address (for credentials delivery)

    • App Name

    • Request Limits: Requests per Hour / Day

    • Associated Model: Select the model registered as API

  5. Click Save.

  6. To share credentials:

    • Click the Secret Key icon to email credentials (Client ID, Secret).

    • Alternatively, use the Edit icon to view and copy them manually.

Step 8 – Validate Model Response via API

Now, validate the deployed API from your notebook.

import requests, json

url = "https://app.bdb.ai/services/api/sentiment_clf_test.dill"
payload = json.dumps(api_test_input)
headers = {
  'clientid': 'MAVQXGKARSVUUIPFUCIH@5980',
  'clientsecret': 'YSFQSGQSDILQLVHQGVXB1689039162745',
  'appname': 'test1',
  'Content-Type': 'application/json'
}

response = requests.post(url, headers=headers, data=payload)
print([i['predictions'] for i in response.json()])

Explanation

  • payload: Input data serialized as JSON.

  • headers: Include your unique clientid, clientsecret, and appname.

  • requests.post: Sends a POST request to the model API.

  • Output prints the predicted sentiment for the input text.

Best Practice: Store credentials securely and avoid hardcoding in notebooks.
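One way to follow that practice is to read the credentials from environment variables (or a secrets manager) instead of literals in the notebook. The variable names below are illustrative, not platform-defined:

```python
import os

# Hypothetical environment variable names; set them outside the notebook
headers = {
    'clientid': os.environ.get('BDB_CLIENT_ID', ''),
    'clientsecret': os.environ.get('BDB_CLIENT_SECRET', ''),
    'appname': os.environ.get('BDB_APP_NAME', ''),
    'Content-Type': 'application/json',
}
print(sorted(headers))
```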

Outcome

You have completed Workflow 4: Sentiment Analysis Model Development and API Deployment. You created a Sentiment Analysis model in DS Lab, registered it as an API through the Admin Module, and validated its predictions via API requests.

Business Impact

This workflow enables data scientists and developers to move seamlessly from model creation to deployment within the BDB Platform, accelerating production-readiness. Organizations can now integrate sentiment insights into customer experience workflows, marketing analytics, and service quality monitoring.