7.6
  • What's New in BDB 7.6?
    • Core Platform
    • Data Center
    • Data Science Lab
    • Data Pipeline
    • Data Visualization
      • Self-Service
      • Governed Dashboards
  • Core Platform
    • About Data Platform
      • Why Data Platform
      • Design Philosophy
      • Plugin Architecture
    • Getting Started
      • Sign in
        • Resetting Password
        • Force Login
      • Homepage
        • Data Catalog Search
        • AI Search
        • Apps Menu
        • Notification
        • Help Menu
        • User Profile
        • Search Option
        • Documents Folders
          • Document Options
            • Accessing Document Options
            • Options Assigned to a Folder
            • Options Assigned to a Linked URL
            • Options Assigned to a Story
            • Options for a Published Dashboard
        • Sorting Documents
        • Filter
      • Signing Out
    • Platform Administration
      • Accessing the Admin Module
      • Admin Panel Options
        • Document Management
        • Configurations
          • Geo Spatial
          • Data Science Servers
          • Bulk User Creation
          • Custom Field Settings
          • Data Connectors
          • API Connectors Configurations
          • Encryption
          • Form Settings
          • Data Sheet Settings
          • Data Lake Settings
          • Data Catalog Settings
          • Open ID Settings
          • Version Control
          • New Version Control
            • Versioning
            • Migration
          • Pipeline Settings
          • Keycloak Settings
          • Core Ownership Transfer
          • Email Server
          • Password
          • Sandbox Settings
          • Secret Management
          • DS Lab Settings
          • Data Store Settings
        • Authentication
          • AD Configuration
          • CA PPM Configuration
          • AWS Cognito Configuration
        • Audit Trail
          • Log Status
          • Audit Log Table
        • Language Mapping
          • Languages
          • Mapping Table
        • Migration
          • SFTP Settings
          • Document Migration
          • DSW Migration
        • GIT Migration
          • Migrating a Dashboard
          • Migrating an API Service
          • Migrating a Pipeline
        • Session Manager
        • Schedule Monitor
        • Server Monitor
        • License
        • API Client Registration
    • Users & Roles
      • User Security Page
      • Creating a new User
        • Restrict Data Access
      • Creating a new User Group
        • Assigning Custom Fields to Group Users
      • Various User Roles
        • Viewer Role
        • Admin Role
        • Non-admin User Roles
      • User Status
  • Data Center
    • Homepage
    • Data Virtualization
    • Data Connectors
      • Creating a Data Connector
      • Data Connector List
        • Edit Data Connectors
        • Create Option
        • Reconnecting to a Data Connector
        • Sharing a Data Connector
        • Delete a Data Connector
      • Supported Data Connectors
        • Database Connectors
          • MySQL
          • MSSQL
          • Elastic (Beta Release)
          • Oracle
          • ClickHouse
          • Arango DB
          • Hive
          • Cassandra
          • MongoDB
          • MongoDB for BI
          • PostgreSQL
          • Snowflake
        • File Data Connector
        • API Connectors
          • API Connector
          • Amazon
          • App Store
          • Bing Ads
          • Dropbox
          • FTP Server
          • Facebook
          • Facebook Ads
          • Firebase DB
          • Fitbit
          • Flipkart
          • Google Adwords
          • Google Analytics
          • Google Big Query
          • Google Forms
          • Google Sheet
          • HubSpot
          • Jira
          • Lead Squared
          • Linkedin
          • Linkedin Ads
          • MS Dynamics
          • Mailchimp
          • QuickBooks
          • SalesForce
          • ServiceNow
          • Twitter
          • Twitter Ads
          • Yelp
          • YouTube
          • ZOHO Books
        • Others
          • MS Sql Olap
          • Data Store
          • OData
          • Spark SQL
          • AWS Redshift
          • SAP HANA
    • Data Sets
      • Creating a New Data Set using RDBMS Connector
      • Creating a Data Set using Arango DB Connector
      • Creating a Data Set using an API Connector
      • Creating a New FTP Data Set
      • Creating a Data Set based on an Elastic Connector
      • Data set list page
        • View Options: Data Sets List Page
        • Data Set List: Actions
    • Data Stores
      • Creating a New Data Store
        • Data Store using an RDBMS Connector
        • Data Store using a Flat File Data Connector
        • Data Store using an API Data Connector
      • Adding Synonyms to a Datastore
      • Data Stores List
    • Data Store Meta Data
      • Sharing a Meta Data Store
      • Deleting a Meta Data Store
    • Data Sheets
      • Creating a Data Sheet
      • Publishing a Data Sheet
        • Entering Data
        • Viewing Data
        • Deleting a Row
      • Editing a Data Sheet
      • Removing a Data Sheet
    • Data Catalog
    • Data Sandbox
      • Creating a Data Sandbox File
      • Data Sandbox List Page
        • Uploading File Status
        • Using the Data Preparation Option
        • Deleting a Data Sandbox
    • Data as API
    • Data Preparation (Beta Release)
      • Accessing the Data Preparation Option
      • Data Preparation Workspace
        • Data Grid
          • Data Grid Header
          • Data Quality Bar in the Grid
        • Profile: Summary Pane
          • Charts
          • Info: Values/Statistics
          • Pattern
        • Transforms
          • Data Cleansing
          • String
          • Numbers
          • Columns
          • Conversions
          • Integer
          • Dates
          • ML
          • Anonymization
        • Steps
      • Data Preparation List
        • Rename
        • Edit
        • Delete
  • Data Science Lab
    • What is Data Science Lab?
      • Design Philosophy
      • What is a DSL Project?
    • Getting Started
      • Accessing the DS Lab Module
    • Start your Data Science Experiment with DS Lab
    • Project
      • Creating a Project
      • Keep Multiple Versions of a Project
      • Sharing a Project
      • Editing a Project
      • Activating a Project
      • Deactivating a Project
      • Deleting a Project
      • Various Tabs to work with
        • Notebook
          • Ways to Access Notebook
            • Creating a Notebook
            • Uploading a Notebook
          • Notebook Page
            • Notebook Cells
              • Using a Code Cell
              • Using a Markdown Cell
            • Modifying a Notebook
            • Notebook Task Bar
            • Notebook Operations
              • Datasets
              • Secrets
              • Algorithms
              • Transforms
              • Models
                • Registering a Model
                • Filtering a Model
              • Predict
              • Artifacts
                • Preview Artifact
              • Variable Explorer
              • Find and Replace
          • Notebook List Page
            • Export
              • Export to Pipeline
              • Export to GIT
            • Notebook Version Control
            • Sharing a Notebook
            • Editing a Notebook
            • Delete a Notebook
        • Dataset
          • Adding Data Sets
            • Data Sets
            • Data Sandbox
          • Dataset List Page
            • Preview
            • Data Profile
            • Create Experiment
            • Data Preparation
            • Delete
        • Utility
        • Model
          • Export to GIT
          • Register a Model
          • Unregister a Model
          • Register Model as an API Service
            • Register a Model as an API
            • Register an API Client
            • Pass Model values in Postman
        • Auto ML (Alpha Release)
          • Creating Experiments
            • Accessing the Create Experiment Option
              • Configure
              • Specify Prediction
          • AutoML List Page
            • View Report
              • Details
              • Models
                • View Explanation
                  • Model Summary
                  • Model Interpretation
                    • Individual Explanation
                    • Partial Dependence
                    • Model Performance
                    • Feature Importance
                  • Dataset Explainer
            • Delete
  • Data Pipeline
    • About Data Pipeline
      • Design Philosophy
      • Low Code Visual Authoring
      • Real-time and Batch Orchestration
      • Event based Process Orchestration
      • ML and Data Ops
      • Distributed Compute
      • Fault Tolerant and Auto-recovery
      • Extensibility via Custom Scripting
    • Getting Started
      • Homepage
        • List Pipeline
        • Creating Pipeline
          • Adding Components to Canvas
          • Connecting Components
          • Events [Kafka and DB Sync]
          • Memory and CPU allocations
        • List Components
        • Delete Orphan Pods
        • Scheduler
        • Data Channel
        • Settings
      • Pipeline Workflow Editor
        • Pipeline Toolbar
        • Component Panel
        • Right-side Panel
      • Testing Suite
      • Activating Pipeline
      • Monitoring Pipeline
    • Components
      • Adding a Component to a Workflow
      • Component Architecture
      • Component Base Configuration
      • Resource Configuration
      • Intelligent Scaling
      • Connection Validation
      • Readers
        • S3 Reader
        • HDFS Reader
        • DB Reader
        • Elastic Search Reader
        • SFTP Stream Reader
        • SFTP Reader
        • Mongo DB Reader
          • Docker Reader
          • Spark
        • Azure Blob Reader
        • Azure Metadata Reader
        • ClickHouse Reader [Docker]
      • Writers
        • S3 Writer
        • RDBMS Writer
        • HDFS Writer
        • ES Writer
        • Mongo Writers
          • Mongo Writer (Spark)
          • Mongo Writer(Docker)
        • Azure Writer
        • ClickHouse Writer [Docker]
        • Sandbox Writer
      • AI/ML
        • Python Model Runner
        • DSL Model & Script Runner
      • Consumers
        • SFTP Monitor
        • MQTT Consumer
        • Eventhub Subscriber
        • Twitter Scraper
        • API Ingestion and Webhook Listener
        • Mongo Change Stream
        • Rabbit MQ Consumer
        • AWS SNS Monitor
        • Kafka Consumer
        • Kafka Producer
      • Producers
        • WebSocket Producer
        • Eventhub Publisher
        • EventGrid Producer
        • Rabbit MQ Producer
      • Transformation
        • SQL Component
        • Dataprep Script Runner
        • File Splitter
        • Rule Splitter
        • Stored Procedure Runner
        • Flatten JSON
        • Email Component
        • Pandas Query Component
        • Enrichment Component
        • Mongo Aggregation
        • Data Loss Protection
      • Scripting
        • Script Runner
        • Python Script
      • Scheduler
    • Custom Components
    • Advance Configuration & Monitoring
      • Configuration
        • Kafka Configuration
        • Default Component Configuration
        • Logger Setting
      • Data Channel
      • Cluster Events
      • System component Status
    • Version Control
    • Use Cases
  • Data Visualization
    • Self Service
      • Getting Started
        • What is Story?
        • Creating a new Story
          • Accessing the Story Module
            • Creating and Updating Instance
          • Designing a View
      • Design Workspace
        • Guided Tour
        • Dimension Profiling
        • Data Store Merge at View Level
        • Measure Summary
        • Series Properties
        • Formula Field Editor
          • Creating a Formula
            • Record Level Option
            • Summary Level Option
          • Creating a Range
        • Order by and Limit
        • Adding a Slicer
      • Chart Gallery
        • Mixed chart
        • Area chart
        • Bar chart
        • Bubble chart
        • Column Stack chart
        • Line chart
        • Pie chart
        • Scattered Plot chart
        • TreeMap chart
        • Circumplex chart
        • Pareto chart
        • Semi Gauge
        • KPI Tile
        • KPI Tile: Comparative Tile
        • KPI Tile: Sparkline
        • Map
        • Data Grid
        • Metric Summary
        • R Server Visual
        • Dissolution chart
        • Spider chart
        • Waterfall chart
      • Storyboard
        • Search
        • Shared Views
        • Export
        • Alert Center
        • Change Theme
        • Data Store Information
        • Options Assigned to a View
          • Analyse
            • Timeline Play
          • Edit
          • Remove
        • Data Interactions/ Data Drills
          • Drill Into
          • Drill Through
      • Applying Filters
        • View Specific Filter
          • Dimension-based View Filter
          • Measure-based View Filter
          • Date-based View Filter
        • Global Filter
          • Exclude from the Global Filter
          • Saving a Global Filter
          • Custom View Filter
          • Like and Not Like Filter Operations
      • Actions
        • Interactions
    • Governed Dashboards
      • About Dashboard Designer
        • What is Dashboard Designer?
        • Why is it used?
      • Getting Started
        • Accessing the Designer Module
        • Overview of Designer Module
          • Homepage
            • Guided Tour
            • Left Menu Panel
              • New
              • Manage
              • Open Dashboard
              • Preferences
              • Save As
              • Help
              • Exit
          • Dashboard Canvas Page
            • Right side Panel
              • Connection Page
              • Chart Container
              • Manage Window
              • Script Window
              • Guided Tour
            • Canvas Properties
            • Context Menu Properties
      • Create New Workspace
        • Workspace Creation: Complete Flow
        • Creating a Workspace
        • Renaming a Workspace
        • Deleting a Workspace
      • Create New Dashboard
        • Dashboard Creation: Complete Flow
        • Adding a New Dashboard
        • Create Connection
        • Drag and Drop Charts
        • Associate the Dataset
        • Preview the Dashboard
        • Save the Dashboard
      • Managing Options for a Dashboard
        • Open Dashboard in Designer
        • Publish a Dashboard
        • Share a Dashboard
        • Dashboard Version Control
        • Action Menu
          • Preview Dashboard
          • Export to Local Disk
          • Rename
          • Delete
          • Moving a Dashboard
          • Information Icon
      • Connecting to a Data Source
        • Accessing the Data Connectors
        • Establishing a Data Connection
          • CSV Connection
          • Excel Connection
          • Data Service Connection
          • Data Science Service
          • Data store Connection
          • Data Sheet
            • Data Sheet Enhancements
          • WebSocket Connection
          • Merged Connection
      • Charts Gallery
        • Charts
          • Area Chart
          • Bar Chart
          • Bubble Chart
          • Circumplex Chart
          • Column Chart
          • Funnel Chart
          • Histogram Chart
          • Inverted Funnel
          • KPI Tile
          • Line Chart
          • Map Chart
            • Leaflet Properties
          • Mito Plot
          • Mixed Chart
          • Pie Chart
          • Project Timeline
          • Pyramid Chart
          • Spark Line
          • Scatter Plot
          • Spider Chart
          • Waterfall Chart
        • Grids
          • Data Grid
          • Paging Grid
          • Data Sheet
          • Scorecard
          • Pivot Grid
        • Filters
          • Checkbox
          • Combobox
          • Hierarchical Combobox
          • List
          • Radio Button
        • Advanced Charts
          • Box Plot
          • Candle Stick
          • Chevron
          • Data Search
          • Decision Tree
          • Group Bar
          • Group Column
          • Heat Map
          • Text Analyzer
          • Time Series
          • Tree Map
          • Trellis
          • Word Cloud
        • Other Charts
          • Box
          • Bullet
          • Date Picker
          • Export
          • Filter Chips
          • Filter Saver
          • Gauge
          • Graphics
          • Guided Tour
          • H-Slider
          • Image
          • Info Button
          • Label
          • Legend
          • Progress Pie
          • Semi Gauge
          • Stepper
          • SVG Image
          • Text Box
          • Trend
          • Url Button
          • V-Slider
        • Custom Charts
        • Common Chart Properties
          • Background
          • Title & Sub-title
          • X & Y Axis Properties
          • Legend Properties
          • Formatter
          • Axis Setup
          • Export Options
      • Dashboard Objects
        • Manage Dashboard Components
        • Dashboard Objects Properties
      • Configuration
        • Filtering the Data
        • Drill Through
        • Dataset Properties
        • Indicators in Charts
        • Tooltip(Default & Custom)
        • Data Label
        • Geo Mapping
        • Language Mapping
        • Legend Mapping
        • Alerts in Grids
      • Script Help Section
        • Navigate to Script Help page
        • How to use Scripts?
  • Survey
    • Accessing Survey Module
    • Creating a Survey
      • Creating a New Survey
      • Creating a New Survey using Template
    • Survey Builder: Designing a Survey
      • Questions
        • Inserting a Question
        • Available Question Types
        • Deleting a Question
      • Page
        • Inserting a New Page
        • Editing an Existing Page
      • Survey Options
      • Properties
      • Theme
      • Saving a Survey
    • Publishing a Survey
      • Providing the Publish Survey Information
      • Collectors
        • Accessing the Collector Options
        • Creating a Collector
          • Creating a Weblink Collector
          • Creating an Email Collector
            • Adding Recipient to an Email Collector
        • Editing Collector(s)
        • Deleting Collector(s)
    • Analyzing Result for a Survey
      • Creating a New View in the Analyze Result
      • Filter Rule in the Analyze Result
        • Creating a Filter
        • Filter Types
      • Show/Hide Rules for Page/Questions
    • More Options
      • Preview Survey
      • Benchmark
        • Editing a Benchmark
        • Benchmark Goals
          • Adding a Goal to Benchmark
          • Editing a Benchmark Goal
          • Deleting a Benchmark Goal
        • Questions for Benchmark Goals
          • Adding Questions to a Benchmark Goal
          • Viewing a Goal Question
          • Deleting a Goal Question
      • Managing Datamart
        • Creating a Datamart
        • Implementing Scheduler for Datamart
        • Other Options
      • Copying a Survey
      • Survey Summary
      • Deleting a Survey
    • Contacts
      • Creating a Contact Group
        • Editing a Contact Group
        • Deleting a Contact Group
      • Creating New Contacts
        • Creating New Contacts (Manually)
        • Uploading Contacts from a CSV file
      • Listing a Contact
    • Survey Template
  • Forms
    • Accessing the Forms Module
    • General Workflow for Forms
      • Creating a Form
      • Listing a form
      • Form Page Component
      • Page Settings
        • Rename
        • Duplicate
        • Delete
      • Question
        • Duplicate
        • Delete
        • Properties
          • Properties
          • Condition
          • Look up
      • Form Preview
      • Publishing a form
        • Options Context Menu
          • Open in New Tab
          • Properties
          • Modifying a form
          • Adding a form to favorite
          • Moving a form
          • Renaming a form
          • Copying a form
          • Deleting a form
      • Form Response
        • Filter Response
          • Filtering Responses by Page
          • Filtering Responses by Date
          • Filtering Responses by Users
      • Form Settings
        • Properties
          • Status
          • Configuration
        • Theme
          • Form Header Theme
          • Page Header Theme
          • Form Body Theme
        • Form Instruction per User
    • Form Options
      • Edit
      • Delete
      • Duplicate

Start your Data Science Experiment with DS Lab

Follow the given sequence to accomplish your experimentation with Data Science Lab

Create your project

Check out the walk-through on how to create a DSL Project.

  • Navigate to the Projects Page of the Data Science Lab plugin.

  • Click the Create Project option to create a new project.

  • A form opens, asking for the following details for the new project:

    • Project Name: Give a name to the new project.

    • Project Description: Describe the project.

    • Select Algorithms: Select the algorithms to include using the drop-down menu.

    • Environment: Allows users to select the environment they want to work in. Currently, the supported Python frameworks are TensorFlow and PyTorch.

      • If the users select the TensorFlow environment, they do not need to install packages like TensorFlow and Keras explicitly in the notebook. These packages can simply be imported inside the notebook.

      • If the users select the PyTorch environment, they do not need to install packages like Torch and Torchvision explicitly in the notebook. These packages can simply be imported inside the notebook.

      The user can select one of the given choices: 1. Python TensorFlow, 2. Python PyTorch (see the import sketch after these steps).

    • Resource Allocation: Allows the users to allocate the CPU/GPU and memory to be used by the Notebook container inside a given project. The currently supported Resource Allocation options are Low, Medium, and High.

    • Idle Shutdown: Allows the users to specify the idle time limit after which the notebook session gets disconnected and the project is deactivated. To use the notebook again, the project should be activated again. The supported Idle Shutdown options are 30m, 1h, and 2h.

    • External Libraries: Provide links to the external libraries required for the project.

  • Based on the selection in the Resource Allocation field, the following fields appear with pre-selected values:

    • Image Name

    • Image Version

    • Limit: CPU and Memory

    • Request: CPU and Memory

  • Select nvidia from the GPU Type field to improve the performance of the project.

  • Click the Save option.

  • The newly created project gets saved and appears on the screen.

  • A notification message confirms the successful creation of the project.
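As the Environment field notes, the selected framework comes pre-installed in the Notebook container. A minimal sketch of what this means in practice, assuming a project created with the Python TensorFlow environment (the version check is illustrative):

```python
# In a TensorFlow-environment project, no pip install is needed:
# TensorFlow and Keras ship with the Notebook container.
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)  # confirms the pre-installed framework is available

# Likewise, in a PyTorch-environment project, Torch and Torchvision
# can be imported directly:
# import torch
# import torchvision
```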

Please Note: The user can also open the Project list by clicking the View Project option, which redirects the user to the Project list.

Please Note: Once listed under the Project list, a project gets the Share, Edit, Delete, and Activate/Deactivate actions.

Pre-requisite: The DSL projects also get the Push to VCS and Pull from VCS functionalities, but these are enabled only for activated DSL projects.

Activate your Project

Look at the given walk-through to understand how a DSL Project gets activated.

  • Navigate to the Projects page.

  • Select a project from the list.

  • Click the Activate option.

  • A dialog window appears to confirm the Activation.

  • Click the Yes option.

  • The project gets activated, and a notification message confirms the completion of the action.

  • The Activate option changes into the Deactivate option for the concerned project.

Create your first Notebook

Have a look at the walk-through to create a new Notebook.

  • Click the Create Notebook option from the Notebook tab.

  • A new Notebook gets created, and a notification message confirms it.

  • Click the Back icon.

  • The Notebook gets saved under the Notebook list.

Note:

  1. Edit the Notebook name by using the ‘Edit Notebook Name’ icon.

  2. The accessible datasets, models, and artifacts are listed under the Datasets, Models, and Artifacts menus (in this case, no datasets have been added to the concerned Notebook).

  3. The Find/Replace menu lets the user find and replace specific text in the notebook code.

  4. Add a description for the created Notebook by using the same page.

Upload your Notebook to DS Lab

Check out the given walk-through on how to upload a Notebook.

The users can seamlessly upload Notebooks created using other tools and saved on their systems.

  • Navigate to the landing page of an activated Project.

  • Click the Upload Notebook option.

  • Select a Notebook file from the system.

  • Click the Open option to upload the Notebook.

  • The selected Notebook gets uploaded under the Project.

  • A notification message confirms the upload.

  • Another notification message informs the status of the Notebook (it gets saved by default).

  • Click the Back icon.

  • The uploaded Notebook gets listed on the landing page of the Project.

Create your first DSL Model

Check out the walk-through on how to save and load a DSL Model.

Once the Notebook script executes successfully, the users can save it as a model. The saved model can then be loaded back into the Notebook.

Save your DSL Model

  • Navigate to a Notebook.

  • Write code using the following sequence (see the sketch after these steps):

    • Read DataFrame

    • Define test and train data

    • Create a model

  • Execute the script.

  • A new cell gets added.

  • Give a model name to specify the model.

  • Execute the cell.

  • After the code gets executed, click the Save Model option in a new cell.

  • The saved model gets listed under the Models list.
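A minimal sketch of the script sequence described above, using pandas and scikit-learn; the file name, column names, and choice of model are illustrative assumptions, not DS Lab requirements:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Read DataFrame (the CSV path and column names are hypothetical)
df = pd.read_csv("sales.csv")
X = df.drop(columns=["target"])
y = df["target"]

# Define test and train data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Create a model
model = LogisticRegression()
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # quick sanity check on the test data
```

After this cell executes, the Save Model option inserts the platform's own save snippet into a new cell; the model name supplied there is what appears under the Models list.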

Load your DSL Model

  • Click on a new cell and select the model by using the given checkbox to load it.

  • The model gets loaded into a new cell.

Predict the model output

Check out the walk-through on the Predict option for a DSL Notebook.

You can get a predicted array from a loaded DSL model applied to a given dataframe.

  • Add a new cell.

  • Click the Predict option.

  • Provide the model and dataframe.

  • Execute the code.

  • The predicted output of the given dataframe appears as an array.

  • The default comments on how to define the predicted output for a DS Lab model appear as well.
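In plain scikit-learn terms, the injected Predict cell amounts to applying the loaded model to the input dataframe; a hedged sketch (the names model and df are placeholders, not the platform's generated code):

```python
# 'model' is the loaded DSL model, 'df' is the input dataframe.
predictions = model.predict(df)  # returns the predicted values as a NumPy array
print(predictions)
```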

Save Artifacts of your model

Check out the walk-through on how to Save Artifacts.

Please Note: The user can save the artifacts of a predicted model using this option.

  • Add a new cell.

  • Click the Save Artifacts option.

  • Provide a proper dataframe name and the Name of Artifacts (with one of the extensions: .csv, .txt, or .json).

  • Execute the cell.

  • The Artifacts get saved.

Please Note: The saved Artifacts can be downloaded as well.
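Conceptually, the Save Artifacts cell writes a named dataframe out under an artifact name with one of the supported extensions. A sketch in plain pandas, assuming the predictions from the previous step (the dataframe and file names are hypothetical):

```python
import pandas as pd

# Wrap the predicted array in a dataframe and write it as an artifact.
# Supported artifact extensions per the steps above: .csv, .txt, .json
artifact_df = pd.DataFrame({"prediction": predictions})
artifact_df.to_csv("predictions.csv", index=False)
```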

Deploy your DS Model to Pipeline

Check out the walk-through to deploy a DSL model to the Data Pipeline (Publish the Model from the Model tab).

You can deploy a saved DSL model to the Data Pipeline plugin by using the Model tab.

  • Navigate to the Model tab.

  • Select a model from the list.

  • Click the Deploy to Pipeline icon for the model.

  • The Deploy to Pipeline dialog box appears to confirm the action.

  • Click the Yes option.

  • The selected model gets published and deployed to the Data Pipeline (it disappears from the Unpublished model list).

  • A notification message confirms the action.

Please Note:

  1. The published/deployed model gets listed under the Published filter.

  2. The Publish option provided under the Notebook tab and the Deploy to Pipeline option provided under the Model tab perform the same task.

Publish your DS model as an API

This function gets completed in three steps:

1. Publish a Model as an API

2. Register an API Client

3. Pass the Model values in the Postman

Check out the video given below to understand the Publish Model as an API Service functionality.

You can publish a DSL model as an API using the Model tab. Only the published models get this option.

  • Navigate to the Model tab.

  • Filter the model list by using the Published filter option.

  • Select a model from the list.

  • Click the Publish as API option.

  • The Update model page opens.

  • Provide the Max instance limit.

  • Click the Save and Publish option.

Please Note: Use the Save option to save the details, which can then be published later.

  • The model gets saved and published as an API service. A notification message confirms the action.

  • Navigate to the Admin module.

  • Click the API Client Registration option.

  • The API Client Registration page opens.

  • Click the New option.

  • Select the Client type as internal.

  • Provide the following client-specific information:

    • Client Name

    • Client Email

    • App Name

    • Request Per Hour

    • Request Per Day

    • Select API Type: Select the Model as API option.

    • Select the Services Entitled: Select the published DSL model from the drop-down menu.

  • Click the Save option.

  • The client details get registered, and a notification message confirms the registration.

Note: Once the client gets registered, open the registered client details using the Edit option to get the Client Id and Client Secret Key.

  • Navigate to Postman.

  • Add a new POST request.

  • Pass the URL with the model name (only sklearn models are supported at present).

  • Provide required parameters under the Params tab:

    a. Client Id

    b. Client Secret Key

    c. App Name

  • Open the Body tab.

  • Select the raw option.

  • Provide the input dataframe.

  • Click the Send option.

  • The response will appear below.

  • You can save the response by using the Save Response option.

Please Note: The model published as an API service can be easily consumed under various apps.
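The same request can also be scripted outside Postman. A minimal sketch using the Python requests library; the endpoint URL shape and exact parameter key names are assumptions — only the Client Id, Client Secret Key, and App Name parameters and the raw dataframe body come from the steps above:

```python
import requests

# Hypothetical URL shape: pass the URL with the model name
url = "https://<your-bdb-host>/model-api/<model-name>"

params = {
    "clientId": "<client-id>",             # from the registered API client
    "clientSecret": "<client-secret-key>",
    "appName": "<app-name>",
}

# Raw body: the input dataframe serialized as JSON records (illustrative)
payload = [{"feature_a": 1.2, "feature_b": 3.4}]

response = requests.post(url, params=params, json=payload)
print(response.status_code, response.json())
```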

Add data sets to your Project

The Dataset tab offers a list of uploaded Datasets which can be added to a project. Under this tab, the user can get a list of the uploaded Data Sets and Data Sandbox files from the Data Center module.

The Add Datasets page offers the following options to add as datasets:

  1. Data Service – These are the uploaded data sets from the Data Center module.

  2. Data Sandbox – This option lists all the available Data Sandbox files from the Data Center module.

Adding Data Service

Check out the walk-through on how to add a data service as a dataset.

  • Navigate to a Project-specific page and click the Dataset tab (e.g., the Dataset tab under the Sample Project).

  • Click the Add Datasets button.

  • The Add Datasets page opens offering two options to choose data:

    • Data service (gets selected by default)

    • Data Sandbox

  • Use the Search space to search through the displayed data service list.

  • Select the required data service(s) using the checkboxes provided next to them.

  • Click the Add option.

  • The selected data service(s) get added to the concerned project.

  • A notification message confirms the addition.

Adding Data Sandbox

Check out the walk-through to understand how to add a dataset using the Data Sandbox option.

  • Open the Dataset tab from a specific project.

  • Click the Add Datasets option.

  • You get redirected to the Add Datasets page.

  • Select the Data Sandbox option from the Data Service drop-down menu.

  • Use the Search space to search for a specific Data Sandbox.

  • Select the required Data Sandbox(es) using the checkboxes provided next to them.

  • Click the Add option.

  • The selected Data Sandbox(es) get added to the concerned project.

  • A notification message confirms the addition.

VCS for Projects

Pre-requisite: Make sure that the Version control settings for the DSL plugin are configured by your administrator before you use this functionality.

Watch the walk-through videos given below to understand the Push to VCS and Pull from VCS functionality for a DSL Project.

Pushing a Project to the VCS

  • Navigate to the Projects page of the DS Lab plugin.

  • Select an activated project.

  • Click the Push into VCS icon for the project.

  • The Push into Version Controlling System dialog box appears.

  • Provide a Commit Message.

  • Click the Push option.

  • The DSL Project version gets pushed into the Version Controlling System, and a notification message confirms the action.

Pulling a Project from the VCS

  • Navigate to the Projects page of the DS Lab plugin.

  • Select an activated project.

  • Click the Pull from VCS icon for the project.

  • The Pull from Version Controlling System dialog box opens.

  • Select the version that you wish to pull by using the checkbox.

  • Click the Pull option.

  • The pulled version of the selected Project gets updated in the Project list.

  • A notification message confirms the action.

The Push to VCS and Pull from VCS functionalities are not enabled for a deactivated project.

VCS for Notebooks

Pre-requisite: Make sure that the Version control settings for the DSL plugin are configured by your administrator before you use this functionality.

Check out the walk-through on the Version Control functionality for a Notebook.

Pushing a Notebook to the VCS

  • Navigate to the Notebook list of a Project.

  • Select a Notebook.

  • Click the Push into VCS icon for the Notebook.

  • The Push into Version Controlling System dialog box appears.

  • Provide a Commit Message.

  • Click the Push option.

  • The Notebook version gets pushed into the Version Controlling System and the Notebook list gets updated with the latest version.

  • A notification message appears to inform the success of the action.

Pulling a Notebook from the VCS

  • Navigate to the Notebook list given under a Project.

  • Select a Notebook.

  • Click the Pull from VCS icon for the Notebook.

  • The Pull from Version Controlling System dialog box opens.

  • Select the version that you wish to pull by using the checkbox.

  • Click the Pull option.

  • The pulled version of the selected Notebook gets updated in the Notebook list.

  • A notification message appears to inform the success of the action.
