Data Science Lab Quick Start Flow (In progress)
This page provides all the major steps in a concise manner to help the user kick-start their Data Science experiments.
The Data Science module allows the user to create Data Science experiments and productionize them. This page presents the entire Data Science flow in a nutshell so the user can quickly begin their Data Science experiment journey.
A Data Science Project created inside the Data Science Lab acts as a workspace inside which the user can create and store multiple Data Science experiments.
Prerequisite: It is mandatory to configure the DS Lab Settings option before creating a Data Science Project. Also, use the Algorithms field in the DS Lab Settings section to select the algorithms you wish to use inside your Data Science Lab project.
Data is the first requirement for any Data Science Project. The user can add the required datasets and view the added datasets under a specific Project by using the Dataset tab.
The user needs to click on the Dataset tab from the Project List page to access the Add Datasets option.
Check out the given illustrations to understand the Adding Dataset (Data Service) and Adding Data Sandbox steps in detail.
Please Note:
The user can add Datasets by using the Dataset tab or Notebook page.
Based on the selected environment, only the supported dataset types can be added to a Project or Notebook. E.g., the PySpark environment does not support Data Sandbox as a dataset type.
Refer to the Adding Data Sets section and its sub-pages to understand this in detail.
Refer to the Data Preparation page to understand how the user can apply the required Data Preparation steps to a specific dataset from the Data Set List page.
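The kinds of preparation steps this page refers to (fixing column types, imputing missing values, dropping incomplete rows) can also be performed by hand inside a notebook. The snippet below is a generic pandas sketch of such steps, not the DS Lab Data Preparation feature itself; the dataset and column names are hypothetical.

```python
import pandas as pd

# Hypothetical raw data; in DS Lab this would come from an added
# Data Service or Data Sandbox dataset.
raw = pd.DataFrame({
    "age": [25, None, 47, 33],
    "income": ["50000", "62000", None, "48000"],
})

prepared = (
    raw
    .assign(income=lambda d: pd.to_numeric(d["income"]))  # fix the column type
    .fillna({"age": raw["age"].median()})                 # impute missing ages
    .dropna(subset=["income"])                            # drop rows still missing income
)
print(prepared)
```

Steps like these applied from the Data Set List page are saved with the dataset, so downstream experiments in the Project all see the cleaned data.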
Once the user creates a Project and adds the required datasets to it, the Project is ready to hold a Data Science experiment. The Data Science Lab user has the following ways to proceed with their Data Science experiments:
Use the Notebook infrastructure provided under the Project to create, save, load, and predict with a model. It is also possible to save the Artifacts for a saved model. Refer to the Notebook section for more details.
Use the Auto ML functionality to get auto-trained Data Science models. Refer to the AutoML section for more details.
Please Note: The Auto ML functionality is not currently supported for Projects created within the PySpark environment.
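Inside the Notebook, the create/save/load/predict cycle mentioned above follows the usual Python pattern. The sketch below uses scikit-learn and joblib purely as a generic illustration under that assumption; it is not the DS Lab model-registration API, and the file name is hypothetical.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

# Create: train a simple model on a sample dataset.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Save: persist the trained model as an artifact file.
joblib.dump(model, "iris_model.joblib")

# Load: restore the model, e.g. in a later notebook session.
restored = joblib.load("iris_model.joblib")

# Predict: score new observations with the restored model.
preds = restored.predict(X[:5])
print(preds)
```

The saved file plays the role of a model artifact: anything persisted this way can be reloaded later without retraining.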
The user can view the list of uploaded datasets and Data Sandbox files for the module under this tab.