AI

What out-of-the-box AI models are available in the BDB Platform?

The out-of-the-box AI models available in the BDB Platform are listed below:

  1. Face Recognition

  2. General Object Detection

  3. Tabular Data Extraction

  4. Sentiment Analysis

  5. General Text Classification

  6. Entity Extraction

Generic ML training templates are also available for NLP, Forecasting, Anomaly Detection, Classification, and Regression; these can be fine-tuned on your specific data.
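
As a rough illustration of what fine-tuning a generic template on your own data amounts to, the sketch below fits scikit-learn's IsolationForest on synthetic sensor readings for the anomaly-detection case. The library, data, and parameter values are illustrative assumptions only; the actual BDB template interface is not shown here.

```python
# Illustrative only: roughly what "fine-tuning a generic template on your
# specific data" amounts to, using scikit-learn's IsolationForest for the
# anomaly-detection case. Data and parameters here are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical sensor readings: mostly normal values plus a few outliers.
rng = np.random.default_rng(42)
normal = rng.normal(loc=50.0, scale=2.0, size=(500, 1))
outliers = np.array([[5.0], [90.0], [120.0]])
readings = np.vstack([normal, outliers])

# Fit the generic detector to the specific data.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(readings)

# -1 marks anomalies, 1 marks normal points.
print(detector.predict(outliers))
```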

Can the BDB Platform run custom-made AI models?

The BDB Platform provides the ability to build new custom AI models as well as deploy AI models you have already built. The platform integrates with a variety of machine learning and deep learning frameworks, including TensorFlow and PyTorch.
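
As a minimal, hypothetical example of the kind of custom model that could be built with one of these frameworks and then handed to the platform, the PyTorch sketch below defines and exports a small classifier. The architecture, feature count, and file name are placeholders, and the BDB-specific deployment step is not shown.

```python
# Hypothetical example of a custom PyTorch model that could be built and then
# exported for deployment; the architecture and file name are placeholders.
import torch
import torch.nn as nn

class ChurnClassifier(nn.Module):
    """Small binary classifier over tabular features."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = ChurnClassifier(n_features=10)
batch = torch.randn(4, 10)                   # dummy batch of 4 records
probabilities = torch.sigmoid(model(batch))  # churn probabilities

# Export the weights so the artifact can be registered with the platform.
torch.save(model.state_dict(), "churn_classifier.pt")
```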

Which programming languages are supported to run the custom AI models?

Custom AI models are typically built with the machine learning and deep learning frameworks noted above, such as TensorFlow and PyTorch. Several custom AI models are already in production for BDB customers, for example:

  • Churn prediction models (classification)

  • Sales prediction model (time-series forecasting)

  • Collaborative filtering model (recommendation system)

  • Text classification model (Transformer-based deep learning)

  • Logo detection model (computer-vision deep learning)

Where are the AI models stored?

The models trained on the BDB Platform are stored in object storage (for example, AWS S3, Azure Blob Storage, or an attached volume mounted with the deployment).
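
For illustration, the sketch below pushes a trained model artifact to S3-compatible object storage with boto3. This shows only the underlying storage operation, not the platform's own API; the bucket name and object key are hypothetical.

```python
# Sketch of the underlying storage operation only, not the platform's own API:
# pushing a trained model artifact to S3-compatible object storage with boto3.
# Bucket name and object key are hypothetical.
import boto3

s3 = boto3.client("s3")   # credentials and region come from the environment

s3.upload_file(
    Filename="churn_classifier.pt",            # local artifact from training
    Bucket="bdb-model-store",                  # placeholder bucket
    Key="models/churn_classifier/v1/model.pt", # versioned object key
)
```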

How can you perform MLOps using the BDB Platform?

The BDB Platform has a Data Pipeline module for performing various MLOps tasks. The following is an example of an MLOps pipeline in the BDB Platform (a rough code sketch of these stages follows the list):

  1. Data Ingestion: Data is collected from various sources and ingested into BDB Pipeline.

  2. Data Preparation: The data is then cleaned, transformed, and prepared for modeling.

  3. Model Development: In this step, a machine learning model is developed using the prepared data. The model is trained, tested, and optimized using various algorithms and techniques.

  4. Model Deployment: The trained model is then deployed to a production environment, such as BDB Pipeline, for real-time predictions.

  5. Model Monitoring: The model performance is continuously monitored and evaluated using various metrics to ensure its accuracy and reliability.

  6. Model Update: If necessary, the model can be updated and retrained with new data.

  7. Deploy Updates: The updated model can then be deployed to the production environment, replacing the previous version.

This pipeline is automated and optimized to ensure efficient and reliable delivery of machine learning models into production. The BDB Platform provides a unified environment that allows data scientists, data engineers, and DevOps teams to collaborate and streamline the MLOps process.
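
As a rough, hypothetical sketch of these stages in plain Python (all file, function, and model names are placeholders, and scikit-learn stands in for the platform's actual training step):

```python
# All names below are illustrative placeholders, not the BDB Pipeline API;
# scikit-learn stands in for the platform's actual training step.
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def ingest() -> pd.DataFrame:
    # 1. Data Ingestion: pull records from a source system (CSV as a stand-in).
    return pd.read_csv("events.csv")             # hypothetical source file


def prepare(df: pd.DataFrame) -> pd.DataFrame:
    # 2. Data Preparation: basic cleaning and transformation.
    return df.dropna()


def develop(df: pd.DataFrame) -> LogisticRegression:
    # 3. Model Development: train and evaluate a candidate model.
    X, y = df.drop(columns=["label"]), df["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("holdout accuracy:", accuracy_score(y_te, model.predict(X_te)))
    return model


def deploy(model) -> None:
    # 4./7. Model Deployment: publish the artifact for real-time serving.
    joblib.dump(model, "model_v1.joblib")        # stand-in for platform deploy


def monitor(model, df: pd.DataFrame) -> float:
    # 5. Model Monitoring: recompute a quality metric on fresh data.
    X, y = df.drop(columns=["label"]), df["label"]
    return accuracy_score(y, model.predict(X))


if __name__ == "__main__":
    data = prepare(ingest())
    model = develop(data)
    deploy(model)
    # 6. Model Update: retrain and redeploy when monitored quality degrades.
    if monitor(model, data) < 0.8:
        deploy(develop(data))
```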

Does the MLOps pipeline have the capability of automatically initiating the re-training process based on the observed drift in the models?

No, automatic re-training is not currently provided on the platform. This feature is on BDB's roadmap and is planned for the version 8.5 release in July 2023.

Is there a limit on the size of models that can be hosted by the platform?

The BDB Platform is auto-scalable and does not impose restrictions based on model size; the practical limit depends on the infrastructure allocated for model development and model inferencing.
