# Ingestion Connector

| Requirement | Evaluation | Remarks | Additional Remarks |
| --- | --- | --- | --- |
| MS SQL Server | Very high | MS SQL Server connection via Data Center for data dissemination. Pipeline reader and writer are available out of the box. | |
| S3 Buckets | High | Out-of-the-box reader and writer are available in the Data Pipeline. | |
| Azure Blobs | High | Out-of-the-box reader and writer are available in the Data Pipeline. | |
| SFTP (CSV) | High | Out-of-the-box reader and writer are available in the Data Pipeline. | |
| MQTT | Very high | An MQTT Producer/Consumer component is available in the Data Pipeline. | |
| Kafka Topics | Very high | The Data Pipeline provides Kafka consumer and producer components. | |
| AMQP | | Currently not available; the platform provides this ability via a custom connector. The Python scripting component can help achieve this. | |
| Oracle DB | Very high | Oracle DB connection via Data Center for data dissemination. Pipeline reader and writer are available out of the box. | |
| DB2 AS/400 | Medium | Custom connector via Python scripting. | |
| DB2 mainframe | Medium | Custom connector via Python scripting. | |
| MongoDB | Very high | MongoDB connection via Data Center for data dissemination. Pipeline reader and writer are available out of the box. | |
| Elasticsearch | Very high | Elasticsearch connection via Data Center for data dissemination. Out-of-the-box Elasticsearch reader and writer components are available in the Data Pipeline. | |
| FluxDb | Medium | Custom connector via Python scripting. | |
| Custom SDK | Medium | Custom connector via Python scripting. | |
| SCP | Medium | SSH-enabled file transfer is supported in the Data Pipeline. | |
| Azure Data Lake (V1, V2) | High | Custom connector via Python scripting. | |
| Others | High | The BDB Platform has the following built-in connectors: | |

- Hive
- SAP HANA
- Data Set
- Data Store
- Data Store Metadata
- Data Sheet
- Amazon
- App Store
- Bing Ads
- Dropbox
- FTP Server
- Facebook
- Facebook Ads
- Firebase DB
- Fitbit
- Flipkart
- Google AdWords
- Google Analytics
- Google Big Query
- Google Forms
- Google Sheet
- HubSpot
- JIRA
- Lead Squared
- LinkedIn
- LinkedIn Ads
- MS Dynamics
- Mailchimp
- MongoDB for BI
- PostgreSQL
- QuickBooks
- Salesforce
- ServiceNow
- Twitter
- Twitter Ads
- Yelp
- YouTube
- ZOHO Books
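Several requirements above (AMQP, DB2 AS/400, DB2 mainframe, FluxDb, Custom SDK) are covered by the same pattern: a custom connector built with the Data Pipeline's Python scripting component. The BDB scripting API itself is not documented on this page, so the sketch below only illustrates the general shape such a connector might take — a source class that normalizes raw records into JSON-serializable dictionaries for a downstream pipeline stage. All class and field names here are illustrative assumptions, not BDB APIs.

```python
import json
from typing import Iterable, Iterator


class CustomSourceConnector:
    """Illustrative shape of a custom ingestion connector.

    Hypothetical example: a real connector would wrap a client
    library (e.g. an AMQP or DB2 driver) instead of an in-memory list.
    """

    def __init__(self, source_records: Iterable[str]):
        # Stand-in for a connection handle to the external system.
        self._source = source_records

    def read(self) -> Iterator[dict]:
        # Normalize each raw record into a JSON-serializable dict,
        # the kind of shape a pipeline scripting component can forward.
        for raw in self._source:
            yield {"payload": raw, "source": "custom"}


# Usage: drain the connector into a batch for downstream processing.
connector = CustomSourceConnector(["event-1", "event-2"])
batch = list(connector.read())
print(json.dumps(batch))
```

In a real deployment, `read()` would poll or subscribe to the external system and the scripting component would emit each dictionary as a pipeline event rather than printing it.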
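For the SFTP (CSV) requirement, the pipeline's out-of-the-box reader handles the transfer itself; the step that usually needs clarification is turning a fetched CSV file into structured records. The sketch below shows only that parsing step using the Python standard library, with an in-memory string standing in for a file downloaded over SFTP (the transfer itself, e.g. via a library such as paramiko, is assumed and not shown).

```python
import csv
import io


def csv_to_records(csv_text: str) -> list[dict]:
    """Parse CSV content (e.g. a file pulled over SFTP) into row dicts."""
    # DictReader uses the header row as field names for each record.
    return list(csv.DictReader(io.StringIO(csv_text)))


# A small sample standing in for a file fetched from the SFTP server;
# the column names are illustrative only.
sample = "plant,units\nPune,120\nChennai,95\n"
records = csv_to_records(sample)
print(records)
```

Each record then carries named fields, which is the form most downstream pipeline transformations and writers expect.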
