Feature List
  • 8.x
    • Team Feature
      • Data-Prep Features
      • Pipeline Features
        • What's New Window Pop-up
        • Failure Db-Sync
        • Testing for Kafka 3.1.0
        • Python 3.10 R&D
        • Job History
        • SPC (Single Page for Configuration)
        • Python Jobs
        • Failure Alerts
        • Event Channel Alerts
        • Pipeline Error handling
        • Pipeline : PySpark Component (PySpark Job)
      • Dashboard Charting
        • Widget as component
        • Knowledge Graph Chart
          • Sample Library based code
        • Word Cloud
        • Tile component
        • Sankey Chart
        • Model as API Connector
        • Dataprep recipe in Dataset selection
        • Decomposition Enhancement
      • Python Upgrade
        • Core Platform : Data Services
        • Core Platform : Data Catalog
        • Core Platform : Data Center
        • Data Science Lab
      • Sonar Code Scan automation by DevOps
      • DS Lab PySpark Project
      • Core Platform
        • Tag Feature For Data Connector, Dataset, DataStore, etc.
        • DataStore & Metadatastore Migration
        • MongoDB & ClickHouse Support For DataSheet
        • Data As API WorkBench
        • Pagination in Home, DataCenter, Dataset, DataStore, etc.
        • Sharing Data Connector & Dataset with View or Edit Permission
        • Core Monitoring & Alerting
      • Data Science Lab
        • Auto Forecasting Requirements
          • User Input
          • Forecasting Method
          • Explainability
        • DSLAB Sprint May 1-May 12, 2023
        • DS LAB Sprint Apr 10-Apr 21
        • Provide Static Variables for DSLAB Component In AutoML
        • Scheduler For DSLAB Scripts
        • Optimisation of Model Explainability code
    • QA
    • DevOps

Pipeline : PySpark Component (PySpark Job)

PySpark code can be built in a Data Science Lab PySpark project and deployed as a component in a pipeline. This allows you to utilise the distributed processing capabilities of Spark together with the flexibility of Python.
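As a rough illustration, a job deployed this way might resemble the minimal sketch below. The input path, output path, column names, and file formats are hypothetical placeholders rather than the platform's documented interface; how data actually flows between pipeline components depends on the pipeline configuration.

```python
# Minimal sketch of a PySpark job that could run as a pipeline component.
# Paths, formats, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-pyspark-component").getOrCreate()

# Read data produced by the previous pipeline component (assumed location).
events = spark.read.parquet("/data/input/events")

# Example transformation: count events per user on Spark's distributed engine.
counts = events.groupBy("user_id").agg(F.count("*").alias("event_count"))

# Write results where the next pipeline component can pick them up (assumed).
counts.write.mode("overwrite").parquet("/data/output/event_counts")

spark.stop()
```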
