
Push Pipeline (to VCS/GIT)


The Version Control feature allows the user to maintain versions of a pipeline while the same pipeline undergoes further development and enhancement.

The Push to VCS and Pull Pipeline from GIT features are available on the List Pipelines and Pipeline Editor pages.

Pushing a Pipeline into VCS

  • Navigate to the Pipeline Editor page for a Pipeline.

  • Click the Push Pipeline icon for the selected data pipeline.

  • The Push into Version Controlling System dialog box appears.

  • Provide a Commit Message (required) for the data pipeline version.

  • Select a Push Type from the below-given choices to push the pipeline:

    1. Version Control: For versioning of the pipeline in the same environment.

    2. GIT Export (Migration): This is for pipeline migration. The pushed pipeline can be migrated to the destination environment from the migration window in the Admin Module.

  • Click the Ok option.

  • A notification message appears to confirm the completion of the action.
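These actions are performed through the UI. For teams that script their deployment steps, the sketch below illustrates roughly what an equivalent push request could look like against a REST-style backend. It is an illustration only: the endpoint path, the commitMessage and pushType fields, the token handling, and the pipeline identifier are assumptions made for the example, not documented parts of the platform's API.

```python
import requests

# Hypothetical values -- replace with your deployment's real host, token,
# and pipeline identifier. None of these names come from the product docs.
BASE_URL = "https://your-platform-host/api"
API_TOKEN = "<api-token>"
PIPELINE_ID = "<pipeline-id>"

def push_pipeline(pipeline_id: str, commit_message: str, push_type: str) -> dict:
    """Push a pipeline version to the VCS (hypothetical endpoint).

    push_type mirrors the two UI choices:
      "VERSION_CONTROL" - versioning in the same environment
      "GIT_EXPORT"      - export for migration to another environment
    """
    response = requests.post(
        f"{BASE_URL}/pipelines/{pipeline_id}/push",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"commitMessage": commit_message, "pushType": push_type},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    return response.json()

if __name__ == "__main__":
    result = push_pipeline(PIPELINE_ID, "Add schema validation step", "VERSION_CONTROL")
    print(result)
```

The pushType values in the sketch simply mirror the two options offered by the Push into Version Controlling System dialog box.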

Check out the below-given illustrations on how to perform Version Control and Pipeline Migration.

Please Note:

  • The user also gets an option to Push the pipeline to GIT. This action is considered Pipeline Migration.

Pulling a Pipeline

This feature is for pulling previously pushed versions of a pipeline that were committed by the user. It can help a user recover lost pipelines or undo unwanted modifications made to a pipeline.

Check out the walk-through on how to pull a pipeline version from the VCS.

  • Navigate to the Pipeline Editor page.

  • Select a data pipeline from the displayed list.

  • Click the Pull from GIT icon for the selected data pipeline.

  • The Pull from GIT dialog box appears.

  • Select the data pipeline version by marking the given checkbox.

  • Click the Ok option.

  • A confirmation message appears to inform the user that the concerned pipeline workflow has been imported.

  • Another confirmation message appears to inform the user that the concerned pipeline workflow has been pulled.
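As with pushing, pulling a version is a UI action; the sketch below only shows how the equivalent calls might be scripted. The /versions and /pull endpoints, the versionId field, and the authentication header are hypothetical placeholders rather than the platform's documented API.

```python
import requests

# Hypothetical values -- replace with your deployment's real host, token,
# and pipeline identifier.
BASE_URL = "https://your-platform-host/api"
API_TOKEN = "<api-token>"
PIPELINE_ID = "<pipeline-id>"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def list_versions(pipeline_id: str) -> list:
    """List the versions of a pipeline previously committed to the VCS."""
    response = requests.get(
        f"{BASE_URL}/pipelines/{pipeline_id}/versions",
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def pull_version(pipeline_id: str, version_id: str) -> dict:
    """Pull the selected version; the current workflow is replaced by it."""
    response = requests.post(
        f"{BASE_URL}/pipelines/{pipeline_id}/pull",
        headers=HEADERS,
        json={"versionId": version_id},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    versions = list_versions(PIPELINE_ID)
    print(versions)  # inspect the available versions before choosing one to pull
```

Listing the committed versions first mirrors the dialog's checkbox selection: inspect what is available, then pull exactly one version.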

Please Note:

  • The pipeline that you pull will be changed to the selected version. Please make sure to manage the versions of the pipeline properly.

A pipeline pushed to the VCS using the Version Control option can be pulled directly using the Pull Pipeline from GIT icon.

Refer to Migrating Pipeline, described as a part of GIT Migration (under the Administration section), on how to pull an exported/migrated pipeline version from GIT.
