
List Jobs

This option lists all the Jobs saved by the logged-in user.



The List Jobs option opens the list of Jobs available to the logged-in user. All the Jobs saved by a user are listed on this page. By clicking a Job name, the Details tab opens on the right side of the page with the basic details of the selected Job.

Job List

  • Navigate to the Data Pipeline Homepage.

  • Click on the List Jobs option.

  • The List Jobs page opens, displaying the created Jobs.

Job Details & History Tabs

  • Click on a Job from the displayed list.

  • This will open a panel containing three tabs:

    • Job Details:

      • Tasks: Indicates the number of tasks used in the job.

      • Created By: Indicates the name of the user who created the job.

      • Created Date: Date when the job was created.

      • Updated By: Indicates the name of the user who updated the job.

      • Updated Date: Date when the job was last updated.

      • Cron Expression: A string that specifies the schedule on which the job should run (see the sketch after this list).

      • Trigger Interval: Interval at which the job is triggered (e.g., every 5 minutes).

      • Next Trigger: Date and time of the next scheduled trigger for the job.

      • Description: Description of the job provided by the user.

    • Total Job Config:

      • Total Allocated CPU: Total allocated CPU cores.

      • Total Allocated Memory: Total allocated memory in megabytes (MB).

      • Total Allocated Min Memory: Total minimum allocated memory in megabytes (MB).

      • Total Allocated Max Memory: Total maximum allocated memory in megabytes (MB).

      • Total Allocated Max CPU: Total maximum allocated CPU cores.

      • Total Allocated Min CPU: Total minimum allocated CPU cores.

    • History:

      • Provides relevant information about the selected job's past runs, including success or failure status.

      • A Clear option is available to clear the job history.

      • A Refresh icon is available to refresh the displayed job history.

  • View: Redirects the user to the Job workspace.

  • Share: Allows the user to share the selected job with any other user.

  • Edit: Enables the user to edit the job's information. This option is disabled while the job is active.

  • On the List Jobs page, the user can view and download the Pod logs for all instances by clicking the View System Logs option in the Job History tab. For reference, please see the image below.

  • Once the user clicks the View System Logs option, a drawer panel opens from the right side of the window. The user can select the instance for which the system logs should be downloaded from the Select Hostname drop-down. For reference, please see the image below.

  • Clear: Clears all the job run history from the History tab. Please refer to the image below for reference.
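
The scheduling fields above (Cron Expression, Trigger Interval, and Next Trigger) follow common cron semantics. The snippet below is a minimal, illustrative sketch only: it assumes a standard five-field cron string and uses the third-party croniter package; the exact cron dialect accepted by the platform's Scheduler may differ.

```python
# Minimal sketch: how a Cron Expression relates to the Next Trigger and
# Trigger Interval values shown in the Job Details tab.
# Assumptions: a standard five-field cron string and the third-party
# `croniter` package (pip install croniter); the platform's cron dialect may differ.
from datetime import datetime

from croniter import croniter

cron_expression = "*/5 * * * *"   # hypothetical schedule: every 5 minutes
now = datetime.now()

schedule = croniter(cron_expression, now)
next_trigger = schedule.get_next(datetime)       # first run after `now`
following_trigger = schedule.get_next(datetime)  # the run after that

print("Next Trigger:", next_trigger)
print("Trigger Interval:", following_trigger - next_trigger)  # e.g. 0:05:00
```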

Searching Job

The user can search for a specific Job by using the Search bar on the Job List page. Typing a term lists all the existing Jobs whose names contain it. E.g., typing sand lists all the existing Jobs containing the word sand, as displayed in the following image:

The user can also customize the Job List by choosing from the available filters, such as Job Status, Job Type, and Job Running Status:

Please Note:

  • The user can open the Job Editor for the selected Job from the list by clicking the View icon.

  • The user can view and download logs only for Jobs that have either run successfully or failed. Logs for interrupted Jobs cannot be viewed or downloaded.

Job Monitoring: Redirects the user to the Job Monitoring page.

Delete: Allows the user to delete the job. The deleted job will be moved to Trash.
