Creating a New Job
Jobs in the BDB Platform are used to ingest and transfer data from various sources. They enable users to transform, unify, and cleanse data, making it ready for analytics and business reporting, all without relying on a Kafka topic, thereby ensuring faster data flow.
This section provides step-by-step instructions for creating a new job using the Jobs interface.
Job Configuration
Job Name: Enter a unique name for the new job.
Description (Optional): Provide additional details about the purpose of the job.
Job Type: Select the job type from the drop-down menu. Supported job types include:
Spark Job
PySpark Job
Python Job
Script Executor
Example: Select Spark Job for distributed data processing (a sketch of the kind of script such a job runs follows these configuration steps).
Node Pool: From the drop-down menu, choose the node pool where the job will be executed.
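For context, the sketch below shows the kind of script a PySpark Job might execute: it ingests raw data, cleanses it, and writes the result for analytics. The file paths, column names, and application name are assumptions for illustration, not values required by the platform.

```python
# Minimal sketch of a script a PySpark Job might execute.
# Paths, column names, and the app name are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-cleanse").getOrCreate()

# Ingest raw data from a source location (path is hypothetical).
raw = spark.read.csv("s3a://raw-zone/customers.csv", header=True, inferSchema=True)

# Transform and cleanse: drop duplicate records and normalize a text column.
clean = (
    raw.dropDuplicates(["customer_id"])
       .withColumn("email", F.lower(F.trim(F.col("email"))))
)

# Load the unified result for analytics and reporting (path is hypothetical).
clean.write.mode("overwrite").parquet("s3a://curated-zone/customers/")

spark.stop()
```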
Concurrency Policy
If scheduling is enabled, choose one of the following concurrency policies:
Allow: Runs new tasks in parallel even if previous tasks are still executing.
Forbid: Ensures only one task runs at a time; subsequent tasks wait until the previous execution is complete.
Replace: Terminates any ongoing task if a new scheduled instance starts.
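The behavioral difference between the three policies can be illustrated with a small, generic simulation. This sketch reflects only the semantics described above; it is not the platform's actual scheduler.

```python
# Generic illustration of the three concurrency policies described above.
# It simulates what happens when a new scheduled run fires while earlier
# runs are still active; it is not the BDB Platform's implementation.

def schedule_run(policy: str, running: list, new_task: str) -> list:
    """Return the set of active runs after new_task fires."""
    if policy == "Allow":
        # New run starts in parallel with anything already executing.
        return running + [new_task]
    if policy == "Forbid":
        # Only one run at a time; the new run waits if one is active.
        return running if running else [new_task]
    if policy == "Replace":
        # Any ongoing run is terminated and replaced by the new one.
        return [new_task]
    raise ValueError(f"Unknown policy: {policy}")

# A run is still active when the next scheduled instance fires:
active = ["run-1"]
print(schedule_run("Allow", active, "run-2"))    # ['run-1', 'run-2']
print(schedule_run("Forbid", active, "run-2"))   # ['run-1'] (run-2 waits)
print(schedule_run("Replace", active, "run-2"))  # ['run-2']
```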
Using these steps, users can create jobs that automate data ingestion, transformation, and loading processes in a fully managed environment.
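As a recap, the choices made in the steps above can be pictured as a single job definition. The representation below is purely hypothetical; the field names are illustrative and do not reflect the BDB Platform's actual schema.

```python
# Purely hypothetical summary of the configuration captured above.
# Field names are illustrative, not the BDB Platform's actual schema.
job_definition = {
    "name": "customer-cleanse",                           # Job Name: must be unique
    "description": "Cleanse and unify customer records",  # optional
    "job_type": "PySpark Job",                            # one of the supported types
    "node_pool": "default-pool",                          # assumed pool name
    "schedule_enabled": True,
    "concurrency_policy": "Forbid",                       # Allow | Forbid | Replace
}
```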