
Sandbox Reader

A Sandbox reader is used to read and access data within a configured sandbox environment.


All component configurations are classified broadly into the following sections:

  • Basic Information

  • Meta Information

  • Resource Configuration

  • Connection Validation

Before using the Sandbox Reader component to read a file, the user needs to upload the file to the Data Sandbox under the Data Center module.

Please go through the given walk-through for uploading a file to the Data Sandbox under the Data Center module.

Check out the given video on how to configure a Sandbox Reader component.

Steps to configure Sandbox Reader

  • Navigate to the Data Pipeline Editor.

  • Expand the Readers section provided under the Component Palette.

  • Drag and drop the Sandbox Reader component to the Workflow Editor.

  • Click on the dragged Sandbox Reader component to open the component properties tabs.

Basic Information

This is the default tab that opens while configuring the component.

  • Invocation Type: Select an invocation mode from the Real-Time or Batch options by using the drop-down menu.

  • Deployment Type: It displays the deployment type for the reader component. This field comes pre-selected.

  • Container Image Version: It displays the image version for the docker container. This field comes pre-selected.

  • Failover Event: Select a failover Event from the drop-down menu.

  • Batch Size (min 10): Provide the maximum number of records to be processed in one execution cycle (Minimum limit for this field is 10).

Configuring meta information of Sandbox Reader component:

  • Storage Type: The user will find two options here:

    • Network: This option will be selected by default. In this option, the following fields will be displayed:

      • File Type: Select the type of the file to be read. Supported file types include CSV, JSON, PARQUET, AVRO, XML, and ORC.

      • Schema: Enter the Spark schema of the file in JSON format (a sample schema is given after this list).

      • Sandbox Folder Path: Enter the Sandbox folder name where the data is stored in part files.

      • Limit: Enter the number of records to be read.

    • Platform: In this option, the following fields will be displayed:

      • File Type: Select the type of the file to be read. The supported file types are CSV, JSON, PARQUET, AVRO, XML, and ORC.

      • Sandbox Name: This field appears once the file type is selected. It lists all the Sandbox names available for the selected file type; select the required Sandbox name from the drop-down.

      • Sandbox File: This field displays the name of the sandbox file to be read. It is filled automatically when the Sandbox name is selected.

      • Limit: Enter the number of records to be read.

  • Query: Enter a Spark SQL query, using inputDf as the table name (see the sample query below).

  • Column Filter: Use this section of the Meta Information tab to read data from specific columns of the table instead of the complete table. Select the columns to be read; to rename a column, enter the new name in the alias name field, otherwise keep the alias name the same as the column name. Then select a Column Type from the drop-down menu.

  • Use the Download Data and Upload File options to select the desired columns.

    • Upload File: The user can upload files from the system (CSV, JSON) using the Upload File icon (file size must be less than 2 MB).

    • Download Data (Schema): Users can download the schema structure in JSON format by using the Download Data icon.


  • Partition Columns: To read a specific partition, enter the name of the partitioned column.
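
Sample Schema for Sandbox Reader:

The Schema fields above expect the JSON representation of a Spark schema. The following PySpark snippet is only an illustrative sketch (the column names team, monthly_salary, and experience are hypothetical and simply match the sample query below); running it prints the JSON string that can be pasted into the Schema field:

from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Build the schema with the columns expected in the sandbox file
schema = StructType([
    StructField("team", StringType(), True),
    StructField("monthly_salary", DoubleType(), True),
    StructField("experience", DoubleType(), True),
])

# Print the JSON representation to paste into the Schema field
print(schema.json())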

Sample Query for Sandbox Reader:

select team as Team, Avg(monthly_salary) as Average_Salary,
Avg(experience) as Average_Experience from inputDf group by team;

Please Note:

  • The (*) symbol indicates that the field is mandatory.

  • Either table or query must be specified for the data readers except for SFTP Reader.

  • Column Filter: The selected Column Type must match the actual data type of the column; there should not be a data type mismatch for any of the Reader components.

Fields in the Meta Information tab may vary based on the selected File Type. All the possibilities are mentioned below:

  • CSV: The following fields will display when CSV is selected as File Type:

    • Header: Enable the Header option to read the header row of the file.

    • Infer Schema: Enable the Infer Schema option to infer the actual schema of the columns in the CSV file.

    • Multiline: Enable the Multiline option to read multiline strings in the data.

    • Schema: This field is visible only when the Header option is enabled. Enter the Spark schema in JSON format (the same format as the sample schema given above) to filter out the bad records. To filter the bad records, the user needs to map the failover Kafka Event in the Failover Event field of the Basic Information tab.

  • JSON: The Multiline and Charset fields are displayed when JSON is selected as the File Type. Enable the Multiline option if the file contains any multiline strings.

  • PARQUET: No extra field gets displayed with PARQUET as the selected File Type.

  • AVRO: This File Type provides two drop-down menus.

    • Compression: Select either the Deflate or Snappy option.

    • Compression Level: This field appears for the Deflate compression option. It provides 0 to 9 levels via a drop-down menu.

  • XML: Select this option to read the XML file. If this option is selected, the following fields will be displayed:

    • Infer schema: Enable this option to infer the actual schema of the columns.

    • Path: Provide the path of the file.

    • Root Tag: Provide the root tag from the XML files.

    • Row Tags: Provide the row tags from the XML files.

    • Join Row Tags: Enable this option to join multiple row tags.

  • ORC: Select this option to read the ORC file. If this option is selected, the following fields will be displayed:

    • Push Down: In the ORC (Optimized Row Columnar) file format, "push down" typically refers to the ability to push predicate filters down to the storage layer for processing. This field has two options:

      • True: When push-down is set to True, it indicates predicate filters can be pushed down to the ORC storage layer for filtering rows at the storage level. This can improve query performance by reducing the amount of data that needs to be read into memory for processing.

      • False: When push down is set to False, predicate filters are not pushed down to the ORC storage layer. Instead, filtering is performed after the data has been read into memory by the processing engine. This may result in more data being read and potentially slower query performance compared to when push-down is enabled.
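
The True/False behaviour described above corresponds to the standard ORC predicate-pushdown setting in Spark. The snippet below is only an illustrative PySpark sketch of that concept (the path and column name are hypothetical), not the component's internal implementation:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Enable predicate pushdown so filters are evaluated at the ORC storage layer
spark.conf.set("spark.sql.orc.filterPushdown", "true")

# Read an ORC dataset and apply a filter; with pushdown enabled, stripes whose
# statistics cannot match the predicate are skipped instead of being read into memory
df = spark.read.orc("/sandbox/employee_orc")
df.filter(df.monthly_salary > 50000).show()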
