Python Script


The Python Script component works like a standard Python interpreter, executing a user-provided script inside a pipeline workflow.

Please Note: The Python Script component can be used as a Reader, an API Data Ingestion component, a Transformation, or a Writer component. It can only return data as either a pandas DataFrame or a list of dictionaries.

All component configurations are broadly classified into three sections:

  • Basic

  • Metadata

  • Resource Configuration

Steps to configure Python Script (Custom Python Script)

  • Drag and drop the Python Script component onto the Workflow Editor.

  • The Python script reads data from an input event and passes the processed data into an output event, so create and connect two events to the Python Script component.

  • One reader is needed to pass data to the input event (in this case, the ES Reader component is used).

  • Click the dragged Python Script component to open the component properties tabs.

Basic Information Tab

This is the default tab that opens while configuring the Python Script component.

  • Invocation Type: Select an invocation type from the drop-down menu to set the running mode of the component. The only supported invocation type for this component is Real-Time.

  • Deployment Type: It displays the deployment type for the component. This field comes pre-selected.

  • Container Image Version: It displays the image version for the docker container. This field comes pre-selected.

  • Failover Event: Select a failover Event from the drop-down menu.

  • Batch Size: Provide the maximum number of records to be processed in one execution cycle (the minimum value for this field is 10).

Meta Information Tab

Open the Meta Information tab and configure its fields.

  • Component Name: Provide a name for the Python Script component.

Please Note: The component name must not contain spaces or special characters. Use the underscore symbol (_) to separate words.

  • Python Script: Insert a Python script containing at least one function. Depending on the use case, the function may take no argument, a data frame argument, or a data frame together with custom arguments (see the examples at the end of this page).

  • Start Function Name: This drop-down menu lists all the function names used in the Python script. Select the function from which execution should start.

  • In Event Data Type: Select whether the input data is passed to the function as a data frame or a list.

  • External Library: Provide the external library names in this field. Insert multiple library names separated by commas.

  • Input Data: Use the custom argument names of the start function as keys and provide the required values (see the sketch below).
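
For instance, a minimal sketch of how Input Data keys map to custom arguments, based on the custom-argument example at the end of this page (the value 450 is illustrative):

def getdata(df, range):
    # 'df' receives the in-event data automatically;
    # 'range' is a custom argument supplied via the Input Data field,
    # using the key 'range' and a value such as 450.
    return df[df['Unit Price'] > range]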

Saving the Component Configuration

  • Click the Save Component in Storage icon.

  • After getting the success message, click the Update Pipeline icon to save the pipeline workflow.

  • Activate the Pipeline workflow.

  • Open the Log section to see the logs.

  • The Python Script component is now ready to read the data coming from the input event, transform it, and return the output data.

Please Note: Follow the instructions below while writing a Python script in the Data Pipeline:

  • The Python script needs to be written inside a valid Python function, i.e., the entire code body should sit inside the function's indentation (use 4 spaces per indentation level).

  • The Python script should have at least one main function. Multiple functions are acceptable, and one function can call another; a called function must be defined before it is used (see the sketch after this list).

    • It should be defined above the body of the calling function (if the called function is an outer function).

    • It should be defined above the calling statement (if the called function is an inner function).

  • Spaces are the preferred indentation method.

  • Do not use "type" as a function argument name, as it is a predefined Python built-in.

  • The script should always use UTF-8 encoding.

  • Single-quoted strings and double-quoted strings are considered the same in Python.

  • All the packages used in the function need to be imported explicitly before the function definition.

  • The Python script should return data in the form of a data frame or a list only. The form of the data should be defined while writing the function.

  • If the user uses Kafka event data for transformation, then the first argument of the function should be a data frame or list.

  • If the user needs an external library, its name must be mentioned in the External Library field. Multiple library names should be separated by commas.

  • If you need to pass external input to your main function, use the Input Data field. Each key name must match the corresponding argument name in the function, and the value is supplied as per the requirement.

  • The component can be used as a reader, a transformation, or a writer.
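
As a minimal sketch of these rules (hypothetical function names, assuming the in-event data contains a Unit Price column): imports appear at the top, and the helper function is defined above the main function that calls it.

import pandas as pd

def clean_prices(df):
    # Helper function: defined above its caller, as required.
    df['Unit Price'] = df['Unit Price'].astype(float)
    return df

def main_transform(df):
    # Main function: select this name as the Start Function Name.
    df = clean_prices(df)
    return df[df['Unit Price'] > 0]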

Python Script Examples:

The Custom Python Script transform component supports three types of scripts in the Data Pipeline.

1. As a Reader component: If you don't have any in-event, you can use a function with no arguments. For example:

import json
import requests
import pandas as pd

def getmovies_result():
    # Call a public movie API and load the JSON response.
    data = requests.get("http://www.omdbapi.com/?s=water&apikey=ba5d53d4")
    loaded_json = json.loads(data.content)
    # Keep only the search results and convert them to a DataFrame.
    data = loaded_json['Search']
    df = pd.DataFrame.from_dict(data, orient='columns')
    return df
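
When used as a reader in this way, the component has no in-event; the DataFrame returned by the function is passed to the connected out-event.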

2. As a Transformation component: If you have data on which to execute some operation, use the first argument of the function to receive it as a data frame or a list of dictionaries. For example:

def getdata(df):
    # Filter the in-event data to rows with Unit Price above 450.
    cond1 = df['Unit Price'] > 450
    filter_df = df[cond1]
    return filter_df

Here, df holds the data coming from the previous event, passed as an argument to the parameter of the function.

3. Custom Argument with Data: If there is a custom argument along with the data frame, the data comes from the previous event and the custom argument is passed as a parameter of the function. Here, df holds the data from the previous event, and the value of the second parameter, range, can be given in the Input Data section of the component.

def getdata(df, range):
    # 'df' holds the in-event data; 'range' is supplied via the
    # Input Data field (key: range).
    cond1 = df['Unit Price'] > range
    filter_df = df[cond1]
    return filter_df
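
For completeness, a minimal sketch (hypothetical name getdata_list) of the same filter when the In Event Data Type is set to List, in which case the function receives and returns a list of dictionaries:

def getdata_list(records):
    # 'records' arrives as a list of dictionaries when the
    # In Event Data Type is set to List.
    return [row for row in records if row['Unit Price'] > 450]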