Kafka Producer

The Kafka producer acts as a data source within the pipeline, generating and publishing messages to Kafka for subsequent processing and consumption.

The Kafka producer plays a crucial role in the Data Pipeline module, enabling reliable and scalable data ingestion into the Kafka cluster, where messages can be processed, transformed, and consumed by downstream components or applications.

Kafka's distributed and fault-tolerant architecture allows for scalable and efficient data streaming, making it suitable for various real-time data processing and analytics use cases.

This component produces messages to internal or external Kafka topics.

All component configurations are classified broadly into the following sections:

  • Basic Information

  • Meta Information

Follow the steps given below to configure the Kafka Producer component.

Steps to Configure Kafka Producer Component

The Kafka Producer component consumes the data from the previous event and produces it to a given Kafka topic. It can produce the data to a topic in the same environment or in an external environment, in CSV, JSON, XML, or Avro format. This data can be further consumed by a Kafka Consumer in the data pipeline.
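
To relate the component's behavior to the underlying client, the following is a minimal sketch of an equivalent hand-written producer, assuming the standard Apache Kafka Java client; the broker address, topic name, and payload are hypothetical placeholders, not values used by the component.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; the component supplies this from its
        // own environment (or from 'Bootstrap Server' for external topics).
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A JSON payload, one of the supported input record types.
            String json = "{\"id\": 1, \"name\": \"sample\"}";
            producer.send(new ProducerRecord<>("demo-topic", json));
        }
    }
}
```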

Drag and Drop the Component

  • Drag and drop the Kafka Producer Component to the Workflow Editor.

  • Click on the dragged Kafka Producer component to open the component properties tabs.

Basic Information Tab

  • Configure the Basic Information tab.

    • Invocation Type: Select the invocation type from the drop-down menu to set the running mode of the component. Select 'Real-Time' from the drop-down menu.

    • Deployment Type: It displays the deployment type for the component. This field comes pre-selected.

    • Container Image Version: It displays the image version for the Docker container. This field comes pre-selected.

    • Failover Event: Select a failover event from the drop-down menu.

    • Batch Size (min 10): Provide the maximum number of records to be processed in one execution cycle (the minimum limit for this field is 10); a rough sketch of the batch semantics follows this list.
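
The component's internals are not published, but as a rough illustration of the batch semantics, the following sketch drains at most Batch Size records from the incoming data per execution cycle; the queue type and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BatchDrain {
    // Collect at most 'batchSize' records for one execution cycle;
    // 'batchSize' mirrors the component's Batch Size field (minimum 10).
    static List<String> nextBatch(Queue<String> incoming, int batchSize) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < batchSize && !incoming.isEmpty()) {
            batch.add(incoming.poll());
        }
        return batch;
    }
}
```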

Meta Information Tab

Click on the Meta Information tab to open its properties fields, then configure them by providing the required details.

  • Topic Name: Specify the name of the topic where the user wants to produce data.

  • Is External: The user can produce the data to an external Kafka topic by enabling the 'Is External' option. The 'Bootstrap Server' and 'Config' fields will display after the 'Is External' option is enabled (see the external-cluster sketch after this list).

    • Bootstrap Server: Enter the bootstrap server details of the external Kafka cluster.

    • Config: Enter the configuration details for the external connection.

  • Input Record Type: It contains the following input record types:

    • CSV: The user can produce CSV data using this option. The 'Header' and 'Separator' fields will display if the user selects the CSV input record type (see the CSV sketch after this list).

      • Header: In this field, the user can enter the column names of the CSV data that is produced to the Kafka topic.

      • Separator: In this field, the user can enter the separator, such as a comma (,), that is used in the CSV data.

    • JSON: User can produce JSON data using this option.

    • XML: User can produce XML data using this option.

    • AVRO: The user can produce AVRO data using this option. The 'Registry', 'Subject', and 'Schema' fields will display if the user selects AVRO as the input record type (see the Avro sketch after this list).

      • Registry: Enter registry details.

      • Subject: Enter subject details.

      • Schema: Enter schema.

  • Host Aliases: In Apache Kafka, a host alias (also known as a hostname alias) is an alternative name that can be used to refer to a Kafka broker in a cluster. Host aliases are useful when you need to refer to a broker using a name other than its actual hostname (see the host-alias sketch after this list).

    • IP: Enter the IP.

    • Host Names: Enter the host names.
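
The sketches below illustrate some of the Meta Information fields described above. First, the CSV 'Header' and 'Separator' fields: a minimal sketch of how a header of id,name and a comma separator map a produced line to named columns (all names and values are illustrative, not the component's actual parsing code).

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvRecordMapper {
    // Map one CSV line to named columns using the 'Header' and
    // 'Separator' values (illustrative logic only).
    static Map<String, String> parse(String line, String header, String separator) {
        String[] names = header.split(separator);
        String[] values = line.split(separator);
        Map<String, String> record = new LinkedHashMap<>();
        for (int i = 0; i < names.length && i < values.length; i++) {
            record.put(names[i], values[i]);
        }
        return record;
    }

    public static void main(String[] args) {
        // Header "id,name" with separator "," turns "1,Alice"
        // into {id=1, name=Alice}.
        System.out.println(parse("1,Alice", "id,name", ","));
    }
}
```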
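
Next, the external-cluster and Avro fields ('Bootstrap Server', 'Config', 'Registry', 'Subject', 'Schema'). This is a hedged sketch assuming the standard Apache Kafka Java client with the Confluent Avro serializer and schema registry; every address, name, and schema below is a hypothetical example, and the exact keys the component accepts in its 'Config' field may differ.

```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExternalAvroProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // 'Bootstrap Server' field: the external cluster's address
        // (hypothetical value).
        props.put("bootstrap.servers", "external-kafka.example.com:9092");
        // 'Config' field: any additional client settings, e.g. security
        // options (hypothetical values).
        props.put("security.protocol", "SASL_SSL");
        // 'Registry' field: the schema registry URL used by the Avro
        // serializer (Confluent-style registry assumed here).
        props.put("schema.registry.url", "http://registry.example.com:8081");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "io.confluent.kafka.serializers.KafkaAvroSerializer");

        // 'Schema' field: an Avro record schema (illustrative).
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Customer\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"int\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");

        GenericRecord record = new GenericData.Record(schema);
        record.put("id", 1);
        record.put("name", "Alice");

        // The 'Subject' under which the schema is registered defaults to
        // the topic name plus "-value" (e.g. demo-topic-value).
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", record));
        }
    }
}
```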
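
Finally, the 'Host Aliases' fields amount to IP-to-hostname mappings like the hypothetical ones below, so that brokers advertised under an alias remain reachable even when DNS does not know that name.

```java
import java.util.Map;

public class HostAliasExample {
    public static void main(String[] args) {
        // Hypothetical mappings from the 'IP' field to the 'Host Names'
        // that should resolve to each address.
        Map<String, String> hostAliases = Map.of(
            "10.0.0.12", "kafka-broker-1",
            "10.0.0.13", "kafka-broker-2"
        );
        hostAliases.forEach((ip, host) -> System.out.println(host + " -> " + ip));
    }
}
```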

Saving the Component Configuration

  • After doing all the configurations, click the Save Component in Storage icon provided in the configuration panel to save the component.

  • A notification message appears to inform the user that the component configuration has been saved.
