# DB Writer

The DB Writer is a Spark-based writer component that gives you the capability to write data to multiple database sources.

All component configurations are classified broadly into the following sections:

* [Basic Information](https://docs.bdb.ai/data-pipeline-2/components/component-base-configuration)
* Meta Information
* [Resource Configuration](https://docs.bdb.ai/data-pipeline-2/components/resource-configuration)
* [Connection Validation](https://docs.bdb.ai/7.6/data-pipeline/components/connection-validation)

{% hint style="success" %}
*Please check out the given demonstration to configure the component.*
{% endhint %}

{% embed url="<https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fuq3RSHHup7fjHYaspk7y%2Fuploads%2F3fKUwcnQmIJIVu7uA5zY%2Fdb%20writer%20(online-video-cutter.com).mp4?alt=media&token=8817a387-b4b5-4a13-9e52-db39aa0a3a5c>" %}
Configuring the DB Writer Component as a part of Pipeline Workflow
{% endembed %}

### **Drivers Available**

<details>

<summary>Supported Drivers</summary>

* MySQL
* Oracle
* PostgreSQL
* MS-SQL
* ClickHouse
* Snowflake

</details>

{% hint style="info" %}
*<mark style="color:green;">Please Note:</mark>*

* *The ClickHouse driver in the Spark components uses the HTTP port, not the TCP port.*
* *It is always recommended to create the table before activating the pipeline; RDBMS enforces a strict schema, and a missing or mismatched table can cause the write to fail.*
{% endhint %}

### Save Modes <a href="#save-modes" id="save-modes"></a>

The RDBMS writer supports 3 save modes:

#### **Append**

As the name suggests, this mode adds all the records to the table without any validation.

#### **Overwrite**

This mode truncates the table and adds fresh records, so after every run the table contains only the records that are part of that batch.
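
For illustration, here is a minimal PySpark sketch of how these two save modes map onto Spark's JDBC writer. The connection details (`jdbc_url`, `target_table`, credentials) are placeholder assumptions, not values taken from this component.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("db-writer-demo").getOrCreate()

# A toy batch to write; in the pipeline this is the in-event data.
df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "value"])

# Hypothetical connection details -- replace with your own.
jdbc_url = "jdbc:postgresql://localhost:5432/demo_db"
props = {"user": "demo_user", "password": "demo_pass",
         "driver": "org.postgresql.Driver"}

# Append: adds all records to the table without any validation.
df.write.jdbc(url=jdbc_url, table="target_table", mode="append",
              properties=props)

# Overwrite: truncates the table, so after the run it holds only this batch.
(df.write
   .option("truncate", "true")   # truncate rather than drop/recreate
   .jdbc(url=jdbc_url, table="target_table", mode="overwrite",
         properties=props))
```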

#### **Upsert**

This operation allows the user to insert a new record or update existing data in a table. To configure it, provide the Composite Key.

The BDB Data Pipeline supports composite-key-based upsert; to specify multiple keys, separate them with commas, e.g., *key1, key2*. There is now also an option to **upload the Spark schema in JSON format.** This can greatly improve the speed of the write operation because the component skips schema inference and uses the provided schema instead.

<figure><img src="https://content.gitbook.com/content/q2i9CKCFbySxr6jRoJfA/blobs/Rhbs8pgxlj8v3DVKLlGm/MicrosoftTeams-image%20(67).png" alt=""><figcaption><p>Spark Schema upload</p></figcaption></figure>
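
For reference, a Spark schema in JSON format (as produced by `df.schema.json()`) typically looks like the minimal example below; the field names here are illustrative, not from this component.

```json
{
  "type": "struct",
  "fields": [
    {"name": "id", "type": "integer", "nullable": false, "metadata": {}},
    {"name": "name", "type": "string", "nullable": true, "metadata": {}},
    {"name": "updated_at", "type": "timestamp", "nullable": true, "metadata": {}}
  ]
}
```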

* **Sort Column:** This field will appear only when ***Upsert*** is selected as the ***Save mode***. If the batch contains multiple records with the same composite key but different values, the system identifies the latest record based on the Sort column: the Sort column defines the ordering of records, and the record with the highest value in it is considered the latest (see the sketch below).<br>

  <figure><img src="https://content.gitbook.com/content/q2i9CKCFbySxr6jRoJfA/blobs/369TtP74VMZctlqDg1Nx/image.png" alt=""><figcaption></figcaption></figure>
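
Conceptually, the de-duplication the Sort column performs can be sketched in PySpark as follows. This is only an illustration of the behavior described above, not the component's actual implementation; the DataFrame and column names are assumptions.

```python
from pyspark.sql import functions as F, Window

# batch_df: the incoming batch (assumed to exist), with composite key
# (key1, key2) and sort column updated_at -- all hypothetical names.
w = Window.partitionBy("key1", "key2").orderBy(F.col("updated_at").desc())

latest = (batch_df
          .withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)   # highest sort-column value per key wins
          .drop("_rn"))
```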

{% hint style="info" %}
*<mark style="color:green;">Please Note</mark>:*

* *If the selected save mode is **Upsert** and the driver (for the DB Writer) is **ClickHouse**, then a table with the **ReplacingMergeTree** table engine will be created in the database.*

* *Currently, the **Sort column** field is only available for the following drivers in the DB Writer: **MSSQL**, **PostgreSQL**, **Oracle**, **Snowflake**, and **ClickHouse**.*
{% endhint %}
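
For context, a ClickHouse table using the ReplacingMergeTree engine looks roughly like the sketch below; the table and column names are hypothetical, shown only to illustrate the engine the component creates for Upsert.

```sql
CREATE TABLE demo_db.target_table
(
    key1       UInt64,
    key2       String,
    value      String,
    updated_at DateTime
)
ENGINE = ReplacingMergeTree(updated_at)  -- version column: highest value survives
ORDER BY (key1, key2);                   -- rows sharing the sorting key are de-duplicated
```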

* **Database name:** Enter the Database name.

* **Table name:** Provide a table name where the data has to be written.

* **Enable SSL:** Check this box to enable SSL for this component. The Enable SSL feature appears only for three (3) drivers: ***MongoDB, PostgreSQL, and ClickHouse***.

* **Certificate Folder:** This option appears when the Enable SSL field is checked. Select from the drop-down the certificate folder containing the files that were uploaded in the Admin Settings. Please refer to the images given below for reference.

* **Schema File Name:** Upload the Spark Schema in JSON format.

* ***Query:*** In this field, you can write a DDL statement to create the table in the database where the in-event data has to be written. For example, refer to the image below:

<figure><img src="https://content.gitbook.com/content/q2i9CKCFbySxr6jRoJfA/blobs/q8XU17hLvQ9bfJGzXMRj/MicrosoftTeams-image%20(68).png" alt=""><figcaption><p>Writing Create table query in the query field of DB writer.</p></figcaption></figure>
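
As an illustration, a DDL statement entered in this field might look like the following; the table and column names are hypothetical.

```sql
CREATE TABLE IF NOT EXISTS target_table (
    id         INTEGER PRIMARY KEY,
    name       VARCHAR(255),
    updated_at TIMESTAMP
);
```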

<figure><img src="https://content.gitbook.com/content/q2i9CKCFbySxr6jRoJfA/blobs/2FTAYUMmRRjOOs4yHXeH/image.png" alt=""><figcaption><p>Meta Information of DB writer with enabled "Enable SSL" field</p></figcaption></figure>

