# DB Writer

All component configurations are classified broadly into the following sections:

* [Basic](https://docs.bdb.ai/7.6/data-pipeline/components/component-base-configuration)
* Metadata
* [Resource Configuration](https://docs.bdb.ai/7.6/data-pipeline/components/resource-configuration)
* [Connection Validation](https://docs.bdb.ai/7.6/data-pipeline/components/connection-validation)

{% hint style="success" %}
*Please check out the given demonstration to configure the component.*
{% endhint %}

{% embed url="<https://files.gitbook.com/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fuq3RSHHup7fjHYaspk7y%2Fuploads%2F3fKUwcnQmIJIVu7uA5zY%2Fdb%20writer%20(online-video-cutter.com).mp4?alt=media&token=8817a387-b4b5-4a13-9e52-db39aa0a3a5c>" %}
Configuring the DB Writer Component as a part of Pipeline Workflow
{% endembed %}

### **Drivers Available**

* MySQL
* Oracle
* PostgreSQL
* MS-SQL
* ClickHouse
* Snowflake

{% hint style="info" %}
*<mark style="color:green;">Please Note:</mark>*&#x20;

* *The ClickHouse driver in the Spark components uses the HTTP port, not the TCP port.*
* It is always recommended to create the table before activating the pipeline; RDBMS tables have a strict schema, and writing to a missing or mismatched table can result in errors.
{% endhint %}

### Save Modes <a href="#save-modes" id="save-modes"></a>

The RDBMS writer supports 3 save modes:

#### **Append**

As the name suggests, this mode adds all incoming records to the table without any validation.

#### **Overwrite**

This mode truncates the table and inserts fresh records. After every run, the table contains only the records that were part of that batch process.
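
In relational terms, the Append and Overwrite modes differ roughly as sketched below. This is only a conceptual illustration; the table and column names are hypothetical, and the actual statements issued by the component may differ.

```sql
-- Append: every run simply inserts the new batch (no truncation, no validation).
INSERT INTO sales_orders (order_id, amount, created_at)
VALUES (101, 250.00, '2024-01-15 10:30:00');

-- Overwrite: every run first clears the table, then inserts the batch,
-- so only the records of the latest run remain.
TRUNCATE TABLE sales_orders;
INSERT INTO sales_orders (order_id, amount, created_at)
VALUES (102, 980.00, '2024-01-16 09:05:00');
```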

#### **Upsert**

This operation allows users to insert new records or update existing data in a table. To configure it, provide the Composite Key.

The BDB Data Pipeline supports composite-key-based upserts; to use a composite key, specify the additional key columns as a comma-separated list, e.g., *key1, key2*. A conceptual SQL sketch of this upsert behavior is given after the figure below. The component also provides an option to **upload the Spark schema**. This can greatly improve the speed of the write operation, as the component skips schema inference and uses the provided schema instead.

<figure><img src="https://363587200-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fuq3RSHHup7fjHYaspk7y%2Fuploads%2FMYhU9zeZuo2UmVF4tnSa%2FMicrosoftTeams-image%20(67).png?alt=media&#x26;token=1008a27a-1947-4763-90f3-6879d9805240" alt=""><figcaption><p>Spark Schema upload</p></figcaption></figure>
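
As a rough illustration only (the component generates the actual statements internally), an upsert keyed on a composite key behaves like the following MySQL statements; the table, column, and key names are hypothetical.

```sql
-- Hypothetical target table with a composite unique key (key1, key2).
CREATE TABLE IF NOT EXISTS customer_metrics (
    key1   VARCHAR(50)  NOT NULL,
    key2   VARCHAR(50)  NOT NULL,
    metric DECIMAL(12, 2),
    UNIQUE KEY uq_composite (key1, key2)
);

-- Insert the incoming record, or update the existing row when (key1, key2) already exists.
INSERT INTO customer_metrics (key1, key2, metric)
VALUES ('region-01', 'cust-42', 1500.75)
ON DUPLICATE KEY UPDATE metric = VALUES(metric);
```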

{% hint style="info" %}
*<mark style="color:green;">Please Note</mark>: For the ClickHouse component, Upsert is comparatively slow. It is preferable to create a table with the **ReplacingMergeTree** engine and a view that reads from it using the FINAL clause. **In the component, keep the write mode set to Append**.*
{% endhint %}
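
A minimal sketch of this setup, with hypothetical table and column names, could look like the following ClickHouse DDL:

```sql
-- Target table: ReplacingMergeTree keeps the latest row per sorting key (key1, key2),
-- using updated_at as the version column.
CREATE TABLE IF NOT EXISTS customer_metrics
(
    key1       String,
    key2       String,
    metric     Decimal(12, 2),
    updated_at DateTime
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY (key1, key2);

-- View used for reads: FINAL collapses duplicates so only the latest version is returned.
CREATE VIEW IF NOT EXISTS customer_metrics_v AS
SELECT *
FROM customer_metrics FINAL;
```

The component can then write to `customer_metrics` in Append mode, while downstream consumers query the view to see deduplicated data.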

* ***Query:*** In this field, you can write a DDL statement to create the table in the database where the in-event data has to be written. For example, refer to the image below:

<figure><img src="https://363587200-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fuq3RSHHup7fjHYaspk7y%2Fuploads%2FPvQvOQwIZG5eKJ5E0gS4%2FMicrosoftTeams-image%20(68).png?alt=media&#x26;token=8dafef5d-a518-402b-84d3-bcac3c35aa69" alt=""><figcaption></figcaption></figure>
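
For instance, a DDL statement of the following form (shown here for MySQL, with hypothetical table and column names that should match the in-event data) could be provided in the Query field:

```sql
-- Create the target table up front so the writer finds a matching schema.
CREATE TABLE IF NOT EXISTS employee_details (
    emp_id     INT          NOT NULL,
    emp_name   VARCHAR(100),
    department VARCHAR(50),
    salary     DECIMAL(10, 2),
    PRIMARY KEY (emp_id)
);
```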

