# Resource Configuration

For each deployed component, you have the option to configure its resources, i.e. memory and CPU.

#### There are two component deployment types:

* **Docker**
* **Spark**

### Docker

![Docker Component Configuration Steps](https://972575688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FRYq1HgffNfbnIMWPu1D5%2Fuploads%2FQ8qbSGaOSSvgg7lUA51z%2FComponentConfigDocker.gif?alt=media\&token=4b00ede8-3149-4ad9-b707-c5bf914ca05c)

After the component and pipeline are saved, the component takes on the pipeline's default configuration, i.e. Low, Medium, or High. Once the pipeline is saved, a Configuration tab appears on the component, where several settings can be adjusted.

For Docker components, there are Request and Limit configurations, each with CPU and Memory options.

**CPU:** Specifies the number of cores assigned to the component.

{% hint style="info" %}
Please note that **1000** means 1 core in the configuration of Docker components.

A value of **100** assigns 0.1 core to the component.
{% endhint %}
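In other words, the CPU field is expressed in millicores. A minimal sketch of the conversion (the helper name is illustrative, not part of the product):

```python
def millicores_to_cores(value: int) -> float:
    """Convert a Docker component CPU setting (millicores) to cores."""
    return value / 1000

print(millicores_to_cores(1000))  # 1.0 core
print(millicores_to_cores(100))   # 0.1 core
```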

**Memory:** Specifies how much memory is dedicated to the component.

{% hint style="info" %}
Please note that **1024** means 1 GB in the configuration of Docker components.
{% endhint %}
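Analogously, the memory field is expressed in mebibytes, with 1024 corresponding to 1 GB. A minimal sketch (the helper name is illustrative):

```python
def memory_to_gb(value: int) -> float:
    """Convert a Docker component memory setting to GB, where 1024 = 1 GB."""
    return value / 1024

print(memory_to_gb(1024))  # 1.0 GB
print(memory_to_gb(512))   # 0.5 GB
```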

**Instances:** The number of instances controls parallel processing. If **N** instances are specified, **N** pods will be deployed.

### Spark

![Spark Component Configuration Steps](https://972575688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FRYq1HgffNfbnIMWPu1D5%2Fuploads%2FggVKg5lLYDRemI1fSA9F%2FComponentConfigSpark.gif?alt=media\&token=dd423bc1-2831-4c56-9270-940134de1e92)

Spark component configuration is slightly different from Docker components. When a Spark component is deployed, two kinds of pods come up:

* Driver
* Executor

The driver and executor configurations are specified separately.

**Instances:** The number of instances controls parallel processing. If **N** instances are specified in the executor configuration, **N** executor pods will be deployed.

{% hint style="info" %}
NOTE: As of the current release, the minimum requirement is 0.1 cores for the driver and 1 core for the executor. This may change in upcoming versions of Spark.
{% endhint %}
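For illustration only, separate driver and executor settings map naturally onto standard Spark configuration keys (`spark.driver.memory`, `spark.executor.instances`, etc.). The sketch below shows how such values could be assembled into `spark-submit` `--conf` arguments; the helper function and the sample values are hypothetical, not the product's actual mechanism:

```python
def build_spark_conf_args(driver_cores, driver_memory,
                          executor_cores, executor_memory, instances):
    """Assemble spark-submit --conf flags from separate driver/executor settings."""
    conf = {
        "spark.driver.cores": driver_cores,
        "spark.driver.memory": driver_memory,
        "spark.executor.cores": executor_cores,
        "spark.executor.memory": executor_memory,
        "spark.executor.instances": instances,  # N instances -> N executor pods
    }
    args = []
    for key, value in conf.items():
        args += ["--conf", f"{key}={value}"]
    return args

# Example: 1-core driver with 1g memory, 3 executors with 1 core and 2g each
print(build_spark_conf_args(1, "1g", 1, "2g", 3))
```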
