Chapter 11. Scaling Multicloud Object Gateway performance
Multicloud Object Gateway (MCG) performance can vary from one environment to another. Some applications require faster performance, which you can address by scaling the S3 endpoints.
The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default:
- Storage service
- S3 endpoint service
S3 endpoint service
The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default and that handles the heavy-lifting data digestion in the MCG. The endpoint service performs the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG.
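For example, you can list the endpoint pods and the S3 service that fronts them from the command line. This is a minimal sketch, assuming the default openshift-storage namespace, the default noobaa-endpoint pod naming, and a service named s3; the names can differ in your deployment.

# List the MCG S3 endpoint pods (named noobaa-endpoint-* in a default deployment)
$ oc get pods -n openshift-storage | grep noobaa-endpoint

# Show the S3 service that distributes requests across the endpoint pods
$ oc get service s3 -n openshift-storage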
11.1. Automatic scaling of Multicloud Object Gateway endpoints
The number of Multicloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. By default, each MCG endpoint pod is configured with requests of 1 CPU and 2 Gi memory, with limits matching the requests. When the CPU load on the endpoint exceeds 80% usage for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves the performance and serviceability of the MCG.
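The automatic scaling is driven by a HorizontalPodAutoscaler on the endpoint deployment, and the allowed endpoint range can be tuned through the endpoints section of the NooBaa custom resource. The following is a hedged sketch, assuming the default openshift-storage namespace and a NooBaa custom resource named noobaa; the minCount and maxCount values shown are placeholders, not recommendations.

# Inspect the autoscaler that scales the MCG endpoint pods
$ oc get hpa -n openshift-storage

# Optionally override the minimum and maximum number of endpoints (example values)
$ oc patch -n openshift-storage noobaa noobaa --type merge \
  --patch '{"spec":{"endpoints":{"minCount":3,"maxCount":10}}}'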
11.2. Scaling the Multicloud Object Gateway with storage nodes
Prerequisites
- A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG).
A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods.
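You can inspect the pods and the Persistent Volume Claims (PVCs) that back such a pool from the command line. A minimal sketch, assuming the default openshift-storage namespace; the exact pod, StatefulSet, and PVC names depend on the pool name you choose.

# Pods and PVCs backing the Kubernetes pool
$ oc get pods,pvc -n openshift-storage

# StatefulSets in the namespace; the pool's StatefulSet name is derived from the pool name
$ oc get statefulsets -n openshift-storage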
Procedure
- Log in to OpenShift Web Console.
- From the MCG user interface, click Overview → Add Storage Resources.
- In the window, click Deploy Kubernetes Pool.
- In the Create Pool step, create the target pool for the nodes that will be installed.
- In the Configure step, configure the number of requested pods and the size of each PV. One PV is created for each new pod.
- In the Review step, review the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes are deployed within the cluster. If external deployment is selected, you are provided with a YAML file to run externally.
- All nodes are assigned to the pool you chose in the first step and can be found under Resources → Storage resources → Resource name. An equivalent command-line approach is sketched after this procedure.
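As an alternative to the web interface, a PV-based pool can also be created with the MCG command-line tool. The following is a sketch under the assumption that the noobaa CLI is installed and that mcg-pv-pool-bs, the volume count of 3, and the 50 GB volume size are placeholder values to adjust for your environment.

# Create a PV-based pool with three 50 GB volumes (placeholder values)
$ noobaa backingstore create pv-pool mcg-pv-pool-bs \
  --num-volumes 3 --pv-size-gb 50 -n openshift-storage

# Confirm that the new pool reaches the Ready phase
$ oc get backingstore mcg-pv-pool-bs -n openshift-storage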
11.3. Increasing CPU and memory for PV pool resources
The default MCG configuration supports low resource consumption. However, when specific workloads need more CPU and memory to improve MCG performance, you can configure the required values for CPU and memory in the OpenShift Web Console.
Procedure
- In the OpenShift Web Console, click Installed Operators → ODF Operator.
- Click the Backingstore tab.
- Select the relevant backingstore.
- Scroll down and click Edit PV pool resources.
- In the edit window that appears, edit the values of Mem, CPU, and Vol size as required.
- Click Save.
Verification steps
- To verify, check the resource values of the PV pool pods, as shown in the sketch below.
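For example, you can confirm the new values directly on the pods. A minimal sketch, assuming the default openshift-storage namespace; noobaa-pv-backing-store is a placeholder backingstore name, and pod naming can vary between versions.

# List the PV pool pods for the backingstore (pod names include the backingstore name)
$ oc get pods -n openshift-storage | grep noobaa-pv-backing-store

# Show the CPU and memory requests and limits of one PV pool pod
$ oc get pod <pv_pool_pod_name> -n openshift-storage \
  -o jsonpath='{.spec.containers[0].resources}'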