Chapter 4. OpenShift Data Foundation installation overview

OpenShift Data Foundation consists of multiple components managed by multiple operators.

4.1. Installed Operators

When you install OpenShift Data Foundation from the Operator Hub, the following four separate Deployments are created:

  • odf-operator: Defines the odf-operator Pod.
  • ocs-operator: Defines the ocs-operator Pod, which runs the ocs-operator and metrics-exporter processes in the same container.
  • rook-ceph-operator: Defines the rook-ceph-operator Pod.
  • mcg-operator: Defines the mcg-operator Pod.

These operators run independently and interact with each other by creating custom resources (CRs) that the other operators watch. The ocs-operator is primarily responsible for creating the CRs that configure Ceph storage and the Multicloud Object Gateway. The mcg-operator sometimes creates Ceph volumes for use by its components.
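
When you install from the Operator Hub, the Operator Lifecycle Manager (OLM) drives the installation through a Subscription. The following is a minimal sketch of such a Subscription; the channel and catalog source values are illustrative and depend on the OpenShift Data Foundation release and the catalogs available on your cluster.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.14            # illustrative; use the channel that matches your release
  name: odf-operator              # the package that pulls in the operators listed above
  source: redhat-operators        # illustrative catalog source
  sourceNamespace: openshift-marketplace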

4.2. OpenShift Data Foundation initialization

The OpenShift Data Foundation bundle also defines an external plugin to the OpenShift Container Platform Console, adding new screens and functionality that are not otherwise available in the Console. This plugin runs as a web server in the odf-console-plugin Pod, which is managed by a Deployment that the Operator Lifecycle Manager (OLM) creates at installation time.

The ocs-operator automatically creates an OCSInitialization CR after it starts. Only one OCSInitialization CR exists at any point in time. It controls ocs-operator behaviors that are not restricted to the scope of a single StorageCluster; the ocs-operator performs them only once. When you delete the OCSInitialization CR, the ocs-operator creates it again, which allows you to re-trigger its initialization operations.

The OCSInitialization CR controls the following behaviors:

SecurityContextConstraints (SCCs)
After the OCSInitialization CR is created, the ocs-operator creates various SCCs for use by the component Pods.
Ceph Toolbox Deployment
You can use the OCSInitialization CR to deploy the Ceph Toolbox Pod for advanced Ceph operations, as shown in the example after this list.
Rook-Ceph Operator Configuration
This configuration creates the rook-ceph-operator-config ConfigMap that governs the overall configuration for rook-ceph-operator behavior.
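
For example, deploying the Ceph Toolbox is controlled by a single field on the OCSInitialization CR. The following sketch assumes the default CR name, ocsinit, in the openshift-storage namespace.

apiVersion: ocs.openshift.io/v1
kind: OCSInitialization
metadata:
  name: ocsinit
  namespace: openshift-storage
spec:
  enableCephTools: true           # deploys the Ceph Toolbox Pod for advanced Ceph operations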

4.3. Storage cluster creation

The OpenShift Data Foundation operators themselves provide no storage functionality; you must define the desired storage configuration.

After you install the operators, create a new StorageCluster using either the OpenShift Container Platform console wizard or the CLI. The ocs-operator reconciles this StorageCluster. OpenShift Data Foundation supports a single StorageCluster per installation. Any StorageCluster CRs created after the first one are ignored by ocs-operator reconciliation.

OpenShift Data Foundation allows the following three StorageCluster configurations:

Internal
In the Internal mode, all the components run containerized within the OpenShift Container Platform cluster and use dynamically provisioned persistent volumes (PVs) created against the StorageClass that the administrator specifies in the installation wizard (see the example at the end of this section).
Internal-attached
This mode is similar to the Internal mode, but the administrator must define the local storage devices directly attached to the cluster nodes that Ceph uses for its backing storage. The administrator also needs to create the CRs that the local storage operator reconciles to provide the StorageClass. The ocs-operator uses this StorageClass as the backing storage for Ceph.
External
In this mode, Ceph components do not run inside the OpenShift Container Platform cluster. Instead, connectivity is provided to an externally hosted Ceph storage installation against which applications can create PVs. The other components run within the cluster as required.

MCG Standalone
This mode facilitates the installation of a Multicloud Object Gateway system without an accompanying CephCluster.

After a StorageCluster CR is found, ocs-operator validates it and begins to create subsequent resources to define the storage components.
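
For reference, a minimal internal-mode StorageCluster might look like the following sketch. The device set count, requested capacity, and storage class name are illustrative; in internal mode the storage class refers to the dynamic provisioner chosen in the installation wizard (for example, gp3-csi on AWS).

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1                      # number of device sets
    replica: 3                    # one OSD per replica, spread across failure domains
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 512Gi        # illustrative OSD size
        storageClassName: gp3-csi # illustrative; the StorageClass selected in the wizard
        volumeMode: Block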

4.3.1. Internal mode storage cluster

Both internal and internal-attached storage clusters follow the same setup process:

StorageClasses

Create the storage classes that cluster applications use to create Ceph volumes (see the example after this list).

SnapshotClasses

Create the volume snapshot classes that the cluster applications use to create snapshots of Ceph volumes.

Ceph RGW configuration

Create various Ceph object CRs to enable and provide access to the Ceph RGW object storage endpoint.

Ceph RBD Configuration

Create the CephBlockPool CR to enable RBD storage.

CephFS Configuration

Create the CephFilesystem CR to enable CephFS storage.

Rook-Ceph Configuration

Create the rook-config-override ConfigMap that governs the overall behavior of the underlying Ceph cluster.

CephCluster

Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator. For more information, see Rook-Ceph operator.

NoobaaSystem

Create the NooBaa CR to trigger reconciliation from mcg-operator. For more information, see MCG operator.

Job templates

Create OpenShift Template CRs that define Jobs to run administrative operations for OpenShift Data Foundation.

Quickstarts

Create the QuickStart CRs that display the quickstart guides in the Web Console.
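
As an illustration of the first item, the storage classes created in internal mode typically include ocs-storagecluster-ceph-rbd for RBD block volumes and ocs-storagecluster-cephfs for shared CephFS volumes. An application then requests Ceph-backed storage with an ordinary PersistentVolumeClaim; the claim name and namespace below are illustrative.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                  # illustrative claim name
  namespace: my-app               # illustrative application namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # typical default RBD StorageClass name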

4.3.1.1. Cluster Creation

After the ocs-operator creates the CephCluster CR, the rook-ceph-operator creates the Ceph cluster according to the desired configuration (an abridged example follows the list below). The rook-ceph-operator configures the following components:

Ceph mon daemons

Three Ceph mon daemons are started on different nodes in the cluster. They manage the core metadata for the Ceph cluster and must form a majority quorum. The metadata for each mon is backed by either a PV (in a cloud environment) or a path on the local host (in a local storage device environment).

Ceph mgr daemon

This daemon is started; it gathers metrics for the cluster and reports them to Prometheus.

Ceph OSDs

These OSDs are created according to the configuration of the storageClassDeviceSets. Each OSD consumes a PV that stores the user data. By default, Ceph maintains three replicas of the application data across different OSDs for high durability and availability using the CRUSH algorithm.

CSI provisioners

These provisioners are started for RBD and CephFS. When volumes are requested for the OpenShift Data Foundation storage classes, the requests are directed to the Ceph-CSI driver to provision the volumes in Ceph.

CSI volume plugins

The CSI volume plugins for RBD and CephFS are started on each node in the cluster. The volume plugins must be running on any node where an application needs to mount a Ceph volume.
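
The components above are driven by the CephCluster CR that the ocs-operator generates. A heavily abridged sketch follows; the device set name, size, and storage class are illustrative, and a real CR contains many more fields.

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: ocs-storagecluster-cephcluster
  namespace: openshift-storage
spec:
  mon:
    count: 3                      # three mons on different nodes for majority quorum
  mgr:
    count: 1
  storage:
    storageClassDeviceSets:
    - name: ocs-deviceset-0       # illustrative; derived from the StorageCluster storageDeviceSets
      count: 3                    # one OSD per PVC in this set
      volumeClaimTemplates:
      - spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi
          storageClassName: gp3-csi
          volumeMode: Block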

After the CephCluster CR is configured, Rook reconciles the remaining Ceph CRs to complete the setup:

CephBlockPool

The CephBlockPool CR provides the configuration for the Rook operator to create Ceph pools for RWO volumes (see the examples after this list).

CephFilesystem

The CephFilesystem CR instructs the Rook operator to configure a shared file system with CephFS, typically for RWX volumes. The CephFS metadata server (MDS) is started to manage the shared volumes.

CephObjectStore

The CephObjectStore CR instructs the Rook operator to configure an object store with the RGW service.

CephObjectStoreUser

The CephObjectStoreUser CR instructs the Rook operator to configure an object store user for NooBaa to consume, publishing the access and secret keys as well as the CephObjectStore endpoint.
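
The following sketches show what these CRs typically look like in a default internal-mode deployment; the names follow the usual ocs-storagecluster-* convention and the pool parameters are illustrative.

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ocs-storagecluster-cephblockpool
  namespace: openshift-storage
spec:
  failureDomain: host             # spread replicas across hosts
  replicated:
    size: 3                       # three-way replication for RWO volumes
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: ocs-storagecluster-cephfilesystem
  namespace: openshift-storage
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
  - replicated:
      size: 3
  metadataServer:
    activeCount: 1                # one active MDS manages the shared volumes
    activeStandby: true
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: ocs-storagecluster-cephobjectstore
  namespace: openshift-storage
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    port: 80                      # RGW S3 endpoint
    instances: 1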

The operator monitors the Ceph health to ensure that the storage platform remains healthy. If a mon daemon goes down for too long a period (10 minutes), Rook starts a new mon in its place so that full quorum can be restored.

When the ocs-operator updates the CephCluster CR, Rook immediately responds to the requested changes to update the cluster configuration.

4.3.1.2. NooBaa System creation

When a NooBaa system is created, the mcg-operator reconciles the following:

Default BackingStore

Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows:

Amazon Web Services (AWS) deployment

The mcg-operator uses the Cloud Credential Operator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket.

Microsoft Azure deployment

The mcg-operator uses the CCO to mint credentials in order to create a new Azure blob container and creates a BackingStore on top of that container.

Google Cloud Platform (GCP) deployment

The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and creates a BackingStore on top of that bucket.

On-prem deployment

If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and creates a BackingStore on top of that bucket.

None of the previously mentioned deployments are applicable

The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that pool (see the example after this list).

Default BucketClass

A BucketClass with a placement policy that uses the default BackingStore is created.

NooBaa pods

The following NooBaa pods are created and started:

Database (DB)

This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored.

Core

This is the pod that handles configuration, background processes, metadata management, statistics, and so on.

Endpoints

These pods perform the actual I/O-related work, such as deduplication and compression, and communicate with different services to write and read data. The endpoints are integrated with the HorizontalPodAutoscaler, and their number increases and decreases according to the CPU usage observed on the existing endpoint pods.

Route

A Route for the NooBaa S3 interface is created for applications that use S3.

Service

A Service for the NooBaa S3 interface is created for applications that use S3.
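
For the last BackingStore case, where no cloud credentials or RGW are available, the default pv-pool BackingStore and the default BucketClass might resemble the following sketch; the volume count, size, and storage class are illustrative.

apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: noobaa-default-backing-store
  namespace: openshift-storage
spec:
  type: pv-pool
  pvPool:
    numVolumes: 3                 # number of PVs that back the pool
    storageClass: gp3-csi         # illustrative; typically the cluster default StorageClass
    resources:
      requests:
        storage: 50Gi             # size of each PV in the pool
---
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: noobaa-default-bucket-class
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - noobaa-default-backing-store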

4.3.2. External mode storage cluster

For external storage clusters, the ocs-operator follows a slightly different setup process. It looks for the existence of the rook-ceph-external-cluster-details ConfigMap, which must be created externally, either by the administrator or by the Console. For information about how to create the ConfigMap, see Creating an OpenShift Data Foundation Cluster for external mode. The ocs-operator then creates some or all of the following resources, as specified in the ConfigMap:

External Ceph Configuration

A ConfigMap that specifies the endpoints of the external mons.

External Ceph Credentials Secret

A Secret that contains the credentials to connect to the external Ceph instance.

External Ceph StorageClasses

One or more StorageClasses to enable the creation of volumes for RBD, CephFS, and/or RGW.

Enable CephFS CSI Driver

If a CephFS StorageClass is specified, rook-ceph-operator is configured to deploy the CephFS CSI Pods (see the example after this list).

Ceph RGW Configuration

If an RGW StorageClass is specified, create various Ceph Object CRs to enable and provide access to the Ceph RGW object storage endpoint.
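
For example, the CephFS CSI driver is toggled through the rook-ceph-operator-config ConfigMap described in Section 4.2. The sketch below shows only the relevant key.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: openshift-storage
data:
  ROOK_CSI_ENABLE_CEPHFS: "true"  # instructs rook-ceph-operator to deploy the CephFS CSI Pods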

After creating the resources specified in the ConfigMap, the StorageCluster creation process proceeds as follows:

CephCluster

Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator (see subsequent sections).

SnapshotClasses

Create the SnapshotClasses that applications use to create snapshots of Ceph volumes.

NoobaaSystem

Create the NooBaa CR to trigger reconciliation from mcg-operator (see subsequent sections).

QuickStarts

Create the QuickStart CRs that display the quickstart guides in the Console.

4.3.2.1. Cluster Creation

The Rook operator performs the following operations when the CephCluster CR is created in external mode:

  • The operator validates that a connection is available to the remote Ceph cluster. The connection requires mon endpoints and secrets to be imported into the local cluster.
  • The CSI driver is configured with the remote connection to Ceph. The RBD and CephFS provisioners and volume plugins are started just as they are in internal mode; the only difference is that the Ceph cluster they connect to is external to the OpenShift Container Platform cluster.
  • The operator periodically watches for monitor address changes and updates the Ceph-CSI configuration accordingly.

4.3.2.2. NooBaa System creation

When a NooBaa system is created in external mode, the mcg-operator reconciles the same set of resources that is described in Section 4.3.1.2, NooBaa System creation: the default BackingStore, the default BucketClass, the NooBaa pods (database, core, and endpoints), and the Route and Service for the NooBaa S3 interface.

4.3.3. MCG Standalone StorageCluster

In this mode, no CephCluster is created. Instead, a NooBaa system CR is created using default values to take advantage of pre-existing StorageClasses in OpenShift Container Platform.
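
A minimal sketch of such a StorageCluster follows; the standalone reconcile strategy tells the ocs-operator to create only the Multicloud Object Gateway and skip CephCluster creation.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  multiCloudGateway:
    reconcileStrategy: standalone # deploy the Multicloud Object Gateway without a CephCluster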

4.3.3.1. NooBaa System creation

When a NooBaa system is created, the mcg-operator reconciles the same set of resources that is described in Section 4.3.1.2, NooBaa System creation: the default BackingStore, the default BucketClass, the NooBaa pods (database, core, and endpoints), and the Route and Service for the NooBaa S3 interface.

4.3.3.2. StorageSystem Creation

As a part of the StorageCluster creation, the odf-operator automatically creates a corresponding StorageSystem CR, which exposes the StorageCluster to OpenShift Data Foundation.
