Chapter 6. Configure storage for OpenShift Container Platform services
You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the image registry, monitoring, and logging.
The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment.
Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover.
Red Hat recommends configuring shorter curation and retention intervals for these services. See the Configuring the Curator schedule and Modifying retention time for Prometheus metrics data subsections of Configuring persistent storage in the OpenShift Container Platform documentation for details.
If you do run out of storage space for these services, contact Red Hat Customer Support.
6.1. Configuring Image Registry to use OpenShift Data Foundation
OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster.
This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete.
Prerequisites
- You have administrative access to the OpenShift Web Console.
- The OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators → Installed Operators to view installed operators.
- The Image Registry Operator is installed and running in the openshift-image-registry namespace. In the OpenShift Web Console, click Administration → Cluster Settings → Cluster Operators to view cluster operators.
- A storage class with the provisioner openshift-storage.cephfs.csi.ceph.com is available. In the OpenShift Web Console, click Storage → StorageClasses to view available storage classes.
Procedure

Create a Persistent Volume Claim for the Image Registry to use.

- In the OpenShift Web Console, click Storage → Persistent Volume Claims.
- Set the Project to openshift-image-registry.
- Click Create Persistent Volume Claim.
  - From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com.
  - Specify the Persistent Volume Claim Name, for example, ocs4registry.
  - Specify an Access Mode of Shared Access (RWX).
  - Specify a Size of at least 100 GB.
  - Click Create.
- Wait until the status of the new Persistent Volume Claim is listed as Bound.
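If you prefer to create the claim from a manifest rather than the console, the equivalent PersistentVolumeClaim looks roughly like this. The storage class name ocs-storagecluster-cephfs is an assumption; use whichever class in your cluster carries the openshift-storage.cephfs.csi.ceph.com provisioner:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  # Shared Access (RWX) so multiple registry pods can mount the volume
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  # Assumed CephFS-backed storage class name; verify with
  # Storage → StorageClasses in the console
  storageClassName: ocs-storagecluster-cephfs
```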
Configure the cluster’s Image Registry to use the new Persistent Volume Claim.

- Click Administration → Custom Resource Definitions.
- Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group.
- Click the Instances tab.
- Beside the cluster instance, click the Action Menu (⋮) → Edit Config.
- Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec:, replacing the existing storage: section if necessary:

      storage:
        pvc:
          claim: <new-pvc-name>

  For example:

      storage:
        pvc:
          claim: ocs4registry

- Click Save.
Verify that the new configuration is being used.

- Click Workloads → Pods.
- Set the Project to openshift-image-registry.
- Verify that the new image-registry-* pod appears with a status of Running, and that the previous image-registry-* pod terminates.
- Click the new image-registry-* pod to view pod details.
- Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry.
6.2. Configuring monitoring to use OpenShift Data Foundation
OpenShift Data Foundation provides a monitoring stack that consists of Prometheus and Alert Manager.
Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack.
Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring.
Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details.
Prerequisites
- You have administrative access to the OpenShift Web Console.
- The OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators → Installed Operators to view installed operators.
- The Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration → Cluster Settings → Cluster Operators to view cluster operators.
- A storage class with the provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage → StorageClasses to view available storage classes.
Procedure
- In the OpenShift Web Console, go to Workloads → Config Maps.
- Set the Project dropdown to openshift-monitoring.
- Click Create Config Map.
- Define a new cluster-monitoring-config Config Map.
  Replace the content in angle brackets (<, >) with your own values, for example, retention: 24h or storage: 40Gi.
  Replace the storageClassName with the storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd.
- Click Create to save and create the Config Map.
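A cluster-monitoring-config Config Map along these lines gives both Prometheus and Alertmanager persistent storage. This is a sketch assuming the ocs-storagecluster-ceph-rbd storage class; the retention and size values in angle brackets are placeholders for your own values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: <time to retain monitoring data, for example, 24h>
      volumeClaimTemplate:
        metadata:
          name: ocs-prometheus-claim
        spec:
          # Storage class using the openshift-storage.rbd.csi.ceph.com provisioner
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, for example, 40Gi>
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: ocs-alertmanager-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, for example, 40Gi>
```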
 
Verification steps
Verify that the Persistent Volume Claims are bound to the pods.

- Go to Storage → Persistent Volume Claims.
- Set the Project dropdown to openshift-monitoring.
- Verify that 5 Persistent Volume Claims are visible with a state of Bound, attached to three alertmanager-main-* pods and two prometheus-k8s-* pods.

  Figure 6.1. Monitoring storage created and bound

Verify that the new alertmanager-main-* pods appear with a state of Running.

- Go to Workloads → Pods.
- Click a new alertmanager-main-* pod to view the pod details.
- Scroll down to Volumes and verify that the volume has a Type, ocs-alertmanager-claim, that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0.

  Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod

Verify that the new prometheus-k8s-* pods appear with a state of Running.

- Click a new prometheus-k8s-* pod to view the pod details.
- Scroll down to Volumes and verify that the volume has a Type, ocs-prometheus-claim, that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0.

  Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod
6.3. Cluster logging for OpenShift Data Foundation
You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging.
Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default, and the cluster relies solely on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Data Foundation.
Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover.
Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details.
If you run out of storage space for these services, contact Red Hat Customer Support.
6.3.1. Configuring persistent storage
You can configure a persistent storage class and size for the Elasticsearch cluster using the storageClassName and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example:
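A sketch of such a logStore specification follows, assuming the ocs-storagecluster-ceph-rbd storage class; the node count shown is illustrative:

```yaml
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      # Each data node gets a PVC generated from this block
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
      redundancyPolicy: SingleRedundancy
```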
This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. Because of the single redundancy policy, a copy of each shard is replicated across the nodes, so the data remains available and can be recovered as long as at least two nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging.
If you omit the storage block, the deployment is backed by default storage. For example:
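A minimal sketch of that no-persistent-storage case, with an empty storage block so Elasticsearch falls back to the nodes' default storage:

```yaml
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      # Empty storage block: no PVCs are created; node-local storage is used
      storage: {}
      redundancyPolicy: SingleRedundancy
```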
For more information, see Configuring cluster logging.
6.3.2. Configuring cluster logging to use OpenShift Data Foundation
Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging.
You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed.
Prerequisites
- You have administrative access to the OpenShift Web Console.
- The OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace.
- The Cluster Logging Operator is installed and running in the openshift-logging namespace.
Procedure
- Click Administration → Custom Resource Definitions from the left pane of the OpenShift Web Console.
- On the Custom Resource Definitions page, click ClusterLogging.
- On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab.
- On the Cluster Logging page, click Create Cluster Logging.
  You might have to refresh the page to load the data.
- In the YAML, replace the storageClassName with the storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd.
  If you have tainted the OpenShift Data Foundation nodes, you must add a toleration to enable scheduling of the daemonset pods for logging.
- Click Save.
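A ClusterLogging instance along these lines illustrates the step above. Only the storageClassName is taken from the text; the node counts, curation schedule, and toleration key/value are illustrative assumptions to adapt to your cluster:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        # Storage class using the openshift-storage.rbd.csi.ceph.com provisioner
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: fluentd
      fluentd:
        # If the OpenShift Data Foundation nodes are tainted, add a matching
        # toleration so the logging daemonset pods can be scheduled
        # (the key and value shown here are illustrative):
        tolerations:
        - effect: NoSchedule
          key: node.ocs.openshift.io/storage
          value: "true"
```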
 
Verification steps
Verify that the Persistent Volume Claims are bound to the elasticsearch pods.

- Go to Storage → Persistent Volume Claims.
- Set the Project dropdown to openshift-logging.
- Verify that Persistent Volume Claims are visible with a state of Bound, attached to elasticsearch-* pods.

  Figure 6.4. Cluster logging created and bound

Verify that the new cluster logging is being used.

- Click Workloads → Pods.
- Set the Project to openshift-logging.
- Verify that the new elasticsearch-* pods appear with a state of Running.
- Click a new elasticsearch-* pod to view pod details.
- Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3.
- Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page.
Make sure to use a shorter curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods.

You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set a default index data retention of 5 days:

config.yaml: |
    openshift-storage:
      delete:
        days: 5
For more details, see Curation of Elasticsearch Data.
To uninstall cluster logging backed by Persistent Volume Claims, use the procedure Removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.