Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph Storage
Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system.
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- For additional resource requirements, see the Planning your deployment guide.
If you need to override the cluster-wide default node selector for OpenShift Data Foundation, use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first if it does not exist):

$ oc annotate namespace openshift-storage openshift.io/node-selector=
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.14.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
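The console steps above can also be sketched as a declarative install from the CLI. The manifest below is a hedged example: the package name odf-operator, channel stable-4.14, and catalog source redhat-operators match what Red Hat ships for this operator, but verify them against your cluster's catalog before applying.

```yaml
# Sketch of a CLI-based install equivalent to the OperatorHub steps above.
# Verify the channel and catalog source names against your cluster's catalog.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
  - openshift-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.14
  installPlanApproval: Automatic
  name: odf-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply the file with `oc apply -f <file>.yaml` after the openshift-storage namespace exists.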
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.

In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify that the Data Foundation dashboard is available.
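The same verification can be done from the command line by checking the operator's ClusterServiceVersion phase. This is a sketch: the helper reads an `oc get csv` listing from stdin so the success condition can be shown without a live cluster, and it assumes the CSV name starts with odf-operator.

```shell
# Succeeds if an odf-operator CSV in the listing reports phase Succeeded.
# The listing is read from stdin so the check can be shown without a cluster.
odf_csv_succeeded() {
  awk '/^odf-operator/ && $NF == "Succeeded" { ok = 1 } END { exit ok ? 0 : 1 }'
}

# On a real cluster you would pipe the actual listing:
#   oc get csv -n openshift-storage | odf_csv_succeeded && echo "operator installed"
```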
2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system
You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructure.
Prerequisites
- A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Ensure the OpenShift Container Platform version is 4.14 or above before deploying OpenShift Data Foundation 4.14.
- OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub.
To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab:
- Select Service Type as ODF as Self-Managed Service.
- Select the appropriate Version from the drop-down.
- On the Versions tab, click the Supported RHCS versions in the External Mode tab.
If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release, and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode.
For more details, see Troubleshooting CephFS PVC creation in external mode.
- Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access.
- It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation.
- The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with the OpenShift Data Foundation deployment. Red Hat recommends using a separate pool for each OpenShift Data Foundation cluster.
- Optional: If a zonegroup other than the default zonegroup exists, you need to add the hostname rook-ceph-rgw-ocs-external-storagecluster-cephobjectstore.openshift-storage.svc to that zonegroup, because OpenShift Data Foundation sends S3 requests to the RADOS Object Gateways (RGWs) with this hostname. For more information, see the Red Hat Knowledgebase solution Ceph - How to add hostnames in RGW zonegroup?.
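The zonegroup change can be sketched as follows. The zonegroup.json content below is an illustrative stand-in for your real zonegroup definition, and the zonegroup name us is hypothetical; the radosgw-admin steps are shown as comments because they must run on the RHCS cluster.

```shell
# Export the zonegroup on the RHCS cluster (zonegroup name is an example):
#   radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json
# Illustrative stand-in for that exported definition:
echo '{"name": "us", "hostnames": ["rgw.example.com"]}' > zonegroup.json

# Append the ODF RGW service hostname if it is not already listed:
python3 - <<'PYEOF'
import json
host = ("rook-ceph-rgw-ocs-external-storagecluster-"
        "cephobjectstore.openshift-storage.svc")
with open("zonegroup.json") as f:
    zg = json.load(f)
if host not in zg.setdefault("hostnames", []):
    zg["hostnames"].append(host)
with open("zonegroup.json", "w") as f:
    json.dump(zg, f, indent=2)
PYEOF

# Re-import the definition and commit the period on the RHCS cluster:
#   radosgw-admin zonegroup set --rgw-zonegroup=us < zonegroup.json
#   radosgw-admin period update --commit
```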
Procedure
- Click Operators → Installed Operators to view all the installed operators. Ensure that the selected Project is openshift-storage.
- Click OpenShift Data Foundation and then click Create StorageSystem.
In the Backing storage page, select the following options:
- Select Full deployment for the Deployment type option.
- Select Connect an external storage platform from the available options.
- Select Red Hat Ceph Storage for Storage platform.
- Click Next.
In the Connection details page, provide the necessary information:
- Click the Download Script link to download the Python script for extracting Ceph cluster details.
To extract the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded Python script on a Red Hat Ceph Storage node with the admin key.

Run the following command on the RHCS node to view the list of available arguments:

# python3 ceph-external-cluster-details-exporter.py --help

Important: Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).
Note: Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes. For more information, see the RHCS product documentation.

To retrieve the external cluster details from the RHCS cluster, run the following command:

# python3 ceph-external-cluster-details-exporter.py \
    --rbd-data-pool-name <rbd block pool name> [optional arguments]

For example:

# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs

In this example:
rbd-data-pool-name
A mandatory parameter that is used for providing block storage in OpenShift Data Foundation.
rgw-endpoint
(Optional) This parameter is required only if the object storage is to be provisioned through the Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the format <ip_address>:<port>. Note: A fully qualified domain name (FQDN) is also supported, in the format <FQDN>:<PORT>.
monitoring-endpoint
(Optional) This parameter accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
monitoring-endpoint-port
(Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
run-as-user
(Optional) This parameter provides the name of the Ceph user that is created by the script. If this parameter is not specified, a default user name, client.healthchecker, is created. The permissions for the new user are set as:
- caps: [mgr] allow command config
- caps: [mon] allow r, allow command quorum_status, allow command version
- caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index
Additional flags:
rgw-pool-prefix
(Optional) The prefix of the RGW pools. If not specified, the default prefix is default.
rgw-tls-cert-path
(Optional) The file path of the RADOS Gateway endpoint TLS certificate.
rgw-skip-tls
(Optional) This parameter skips TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED).
ceph-conf
(Optional) The name of the Ceph configuration file.
cluster-name
(Optional) The Ceph cluster name.
output
(Optional) The file where the output is stored.
cephfs-metadata-pool-name
(Optional) The name of the CephFS metadata pool.
cephfs-data-pool-name
(Optional) The name of the CephFS data pool.
cephfs-filesystem-name
(Optional) The name of the CephFS filesystem.
rbd-metadata-ec-pool-name
(Optional) The name of the erasure coded RBD metadata pool.
dry-run
(Optional) Prints the commands that would be executed, without running them.
restricted-auth-permission
(Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. The mandatory flags that must be set with it are rbd-data-pool-name and cluster-name. You can also pass the cephfs-filesystem-name flag if there is a CephFS user restriction, so that the permission is restricted to a particular CephFS filesystem. Note: This parameter must be applied only for new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users.

Example with restricted auth permission:

# python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true

Example of JSON output generated using the Python script:
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}][{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", 
"userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]Copy to Clipboard Copied! Toggle word wrap Toggle overflow Save the JSON output to a file with
.jsonextensionNoteFor OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.
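Before uploading, it is worth confirming that the saved file parses as JSON, since the wizard only accepts a valid .json file. The file name and content below are illustrative stand-ins for the exporter script's real output.

```shell
# Illustrative stand-in for the exporter script's saved output
# (the real file is produced by ceph-external-cluster-details-exporter.py):
echo '[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "10.0.0.1:6789"}}]' \
  > external-cluster-details.json

# Fail early if the file is not valid JSON (json.tool exits non-zero on parse errors):
python3 -m json.tool external-cluster-details.json > /dev/null && echo "valid JSON"
```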
Run the following command when there is a multi-tenant deployment in which the RHCS cluster is already connected to an OpenShift Data Foundation deployment with a lower version:

# python3 ceph-external-cluster-details-exporter.py --upgrade
Click Browse to select and upload the JSON file.
The content of the JSON file is populated and displayed in the text box.
Click Next.
The Next button is enabled only after you upload the .json file.
Optional: In the Security and network page, provide the necessary details:
To enable encryption, select Enable data encryption for block and file storage.
Choose one or both of the following Encryption level options:
- Cluster-wide encryption to encrypt the entire cluster (block and file).
- StorageClass encryption to create encrypted persistent volumes (block only) using an encryption-enabled storage class.
Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- Key Management Service Provider is set to Vault by default.
- Enter the Vault Service Name, the host Address of the Vault server ('https://<hostname or ip>'), the Port number, and the Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Provide CA Certificate, Client Certificate, and Client Private Key by uploading the respective PEM encoded certificate file.
- Click Save.
- To enable in-transit encryption, select In-transit encryption.
- Select Default (SDN) for Network.
- Click Next.
In the Review and create page, verify that all the details are correct:
- To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-external-storagecluster-storagesystem → Resources.
- Verify that the Status of the StorageCluster is Ready and has a green tick.
- To verify that OpenShift Data Foundation, the pods, and the StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation for external Ceph storage system.
2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system
Use this section to verify that OpenShift Data Foundation is deployed correctly.
2.3.1. Verifying the state of the pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.

Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Data Foundation components”.
Verify that the following pods are in the Running state:
Table 2.1. Pods corresponding to OpenShift Data Foundation components

OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)
- csi-addons-controller-manager-* (1 pod on any worker node)

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any worker node)
- noobaa-db-pg-* (1 pod on any worker node)
- noobaa-endpoint-* (1 pod on any worker node)

CSI

cephfs
- csi-cephfsplugin-* (1 pod on each worker node)
- csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)

Note: If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created.

rbd
- csi-rbdplugin-* (1 pod on each worker node)
- csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)
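The pod states listed above can also be checked from the command line. This is a sketch: the helper parses an `oc get pods` listing read from stdin (NAME READY STATUS RESTARTS AGE columns) so the check can be illustrated without a live cluster.

```shell
# Print any pod whose STATUS column is neither Running nor Completed;
# exits non-zero when such a pod exists (listing is read from stdin).
unhealthy_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1; bad = 1 }
       END { exit bad ? 1 : 0 }'
}

# On a real cluster:
#   oc get pods -n openshift-storage | unhealthy_pods && echo "all pods healthy"
```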
2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
2.3.3. Verifying that the Multicloud Object Gateway is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed.
The RADOS Object Gateway is listed only if the RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation.
2.3.4. Verifying that the storage classes are created and listed
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
- Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
  - ocs-external-storagecluster-ceph-rbd
  - ocs-external-storagecluster-ceph-rgw
  - ocs-external-storagecluster-cephfs
  - openshift-storage.noobaa.io

Note:
- If an MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class will not be created.
- If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created.

For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation.
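The storage class check can also be scripted. This sketch reads a `oc get storageclass` listing from stdin and checks only the two classes that are always created (cephfs and ceph-rgw are skipped because they depend on MDS and RGW being deployed in the external cluster).

```shell
# Check that the always-created storage classes appear in a listing
# (ceph-rgw and cephfs are skipped here because they depend on RGW/MDS).
check_storage_classes() {
  listing=$(cat)
  for sc in ocs-external-storagecluster-ceph-rbd openshift-storage.noobaa.io; do
    echo "$listing" | grep -q -- "$sc" || { echo "missing: $sc"; return 1; }
  done
  echo "expected storage classes present"
}

# On a real cluster:
#   oc get storageclass | check_storage_classes
```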
2.3.5. Verifying that Ceph cluster is connected
Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster:

$ oc get cephcluster -n openshift-storage
NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL
ocs-external-storagecluster-cephcluster 30m Connected Cluster connected successfully HEALTH_OK true
2.3.6. Verifying that storage cluster is ready
Run the following command to verify that the storage cluster is ready and that the External option is set to true:

$ oc get storagecluster -n openshift-storage
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-external-storagecluster 30m Ready true 2021-11-17T09:09:52Z 4.14.0
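The Ready/External check can be scripted for automation. This sketch parses a `oc get storagecluster` listing like the one above, read from stdin so it can be demonstrated without a live cluster; it assumes the default resource name ocs-external-storagecluster.

```shell
# Succeeds when the storage cluster row reports PHASE Ready and EXTERNAL true.
# Columns: NAME AGE PHASE EXTERNAL CREATED-AT VERSION (listing read from stdin).
storagecluster_ready() {
  awk '/^ocs-external-storagecluster / { if ($3 == "Ready" && $4 == "true") ok = 1 }
       END { exit ok ? 0 : 1 }'
}

# On a real cluster:
#   oc get storagecluster -n openshift-storage | storagecluster_ready && echo "ready"
```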