Chapter 2. Integrating Red Hat Ceph Storage
You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the following services to a Red Hat Ceph Storage cluster:
- Block Storage service (cinder)
- Image service (glance)
- Object Storage service (swift)
- Compute service (nova)
- Shared File Systems service (manila)
If you want to deploy Red Hat Ceph Storage as a hyperconverged infrastructure (HCI), see Configuring a Hyperconverged Infrastructure environment.
To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:
- Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
- Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
- Obtain the Ceph File System Identifier.
- Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
- Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.
Prerequisites
- Access to a Red Hat Ceph Storage cluster. If you intend to host Red Hat Ceph Storage on data plane nodes (HCI), then complete Configuring a Hyperconverged Infrastructure environment first.
- The RHOSO control plane is installed on an operational Red Hat OpenShift Container Platform cluster.
2.1. Creating Red Hat Ceph Storage pools
Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster.
Procedure
Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):
$ for P in vms volumes images; do cephadm shell -- ceph osd pool create $P; cephadm shell -- ceph osd pool application enable $P rbd; done
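Optionally, confirm that each pool exists and has the rbd application enabled. This is a verification sketch, not a required step:
$ cephadm shell -- ceph osd pool ls detail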
Optional: Create the cephfs volume if the Shared File Systems service (manila) is enabled in the control plane. This automatically enables the CephFS Metadata Server (MDS) and creates the necessary data and metadata pools on the Ceph cluster:
$ cephadm shell -- ceph fs volume create cephfs
Optional: Deploy an NFS service on the Red Hat Ceph Storage cluster to use CephFS with NFS:
$ cephadm shell -- ceph nfs cluster create cephfs \
    --ingress --virtual-ip=<vip> \
    --ingress-mode=haproxy-protocol
Replace <vip> with the IP address assigned to the NFS service. The NFS service should be isolated on a network that can be shared with all Red Hat OpenStack users. For more information about customizing the NFS service, see NFS cluster and export management.
Important
When you deploy an NFS service for the Shared File Systems service, do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported. You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol. Otherwise, you cannot use IP-based access rules with the Shared File Systems service. For security in production environments, Red Hat does not recommend providing access to 0.0.0.0/0 on shares to mount them on client machines.
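Optionally, confirm that the NFS cluster was created and review its ingress configuration. This is a verification sketch that assumes the cluster name cephfs used above:
$ cephadm shell -- ceph nfs cluster info cephfs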
Create a cephx key for RHOSO to use to access pools:
$ cephadm shell -- \
    ceph auth add client.openstack \
    mgr 'allow *' \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rwx pool=volumes, allow rwx pool=images'
Important
If the Shared File Systems service is enabled in the control plane, replace the osd caps with the following:
osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=cephfs.cephfs.data'
Export the cephx key:
$ cephadm shell -- ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring
Export the configuration file:
$ cephadm shell -- ceph config generate-minimal-conf > /etc/ceph/ceph.conf
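The exported minimal configuration file contains the cluster FSID and the monitor addresses that later procedures rely on. The following is an illustrative sketch of its layout only; your values will differ:
[global]
fsid = <fsid>
mon_host = <mon_ip_1>,<mon_ip_2>,<mon_ip_3>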
2.2. Creating a Red Hat Ceph Storage secret
Create a secret so that services can access the Red Hat Ceph Storage cluster.
Procedure
Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.
Base64 encode these files and store them in the KEY and CONF environment variables:
$ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
$ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)
Create a YAML file to create the Secret resource. Using the environment variables, add the Secret configuration to the YAML file:
apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
- Save the YAML file.
Create the Secret resource:
$ oc create -f <secret_configuration_file>
Replace <secret_configuration_file> with the name of the YAML file you created.
Note
The examples in this section use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name. For example, if the file name used is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
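As an alternative sketch, you can let oc perform the base64 encoding by creating the Secret directly from the exported files. This assumes the keyring and configuration file from the previous procedure are present on the host where you run oc:
$ oc create secret generic ceph-conf-files -n openstack \
    --from-file=/etc/ceph/ceph.client.openstack.keyring \
    --from-file=/etc/ceph/ceph.conf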
2.3. Obtaining the Red Hat Ceph Storage File System Identifier
The Red Hat Ceph Storage File System Identifier (FSID) is a unique identifier for the cluster. The FSID is used in configuration and verification of cluster interoperability with RHOSO.
Procedure
Extract the FSID from the Red Hat Ceph Storage secret:
$ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
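Optionally, cross-check this value against the cluster itself; the two should match. This is a verification sketch to run on a Red Hat Ceph Storage node:
$ cephadm shell -- ceph fsid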
2.4. Configuring the control plane to use the Red Hat Ceph Storage cluster
You must configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. Configuration includes the following tasks:
- Confirming the Red Hat Ceph Storage cluster and the associated services have the correct network configuration.
- Configuring the control plane to use the Red Hat Ceph Storage secret.
- Configuring the Image service (glance) to use the Red Hat Ceph Storage cluster.
- Configuring the Block Storage service (cinder) to use the Red Hat Ceph Storage cluster.
- Optional: Configuring the Shared File Systems service (manila) to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster.
This example does not include configuring the Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.
Procedure
Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that it has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This is required to enable access to the Red Hat Ceph Storage cluster through the Storage network.
Note
It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.
Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:
glance:
  enabled: true
  template:
    databaseInstance: openstack
    storageClass: ""
    storageRequest: 10G
    glanceAPIs:
      default:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage
Confirm the Block Storage service is configured to access the Storage network through MetalLB.
Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare.
Confirm the Compute service (nova) is configured to access the Storage network.
Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret. The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  - name: v1
    region: r1
    extraVol:
    - propagation:
      - CinderVolume
      - GlanceAPI
      - ManilaShare
      extraVolType: Ceph
      volumes:
      - name: ceph
        projected:
          sources:
          - secret:
              name: <ceph-conf-files>
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
Replace <ceph-conf-files> with the name of your Secret CR created in Creating a Red Hat Ceph Storage secret.
Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  ...
  glance:
    enabled: true
    template:
      databaseInstance: openstack
      databaseUser: glance
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:rbd
        enabled_import_methods=[web-download,glance-direct]
        [glance_store]
        default_backend = default_backend
        [default_backend]
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        store_description = "RBD backend"
        rbd_store_pool = images
        rbd_store_user = openstack
      glanceAPIs:
        default:
          preserveJobs: false
          replicas: 1
      secret: osp-secret
      storageClass: ""
      storageRequest: 10G
  extraMounts:
  - name: v1
    region: r1
    extraVol:
    - propagation:
      - Glance
      extraVolType: Ceph
      volumes:
      - name: ceph
        projected:
          sources:
          - secret:
              name: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  ...
  cinder:
    template:
      cinderVolumes:
        ceph:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends=ceph
            [ceph]
            volume_backend_name=ceph
            volume_driver=cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf=/etc/ceph/ceph.conf
            rbd_user=openstack
            rbd_pool=volumes
            rbd_flatten_volume_from_snapshot=False
            rbd_secret_uuid=$FSID 1
1 Replace with the actual FSID. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage File System Identifier.
Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila).
The following example exposes native CephFS:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=cephfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfs
            [cephfs]
            driver_handles_share_servers=False
            share_backend_name=cephfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=CEPHFS
The following example exposes CephFS with NFS:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=nfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfsnfs
            [cephfsnfs]
            driver_handles_share_servers=False
            share_backend_name=cephfsnfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=NFS
            cephfs_nfs_cluster_id=cephfs
Apply the updates to the OpenStackControlPlane CR:
$ oc apply -f openstack_control_plane.yaml
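Optionally, wait for the control plane to reconcile after the update. This is a verification sketch; replace <control_plane_name> with the name of your OpenStackControlPlane CR:
$ oc get openstackcontrolplane -n openstack
$ oc wait openstackcontrolplane <control_plane_name> -n openstack --for=condition=Ready --timeout=30m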
2.5. Configuring the data plane to use the Red Hat Ceph Storage cluster
Configure the data plane to use the Red Hat Ceph Storage cluster.
Procedure
Create a ConfigMap with additional content for the Compute service (nova) configuration file /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova
data:
  03-ceph-nova.conf: | 1
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    images_rbd_glance_store_name=default_backend
    images_rbd_glance_copy_poll_interval=15
    images_rbd_glance_copy_timeout=600
    rbd_user=openstack
    rbd_secret_uuid=$FSID 2
1 This file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A file name that starts with 01 is evaluated by the Compute service before a file name that starts with 02.
2 The $FSID value should contain the actual FSID as described in the Obtaining the Red Hat Ceph Storage File System Identifier section. The FSID itself does not need to be considered secret.
Create a custom version of the default nova service to use the new ConfigMap, which in this case is called ceph-nova:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ceph 1
spec:
  label: dataplane-deployment-nova-custom-ceph
  configMaps:
  - ceph-nova
  secrets:
  - nova-cell1-compute-config
  playbook: osp.edpm.nova
1 The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service is overwritten during reconciliation.
Apply the ConfigMap and custom service changes:
$ oc create -f ceph-nova.yaml
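Optionally, confirm that both resources exist. This sketch assumes the ConfigMap and the OpenStackDataPlaneService definitions were saved together in ceph-nova.yaml and created in the openstack namespace:
$ oc get configmap ceph-nova -n openstack
$ oc get openstackdataplaneservice nova-custom-ceph -n openstack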
Update the OpenStackDataPlaneNodeSet services list to replace the nova service with the new custom service (in this case called nova-custom-ceph), add the ceph-client service, and use the extraMounts parameter to define access to the Ceph Storage secret.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  roles:
    edpm-compute:
      ...
      services:
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - run-os
      - ceph-client
      - ovn
      - libvirt
      - nova-custom-ceph
      - telemetry
  nodeTemplate:
    extraMounts:
    - extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
Note
The ceph-client service must be added before the libvirt and nova-custom-ceph services. The ceph-client service configures EDPM nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.
- Save the changes to the services list.
Create an OpenStackDataPlaneDeployment CR:
$ oc create -f <dataplanedeployment_cr_file>
Replace <dataplanedeployment_cr_file> with the name of your file.
Note
An example of an OpenStackDataPlaneDeployment CR file is available at https://github.com/openstack-k8s-operators/dataplane-operator/blob/main/config/samples/dataplane_v1beta1_openstackdataplanedeployment.yaml.
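The following is a minimal sketch of such a CR; the metadata name is illustrative, and the nodeSets entry must match the name of your OpenStackDataPlaneNodeSet (openstack-edpm is assumed here):
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-deployment-ceph
  namespace: openstack
spec:
  nodeSets:
  - openstack-edpm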
Result
When the nova-custom-ceph service Ansible job runs, the job copies overrides from the ConfigMap to the Compute service hosts. The job also uses virsh secret-* commands so that the libvirt service retrieves the cephx secret by FSID.
Run the following command on an EDPM node after the job completes to confirm the job results:
$ podman exec libvirt_virtsecretd virsh secret-get-value $FSID
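Optionally, from a host with access to the RHOSO control plane, confirm that the deployment and node set report success. This is a verification sketch:
$ oc get openstackdataplanedeployment -n openstack
$ oc get openstackdataplanenodeset -n openstack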
2.6. Configuring the Object Storage service (swift) with an external Ceph Object Gateway back end
You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end by completing the following high-level tasks:
- Configure the RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
- Deploy and configure a RGW service to handle object storage requests.
You use the openstack client tool to configure the Object Storage service.
2.6.1. Configuring RGW authentication
You must configure RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
Prerequisites
- You have deployed an operational OpenStack control plane.
Procedure
Create the Object Storage service on the control plane:
$ openstack service create --name swift --description "OpenStack Object Storage" object-store
Create a user called swift:
$ openstack user create --project service --password <swift_password> swift
Replace <swift_password> with the password to assign to the swift user.
Create roles for the swift user:
$ openstack role create swiftoperator
$ openstack role create ResellerAdmin
Add the swift user to system roles:
$ openstack role add --user swift --project service member
$ openstack role add --user swift --project service admin
Export the RGW endpoint IP addresses to variables and create control plane endpoints:
$ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
$ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
$ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
$ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;
Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services access RGW.
Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users write objects to RGW.
Note
Both endpoint IP addresses are virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends deployed in the Red Hat Ceph Storage cluster in the procedure Configuring and deploying the RGW service.
Add the swiftoperator role to the control plane admin group:
$ openstack role add --project admin --user admin swiftoperator
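Optionally, confirm that the service, roles, and endpoints were created. This is a verification sketch:
$ openstack service show object-store
$ openstack endpoint list --service object-store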
2.6.2. Configuring and deploying the RGW service
Configure and deploy a RGW service to handle object storage requests.
Procedure
- Log in to a Red Hat Ceph Storage Controller node.
Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:
service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
  - <host_1>
  - <host_2>
  ...
  - <host_n>
networks:
- <storage_network>
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
---
service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_ips_list:
  - <storage_network_vip>
  - <external_network_vip>
  virtual_interface_networks:
  - <storage_network>
Replace <host_1>, <host_2>, …, <host_n> with the names of the Ceph nodes where the RGW instances are deployed.
Replace <storage_network> with the network range used to resolve the interfaces where the radosgw processes are bound.
Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the internal Object Storage service endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.
- Save the file.
Enter the cephadm shell and mount the rgw_spec.yaml file:
$ cephadm shell -m /tmp/rgw_spec.yaml
Add RGW-related configuration to the cluster:
$ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
$ ceph config set global rgw_keystone_verify_ssl false
$ ceph config set global rgw_keystone_api_version 3
$ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
$ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
$ ceph config set global rgw_keystone_admin_domain default
$ ceph config set global rgw_keystone_admin_project service
$ ceph config set global rgw_keystone_admin_user swift
$ ceph config set global rgw_keystone_admin_password "$SWIFT_PASSWORD"
$ ceph config set global rgw_keystone_implicit_tenants true
$ ceph config set global rgw_s3_auth_use_keystone true
$ ceph config set global rgw_swift_versioning_enabled true
$ ceph config set global rgw_swift_enforce_content_length true
$ ceph config set global rgw_swift_account_in_url true
$ ceph config set global rgw_trust_forwarded_https true
$ ceph config set global rgw_max_attr_name_len 128
$ ceph config set global rgw_max_attrs_num_in_req 90
$ ceph config set global rgw_max_attr_size 1024
Replace <keystone_endpoint> with the Identity service internal endpoint. The EDPM nodes can resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
Replace $SWIFT_PASSWORD with the password assigned to the swift user in the previous procedure.
Deploy the RGW configuration using the Orchestrator:
$ ceph orch apply -i /mnt/rgw_spec.yaml
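Optionally, verify the deployment and run a basic end-to-end check. This sketch assumes the endpoints and roles from the Configuring RGW authentication procedure are in place; run the ceph command on a Red Hat Ceph Storage node and the openstack commands from a host with access to the RHOSO control plane:
$ ceph orch ls | grep -E 'rgw|ingress'
$ openstack container create test-container
$ openstack container list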