Chapter 3. Integrating Red Hat Ceph Storage
You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the Block Storage (cinder), Image (glance), Object Storage (swift), Compute (nova), and Shared File Systems (manila) services to the cluster.
To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:
- Verify that Red Hat Ceph Storage is deployed and all the required services are running.
- Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
- Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
- Obtain the Ceph file system identifier.
- Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
- Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.
3.1. Prerequisites
- Access to a Red Hat Ceph Storage cluster.
- The RHOSO control plane is installed on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
3.2. Creating Red Hat Ceph Storage pools
Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster.
- Considerations
If you are deploying the NFS service for the Shared File Systems service (manila):
- Do not select a custom port. Only the default NFS port of 2049 is supported, and you must enable the Red Hat Ceph Storage ingress service with ingress-mode set to haproxy-protocol when creating the NFS cluster.
- With Red Hat Ceph Storage 9, NFSv3 is not enabled by default. If you need NFSv3 support, you must include the --enable-nfsv3 parameter when creating the NFS cluster.
- For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.
Prerequisites
- Run all commands in this procedure from a Red Hat Ceph Storage node that has access to the Ceph cluster.
When creating pools, set the appropriate placement group (PG) number based on expected usage and cluster size. For more information, see "Placement Groups" in the Red Hat Ceph Storage Storage Strategies Guide.
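For example, on a small cluster you might create a pool with an explicit PG count instead of relying on the defaults. This is an illustrative sketch only; the 128/128 values are assumptions that you must size for your own cluster:

$ ceph osd pool create volumes 128 128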
Procedure
Enter the cephadm container client:

$ sudo cephadm shell

Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):

$ for P in vms volumes images; do ceph osd pool create $P; ceph osd pool application enable $P rbd; done

If you are using the Shared File Systems service, create the cephfs volume. This automatically enables the CephFS Metadata service (MDS) and creates the necessary data and metadata pools on the Ceph cluster:

$ ceph fs volume create cephfs

If you are using the Shared File Systems service with CephFS-NFS, deploy an NFS service on the Red Hat Ceph Storage cluster:
If you are deploying Red Hat Ceph Storage 7 or 8, run the following command:
$ ceph nfs cluster create cephfs \
  --ingress --virtual-ip=<vip> \
  --ingress-mode=haproxy-protocol

If you are deploying Red Hat Ceph Storage 9, run the following command:

$ ceph nfs cluster create cephfs \
  --ingress --virtual-ip=<vip> \
  --ingress-mode=haproxy-protocol \
  --enable-nfsv3

Replace <vip> with the IP address assigned to the NFS service. The NFS service should be on a dedicated network that isolates NFS traffic while allowing RHOSO users to attach their Compute instances to access shares.
Create a CephX key for RHOSO to use to access pools:
$ ceph auth add client.openstack \
  mgr 'allow *' \
  mon 'profile rbd' \
  osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images'

If you are using the Shared File Systems service, add osd caps for the CephFS data pool by using the following command instead:

$ ceph auth add client.openstack \
  mgr 'allow *' \
  mon 'profile rbd' \
  osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.data'
Export the CephX key:
$ ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring

Export the configuration file:
$ ceph config generate-minimal-conf > /etc/ceph/ceph.conf
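Optional: Before copying the files off the node, you can confirm that both exports contain data. These are standard shell checks; the paths match the commands above:

$ ls -l /etc/ceph/ceph.client.openstack.keyring /etc/ceph/ceph.conf
$ grep fsid /etc/ceph/ceph.conf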
3.3. Creating a Red Hat Ceph Storage secret
Create a secret so that services can access the Red Hat Ceph Storage cluster.
The procedure examples use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name.
For example, if the file name for the username openstack2 is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
Procedure
Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.

Base64 encode these files and store them in KEY and CONF environment variables:

$ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
$ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)
Create a YAML file to create the Secret resource. Using the environment variables, add the Secret configuration to the YAML file:

apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque

Save the YAML file.
Create the Secret resource:

$ oc create -f <secret_configuration_file>

Replace <secret_configuration_file> with the name of the YAML file you created.
3.4. Obtaining the Red Hat Ceph Storage file system identifier
The Red Hat Ceph Storage file system identifier (FSID) is a unique identifier for the cluster. Use the FSID to configure and verify cluster interoperability with Red Hat OpenStack Services on OpenShift (RHOSO).
Procedure
Extract the FSID from the Red Hat Ceph Storage secret:
$ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
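Optional: You can echo the variable to confirm that a plausible UUID was extracted. The value shown is illustrative only:

$ echo $FSID
63bdd226-fbe6-5f31-956e-7028e99f1ee1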
3.5. Configuring the control plane to use the Red Hat Ceph Storage cluster
Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. This process includes confirming network configuration, configuring the control plane to use the Red Hat Ceph Storage secret, and setting up Image (glance), Block Storage (cinder), and optionally Shared File Systems (manila) services.
This example does not include configuring Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.
Procedure
Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that the Storage network has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This is required to enable access to the Red Hat Ceph Storage cluster through the Storage network.

It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.

Note: If it does not impact workload performance, the Storage network can differ from the external Red Hat Ceph Storage cluster public_network and use routed (L3) connectivity, as long as the appropriate routes are added to the Storage network to reach the external Red Hat Ceph Storage cluster public_network.

Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:

glance:
  enabled: true
  template:
    databaseInstance: openstack
    storage:
      storageRequest: 10G
    glanceAPIs:
      default:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage
- Confirm the Block Storage service is configured to access the Storage network through MetalLB.
- Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare.
- Confirm the Compute service (nova) is configured to access the Storage network.
- Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.

Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret.

The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  - name: v1
    region: r1
    extraVol:
    - propagation:
      - CinderVolume
      - GlanceAPI
      - ManilaShare
      extraVolType: Ceph
      volumes:
      - name: ceph
        projected:
          sources:
          - secret:
              name: <ceph-conf-files>
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true

Replace <ceph-conf-files> with the name of your Secret CR created in Creating a Red Hat Ceph Storage secret.
Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:rbd
        [glance_store]
        default_backend = <backend_name>
        [<backend_name>]
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        store_description = "RBD backend"
        rbd_store_pool = images
        rbd_store_user = openstack
      databaseInstance: openstack
      databaseAccount: glance
      secret: osp-secret
      storage:
        storageRequest: 10G
  extraMounts:
  - name: v1
    region: r1
    extraVol:
    - propagation:
      - GlanceAPI
      extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true

Replace <backend_name> with the name of the default back end.

When you use Red Hat Ceph Storage as a back end for the Image service, image-conversion is enabled by default. For more information, see Planning storage and shared file systems in Planning your deployment.
Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster. For information about using Block Storage backups, see Configuring the Block Storage backup service.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  cinder:
    template:
      cinderVolumes:
        ceph:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends=ceph
            [ceph]
            volume_backend_name=ceph
            volume_driver=cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf=/etc/ceph/ceph.conf
            rbd_user=openstack
            rbd_pool=volumes
            rbd_flatten_volume_from_snapshot=False
            rbd_secret_uuid=<$FSID>

Replace <$FSID> with the actual FSID. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage FSID.
Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila).

The following example exposes native CephFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=cephfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfs
            [cephfs]
            driver_handles_share_servers=False
            share_backend_name=cephfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=CEPHFS

The following example exposes CephFS with NFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=nfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfsnfs
            [cephfsnfs]
            driver_handles_share_servers=False
            share_backend_name=cephfsnfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=NFS
            cephfs_nfs_cluster_id=cephfs

Apply the updates to the OpenStackControlPlane CR:

$ oc apply -f openstack_control_plane.yaml
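Optional: After you apply the updates, you can confirm that the control plane reconciles cleanly by reusing the condition query from the troubleshooting section later in this chapter; empty output means all conditions are True:

$ oc get -n openstack OpenStackControlPlane \
  -o jsonpath="{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"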
3.6. Configuring the data plane to use the Red Hat Ceph Storage cluster
Configure the data plane to use the Red Hat Ceph Storage cluster.
Procedure
Create a ConfigMap with additional content for the Compute service (nova) configuration directory /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova
data:
  <03-ceph-nova.conf>: |
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    images_rbd_glance_store_name=<backend_name>
    images_rbd_glance_copy_poll_interval=15
    images_rbd_glance_copy_timeout=600
    rbd_user=openstack
    rbd_secret_uuid=<$FSID>

- Replace <03-ceph-nova.conf> with your file name. This file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A file name that starts with 01 is evaluated by the Compute service before a file name that starts with 02. When the same configuration option occurs in multiple files, the last one read wins.
- Replace <backend_name> with the name of the back end specified in the glance template of the OpenStackControlPlane CR.
- Replace <$FSID> with the actual FSID, as described in the Obtaining the Ceph FSID section. The FSID itself does not need to be considered secret.
Create a custom version of the default nova service to use the new ConfigMap, which in this case is called ceph-nova:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ceph
spec:
  caCerts: combined-ca-bundle
  edpmServiceType: nova
  dataSources:
  - configMapRef:
      name: ceph-nova
  - secretRef:
      name: nova-cell1-compute-config
  - secretRef:
      name: nova-migration-ssh-key
  playbook: osp.edpm.nova

The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service name is overwritten during reconciliation.
Apply the ConfigMap and custom service changes:

$ oc create -f ceph-nova.yaml

In your OpenStackDataPlaneNodeSet CR, update the list of services by adding the ceph-client service and replacing the default nova service with the new custom service, for example nova-custom-ceph. Add the extraMounts parameter to define access to the Ceph Storage secret.

Example:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  services:
  - redhat
  - bootstrap
  - download-cache
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - reboot-os
  - install-certs
  - ceph-client
  - ovn
  - neutron-metadata
  - libvirt
  - nova-custom-ceph
  - telemetry
  nodeTemplate:
    extraMounts:
    - extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
You must add the ceph-client service before the ovn, libvirt, and nova-custom-ceph services in the list of services. The ceph-client service configures data plane nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.

This example might not list all of the services in your environment. You can run the following command to verify the list of services in your environment:

$ oc get -n openstack crd/openstackdataplanenodesets.dataplane.openstack.org -o yaml | yq -r '.spec.versions.[].schema.openAPIV3Schema.properties.spec.properties.services.default'

For more information, see Data plane services.
- Save the changes to the services list.
Create an OpenStackDataPlaneDeployment CR:

$ oc create -f <dataplanedeployment_cr_file>

Replace <dataplanedeployment_cr_file> with the name of your file.

The Ansible job for the nova-custom-ceph service copies overrides from the ConfigMap to the Compute service hosts. The Ansible job also uses virsh secret-* commands so the libvirt service retrieves the cephx secret by FSID.
Verification
Run the following command outside of a nova_compute container to confirm the results of the Ansible job:

$ sudo virsh secret-get-value $FSID
3.7. Configuring an external Ceph Object Gateway for storage
You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end. You use the openstack client tool to configure the Object Storage service.
Procedure
- Configure the RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
- Deploy and configure an RGW service to handle object storage requests.
3.7.1. Configuring RGW authentication
You must configure RGW to verify users and their roles with the Identity service (keystone) so that users can authenticate with the external RGW service.
Prerequisites
- You have deployed an operational OpenStack control plane.
Procedure
Create the Object Storage service on the control plane:
$ openstack service create --name swift --description "OpenStack Object Storage" object-store

Create a user called swift:

$ openstack user create --project service --password <swift_password> swift
Replace <swift_password> with the password to assign to the swift user.
Create roles for the swift user:

$ openstack role create swiftoperator
$ openstack role create ResellerAdmin

Add the swift user to system roles:

$ openstack role add --user swift --project service member
$ openstack role add --user swift --project service admin

Export the RGW endpoint IP addresses to variables and create control plane endpoints:

$ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
$ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
$ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
$ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;
- Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services access RGW.
- Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users write objects to RGW.

Note: Both endpoint IP addresses represent the virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends deployed in the Red Hat Ceph Storage cluster in the procedure Configuring and deploying the RGW service.
Add the swiftoperator role to the control plane admin group:

$ openstack role add --project admin --user admin swiftoperator
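Optional: You can confirm the role assignments before continuing. This sketch uses the standard openstack client; the user, project, and role names match the ones created in this procedure:

$ openstack role assignment list --user swift --project service --names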
3.7.2. Configuring and deploying the RGW service
Configure and deploy an RGW service to handle object storage requests.
Procedure
- Log in to a Red Hat Ceph Storage Controller node.
Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:

service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
  - <host_1>
  - <host_2>
  ...
  - <host_n>
networks:
- <storage_network>
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
---
service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_ips_list:
  - <storage_network_vip>
  - <external_network_vip>
  virtual_interface_networks:
  - <storage_network>
- Replace <host_1>, <host_2>, ..., <host_n> with the names of the Ceph nodes where the RGW instances are deployed.
- Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound.
- Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the Object Storage service internal endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
- Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.
- Save the file.
Enter the cephadm shell and mount the rgw_spec.yaml file:

$ cephadm shell -m /tmp/rgw_spec.yaml

Add RGW-related configuration to the cluster:

$ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
$ ceph config set global rgw_keystone_verify_ssl false
$ ceph config set global rgw_keystone_api_version 3
$ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
$ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
$ ceph config set global rgw_keystone_admin_domain default
$ ceph config set global rgw_keystone_admin_project service
$ ceph config set global rgw_keystone_admin_user swift
$ ceph config set global rgw_keystone_admin_password "$SWIFT_PASSWORD"
$ ceph config set global rgw_keystone_implicit_tenants true
$ ceph config set global rgw_s3_auth_use_keystone true
$ ceph config set global rgw_swift_versioning_enabled true
$ ceph config set global rgw_swift_enforce_content_length true
$ ceph config set global rgw_swift_account_in_url true
$ ceph config set global rgw_trust_forwarded_https true
$ ceph config set global rgw_max_attr_name_len 128
$ ceph config set global rgw_max_attrs_num_in_req 90
$ ceph config set global rgw_max_attr_size 1024
- Replace <keystone_endpoint> with the Identity service internal endpoint. The data plane nodes can resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
- Replace $SWIFT_PASSWORD with the password assigned to the swift user in the previous step.
Deploy the RGW configuration using the Orchestrator:
$ ceph orch apply -i /mnt/rgw_spec.yaml
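Optional: You can confirm that the Orchestrator scheduled the RGW and ingress services. These are standard cephadm Orchestrator queries; the service names match the example specification:

$ ceph orch ls | grep -E 'rgw|ingress'
$ ceph orch ps | grep rgw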
3.8. Configuring RGW with TLS for an external Red Hat Ceph Storage cluster
Configure RGW with TLS so that control plane services can resolve external Red Hat Ceph Storage cluster host names. This procedure configures Ceph RGW to emulate the Object Storage service (swift).
In this procedure, you configure the following:
- A DNS zone and certificate so that a URL such as https://rgw-external.ceph.local:8080 is registered as an Identity service (keystone) endpoint, and RHOSO can securely access the HTTPS endpoint.
- A DNSData domain, for example ceph.local, so that pods can map host names to IP addresses for services that are not hosted on RHOCP.
- DNS forwarding for the domain with the CoreDNS service.
- A certificate by using the RHOSO public root certificate authority.
You must copy the certificate and key file created in RHOCP to the nodes hosting RGW so they can become part of the Ceph Orchestrator RGW specification.
- Considerations
- DNSData custom resource: Creating a DNSData CR creates a new dnsmasq pod that is able to read and resolve the DNS information in the associated DNSData CR.
- Certificate authority: The certificate issuerRef is set to the root certificate authority (CA) of RHOSO. This CA is automatically created when the control plane is deployed. The default name of the CA is rootca-public. The RHOSO pods trust this new certificate because the root CA is used.
Procedure
Create a DNSData custom resource (CR) for the external Ceph cluster.

Example DNSData CR:

apiVersion: network.openstack.org/v1beta1
kind: DNSData
metadata:
  labels:
    component: ceph-storage
    service: ceph
  name: ceph-storage
  namespace: openstack
spec:
  dnsDataLabelSelectorValue: dnsdata
  hosts:
  - hostnames:
    - ceph-rgw-internal-vip.ceph.local
    ip: <172.18.0.2>
  - hostnames:
    - ceph-rgw-external-vip.ceph.local
    ip: <10.10.10.2>
- Replace <172.18.0.2> with the correct host for your environment. In this example, the host at the IP address 172.18.0.2 hosts the Ceph RGW endpoint for access on the private storage network. This host is passed in the CR so that DNS A and PTR records are created, which enables the host to be accessed by using the host name ceph-rgw-internal-vip.ceph.local.
- Replace <10.10.10.2> with the correct host for your environment. In this example, the host at the IP address 10.10.10.2 hosts the Ceph RGW endpoint for access on the external network. This host is passed in the CR so that DNS A and PTR records are created, which enables the host to be accessed by using the host name ceph-rgw-external-vip.ceph.local.
Apply the CR to your environment:
$ oc apply -f <ceph_dns_yaml>

Replace <ceph_dns_yaml> with the name of the DNSData CR file.
Update the CoreDNS CR to configure DNS forwarding to the dnsmasq service for requests to the ceph.local domain. For more information about DNS forwarding, see Using DNS forwarding in the RHOCP Networking guide.

List the openstack domain DNS cluster IP address:

$ oc get svc dnsmasq-dns

Example output:

dnsmasq-dns   LoadBalancer   10.217.5.130   192.168.122.80   53:30185/UDP   160m

Record the DNS cluster IP address from the command output for DNS forwarding.
List the CoreDNS CR:

$ oc -n openshift-dns describe dns.operator/default

Edit the CoreDNS CR and add the servers configuration to the spec section with the DNS cluster IP address.

Example CoreDNS CR updated with the DNS cluster IP address:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2024-03-25T02:49:24Z"
  finalizers:
  - dns.operator.openshift.io/dns-controller
  generation: 3
  name: default
  resourceVersion: "164142"
  uid: 860b0e61-a48a-470e-8684-3b23118e6083
spec:
  cache:
    negativeTTL: 0s
    positiveTTL: 0s
  logLevel: Normal
  nodePlacement: {}
  operatorLogLevel: Normal
  servers:
  - forwardPlugin:
      policy: Random
      upstreams:
      - 10.217.5.130:53
    name: ceph
    zones:
    - ceph.local
  upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf

where:
- servers: Defines DNS forwarding configurations for specific domains.
- upstreams: Specifies the DNS cluster IP address to which DNS queries are forwarded.
- 10.217.5.130:53: The DNS cluster IP address recorded from the oc get svc dnsmasq-dns command.
- zones: Defines the domain for which DNS queries are forwarded to the upstream server.
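Optional: You can test the forwarding path from inside the openstack namespace before continuing. This sketch assumes the openstackclient pod exists and that its image includes a lookup utility such as nslookup:

$ oc rsh -n openstack openstackclient nslookup ceph-rgw-internal-vip.ceph.local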
Create a Certificate CR with the host names from the DNSData CR.

Example Certificate CR:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-ceph-rgw
  namespace: openstack
spec:
  duration: 43800h0m0s
  issuerRef: {'group': 'cert-manager.io', 'kind': 'Issuer', 'name': 'rootca-public'}
  secretName: cert-ceph-rgw
  dnsNames:
  - ceph-rgw-internal-vip.ceph.local
  - ceph-rgw-external-vip.ceph.local

Apply the CR to your environment:

$ oc apply -f <ceph_cert_yaml>

Replace <ceph_cert_yaml> with the name of the Certificate CR file.
Extract the certificate and key data from the secret created when the Certificate CR was applied:

$ oc get secret <ceph_cert_secret_name> -o yaml

Replace <ceph_cert_secret_name> with the name used in the secretName field of your Certificate CR.

Example output:

[stack@osp-storage-04 ~]$ oc get secret cert-ceph-rgw -o yaml
apiVersion: v1
data:
  ca.crt: <CA>
  tls.crt: <b64cert>
  tls.key: <b64key>
kind: Secret

The <b64cert> and <b64key> values are the base64-encoded certificate and key strings that you must use in the next step.
Extract and base64 decode the certificate and key information obtained in the previous step.
Extract and decode the certificate:
$ oc get secret <ceph_cert_secret_name> -o yaml | awk '/tls.crt/ {print $2}' | base64 -d

Extract and decode the key:

$ oc get secret <ceph_cert_secret_name> -o yaml | awk '/tls.key/ {print $2}' | base64 -d

If you are using Red Hat Ceph Storage 7 or 8, concatenate the decoded certificate and key values with no spaces in between, and save them in the Ceph Object Gateway service specification.

The rgw section of the specification file looks like the following:

service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
  - host1
  - host2
networks:
- 172.18.0.0/24
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
    <redacted>
    -----END CERTIFICATE-----
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
    <redacted>
    -----END RSA PRIVATE KEY-----

The ingress section of the specification file looks like the following:

service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_interface_networks:
  - 172.18.0.0/24
  virtual_ip: 172.18.0.2/24
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
    <redacted>
    -----END CERTIFICATE-----
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
    <redacted>
    -----END RSA PRIVATE KEY-----

where:

- rgw_frontend_ssl_certificate: Contains the base64-decoded values from both <b64cert> and <b64key> in the previous step, with no spaces in between.
- ssl_cert: Contains the base64-decoded values from both <b64cert> and <b64key> in the previous step, with no spaces in between.
If you are using Red Hat Ceph Storage 9, save the decoded certificate and key values separately in the Ceph Object Gateway service specification.
The rgw section of the specification file looks like the following:

service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
  - host1
  - host2
networks:
- 172.18.0.0/24
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
  ssl: true
  certificate_source: inline
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
    <redacted>
    -----END CERTIFICATE-----
  ssl_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
    <redacted>
    -----END PRIVATE KEY-----

The ingress section of the specification file looks like the following:

service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_interface_networks:
  - 172.18.0.0/24
  virtual_ip: 172.18.0.2/24
  ssl: true
  certificate_source: inline
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
    <redacted>
    -----END CERTIFICATE-----
  ssl_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
    <redacted>
    -----END PRIVATE KEY-----

where:

- certificate_source: inline: Specifies that the certificate and key are embedded directly in the specification.
- ssl_cert: Contains the base64-decoded certificate value from <b64cert> in the previous step.
- ssl_key: Contains the base64-decoded key value from <b64key> in the previous step.

Note: In Red Hat Ceph Storage 9, the rgw_frontend_ssl_certificate field, which required concatenated certificate and key values, is deprecated. New deployments must use the separate ssl_cert and ssl_key fields.
Use the procedure "Deploying the Ceph Object Gateway using the service specification" to deploy Ceph RGW with SSL. For more information, see the Red Hat Ceph Storage Operations Guide.
Connect to the openstackclient pod and verify that DNS forwarding has been successfully configured:
$ curl --trace - <host_name>

Replace <host_name> with the name of the external host previously added to the DNSData CR.

Example output:

sh-5.1$ curl https://rgw-external-vip.ceph.local:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
In this example, the openstackclient pod successfully resolved the host name, and no SSL verification errors were encountered.
3.9. Enabling deferred deletion for volumes or images with dependencies
Enable deferred deletion in the Ceph RBD Clone v2 API to delete volumes or images with dependencies. The volume or image is removed from the service but stored in a Ceph RBD trash area until dependencies are resolved. The volume or image is only deleted from Ceph RBD when there are no dependencies.
The trash area maintained by deferred deletion does not provide restoration functionality. When volumes or images are moved to the trash area, they cannot be recovered or restored. The trash area serves only as a holding mechanism for the volume or image until all dependencies have been removed. The volume or image will be permanently deleted once no dependencies exist.
- Limitations
- When you enable Clone v2 deferred deletion in existing environments, the feature only applies to new volumes or images.
Procedure
Verify which Ceph version the clients in your Ceph Storage cluster are running:
$ cephadm shell -- ceph osd get-require-min-compat-client

Example output:

luminous

To set the cluster to use the Clone v2 API and the deferred deletion feature by default, set min-compat-client to mimic. Only clients in the cluster that are running Ceph version 13.2.x (Mimic) or later can access images with dependencies:

$ cephadm shell -- ceph osd set-require-min-compat-client mimic
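Optional: You can re-run the earlier query to confirm that the change took effect. Under this assumption the expected output is mimic:

$ cephadm shell -- ceph osd get-require-min-compat-client
mimic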
Schedule an interval for trash purge in minutes by using the m suffix:

$ rbd trash purge schedule add --pool <pool> <30m>
- Replace <pool> with the name of the associated storage pool, for example, volumes in the Block Storage service.
- Replace <30m> with the interval in minutes that you want to specify for trash purge.
Verify a trash purge schedule has been set for the pool:
$ rbd trash purge schedule list --pool <pool>
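Optional: To inspect what deferred deletion is currently holding for a pool, you can list the trash area. rbd trash ls is a standard RBD command; the pool placeholder is the same as above:

$ rbd trash ls --pool <pool>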
3.10. Troubleshooting Red Hat Ceph Storage RBD integration
If Compute (nova), Block Storage (cinder), or Image (glance) service integration with Red Hat Ceph Storage RBD fails, use this incremental troubleshooting procedure. This example focuses on Image service integration, but you can adapt it for other services.
If you discover the cause of your issue before completing this procedure, it is not necessary to do any subsequent steps. You can exit this procedure and resolve the issue.
Procedure
Determine if any parts of the control plane are not properly deployed by assessing whether the Ready condition is not True:

$ oc get -n openstack OpenStackControlPlane \
  -o jsonpath="{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"

If you identify a service that is not properly deployed, check the status of the service.

The following example checks the status of the Compute service:

$ oc get -n openstack Nova/nova \
  -o jsonpath="{range .status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"

You can check the status of all deployed services:

$ oc get pods -n openstack

You can check the logs of a specific service:

$ oc logs -n openstack <service_pod_name>

Replace <service_pod_name> with the name of the service pod you want to check.
If you identify an operator that is not properly deployed, check the status of the operator:
$ oc get pods -n openstack-operators -l openstack.org/operator-name

You can check the operator logs:

$ oc logs -n openstack-operators -l openstack.org/operator-name=<operator_name>
Check the Status of the data plane deployment:

$ oc get -n openstack OpenStackDataPlaneDeployment

If the Status of the data plane deployment is False, check the logs of the associated Ansible job:

$ oc logs -n openstack job/<ansible_job_name>

Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the oc get -n openstack OpenStackDataPlaneDeployment command output.
Check the Status of the data plane node set deployment:

$ oc get -n openstack OpenStackDataPlaneNodeSet

If the Status of the data plane node set deployment is False, check the logs of the associated Ansible job:

$ oc logs -n openstack job/<ansible_job_name>

Replace <ansible_job_name> with the name of the associated job. It is listed in the Message field of the oc get -n openstack OpenStackDataPlaneNodeSet command output.
If any pods are in the CrashLoopBackOff state, you can duplicate them for troubleshooting purposes:

$ oc debug <pod_name>

Replace <pod_name> with the name of the pod to duplicate.
Optional: You can route traffic to the duplicate pod during the debug process:

$ oc debug <pod_name> --keep-labels=true

Optional: You can use the oc debug command in the following object debugging activities:

- To run /bin/sh on a container other than the first one, the command's default behavior, use the command form oc debug --container <container_name> <pod_name>. This is useful for pods like the API where the first container is tailing a file and the second container is the one you want to debug. If you use this command form, you must first use the command oc get pods | grep <search_string> to find the container name.
- To debug any resource that creates pods, such as Deployments, StatefulSets, and Nodes, use the command form oc debug <resource_type>/<resource_name>. An example of debugging a StatefulSet would be oc debug StatefulSet/cinder-scheduler.
Connect to the pod and confirm that the ceph.client.openstack.keyring and ceph.conf files are present in the /etc/ceph directory:

$ oc rsh <pod_name>

- Replace <pod_name> with the name of the applicable pod.
- If the Ceph configuration files are missing, check the extraMounts parameter in your OpenStackControlPlane CR.
Confirm the pod has a network connection to the Red Hat Ceph Storage cluster by connecting to the IP and port of a Ceph Monitor from the pod. The IP and port information is located in /etc/ceph/ceph.conf.

The following is an example of this process:

$ oc get pods | grep glance | grep external-api-0
glance-06f7a-default-external-api-0   3/3   Running   0   2d3h

$ oc debug --container glance-api glance-06f7a-default-external-api-0
Starting pod/glance-06f7a-default-external-api-0-debug-p24v9, command was: /usr/bin/dumb-init --single-child -- /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start
Pod IP: 192.168.25.50
If you don't see a command prompt, try pressing enter.

sh-5.1# cat /etc/ceph/ceph.conf
# Ansible managed
[global]
fsid = 63bdd226-fbe6-5f31-956e-7028e99f1ee1
mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log

sh-5.1# python3
Python 3.9.19 (main, Jul 18 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> s = socket.socket()
>>> ip="192.168.122.100"
>>> port=3300
>>> s.connect((ip,port))
>>>

Optional: If you cannot connect to a Ceph Monitor, troubleshoot the network connection between the cluster and pod. The previous example uses a Python socket to connect to the IP and port of the Red Hat Ceph Storage cluster from the ceph.conf file.

There are two potential outcomes from the execution of the s.connect((ip,port)) function:
s.connect((ip,port))function:- If the command executes successfully and there is no error similar to the following example, the network connection between the pod and cluster is functioning correctly. If the connection is functioning correctly, the command execution will provide no return value at all.
If the command takes a long time to execute and returns an error similar to the following example, the network connection between the pod and cluster is not functioning correctly. It should be investigated further to troubleshoot the connection.
Traceback (most recent call last): File "<stdin>", line 1, in <module> TimeoutError: [Errno 110] Connection timed out
Examine the cephx key as shown in the following example:

bash-5.1$ cat /etc/ceph/ceph.client.openstack.keyring
[client.openstack]
    key = "<redacted>"
    caps mgr = allow *
    caps mon = profile rbd
    caps osd = profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images

List the contents of a pool from the caps osd parameter as shown in the following example:

$ /usr/bin/rbd --conf /etc/ceph/ceph.conf \
  --keyring /etc/ceph/ceph.client.openstack.keyring \
  --cluster ceph --id openstack \
  ls -l -p <pool_name> | wc -l
- Replace <pool_name> with the name of the required Red Hat Ceph Storage pool.
- If this command returns the number 0 or greater, the cephx key provides adequate permissions to connect to, and read information from, the Red Hat Ceph Storage cluster.
- If this command does not complete but network connectivity to the cluster was confirmed, work with the Ceph administrator to obtain the correct cephx keyring.
- Check if there is an MTU mismatch on the Storage network. If the network is using jumbo frames (an MTU value of 9000), all switch ports between servers using the interface must be updated to support jumbo frames. If this change is not made on the switch, problems can occur at the Ceph application layer. Verify all hosts using the network can communicate at the desired MTU with a command such as ping -M do -s 8972 <ip_address>.
Send test data to the images pool on the Ceph cluster.

The following is an example of performing this task:

# DATA=$(date | md5sum | cut -c-12)
# POOL=images
# RBD="/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
# $RBD create --size 1024 $POOL/$DATA

Tip: It is possible to be able to read data from the cluster but not have permissions to write data to it, even if write permission was granted in the cephx keyring. If you have write permissions but cannot write data to the cluster, the cluster might be overloaded and unable to write new data.

In one such case, the rbd command did not complete successfully and was canceled. It was subsequently confirmed that the cluster itself did not have the resources to write new data. The issue was resolved on the cluster itself; there was nothing incorrect with the client configuration.
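If you created a test image with the example above, you can remove it afterward so the test data does not remain in the images pool. This assumes the DATA, POOL, and RBD variables are still set from the previous example:

# $RBD rm $POOL/$DATA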
3.11. Troubleshooting Red Hat Ceph Storage clients
Put Red Hat OpenStack Services on OpenShift (RHOSO) Ceph clients in debug mode to troubleshoot their operation.
Procedure
- Locate the Red Hat Ceph Storage configuration file mapped in the Red Hat OpenShift secret created in Creating a Red Hat Ceph Storage secret.
Modify the contents of the configuration file to include troubleshooting-related configuration.
The following is an example of troubleshooting-related configuration:

[client.openstack]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/guest-$pid.log
debug ms = 1
debug rbd = 20
log to file = true

Note: For more information about troubleshooting, see the Red Hat Ceph Storage Troubleshooting Guide.
- Update the secret with the new content.
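One hedged way to perform this update, assuming the secret is named ceph-conf-files as in the earlier procedure and the modified files are in the current directory, is to regenerate the secret manifest from the files and apply it over the existing object:

$ oc create secret generic ceph-conf-files -n openstack \
  --from-file=ceph.conf --from-file=ceph.client.openstack.keyring \
  --dry-run=client -o yaml | oc apply -f -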
3.12. Customizing Red Hat Ceph Storage configurations
Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7, 8, and 9. For information about customizing and managing Ceph Storage, see the documentation sets for your Ceph Storage version.