Chapter 2. Integrating Red Hat Ceph Storage

You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the following services to a Red Hat Ceph Storage cluster:

  • Block Storage service (cinder)
  • Image service (glance)
  • Object Storage service (swift)
  • Compute service (nova)
  • Shared File Systems service (manila)

If you want to deploy Red Hat Ceph Storage in a hyperconverged infrastructure (HCI) configuration, see Configuring a Hyperconverged Infrastructure environment.

To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:

  1. Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
  2. Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
  3. Obtain the Ceph File System Identifier.
  4. Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
  5. Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.

Prerequisites

  • Access to a Red Hat Ceph Storage cluster. If you intend to host Red Hat Ceph Storage on data plane nodes (HCI), then complete Configuring a Hyperconverged Infrastructure environment first.
  • The RHOSO control plane is installed on an operational Red Hat OpenShift Container Platform cluster.

2.1. Creating Red Hat Ceph Storage pools

Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster.

Procedure

  1. Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):

    $ for P in vms volumes images; do
      cephadm shell -- ceph osd pool create $P;
      cephadm shell -- ceph osd pool application enable $P rbd;
    done
  2. Optional: Create the cephfs volume if the Shared File Systems service (manila) is enabled in the control plane. This automatically enables the CephFS Metadata service (MDS) and creates the necessary data and metadata pools on the Ceph cluster:

    $ cephadm shell -- ceph fs volume create cephfs
  3. Optional: Deploy an NFS service on the Red Hat Ceph Storage cluster to use CephFS with NFS:

    $ cephadm shell -- ceph nfs cluster create cephfs \
    --ingress --virtual-ip=<vip> \
    --ingress-mode=haproxy-protocol
    • Replace <vip> with the IP address assigned to the NFS service. The NFS service should be isolated on a network that can be shared with all Red Hat OpenStack users. See NFS cluster and export management for more information about customizing the NFS service.

      Important

      When you deploy an NFS service for the Shared File Systems service, do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported. You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol. Otherwise, you cannot use IP-based access rules with the Shared File Systems service. For security in production environments, Red Hat recommends that you do not provide access to 0.0.0.0/0 on shares to mount them on client machines.

  4. Create a cephx key for RHOSO to use to access pools:

    $ cephadm shell -- \
       ceph auth add client.openstack \
            mgr 'allow *' \
            mon 'allow r' \
            osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rwx pool=volumes, allow rwx pool=images'
    Important

    If the Shared File Systems service is enabled in the control plane, replace the osd capabilities (caps) with the following:

    osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=cephfs.cephfs.data'

  5. Export the cephx key:

    $ cephadm shell -- ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring
  6. Export the configuration file:

    $ cephadm shell -- ceph config generate-minimal-conf > /etc/ceph/ceph.conf
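
Verification

  • Optional: Confirm that the pools, the client.openstack key, and the CephFS volume (if you created it) exist. This is a minimal check that assumes the names used in this procedure:

    $ cephadm shell -- ceph osd lspools
    $ cephadm shell -- ceph auth get client.openstack
    $ cephadm shell -- ceph fs ls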

2.2. Creating a Red Hat Ceph Storage secret

Create a secret so that services can access the Red Hat Ceph Storage cluster.

Procedure

  1. Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.
  2. Base64 encode these files and store them in KEY and CONF environment variables:

    $ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
    $ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)
  3. Create a YAML file to create the Secret resource.
  4. Using the environment variables, add the Secret configuration to the YAML file. The oc create command does not expand shell variables, so replace $KEY and $CONF with their values before you create the resource, for example by using the substitution sketch after this procedure:

    apiVersion: v1
    data:
      ceph.client.openstack.keyring: $KEY
      ceph.conf: $CONF
    kind: Secret
    metadata:
      name: ceph-conf-files
      namespace: openstack
    type: Opaque
  5. Save the YAML file.
  6. Create the Secret resource:

    $ oc create -f <secret_configuration_file>
    • Replace <secret_configuration_file> with the name of the YAML file you created.
Note

The examples in this section use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name.

For example, if the file name used is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
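
Because the oc create command does not expand shell variables, the $KEY and $CONF values from step 2 must be substituted into the file before you create the Secret resource. The following is a minimal sketch that writes the file with a shell heredoc so that the substitution happens automatically; the file name ceph_secret.yaml is an example:

    $ cat <<EOF > ceph_secret.yaml
    apiVersion: v1
    data:
      ceph.client.openstack.keyring: $KEY
      ceph.conf: $CONF
    kind: Secret
    metadata:
      name: ceph-conf-files
      namespace: openstack
    type: Opaque
    EOF
    $ oc create -f ceph_secret.yaml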

2.3. Obtaining the Red Hat Ceph Storage File System Identifier

The Red Hat Ceph Storage File System Identifier (FSID) is a unique identifier for the cluster. The FSID is used in configuration and verification of cluster interoperability with RHOSO.

Procedure

  • Extract the FSID from the Red Hat Ceph Storage secret:

    $ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
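
  • Optional: Confirm that the variable is populated by comparing it with the FSID reported by the cluster itself. This is a minimal cross-check that assumes you can still run cephadm on a Red Hat Ceph Storage node:

    $ echo $FSID
    $ cephadm shell -- ceph fsid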

2.4. Configuring the control plane to use the Red Hat Ceph Storage cluster

You must configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. Configuration includes the following tasks:

  1. Confirming the Red Hat Ceph Storage cluster and the associated services have the correct network configuration.
  2. Configuring the control plane to use the Red Hat Ceph Storage secret.
  3. Configuring the Image service (glance) to use the Red Hat Ceph Storage cluster.
  4. Configuring the Block Storage service (cinder) to use the Red Hat Ceph Storage cluster.
  5. Optional: Configuring the Shared File Systems service (manila) to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster.
Note

This example does not include configuring Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.

Procedure

  1. Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that it has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This is required to enable access to the Red Hat Ceph Storage cluster through the Storage network. For example commands that help with this and the following confirmation steps, see the verification sketch at the end of this procedure.

    Note

    It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.

  2. Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:

    glance:
        enabled: true
        template:
          databaseInstance: openstack
          storageClass: ""
          storageRequest: 10G
          glanceAPIs:
            default:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              networkAttachments:
              - storage
  3. Confirm the Block Storage service is configured to access the Storage network through MetalLB.
  4. Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare.
  5. Confirm the Compute service (nova) is configured to access the Storage network.
  6. Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
  7. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  8. Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret.

    The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - CinderVolume
              - GlanceAPI
              - ManilaShare
              extraVolType: Ceph
              volumes:
              - name: ceph
                projected:
                  sources:
                  - secret:
                      name: ceph-conf-files
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
  9. Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      glance:
        enabled: true
        template:
          databaseInstance: openstack
          databaseUser: glance
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:rbd
            enabled_import_methods=[web-download,glance-direct]
            [glance_store]
            default_backend = default_backend
            [default_backend]
            rbd_store_ceph_conf = /etc/ceph/ceph.conf
            store_description = "RBD backend"
            rbd_store_pool = images
            rbd_store_user = openstack
          glanceAPIs:
            default:
              preserveJobs: false
              replicas: 1
          secret: osp-secret
          storageClass: ""
          storageRequest: 10G
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - Glance
              extraVolType: Ceph
              volumes:
                - name: ceph
                  projected:
                    sources:
                    - secret:
                        name: ceph-conf-files
              mounts:
                - name: ceph
                  mountPath: "/etc/ceph"
                  readOnly: true
  10. Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        ...
      cinder:
        template:
          cinderVolumes:
            ceph:
              customServiceConfig: |
                [DEFAULT]
                enabled_backends=ceph
                [ceph]
                volume_backend_name=ceph
                volume_driver=cinder.volume.drivers.rbd.RBDDriver
                rbd_ceph_conf=/etc/ceph/ceph.conf
                rbd_user=openstack
                rbd_pool=volumes
                rbd_flatten_volume_from_snapshot=False
                rbd_secret_uuid=$FSID 1
    1
    Replace $FSID with the actual FSID of your Red Hat Ceph Storage cluster. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage FSID.
  11. Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila):

    The following example exposes native CephFS:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        ...
      manila:
        template:
          manilaAPI:
            customServiceConfig: |
              [DEFAULT]
              enabled_share_protocols=cephfs
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfs
                [cephfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=CEPHFS

    The following example exposes CephFS with NFS:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        ...
      manila:
        template:
          manilaAPI:
            customServiceConfig: |
              [DEFAULT]
              enabled_share_protocols=nfs
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfsnfs
                [cephfsnfs]
                driver_handles_share_servers=False
                share_backend_name=cephfsnfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=NFS
                cephfs_nfs_cluster_id=cephfs
  12. Apply the updates to the OpenStackControlPlane CR:

    $ oc apply -f openstack_control_plane.yaml
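
Verification

  • The confirmation steps at the beginning of this procedure can be performed with standard oc commands. The following is a minimal sketch; the grep pattern and the pod name are examples that depend on your environment:

    $ oc get nncp -o yaml | grep -A3 storage
    $ oc get network-attachment-definitions -n openstack
    $ oc get pods -n openstack | grep -E 'cinder-volume|glance'
    $ oc exec -n openstack <glance_pod> -- ls /etc/ceph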

2.5. Configuring the data plane to use the Red Hat Ceph Storage cluster

Configure the data plane to use the Red Hat Ceph Storage cluster.

Procedure

  1. Create a ConfigMap with additional content that is placed in the Compute service (nova) configuration directory /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ceph-nova
    data:
      03-ceph-nova.conf: | 1
        [libvirt]
        images_type=rbd
        images_rbd_pool=vms
        images_rbd_ceph_conf=/etc/ceph/ceph.conf
        images_rbd_glance_store_name=default_backend
        images_rbd_glance_copy_poll_interval=15
        images_rbd_glance_copy_timeout=600
        rbd_user=openstack
        rbd_secret_uuid=$FSID 2
    1
    This file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A filename that starts with 01 will be evaluated by the Compute service before a filename that starts with 02.
    2
    The $FSID value should contain the actual FSID as described in the Obtaining the Ceph FSID section. The FSID itself does not need to be considered secret.
  2. Create a custom version of the default nova service that uses the new ConfigMap, which in this case is called ceph-nova.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: nova-custom-ceph 1
    spec:
      label: dataplane-deployment-nova-custom-ceph
      configMaps:
        - ceph-nova
      secrets:
        - nova-cell1-compute-config
      playbook: osp.edpm.nova
    1
    The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service name will be overwritten during reconciliation.
  3. Apply the ConfigMap and custom service changes:

    $ oc create -f ceph-nova.yaml
  4. Update the OpenStackDataPlaneNodeSet services list to replace the nova service with the new custom service (in this case called nova-custom-ceph), add the ceph-client service, and use the extraMounts parameter to define access to the Ceph Storage secret.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    spec:
      ...
      roles:
        edpm-compute:
          ...
          services:
            - configure-network
            - validate-network
            - install-os
            - configure-os
            - run-os
            - ceph-client
            - ovn
            - libvirt
            - nova-custom-ceph
            - telemetry
    
      nodeTemplate:
        extraMounts:
        - extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
    Note

    The ceph-client service must be added before the libvirt and nova-custom-ceph services. The ceph-client service configures EDPM nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.

  5. Save the changes to the services list.
  6. Create an OpenStackDataPlaneDeployment CR:

    $ oc create -f <dataplanedeployment_cr_file>
    • Replace <dataplanedeployment_cr_file> with the name of your file.

      Note

      An example OpenStackDataPlaneDeployment CR file is available at https://github.com/openstack-k8s-operators/dataplane-operator/blob/main/config/samples/dataplane_v1beta1_openstackdataplanedeployment.yaml.
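
      For reference, a minimal OpenStackDataPlaneDeployment CR, modeled on the upstream sample linked above, looks like the following. The metadata name and the nodeSets entry are examples; the nodeSets entry must match the name of your OpenStackDataPlaneNodeSet CR:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneDeployment
      metadata:
        name: edpm-deployment
      spec:
        nodeSets:
          - openstack-edpm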

Result

When the nova-custom-ceph service Ansible job runs, the job copies overrides from the ConfigMaps to the Compute service hosts. It also runs virsh secret-* commands so that the libvirt service can retrieve the cephx secret by FSID.

  • Run the following command on an EDPM node after the job completes to confirm the job results:

    $ podman exec libvirt_virtsecretd virsh secret-get-value $FSID
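
  • Optional: Confirm that the Compute service configuration override was applied and that the Red Hat Ceph Storage client files were distributed to the node. This is a minimal check run on an EDPM node; it assumes the ConfigMap key 03-ceph-nova.conf used in this procedure and that the client files are placed under /etc/ceph:

    $ podman exec nova_compute cat /etc/nova/nova.conf.d/03-ceph-nova.conf
    $ ls /etc/ceph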

2.6. Configuring the Object Storage service (swift) with an external Ceph Object Gateway back end

You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end by completing the following high-level tasks:

  1. Configure RGW to use the Identity service (keystone) to verify users and their roles so that clients can authenticate with the external RGW service.
  2. Deploy and configure an RGW service to handle object storage requests.

You use the openstack client tool to configure the Object Storage service.

2.6.1. Configuring RGW authentication

You must configure RGW to use the Identity service (keystone) to verify users and their roles so that clients can authenticate with the external RGW service.

Prerequisites

  • You have deployed an operational OpenStack control plane.

Procedure

  1. Create the Object Storage service on the control plane:

    $ openstack service create --name swift --description "OpenStack Object Storage" object-store
  2. Create a user called swift:

    $ openstack user create --project service --password <swift_password> swift
    • Replace <swift_password> with the password to assign to the swift user.
  3. Create roles for the swift user:

    $ openstack role create swiftoperator
    $ openstack role create ResellerAdmin
  4. Assign the member and admin roles to the swift user in the service project:

    $ openstack role add --user swift --project service member
    $ openstack role add --user swift --project service admin
  5. Export the RGW endpoint IP addresses to variables and create control plane endpoints:

    $ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
    $ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
    $ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
    $ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;
    • Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services will access RGW.
    • Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users will write objects to RGW.

      Note

      Both endpoint IP addresses are virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends deployed in the Red Hat Ceph Storage cluster in the Configuring and deploying the RGW service procedure.

  6. Add the swiftoperator role to the control plane admin group:

    $ openstack role add --project admin --user admin swiftoperator
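
Verification

  • Optional: Confirm that the service, the role assignments for the swift user, and the endpoints exist. This is a minimal check that uses the same openstack client:

    $ openstack service show swift
    $ openstack role assignment list --user swift --project service --names
    $ openstack endpoint list --service object-store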

2.6.2. Configuring and deploying the RGW service

Configure and deploy a RGW service to handle object storage requests.

Procedure

  1. Log in to a Red Hat Ceph Storage Controller node.
  2. Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:

    service_type: rgw
    service_id: rgw
    service_name: rgw.rgw
    placement:
      hosts:
        - <host_1>
        - <host_2>
        ...
        - <host_n>
    networks:
    - <storage_network>
    spec:
      rgw_frontend_port: 8082
      rgw_realm: default
      rgw_zone: default
    ---
    service_type: ingress
    service_id: rgw.default
    service_name: ingress.rgw.default
    placement:
      count: 1
    spec:
      backend_service: rgw.rgw
      frontend_port: 8080
      monitor_port: 8999
      virtual_ips_list:
      - <storage_network_vip>
      - <external_network_vip>
      virtual_interface_networks:
      - <storage_network>
    • Replace <host_1>, <host_2>, …, <host_n> with the name of the Ceph nodes where the RGW instances are deployed.
    • Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound.
    • Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the Object Storage service internal endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
    • Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.
  3. Save the file.
  4. Enter the cephadm shell and mount the rgw_spec.yaml file.

    $ cephadm shell -m /tmp/rgw_spec.yaml
  5. Add RGW related configuration to the cluster:

    $ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
    $ ceph config set global rgw_keystone_verify_ssl false
    $ ceph config set global rgw_keystone_api_version 3
    $ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
    $ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
    $ ceph config set global rgw_keystone_admin_domain default
    $ ceph config set global rgw_keystone_admin_project service
    $ ceph config set global rgw_keystone_admin_user swift
    $ ceph config set global rgw_keystone_admin_password "$SWIFT_PASSWORD"
    $ ceph config set global rgw_keystone_implicit_tenants true
    $ ceph config set global rgw_s3_auth_use_keystone true
    $ ceph config set global rgw_swift_versioning_enabled true
    $ ceph config set global rgw_swift_enforce_content_length true
    $ ceph config set global rgw_swift_account_in_url true
    $ ceph config set global rgw_trust_forwarded_https true
    $ ceph config set global rgw_max_attr_name_len 128
    $ ceph config set global rgw_max_attrs_num_in_req 90
    $ ceph config set global rgw_max_attr_size 1024
    • Replace <keystone_endpoint> with the Identity service internal endpoint. The EDPM nodes are able to resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
    • Replace $SWIFT_PASSWORD with the password assigned to the swift user in the Configuring RGW authentication procedure.
  6. Deploy the RGW configuration using the Orchestrator:

    $ ceph orch apply -i /mnt/rgw_spec.yaml
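
Verification

  • Optional: Confirm that the RGW and ingress services are running and that the Object Storage service endpoint responds. This is a minimal sketch; run the ceph orch commands from the cephadm shell on a Red Hat Ceph Storage node and the openstack commands from a host with the openstack client configured. The container name is an example:

    $ ceph orch ls rgw
    $ ceph orch ls ingress
    $ openstack container create <container_name>
    $ openstack container list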