Chapter 2. CephFS through NFS installation
2.1. CephFS with NFS-Ganesha deployment
A typical Ceph file system (CephFS) through NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations:
- OpenStack Controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services can coexist on the same node or can have one or more dedicated nodes.
- Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes.
- An isolated StorageNFS network that provides access from projects to the NFS-Ganesha services for NFS share provisioning.
The Shared File Systems service (manila) provides APIs that allow projects to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver, enables you to use the Shared File Systems service with CephFS as a back end. RHOSP director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented through the NFS 4.1 protocol.
Using RHOSP director to deploy the Shared File Systems service with a CephFS back end on the overcloud automatically creates the required storage network defined in the heat template. For more information about network planning, see Overcloud networks in the Director Installation and Usage guide.
Although you can manually configure the Shared File Systems service by editing the /etc/manila/manila.conf file on the node, RHOSP director can override any settings in future overcloud updates. The recommended method for configuring a Shared File Systems back end is through director.
Currently, you can define only one CephFS back end at a time in director.
CephFS through NFS
2.1.1. Requirements for CephFS through NFS
CephFS through NFS requires a Red Hat OpenStack Platform (RHOSP) version 13 or later environment, which can be an existing or a new environment.
- For RHOSP versions 13, 14, and 15, CephFS works with Red Hat Ceph Storage (RHCS) version 3.
- For RHOSP version 16 or later, CephFS works with Red Hat Ceph Storage (RHCS) version 4.1 or later.
For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph Guide.
Prerequisites
- You install the Shared File Systems service on Controller nodes, as is the default behavior.
- You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller nodes.
- You configure only a single instance of a CephFS back end to use the Shared File Systems service. You can use other non-CephFS back ends with the single CephFS back end.
- You use RHOSP director to create an extra network (StorageNFS) for the storage traffic.
- You configure a new RHCS version 4.1 or later cluster at the same time as CephFS through NFS.
2.1.3. Isolated network used by CephFS through NFS
CephFS through NFS deployments use an extra isolated network, StorageNFS. This network is deployed so users can mount shares over NFS on that network without accessing the Storage or Storage Management networks which are reserved for infrastructure traffic.
For more information about isolating networks, see Basic network isolation in the Advanced Overcloud Customization guide.
To install CephFS through NFS, complete the following procedures:

- Install the ceph-ansible package. See Section 2.2.1, “Installing the ceph-ansible package”.
- Prepare the overcloud container images with the openstack overcloud container image prepare command. See Section 2.2.2, “Preparing overcloud container images”.
- Generate the custom roles file, roles_data.yaml, and the network_data.yaml file. See Section 2.2.2.1, “Generating the custom roles file”.
- Deploy Ceph, the Shared File Systems service (manila), and CephFS by using the openstack overcloud deploy command with custom roles and environments. See Section 2.2.3, “Deploying the updated environment”.
- Configure the isolated StorageNFS network and create the default share type. See Section 2.2.4, “Completing post-deployment configuration”.
Examples use the standard stack user in the Red Hat OpenStack Platform (RHOSP) environment.
Perform tasks as part of a RHOSP installation or environment update.
2.2.1. Installing the ceph-ansible package
Install the ceph-ansible package on an undercloud node to deploy containerized Ceph.
Procedure
- Log in to an undercloud node as the stack user.
- Install the ceph-ansible package:

  ```
  [stack@undercloud-0 ~]$ sudo dnf install -y ceph-ansible
  [stack@undercloud-0 ~]$ sudo dnf list ceph-ansible
  ...
  Installed Packages
  ceph-ansible.noarch  3.1.0-0.1.el7
  ```
2.2.2. Preparing overcloud container images
Because all services are containerized in Red Hat OpenStack Platform (RHOSP), you must prepare container images for the overcloud by using the openstack overcloud container image prepare command. Enter this command with the additional options to add default images for the ceph and manila services to the container registry. The Ceph MDS and NFS-Ganesha services use the same Ceph base container image.
For more information about container images, see Container Images for Additional Services in the Director Installation and Usage guide.
Procedure
- From the undercloud as the stack user, enter the openstack overcloud container image prepare command with -e to include the following environment files:

  ```
  $ openstack overcloud container image prepare \
    ...
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/manila.yaml \
    ...
  ```

- Use grep to verify that the default images for the ceph and manila services are available in the containers-default-parameters.yaml file.
2.2.2.1. Generating the custom roles file
The ControllerStorageNFS custom role configures the isolated StorageNFS network. This role is similar to the default Controller.yaml role file with the addition of the StorageNFS network and the CephNfs service, indicated by the OS::TripleO::Services::CephNfs service.
For more information about the openstack overcloud roles generate command, see Roles in the Advanced Overcloud Customization guide.
The openstack overcloud roles generate command creates a custom roles_data.yaml file that includes the services specified after -o. In the following example, the roles_data.yaml file that is created contains the services for the ControllerStorageNfs, Compute, and CephStorage roles.
If you have an existing roles_data.yaml file, modify it to add ControllerStorageNfs, Compute, and CephStorage services to the configuration file. For more information, see Roles in the Advanced Overcloud Customization guide.
Procedure
- Log in to an undercloud node as the stack user.
- Use the openstack overcloud roles generate command to create the roles_data.yaml file:

  ```
  [stack@undercloud ~]$ openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage
  ```
2.2.3. Deploying the updated environment
When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha.
The overcloud deploy command has the following options in addition to other required options.
| Action | Option | Additional information |
|---|---|---|
| Add the updated default containers from the undercloud | -e /home/stack/containers-default-parameters.yaml | |
| Add the extra StorageNFS network with network_data_ganesha.yaml | -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml | Section 2.2.3.1, “StorageNFS and network_data_ganesha.yaml file” |
| Add the custom roles defined in roles_data.yaml | -r /home/stack/roles_data.yaml | |
| Deploy the Ceph daemons with ceph-ansible.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml | Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide |
| Deploy the Ceph metadata server with ceph-mds.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml | Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide |
| Deploy the Shared File Systems service (manila) back end with manila-cephfsganesha-config.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml | Section 2.2.3.2, “manila-cephfsganesha-config.yaml” |
The following example shows an openstack overcloud deploy command with options to deploy CephFS through NFS-Ganesha, Ceph cluster, Ceph MDS, and the isolated StorageNFS network:
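A representative command assembled from the options in the preceding table follows; this is a sketch, and a real deployment typically adds further environment files, such as those for network isolation, that are specific to your environment:

```shell
# Deploy the overcloud with CephFS through NFS-Ganesha.
# File paths follow the defaults used earlier in this chapter;
# adjust them if your deployment uses different locations.
[stack@undercloud ~]$ openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/containers-default-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
```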
For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide.
2.2.3.1. StorageNFS and network_data_ganesha.yaml file
Use composable networks to define custom networks and assign them to any role. Instead of using the standard network_data.yaml file, you can configure the StorageNFS composable network with the network_data_ganesha.yaml file. Both of these files are available in the /usr/share/openstack-tripleo-heat-templates directory.
The network_data_ganesha.yaml file contains an additional section that defines the isolated StorageNFS network. Although the default settings work for most installations, you must edit the YAML file to add your network settings, including the VLAN ID, subnet, and other settings.
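The StorageNFS section of network_data_ganesha.yaml has the following general shape. The VLAN ID of 70 is the default referenced later in this chapter; the subnet and allocation pool values shown here are illustrative placeholders that you replace with your own network settings:

```yaml
# Illustrative StorageNFS network definition; edit the vlan,
# ip_subnet, and allocation_pools values for your environment.
- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  vlan: 70
  ip_subnet: '172.16.4.0/24'
  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
```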
For more information about composable networks, see Using Composable Networks in the Advanced Overcloud Customization guide.
2.2.3.2. manila-cephfsganesha-config.yaml
The integrated environment file for defining a CephFS back end is located in the following path of an undercloud node:
/usr/share/openstack-tripleo-heat-templates/environments/
The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service. The back end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service:
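A sketch of those defaults, using the parameter names described in the callouts that follow; verify against the file shipped with your version, because the resource_registry entries shown here are abbreviated:

```yaml
# Sketch of manila-cephfsganesha-config.yaml defaults (abbreviated).
resource_registry:
  OS::TripleO::Services::ManilaBackendCephFs: ../puppet/services/manila-backend-cephfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs                # 1
  ManilaCephFSDriverHandlesShareServers: false   # 2
  ManilaCephFSCephFSAuthId: 'manila'             # 3
  ManilaCephFSCephFSEnableSnapshots: false       # 4
```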
The parameter_defaults header signifies the start of the configuration. In this section, you can edit settings to override default values set in the resource_registry. This includes values set by OS::TripleO::Services::ManilaBackendCephFs, which sets defaults for a CephFS back end.
1. ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back end name is cephfs.
2. ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false, the driver does not handle the lifecycle. This is the only supported option.
3. ManilaCephFSCephFSAuthId defines the Ceph auth ID that director creates for the manila service to access the Ceph cluster.
4. ManilaCephFSCephFSEnableSnapshots controls snapshot activation. The false value indicates that snapshots are not enabled. This feature is currently not supported.
For more information about environment files, refer to the Environment Files section in the Director Installation and Usage Guide.
2.2.4. Completing post-deployment configuration
You must complete two post-deployment configuration tasks before you create NFS shares, grant user access, and mount NFS shares.
- Map the neutron StorageNFS network to the isolated data center StorageNFS network. See Section 2.2.4.1, “Configuring the isolated network”
- Create the default share type. See Section 2.2.4.3, “Configuring a default share type”
2.2.4.1. Configuring the isolated network
Map the new isolated StorageNFS network to a neutron-shared provider network. The Compute VMs attach to this neutron network to access share export locations provided by the NFS-Ganesha gateway.
For more information about network security with the Shared File Systems service, see Hardening the Shared File System Service in the Security and Hardening Guide.
The openstack network create command defines the configuration for the StorageNFS neutron network. You can enter this command with the following options:
- For --provider-network-type, use the value vlan.
- For --provider-physical-network, use the default value datacentre, unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates.
- For --provider-segment, use the VLAN value set for the StorageNFS isolated network in the heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml. This value is 70, unless the deployer modified the isolated network definitions.
Procedure
- On an undercloud node as the stack user, enter the following command:

  ```
  [stack@undercloud ~]$ source ~/overcloudrc
  ```

- On an undercloud node, enter the openstack network create command to create the StorageNFS network:

  ```
  (overcloud) [stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70
  ```
2.2.4.2. Configuring the shared provider StorageNFS network
Create a corresponding StorageNFSSubnet on the neutron-shared provider network. Ensure that the subnet matches the storage_nfs network definition in the network_data_ganesha.yaml file, and ensure that the allocation range for the StorageNFS subnet and the corresponding undercloud subnet do not overlap. No gateway is required because the StorageNFS subnet is dedicated to serving NFS shares.
Prerequisites
- The start and end IP addresses of the allocation pool range.
- The subnet IP range.
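With those values in hand, the subnet can be created with the openstack subnet create command. The IP values below are illustrative and must match your StorageNFS network definition; --gateway none reflects the note above that no gateway is required:

```shell
# Create the StorageNFSSubnet on the shared provider network.
# Replace the subnet range and allocation pool with your own values.
(overcloud) [stack@undercloud-0 ~]$ openstack subnet create StorageNFSSubnet \
  --network StorageNFS \
  --subnet-range 172.16.4.0/24 \
  --allocation-pool start=172.16.4.150,end=172.16.4.250 \
  --gateway none
```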