Chapter 3. Configuring the Red Hat Ceph Storage cluster for HCI
This chapter describes how to configure and deploy the Red Hat Ceph Storage cluster for HCI environments.
3.1. Deployment prerequisites
Confirm the following has been performed before attempting to configure and deploy the Red Hat Ceph Storage cluster:
- Provision of bare metal instances and their networks using the Bare Metal Provisioning service (ironic). For more information about the provisioning of bare metal instances, see Configuring the Bare Metal Provisioning service.
3.2. The openstack overcloud ceph deploy command
If you deploy the Ceph cluster using director, you must use the openstack overcloud ceph deploy command. For a complete listing of command options and parameters, see openstack overcloud ceph deploy in the Command line interface reference.
The command openstack overcloud ceph deploy --help provides the current options and parameters available in your environment.
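For example, a minimal invocation, assuming the Bare Metal Provisioning step produced an environment file named deployed_metal.yaml (the same file name used in the examples later in this chapter):

  openstack overcloud ceph deploy \
      deployed_metal.yaml \
      -o deployed_ceph.yaml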
3.3. Ceph configuration overrides for HCI
A standard format initialization file is an option for Ceph cluster configuration. This initialization file is then used to configure the Ceph cluster with either the cephadm bootstrap --config <file_name> or openstack overcloud ceph deploy --config <file_name> command.
Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and Compute services. This occurs because the services are not aware of the colocation. Resource contention can result in service degradation, which offsets the benefits of hyperconvergence.
Resource allocation can be tuned using an initialization file to manage resource contention. The following creates an initialization file called initial-ceph.conf and then uses the openstack overcloud ceph deploy command to configure the HCI deployment.
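A minimal sketch of such a file and the deployment command, using only the two options explained in the next paragraph; the values shown are the HCI starting points discussed there:

  [osd]
  osd_memory_target_autotune = true
  [mgr]
  mgr/cephadm/autotune_memory_target_ratio = 0.2

  openstack overcloud ceph deploy \
      --config initial-ceph.conf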
The osd_memory_target_autotune option is set to true so that the OSD daemons adjust their memory consumption based on the osd_memory_target configuration option. The autotune_memory_target_ratio defaults to 0.7, which indicates that 70% of the total RAM in the system is the starting point from which any memory consumed by non-autotuned Ceph daemons is subtracted. The remaining memory is then divided among the OSDs, assuming all OSDs have osd_memory_target_autotune set to true. For HCI deployments, set mgr/cephadm/autotune_memory_target_ratio to 0.2 to ensure that more memory is available for the Compute service. The 0.2 value is a cautious starting point. After deployment, use the ceph command to change this value if necessary.
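For example, to change the ratio after deployment (the 0.3 value here is only an illustrative assumption):

  ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.3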
A two NUMA node system can host a latency sensitive Nova workload on one NUMA node and a Ceph OSD workload on the other NUMA node. To configure Ceph OSDs to use a specific NUMA node not used by the Compute workload, use either of the following Ceph OSD configurations:
- osd_numa_node sets affinity to a NUMA node
- osd_numa_auto_affinity automatically sets affinity to the NUMA node where storage and network match
If there are network interfaces on both NUMA nodes and the disk controllers are on NUMA node 0, use a network interface on NUMA node 0 for the storage network and host the Ceph OSD workload on NUMA node 0. Host the Nova workload on NUMA node 1 and have it use the network interfaces on NUMA node 1. Set osd_numa_auto_affinity to true to achieve this configuration. Alternatively, set osd_numa_node directly to 0 and do not set a value for osd_numa_auto_affinity so that it defaults to false.
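For example, the automatic affinity option can be set in the initialization file described earlier in this chapter (a minimal sketch):

  [osd]
  osd_numa_auto_affinity = true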
When a hyperconverged cluster backfills as a result of an OSD going offline, the backfill process can be slowed down. In exchange for a slower recovery, the backfill activity has less of an impact on the colocated Compute workload. Red Hat Ceph Storage has the following defaults to control the rate of backfill activity:
- osd_recovery_op_priority = 3
- osd_max_backfills = 1
- osd_recovery_max_active_hdd = 3
- osd_recovery_max_active_ssd = 10

Note
It is not necessary to pass these defaults in an initialization file because they are the default values. If you want values other than the defaults for the initial configuration, add them to the initialization file with the required values before deployment. After deployment, use the ceph config set osd <parameter> <value> command to adjust them.
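For example, to allow more concurrent backfills after deployment (the value 2 is only an illustrative assumption):

  ceph config set osd osd_max_backfills 2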
3.4. Configuring time synchronization
The Time Synchronization Service (chrony) is enabled for time synchronization by default. You can perform the following tasks to configure the service.
Time synchronization is configured using either a delimited list or an environment file. Use the procedure that is best suited to your administrative practices.
3.4.1. Configuring time synchronization with a delimited list
You can configure NTP servers for the Time Synchronization Service (chrony) by using a delimited list.
Procedure
- Log in to the undercloud node as the stack user.
- Configure NTP servers with a delimited list:

  openstack overcloud ceph deploy \
      --ntp-server "<ntp_server_list>"

  Replace <ntp_server_list> with a comma-delimited list of servers. For example:

  openstack overcloud ceph deploy \
      --ntp-server "0.pool.ntp.org,1.pool.ntp.org"
3.4.2. Configuring time synchronization with an environment file
You can configure the Time Synchronization Service (chrony) to use an environment file that defines NTP servers.
Procedure
- Log in to the undercloud node as the stack user.
- Create an environment file, such as /home/stack/templates/ntp-parameters.yaml, to contain the NTP server configuration.
- Add the NtpServer parameter. The NtpServer parameter contains a comma-delimited list of NTP servers.

  parameter_defaults:
    NtpServer: 0.pool.ntp.org,1.pool.ntp.org

- Configure NTP servers with the environment file:

  openstack overcloud ceph deploy \
      --ntp-heat-env-file "<ntp_file_name>"

  Replace <ntp_file_name> with the name of the environment file you created. For example:

  openstack overcloud ceph deploy \
      --ntp-heat-env-file "/home/stack/templates/ntp-parameters.yaml"
3.4.3. Disabling time synchronization
The Time Synchronization Service (chrony) is enabled by default. You can disable the service if you do not want to use it.
Procedure
- Log in to the undercloud node as the stack user.
- Disable the Time Synchronization Service (chrony):

  openstack overcloud ceph deploy \
      --skip-ntp
3.5. Configuring a top-level domain suffix
You can configure a top-level domain (TLD) suffix. This suffix is added to the short hostname to create a fully qualified domain name for overcloud nodes.
A fully qualified domain name is required for TLS-e configuration.
Procedure
- Log in to the undercloud node as the stack user.
- Configure the top-level domain suffix:

  openstack overcloud ceph deploy \
      --tld "<domain_name>"

  Replace <domain_name> with the required domain name. For example:

  openstack overcloud ceph deploy \
      --tld "example.local"
3.6. Configuring the Red Hat Ceph Storage cluster name
You can deploy the Red Hat Ceph Storage cluster with a name that you configure. The default name is ceph.
Procedure
- Log in to the undercloud node as the stack user.
- Configure the name of the Ceph Storage cluster by using the following command:

  openstack overcloud ceph deploy \
      --cluster <cluster_name>

  For example:

  openstack overcloud ceph deploy \
      --cluster central
Keyring files are not created at this time. Keyring files are created during the overcloud deployment and inherit the cluster name configured during this procedure. For more information about overcloud deployment, see Section 5.8, “Initiating overcloud deployment for HCI”.
In the example above, the Ceph cluster is named central. The configuration and keyring files for the central Ceph cluster would be created in /etc/ceph during the deployment process.
Troubleshooting
The following error may be displayed if you configure a custom name for the Ceph Storage cluster:
monclient: get_monmap_and_config cannot identify monitors to contact because
If this error is displayed, use the following command after Ceph deployment:
cephadm shell --config <configuration_file> --keyring <keyring_file>
For example, if this error was displayed when you configured the cluster name to central, you would use the following command:
cephadm shell --config /etc/ceph/central.conf \
--keyring /etc/ceph/central.client.admin.keyring
The following command could also be used as an alternative:
cephadm shell --mount /etc/ceph:/etc/ceph
export CEPH_ARGS='--cluster central'
3.7. Configuring network options with the network data file
The network data file describes the networks used by the Red Hat Ceph Storage cluster.
Procedure
- Log in to the undercloud node as the stack user.
- Create a YAML format file called network_data.yaml that defines the custom network attributes. A minimal sketch of this file follows the procedure.

  Important
  Using network isolation, the standard network deployment consists of two storage networks which map to the two Ceph networks:

  - The storage network, storage, maps to the Ceph network, public_network. This network handles storage traffic such as the RBD traffic from the Compute nodes to the Ceph cluster.
  - The storage network, storage_mgmt, maps to the Ceph network, cluster_network. This network handles storage management traffic such as data replication between Ceph OSDs.

- Use the openstack overcloud ceph deploy command with the --network-data option to deploy the configuration:

  openstack overcloud ceph deploy \
      deployed_metal.yaml \
      -o deployed_ceph.yaml \
      --network-data network_data.yaml

  Important
  The openstack overcloud ceph deploy command uses the network data file specified by the --network-data option to determine the networks to be used as the public_network and cluster_network. The command assumes these networks are named storage and storage_mgmt in the network data file unless a different name is specified by the --public-network-name and --cluster-network-name options.

  You must use the --network-data option when deploying with network isolation. The default undercloud network (192.168.24.0/24) is used for both the public_network and cluster_network if you do not use this option.
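The following is a minimal sketch of a network_data.yaml file that defines the two storage networks; the VLAN IDs, subnets, and allocation pools are placeholder assumptions to adapt to your environment:

  - name: Storage
    name_lower: storage
    vip: true
    vlan: 30
    ip_subnet: '172.16.1.0/24'
    allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
  - name: StorageMgmt
    name_lower: storage_mgmt
    vip: true
    vlan: 40
    ip_subnet: '172.16.3.0/24'
    allocation_pools: [{'start': '172.16.3.4', 'end': '172.16.3.250'}]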
3.8. Configuring network options with a configuration file
Network options can be specified with a configuration file as an alternative to the network data file.
Using this method to configure network options overwrites automatically generated values in network_data.yaml. Ensure you set all four values when using this network configuration method.
Procedure
- Log in to the undercloud node as the stack user.
- Create a standard format initialization file to configure the Ceph cluster. If you have already created a file to include other configuration options, you can add the network configuration to it.
- Add the following parameters to the [global] section of the file:

  - public_network
  - cluster_network
  - ms_bind_ipv4
  - ms_bind_ipv6

  Important
  Ensure the public_network and cluster_network map to the same networks as storage and storage_mgmt.

  The following is an example of a configuration file entry for a network configuration with multiple subnets and custom networking names:

  [global]
  public_network = 172.16.14.0/24,172.16.15.0/24
  cluster_network = 172.16.12.0/24,172.16.13.0/24
  ms_bind_ipv4 = True
  ms_bind_ipv6 = False

- Use the openstack overcloud ceph deploy command with the --config option to deploy the configuration file:

  openstack overcloud ceph deploy \
      --config initial-ceph.conf --network-data network_data.yaml
3.9. Configuring a CRUSH hierarchy for an OSD
You can configure a custom Controlled Replication Under Scalable Hashing (CRUSH) hierarchy during OSD deployment to add the OSD location attribute to the Ceph Storage cluster hosts specification. The location attribute configures where the OSD is placed within the CRUSH hierarchy.
The location attribute sets only the initial CRUSH location. Subsequent changes of the attribute are ignored.
Procedure
- Log in to the undercloud node as the stack user.
- Source the stackrc undercloud credentials file:

  source ~/stackrc

- Create a configuration file to define the custom CRUSH hierarchy, for example, crush_hierarchy.yaml.
- Add the following configuration to the file:
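  A minimal sketch of the file contents, assuming one entry per OSD host; the placeholders are resolved in the next steps:

  <osd_host>:
    root: default
    rack: <rack_num>
  <osd_host>:
    root: default
    rack: <rack_num>
  <osd_host>:
    root: default
    rack: <rack_num>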
Replace
<osd_host>with the hostnames of the nodes where the OSDs are deployed, for example,ceph-0. -
Replace
<rack_num>with the number of the rack where the OSDs are deployed, for example,r0.
-
Replace
Deploy the Ceph cluster with your custom OSD layout:
openstack overcloud ceph deploy \ deployed_metal.yaml \ -o deployed_ceph.yaml \ --osd-spec osd_spec.yaml \ --crush-hierarchy crush_hierarchy.yamlopenstack overcloud ceph deploy \ deployed_metal.yaml \ -o deployed_ceph.yaml \ --osd-spec osd_spec.yaml \ --crush-hierarchy crush_hierarchy.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
The Ceph cluster is created with the custom OSD layout.
The example file above places each OSD host in its assigned rack bucket within the CRUSH hierarchy. After deployment, you can inspect the resulting layout with the ceph osd tree command.
Device classes are automatically detected by Ceph but CRUSH rules are associated with pools. Pools are still defined and created using the CephCrushRules parameter during the overcloud deployment.
Additional resources
See Red Hat Ceph Storage workload considerations in the Red Hat Ceph Storage Installation Guide for additional information.
3.10. Configuring Ceph service placement options
You can define which nodes run which Ceph services by using a custom roles file. A custom roles file is only necessary when the default role assignments do not suit the environment. For example, when deploying hyperconverged nodes, the predeployed Compute nodes should be labeled osd, with a service type of osd, so that they have a placement list containing a list of Compute instances.
Service definitions in the roles_data.yaml file determine which bare metal instance runs which service. By default, the Controller role has the CephMon and CephMgr services, and the CephStorage role has the CephOSD service. Unlike most composable services, Ceph services do not require heat output to determine how the services are configured. The roles_data.yaml file always determines Ceph service placement, even though the Ceph deployment process occurs before heat runs.
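A minimal sketch of a custom role entry for a hyperconverged node; the role name ComputeHCI follows director conventions, but the abbreviated service list is an illustrative assumption, and a real roles file carries the full ServicesDefault list:

  - name: ComputeHCI
    description: |
      Compute role with colocated Ceph OSD services for HCI.
    tags:
      - compute
    ServicesDefault:
      - OS::TripleO::Services::NovaCompute
      - OS::TripleO::Services::CephOSD
      # ... remaining services from the default Compute role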
Procedure
- Log in to the undercloud node as the stack user.
- Create a YAML format file that defines the custom roles.
- Deploy the configuration file:

  openstack overcloud ceph deploy \
      deployed_metal.yaml \
      -o deployed_ceph.yaml \
      --roles-data custom_roles.yaml
3.11. Configuring SSH user options for Ceph nodes
The openstack overcloud ceph deploy command creates the user and keys and distributes them to the hosts, so it is not necessary to perform the procedures in this section. However, doing so is a supported option.
Cephadm connects to all managed remote Ceph nodes using SSH. The Red Hat Ceph Storage cluster deployment process creates an account and SSH key pair on all overcloud Ceph nodes. The key pair is then given to Cephadm so it can communicate with the nodes.
3.11.1. Creating the SSH user before Red Hat Ceph Storage cluster creation
You can create the SSH user before Ceph cluster creation with the openstack overcloud ceph user enable command.
Procedure
- Log in to the undercloud node as the stack user.
- Create the SSH user:

  openstack overcloud ceph user enable <specification_file>

  Replace <specification_file> with the path and name of a Ceph specification file that describes the cluster where the user is created and where the public SSH keys are installed. The specification file provides the information to determine which nodes to modify and whether the private keys are required.

  For more information on creating a specification file, see Generating the service specification.

  Note
  The default user name is ceph-admin. To specify a different user name, use the --cephadm-ssh-user option:

  openstack overcloud ceph user enable --cephadm-ssh-user <custom_user_name>

  It is recommended to use the default name and not use the --cephadm-ssh-user parameter.

- If the user is created in advance, use the parameter --skip-user-create when executing openstack overcloud ceph deploy.
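For reference, a minimal sketch of the kind of specification file this command consumes; the hostnames, addresses, and label are placeholder assumptions:

  service_type: host
  hostname: ceph-0
  addr: 192.168.24.11
  labels:
    - _admin
  ---
  service_type: host
  hostname: ceph-1
  addr: 192.168.24.12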
3.11.2. Disabling the SSH user
Disabling the SSH user disables cephadm. Disabling cephadm removes the ability of the service to administer the Ceph cluster and prevents associated commands from working. It also prevents Ceph node overcloud scaling operations and removes all public and private SSH keys.
Procedure
- Log in to the undercloud node as the stack user.
- Use the command openstack overcloud ceph user disable --fsid <FSID> <specification_file> to disable the SSH user.

  - Replace <FSID> with the File System ID of the cluster. The FSID is a unique identifier for the cluster. The FSID is located in the deployed_ceph.yaml environment file.
  - Replace <specification_file> with the path and name of a Ceph specification file that describes the cluster where the user was created.

  Important
  The openstack overcloud ceph user disable command is not recommended unless it is necessary to disable cephadm.

  Important
  To enable the SSH user and the Ceph orchestrator service after they are disabled, use the openstack overcloud ceph user enable --fsid <FSID> <specification_file> command.

  Note
  This command requires the path to a Ceph specification file to determine:

  - Which hosts require the SSH user.
  - Which hosts have the _admin label and require the private SSH key.
  - Which hosts require the public SSH key.

  For more information about specification files and how to generate them, see Generating the service specification.
3.12. Accessing Ceph Storage containers
Preparing container images in the Installing and managing Red Hat OpenStack Platform with director guide contains procedures and information on how to prepare the registry and your undercloud and overcloud configuration to use container images. Use the information in this section to adapt these procedures to access Ceph Storage containers.
There are two options for accessing Ceph Storage containers from the overcloud.
3.12.1. Caching containers on the undercloud
The procedure Modifying images during preparation describes using the following command:
sudo openstack tripleo container image prepare \
-e ~/containers-prepare-parameter.yaml \
If you do not provide authentication credentials with the --container-image-prepare option of the openstack overcloud ceph deploy command and download the Ceph containers directly from a remote registry, as described in Downloading containers directly from a remote registry, then you must run the sudo openstack tripleo container image prepare command before deploying Ceph.
3.12.2. Downloading containers directly from a remote registry
You can configure Ceph to download containers directly from a remote registry.
The cephadm command uses the credentials that are configured in the containers-prepare-parameter.yaml file to authenticate to the remote registry and download the Red Hat Ceph Storage container.
Procedure
- Create a containers-prepare-parameter.yaml file using the procedure Preparing container images in the Installing and managing Red Hat OpenStack Platform with director guide.
- Add the remote registry credentials to the containers-prepare-parameter.yaml file using the ContainerImageRegistryCredentials parameter, as described in Obtaining container images from private registries. A minimal sketch of this parameter follows this procedure.
- When you deploy Ceph, pass the containers-prepare-parameter.yaml file using the openstack overcloud ceph deploy command:

  openstack overcloud ceph deploy \
      --container-image-prepare containers-prepare-parameter.yaml

  Note
  If you do not cache the containers on the undercloud, as described in Caching containers on the undercloud, pass the same containers-prepare-parameter.yaml file to the openstack overcloud ceph deploy command when you deploy Ceph. This caches containers on the undercloud.
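A minimal sketch of the credentials section of containers-prepare-parameter.yaml; the registry host follows the Red Hat registry convention, and the user name and password are placeholder assumptions:

  parameter_defaults:
    ContainerImageRegistryCredentials:
      registry.redhat.io:
        myuser: 'p@55w0rd!'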