Chapter 3. Configuring the Red Hat Ceph Storage cluster for HCI


This chapter describes how to configure and deploy the Red Hat Ceph Storage cluster for HCI environments.

3.1. Deployment prerequisites

Confirm the following has been performed before attempting to configure and deploy the Red Hat Ceph Storage cluster:

  • Provision of bare metal instances and their networks using the Bare Metal Provisioning service (ironic). For more information about the provisioning of bare metal instances, see Bare Metal Provisioning.

3.2. The openstack overcloud ceph deploy command

If you deploy the Ceph cluster using director, you must use the openstack overcloud ceph deploy command. For a complete listing of command options and parameters, see openstack overcloud ceph deploy in the Command Line Interface Reference.

The command openstack overcloud ceph deploy --help provides the current options and parameters available in your environment.
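For example, a minimal invocation might look like the following, where deployed_metal.yaml is the environment file produced by the bare metal provisioning step and deployed_ceph.yaml is the output file (both file names are illustrative and match the examples used later in this chapter):

$ openstack overcloud ceph deploy \
        deployed_metal.yaml \
        -o deployed_ceph.yaml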

3.3. Ceph configuration overrides for HCI

A standard format initialization file is one option for configuring the Ceph cluster. This initialization file is then used to configure the Ceph cluster with either the cephadm bootstrap --config <file_name> or the openstack overcloud ceph deploy --config <file_name> command.

Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and Compute services. This occurs because the services are not aware of the colocation. Resource contention can result in service degradation, which offsets the benefits of hyperconvergence.

Resource allocation can be tuned by using an initialization file to manage resource contention. The following example creates an initialization file called initial-ceph.conf and then uses the openstack overcloud ceph deploy command to configure the HCI deployment.

$ cat <<EOF > initial-ceph.conf
[osd]
osd_memory_target_autotune = true
osd_numa_auto_affinity = true
[mgr]
mgr/cephadm/autotune_memory_target_ratio = 0.2
EOF
$ openstack overcloud ceph deploy --config initial-ceph.conf

The osd_memory_target_autotune option is set to true so that the OSD daemons adjust their memory consumption based on the osd_memory_target configuration option. The autotune_memory_target_ratio defaults to 0.7, which means that 70% of the total RAM in the system is the starting point, from which any memory consumed by non-autotuned Ceph daemons is subtracted. The remaining memory is then divided among the OSDs, assuming all OSDs have osd_memory_target_autotune set to true. For HCI deployments, set mgr/cephadm/autotune_memory_target_ratio to 0.2 to ensure that more memory is available for the Compute service. The 0.2 value is a cautious starting point. After deployment, use the ceph command to change this value if necessary.
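For example, to adjust the ratio after deployment, you could run the following from within a cephadm shell on a node that has the Ceph admin keyring (the value shown is only a starting point):

$ cephadm shell -- ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2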

A two NUMA node system can host a latency sensitive Nova workload on one NUMA node and a Ceph OSD workload on the other NUMA node. To configure Ceph OSDs to use a specific NUMA node not used by the Compute workload, use either of the following Ceph OSD configurations:

  • osd_numa_node sets affinity to a NUMA node
  • osd_numa_auto_affinity automatically sets affinity to the NUMA node where storage and network match

If there are network interfaces on both NUMA nodes and the disk controllers are on NUMA node 0, use a network interface on NUMA node 0 for the storage network and host the Ceph OSD workload on NUMA node 0. Host the Nova workload on NUMA node 1 and have it use the network interfaces on NUMA node 1. Set osd_numa_auto_affinity to true to achieve this configuration. Alternatively, set osd_numa_node directly to 0 and do not set a value for osd_numa_auto_affinity so that it defaults to false.
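For example, to pin the Ceph OSDs to NUMA node 0 explicitly, the initialization file could contain the following instead of osd_numa_auto_affinity (an illustrative sketch):

[osd]
osd_numa_node = 0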

When a hyperconverged cluster backfills as a result of an OSD going offline, the backfill process can be slowed down. In exchange for a slower recovery, the backfill activity has less of an impact on the collocated Compute workload. Red Hat Ceph Storage 5 has the following defaults to control the rate of backfill activity:

  • osd_recovery_op_priority = 3
  • osd_max_backfills = 1
  • osd_recovery_max_active_hdd = 3
  • osd_recovery_max_active_ssd = 10

    Note

    It is not necessary to pass these defaults in an initialization file because they are the default values. If values other than the defaults are required for the initial configuration, add them to the initialization file before deployment. After deployment, use the ceph config set osd <option> <value> command to change the values.
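    For example, to allow two concurrent backfills per OSD after deployment (the value shown is illustrative only):

    $ cephadm shell -- ceph config set osd osd_max_backfills 2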

3.4. Configuring the Red Hat Ceph Storage cluster name

You can deploy the Red Hat Ceph Storage cluster with a name that you configure. The default name is ceph.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Configure the name of the Ceph Storage cluster by using the following command:

    openstack overcloud ceph deploy \
        --cluster <cluster_name>

    $ openstack overcloud ceph deploy \
        --cluster central

Note

Keyring files are not created at this time. Keyring files are created during the overcloud deployment and inherit the cluster name configured during this procedure. For more information about overcloud deployment, see Section 5.8, “Initiating overcloud deployment for HCI”.

In the example above, the Ceph cluster is named central. The configuration and keyring files for the central Ceph cluster would be created in /etc/ceph during the deployment process.

[root@oc0-controller-0 ~]# ls -l /etc/ceph/
total 16
-rw-------. 1 root root  63 Mar 26 21:49 central.client.admin.keyring
-rw-------. 1  167  167 201 Mar 26 22:17 central.client.openstack.keyring
-rw-------. 1  167  167 134 Mar 26 22:17 central.client.radosgw.keyring
-rw-r--r--. 1 root root 177 Mar 26 21:49 central.conf

Troubleshooting

The following error may be displayed if you configure a custom name for the Ceph Storage cluster:

monclient: get_monmap_and_config cannot identify monitors to contact because

If this error is displayed, use the following command after Ceph deployment:

cephadm shell --config <configuration_file> --keyring <keyring_file>

For example, if this error was displayed when you configured the cluster name to central, you would use the following command:

cephadm shell --config /etc/ceph/central.conf \
              --keyring /etc/ceph/central.client.admin.keyring

The following commands could also be used as an alternative:

cephadm shell --mount /etc/ceph:/etc/ceph
export CEPH_ARGS='--cluster central'

3.5. Configuring network options with the network data file

The network data file describes the networks used by the Red Hat Ceph Storage cluster.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Create a YAML format file, for example network_data.yaml, that defines the custom network attributes. An illustrative excerpt is shown at the end of this section.

    Important

    When network isolation is used, the standard network deployment consists of two storage networks that map to the two Ceph networks:

    • The storage network, storage, maps to the Ceph network, public_network. This network handles storage traffic such as the RBD traffic from the Compute nodes to the Ceph cluster.
    • The storage network, storage_mgmt, maps to the Ceph network, cluster_network. This network handles storage management traffic such as data replication between Ceph OSDs.
  3. Use the openstack overcloud ceph deploy command with the --network-data option to deploy the configuration.

    openstack overcloud ceph deploy \
            deployed_metal.yaml \
            -o deployed_ceph.yaml \
            --network-data network_data.yaml
    Important

    The openstack overcloud ceph deploy command uses the network data file specified by the --network-data option to determine the networks to be used as the public_network and cluster_network. The command assumes these networks are named storage and storage_mgmt in the network data file unless a different name is specified by the --public-network-name and --cluster-network-name options.

    You must use the --network-data option when deploying with network isolation. The default undercloud network (192.168.24.0/24) is used for both the public_network and cluster_network if you do not use this option.
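The following is an illustrative excerpt of a network data file that defines the two storage networks. The subnet ranges and VLAN IDs are placeholders and must match your environment:

- name: Storage
  name_lower: storage
  vip: true
  subnets:
    storage_subnet:
      ip_subnet: 172.16.1.0/24
      allocation_pools:
        - start: 172.16.1.4
          end: 172.16.1.250
      vlan: 30
- name: StorageMgmt
  name_lower: storage_mgmt
  vip: true
  subnets:
    storage_mgmt_subnet:
      ip_subnet: 172.16.3.0/24
      allocation_pools:
        - start: 172.16.3.4
          end: 172.16.3.250
      vlan: 40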

3.6. Configuring network options with a configuration file

Network options can be specified with a configuration file as an alternative to the network data file.

Important

Using this method to configure network options overwrites automatically generated values in network_data.yaml. Ensure you set all four values when using this network configuration method.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Create a standard format initialization file to configure the Ceph cluster. If you have already created a file to include other configuration options, you can add the network configuration to it.
  3. Add the following parameters to the [global] section of the file:

    • public_network
    • cluster_network
    • ms_bind_ipv4
    • ms_bind_ipv6

      Important

      Ensure the public_network and cluster_network map to the same networks as storage and storage_mgmt.

      The following is an example of a configuration file entry for a network configuration with multiple subnets and custom networking names:

      [global]
      public_network = 172.16.14.0/24,172.16.15.0/24
      cluster_network = 172.16.12.0/24,172.16.13.0/24
      ms_bind_ipv4 = True
      ms_bind_ipv6 = False
  4. Use the command openstack overcloud ceph deploy with the --config option to deploy the configuration file.

    $ openstack overcloud ceph deploy \
      --config initial-ceph.conf --network-data network_data.yaml

3.7. Configuring a CRUSH hierarchy for an OSD

You can configure a custom Controlled Replication Under Scalable Hashing (CRUSH) hierarchy during OSD deployment to add the OSD location attribute to the Ceph Storage cluster hosts specification. The location attribute configures where the OSD is placed within the CRUSH hierarchy.

Note

The location attribute sets only the initial CRUSH location. Subsequent changes of the attribute are ignored.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc

  3. Create a configuration file to define the custom CRUSH hierarchy, for example, crush_hierarchy.yaml.
  4. Add the following configuration to the file:

    ceph_crush_hierarchy:
      <osd_host>:
        root: default
        rack: <rack_num>
      <osd_host>:
        root: default
        rack: <rack_num>
      <osd_host>:
        root: default
        rack: <rack_num>
    • Replace <osd_host> with the hostnames of the nodes where the OSDs are deployed, for example, ceph-0.
    • Replace <rack_num> with the identifier of the rack where the OSDs are deployed, for example, r0.
  5. Deploy the Ceph cluster with your custom OSD layout:

    openstack overcloud ceph deploy \
            deployed_metal.yaml \
            -o deployed_ceph.yaml \
            --osd-spec osd_spec.yaml \
            --crush-hierarchy crush_hierarchy.yaml

The Ceph cluster is created with the custom OSD layout.

The example file above would result in the following OSD layout.

ID  CLASS  WEIGHT       TYPE NAME                  STATUS  REWEIGHT  PRI-AFF
-1         0.02939      root default
-3         0.00980      rack r0
-2         0.00980          host ceph-node-00
 0    hdd  0.00980              osd.0                 up   1.00000   1.00000
-5         0.00980      rack r1
-4         0.00980          host ceph-node-01
 1    hdd  0.00980              osd.1                 up   1.00000   1.00000
-7         0.00980      rack r2
-6         0.00980          host ceph-node-02
 2    hdd  0.00980              osd.2                 up   1.00000   1.00000
Note

Device classes are automatically detected by Ceph but CRUSH rules are associated with pools. Pools are still defined and created using the CephCrushRules parameter during the overcloud deployment.
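The deployment command in this procedure also passes an OSD specification file with the --osd-spec option. A minimal sketch of such a file, assuming all available devices on each host are used as OSD data devices, is the following:

data_devices:
  all: true

If you need finer control over device selection, the data_devices attribute can instead list explicit device paths.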

Additional resources

See Red Hat Ceph Storage workload considerations in the Red Hat Ceph Storage Installation Guide for additional information.

3.8. Configuring Ceph service placement options

You can define which nodes run which Ceph services by using a custom roles file. A custom roles file is only necessary when the default role assignments do not suit the environment. For example, when deploying hyperconverged nodes, the predeployed Compute nodes should be labeled osd with a service type of osd so that they have a placement list containing a list of Compute instances.

Service definitions in the roles_data.yaml file determine which bare metal instance runs which service. By default, the Controller role has the CephMon and CephMgr services while the CephStorage role has the CephOSD service. Unlike most composable services, Ceph services do not require heat output to determine how they are configured. The roles_data.yaml file always determines Ceph service placement even though Ceph deployment occurs before heat runs.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Create a YAML format file that defines the custom roles, for example, custom_roles.yaml. An illustrative excerpt is shown after this procedure.
  3. Deploy the configuration file:

    $ openstack overcloud ceph deploy \
            deployed_metal.yaml \
            -o deployed_ceph.yaml \
            --roles-data custom_roles.yaml
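The following is a partial, illustrative excerpt of a custom role for hyperconverged nodes. A complete role definition also includes the full list of standard Compute services in ServicesDefault; only the Ceph OSD service is shown here:

- name: ComputeHCI
  description: |
    Compute role that also hosts Ceph OSD services for HCI.
  ServicesDefault:
    - OS::TripleO::Services::CephOSD
    # ...the standard Compute role services also belong in this list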

3.9. Configuring SSH user options for Ceph nodes

The openstack overcloud ceph deploy command creates the user and keys and distributes them to the hosts, so it is not necessary to perform the procedures in this section. However, manually creating the SSH user is a supported option.

Cephadm connects to all managed remote Ceph nodes using SSH. The Red Hat Ceph Storage cluster deployment process creates an account and SSH key pair on all overcloud Ceph nodes. The key pair is then given to Cephadm so it can communicate with the nodes.

3.9.1. Creating the SSH user before Red Hat Ceph Storage cluster creation

You can create the SSH user before Ceph cluster creation with the openstack overcloud ceph user enable command.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Create the SSH user:

    $ openstack overcloud ceph user enable

    Note

    The default user name is ceph-admin. To specify a different user name, use the --cephadm-ssh-user option.

    openstack overcloud ceph user enable --cephadm-ssh-user <custom_user_name>

    It is recommended to use the default name and not use the --cephadm-ssh-user parameter.

    If the user is created in advance, use the parameter --skip-user-create when executing openstack overcloud ceph deploy.
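    For example, a deployment that reuses a previously created SSH user might look like the following (the environment file names are illustrative and match the examples used elsewhere in this chapter):

    $ openstack overcloud ceph deploy \
            deployed_metal.yaml \
            -o deployed_ceph.yaml \
            --skip-user-create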

3.9.2. Disabling the SSH user

Disabling the SSH user disables Cephadm. Disabling Cephadm removes the ability of the service to administer the Ceph cluster and prevents associated commands from working. It also prevents Ceph node overcloud scaling operations and removes all public and private SSH keys.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Use the command openstack overcloud ceph user disable --fsid <FSID> ceph_spec.yaml to disable the SSH user.

    Note

    The FSID is located in the deployed_ceph.yaml environment file.

    Important

    The openstack overcloud ceph user disable command is not recommended unless it is necessary to disable Cephadm.

    Important

    To enable the SSH user and Cephadm service after being disabled, use the openstack overcloud ceph user enable --fsid <FSID> ceph_spec.yaml command.

    Note

    This command requires the path to a Ceph specification file to determine:

    • Which hosts require the SSH user.
    • Which hosts have the _admin label and require the private SSH key.
    • Which hosts require the public SSH key.

    For more information about specification files and how to generate them, see Generating the service specification.
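    For example, to disable and later re-enable the SSH user, assuming the FSID has been retrieved from deployed_ceph.yaml and the specification file is named ceph_spec.yaml:

    $ openstack overcloud ceph user disable --fsid <FSID> ceph_spec.yaml
    $ openstack overcloud ceph user enable --fsid <FSID> ceph_spec.yaml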

3.10. Accessing Ceph Storage containers

Obtaining and modifying container images in the Transitioning to Containerized Services guide contains procedures and information on how to prepare the registry and your undercloud and overcloud configuration to use container images. Use the information in this section to adapt these procedures to access Ceph Storage containers.

There are two options for accessing Ceph Storage containers from the overcloud.

3.10.1. Caching containers on the undercloud

The procedure Modifying images during preparation describes using the following command:

sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml \

If you do not download the Ceph containers directly from a remote registry by providing authentication credentials to the openstack overcloud ceph deploy command with the --container-image-prepare option, as described in Downloading containers directly from a remote registry, you must run the sudo openstack tripleo container image prepare command before deploying Ceph.

3.10.2. Downloading containers directly from a remote registry

You can configure Ceph to download containers directly from a remote registry.

Procedure

  1. Create a containers-prepare-parameter.yaml file using the procedure Preparing container images.
  2. Add the remote registry credentials to the containers-prepare-parameter.yaml file using the ContainerImageRegistryCredentials parameter as described in Obtaining container images from private registries.
  3. When you deploy Ceph, pass the containers-prepare-parameter.yaml file using the openstack overcloud ceph deploy command.

    openstack overcloud ceph deploy \
            --container-image-prepare containers-prepare-parameter.yaml
    Note

    If you do not cache the containers on the undercloud, as described in Caching containers on the undercloud, pass the same containers-prepare-parameter.yaml file to the openstack overcloud ceph deploy command when you deploy Ceph. This caches the containers on the undercloud.

Result

The cephadm command uses the credentials in the containers-prepare-parameter.yaml file to authenticate to the remote registry and download the Ceph Storage container.
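For reference, a registry credentials entry in the containers-prepare-parameter.yaml file might look like the following; the registry name, user name, and password are placeholders:

parameter_defaults:
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      <registry_username>: <registry_password>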
