Chapter 2. Preparing overcloud nodes

The overcloud deployment used to demonstrate integration with a Red Hat Ceph Storage cluster consists of highly available Controller nodes and Compute nodes that host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see the product documentation for Red Hat Ceph Storage.

2.1. Configuring the existing Red Hat Ceph Storage cluster

To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.

Procedure

  1. Log in to the external Ceph admin node.
  2. Open an interactive shell to access Ceph commands:

    [user@ceph ~]$ sudo cephadm shell
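    You can run the ceph commands in the following steps from this shell. For example, to confirm that the client can reach the cluster before you proceed, you can check the cluster status; this optional verification is not part of the documented steps:

      $ ceph -s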
  3. Create the following RADOS Block Device (RBD) pools in your Ceph Storage cluster, as relevant to your environment. A worked example with placeholder values follows the list of pools:

    • Storage for OpenStack Block Storage (cinder):

      $ ceph osd pool create volumes <pgnum>
    • Storage for OpenStack Image Storage (glance):

      $ ceph osd pool create images <pgnum>
    • Storage for instances:

      $ ceph osd pool create vms <pgnum>
    • Storage for OpenStack Block Storage Backup (cinder-backup):

      $ ceph osd pool create backups <pgnum>
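    For example, the following commands are a minimal sketch that assumes a placement group count of 128 for each pool; replace 128 with the pg_num value appropriate for your cluster. The final command follows standard RBD practice and initializes each new pool with rbd pool init, which is not shown in the numbered steps above:

      $ ceph osd pool create volumes 128
      $ ceph osd pool create images 128
      $ ceph osd pool create vms 128
      $ ceph osd pool create backups 128
      $ for pool in volumes images vms backups; do rbd pool init $pool; done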
  4. If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. Instead, you can create a CephFS file system volume, as shown in the following example. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
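    For example, a minimal sketch that creates a CephFS file system volume; the volume name cephfs is an assumption, and the Ceph Orchestrator creates the required data and metadata pools for you:

      $ ceph fs volume create cephfs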
  5. Create a client.openstack user in your Ceph Storage cluster with the following capabilities:

    • cap_mgr: allow *
    • cap_mon: profile rbd
    • cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups

      $ ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
  6. Note the Ceph client key created for the client.openstack user:

    $ ceph auth list
    ...
    [client.openstack]
    	key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
    	caps mgr = "allow *"
    	caps mon = "profile rbd"
    	caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups"
    ...
    • The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
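    • As an alternative that is not part of the original steps, you can retrieve only the key value for the client.openstack user:

      $ ceph auth get-key client.openstack
      AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==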
  7. If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster. The capabilities required for the client.manila user depend on whether your deployment exposes CephFS shares through the native CephFS protocol or the NFS protocol.

    • If you expose CephFS shares through the native CephFS protocol, the following capabilities are required:

      • cap_mgr: allow rw
      • cap_mon: allow r

        $ ceph auth add client.manila mgr 'allow rw' mon 'allow r'
    • If you expose CephFS shares through the NFS protocol, the following capabilities are required:

      • cap_mgr: allow rw
      • cap_mon: allow r
      • cap_osd: allow rw pool=manila_data

        The specified pool name must be the value set for the ManilaCephFSDataPoolName parameter, which defaults to manila_data.

        $ ceph auth add client.manila mgr 'allow rw' mon 'allow r' osd 'allow rw pool=manila_data'
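    In either case, you can verify the user and capabilities that were created; this optional verification is an addition to the documented procedure:

      $ ceph auth get client.manila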
  8. Note the manila client name and the key value to use in overcloud deployment templates:

    $ ceph auth get-key client.manila
         AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==
  9. Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:

    [global]
    fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
    ...
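    If you are working inside the cephadm shell, you can also print the file system ID directly instead of reading the configuration file; this alternative is not part of the original steps:

      $ ceph fsid
      4b5c8c0a-ff60-454b-a1b4-9747aa737d19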
Note

Use the Ceph client key and file system ID, and the Shared File Systems service client ID and key, when you create the custom environment file.
