Chapter 4. Deploying OpenShift Data Foundation external storage cluster

Use this procedure to deploy an external storage cluster to add additional storage or expand your current internal storage cluster.

Prerequisites

  • An OpenShift Data Foundation cluster deployed in internal mode.
  • Ensure that both OpenShift Container Platform and OpenShift Data Foundation are upgraded to version 4.15.

Procedure

  1. In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems tab.
  2. Click Create StorageSystem.
  3. In the Backing storage page, Connect an external storage platform is selected by default.

    1. Choose Red Hat Ceph Storage as the Storage platform from available options.
    2. Click Next.
  4. In the Security and Network page, complete the following steps:

    1. Optional: To select encryption, select Enable encryption checkbox.
    2. In the Connection section, click the Download Script link to download the Python script for extracting Ceph cluster details.
    3. To extract the Red Hat Ceph Storage (RHCS) cluster details, run the downloaded Python script on a Red Hat Ceph Storage node that has the admin key.

      1. Run the following command on the RHCS node to view the list of available arguments:

        # python3 ceph-external-cluster-details-exporter.py --help

        You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).

        Note

        Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes. For more information, see the RHCS product documentation.

      2. To retrieve the external cluster details from the RHCS cluster, run the following command:

        # python3 ceph-external-cluster-details-exporter.py \
        --rbd-data-pool-name <rbd block pool name>  [optional arguments]

        For example:

        # python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs

        In this example,

        rbd-data-pool-name

        A mandatory parameter that is used for providing block storage in OpenShift Data Foundation.

        rgw-endpoint

        (Optional) This parameter is required only if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port>

        Note

        A fully-qualified domain name (FQDN) is also supported in the format <FQDN>:<PORT>.

        monitoring-endpoint

        (Optional) This parameter accepts a comma-separated list of IP addresses of the active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.

        monitoring-endpoint-port

        (Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.

        run-as-user

        (Optional) This parameter provides a name for the Ceph user that is created by the script. If this parameter is not specified, a default user named client.healthchecker is created. The permissions for the new user are set as:

        • caps: [mgr] allow command config
        • caps: [mon] allow r, allow command quorum_status, allow command version
        • caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index

        Additional flags:

        rgw-pool-prefix

        (Optional) The prefix of the RGW pools. If not specified, the default prefix is default.

        rgw-tls-cert-path

        (Optional) The file path of the RADOS Gateway endpoint TLS certificate.

        rgw-skip-tls

        (Optional) This parameter ignores the TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED).

        ceph-conf

        (Optional) The name of the Ceph configuration file.

        cluster-name

        (Optional) The Ceph cluster name.

        output

        (Optional) The file to which the output is written.

        cephfs-metadata-pool-name

        (Optional) The name of the CephFS metadata pool.

        cephfs-data-pool-name

        (Optional) The name of the CephFS data pool.

        cephfs-filesystem-name

        (Optional) The name of the CephFS filesystem.

        rbd-metadata-ec-pool-name

        (Optional) The name of the erasure coded RBD metadata pool.

        dry-run

        (Optional) This parameter prints the commands that would be executed, without running them.

        restricted-auth-permission

        (Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. The mandatory flags that must be set with it are rbd-data-pool-name and cluster-name. You can also pass the cephfs-filesystem-name flag if there is a CephFS user restriction, so that the permission is restricted to a particular CephFS filesystem.

        Note

        This parameter must be applied only to new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users.

        Example with restricted auth permission:

        # python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true
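        Several of the arguments above (--rgw-endpoint, --monitoring-endpoint with --monitoring-endpoint-port) expect endpoints in <ip_address>:<port> or <FQDN>:<PORT> form. The following is a minimal sketch for sanity-checking such a value before running the exporter; the helper is hypothetical and is not part of the script:

```python
import re

def looks_like_endpoint(value: str) -> bool:
    """Return True if value is in <ip_address>:<port> or <FQDN>:<PORT> form.

    Hypothetical pre-flight check; the exporter script does not ship this.
    """
    host, sep, port = value.rpartition(":")
    if not sep or not port.isdigit():
        return False
    # Accept either a dotted IPv4 address or a hostname/FQDN.
    ipv4 = re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)
    fqdn = re.fullmatch(r"[A-Za-z0-9](?:[A-Za-z0-9.-]*[A-Za-z0-9])?", host)
    return bool(ipv4 or fqdn)

print(looks_like_endpoint("10.0.0.5:8080"))        # True
print(looks_like_endpoint("rgw.example.com:443"))  # True
print(looks_like_endpoint("10.0.0.5"))             # False: port is missing
```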

        Example of JSON output generated using the python script:

        [{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
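        As the example shows, the output is a JSON array of objects, each carrying name, kind, and data keys. A short Python snippet such as the following can sanity-check the copied output before you upload it; this is a sketch, not part of the exporter:

```python
import json

def summarize_exporter_output(text: str) -> list[tuple[str, str]]:
    """Parse the exporter JSON and return (name, kind) pairs.

    Raises ValueError if an entry is missing the expected keys, which
    usually means the output was truncated when it was copied.
    """
    entries = json.loads(text)
    pairs = []
    for entry in entries:
        missing = {"name", "kind", "data"} - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('name', '?')} is missing {missing}")
        pairs.append((entry["name"], entry["kind"]))
    return pairs

# Abbreviated sample; a real output contains many more entries.
sample = '[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {}}]'
print(summarize_exporter_output(sample))  # [('rook-ceph-mon-endpoints', 'ConfigMap')]
```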

      3. Save the JSON output to a file with a .json extension.

        Note

        For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation.
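        One way to detect drift in those parameters is to re-run the exporter later and compare its fresh output against the saved file. A minimal sketch, assuming both outputs have been saved locally (the file paths are hypothetical):

```python
import json

def same_cluster_details(old_path: str, new_path: str) -> bool:
    """Compare two exporter output files, ignoring entry ordering.

    A False result suggests the RHCS-side details changed after the
    storage cluster was created.
    """
    with open(old_path) as a, open(new_path) as b:
        old, new = json.load(a), json.load(b)
    # Key entries by name so reordering alone does not count as a change.
    return {e["name"]: e for e in old} == {e["name"]: e for e in new}
```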

      4. If the RHCS cluster is already connected to an existing OpenShift Data Foundation deployment of a lower version (a multi-tenant deployment), run the following command:

        # python3 ceph-external-cluster-details-exporter.py --upgrade
    4. Click Browse to select and upload the JSON file.

      The content of the JSON file is populated and displayed in the text box.

    5. Click the Next button, which is enabled after you upload the .json file.
  5. In the Review and create page, review the configuration details.

    To modify any configuration settings, click Back to go back to the previous configuration page.

  6. Click Create StorageSystem.

Verification steps

© 2024 Red Hat, Inc.