Chapter 2. Storage classes

The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications.

Note

Custom storage classes are not supported for external mode OpenShift Data Foundation clusters.

2.1. Creating storage classes and pools

You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it.

Prerequisites

  • Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in Ready state.

Procedure

  1. Click Storage → StorageClasses.
  2. Click Create Storage Class.
  3. Enter the storage class Name and Description.
  4. Reclaim Policy is set to Delete as the default option. Use this setting.

    If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC).

  5. Volume binding mode is set to WaitForFirstConsumer as the default option.

    If you choose the Immediate option, then the PV gets created immediately when creating the PVC.

  6. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes.
  7. Choose a Storage system for your workloads.
  8. Select an existing Storage Pool from the list or create a new pool.

    Note

    The 2-way replication data protection policy is only supported for non-default RBD pools; to use 2-way replication, create an additional pool. For data availability and integrity considerations for replica-2 pools, see the Knowledgebase Customer Solution Article.

    Create new pool
    1. Click Create New Pool.
    2. Enter Pool name.
    3. Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy.
    4. Select Enable compression if you need to compress the data.

      Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed.

    5. Click Create to create the new storage pool.
    6. Click Finish after the pool is created.
  9. Optional: Select Enable Encryption checkbox.
  10. Click Create to create the storage class.
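
For reference, a storage class created through this procedure resembles the following YAML. This is a minimal sketch: the storage class and pool names are illustrative, and the exact parameters (for example, the CSI secret references added by the console) depend on your cluster, so inspect the generated storage class for the authoritative values.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: custom-rbd-sc                             # name entered in step 3 (illustrative)
    provisioner: openshift-storage.rbd.csi.ceph.com   # RBD provisioner selected in step 6
    parameters:
      clusterID: openshift-storage                    # set by the operator (illustrative)
      pool: custom-pool                               # pool selected or created in step 8 (illustrative)
    reclaimPolicy: Delete                             # default from step 4
    volumeBindingMode: WaitForFirstConsumer           # default from step 5
    allowVolumeExpansion: true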

2.2. Storage class for persistent volume encryption

Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs.

OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class.

Note

For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.

2.2.1. Access configuration for Key Management System (KMS)

Based on your use case, you need to configure access to KMS using one of the following ways:

  • Using vaulttokens: allows users to authenticate using a token
  • Using Thales CipherTrust Manager: uses Key Management Interoperability Protocol (KMIP)
  • Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault
Important

Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information, see Technology Preview Features Support Scope.

2.2.1.1. Configuring access to KMS using vaulttokens

Prerequisites

  • The OpenShift Data Foundation cluster is in Ready state.
  • On the external key management system (KMS),

    • Ensure that a policy with a token exists and the key value backend path in Vault is enabled.
    • Ensure that you are using signed certificates on your Vault servers.

Procedure

Create a secret in the tenant’s namespace.

  1. In the OpenShift Container Platform web console, navigate to Workloads → Secrets.
  2. Click Create Key/value secret.
  3. Enter Secret Name as ceph-csi-kms-token.
  4. Enter Key as token.
  5. Enter Value.

    It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box.

  6. Click Create.
Note

The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted.
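
If you prefer the command line, an equivalent secret can be created in the tenant's namespace with a command along the following lines; <tenant_namespace> and <vault_token> are placeholders for your values:

    $ oc -n <tenant_namespace> create secret generic ceph-csi-kms-token --from-literal=token=<vault_token>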

2.2.1.2. Configuring access to KMS using Thales CipherTrust Manager

Prerequisites

  1. Create a KMIP client if one does not exist. From the user interface, select KMIP → Client Profile → Add Profile.

    1. Add the CipherTrust username to the Common Name field during profile creation.
  2. Create a token by navigating to KMIP → Registration Token → New Registration Token. Copy the token for the next step.
  3. To register the client, navigate to KMIP → Registered Clients → Add Client. Specify the Name. Paste the Registration Token from the previous step, then click Save.
  4. Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively.
  5. To create a new KMIP interface, navigate to Admin Settings → Interfaces → Add Interface.

    1. Select KMIP (Key Management Interoperability Protocol) and click Next.
    2. Select a free Port.
    3. Select Network Interface as all.
    4. Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional.
    5. (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default.
    6. Select the CA to be used, and click Save.
  6. To get the server CA certificate, click on the Action menu (⋮) on the right of the newly created interface, and click Download Certificate.

Procedure

  1. To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below:

    1. Navigate to Keys → Add Key.
    2. Enter Key Name.
    3. Set the Algorithm and Size to AES and 256 respectively.
    4. Enable Create a key in Pre-Active state and set the date and time for activation.
    5. Ensure that Encrypt and Decrypt are enabled under Key Usage.
    6. Copy the ID of the newly created Key to be used as the Unique Identifier during deployment.

2.2.1.3. Configuring access to KMS using vaulttenantsa

Prerequisites

  • The OpenShift Data Foundation cluster is in Ready state.
  • On the external key management system (KMS),

    • Ensure that a policy exists and the key value backend path in Vault is enabled.
    • Ensure that you are using signed certificates on your Vault servers.
  • Create the following serviceaccount in the tenant namespace as shown below:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
        name: ceph-csi-vault-sa
    EOF

Procedure

You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault. The following instructions create and configure the ServiceAccount, ClusterRole, and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault.

  1. Apply the following YAML to your OpenShift cluster:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: rbd-csi-vault-token-review
      namespace: openshift-storage
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-csi-vault-token-review
    rules:
      - apiGroups: ["authentication.k8s.io"]
        resources: ["tokenreviews"]
        verbs: ["create", "get", "list"]
    
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-csi-vault-token-review
    subjects:
      - kind: ServiceAccount
        name: rbd-csi-vault-token-review
        namespace: openshift-storage
    roleRef:
      kind: ClusterRole
      name: rbd-csi-vault-token-review
      apiGroup: rbac.authorization.k8s.io
  2. Create a secret for the service account token and CA certificate.

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: rbd-csi-vault-token-review-token
      namespace: openshift-storage
      annotations:
        kubernetes.io/service-account.name: "rbd-csi-vault-token-review"
    type: kubernetes.io/service-account-token
    data: {}
    EOF
  3. Get the token and the CA certificate from the secret.

    $ SA_JWT_TOKEN=$(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
    $ SA_CA_CRT=$(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
  4. Retrieve the OpenShift cluster endpoint.

    $ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")
  5. Use the information collected in the previous steps to set up the kubernetes authentication method in Vault as shown:

    $ vault auth enable kubernetes
    $ vault write auth/kubernetes/config \
              token_reviewer_jwt="$SA_JWT_TOKEN" \
              kubernetes_host="$OCP_HOST" \
              kubernetes_ca_cert="$SA_CA_CRT"
  6. Create a role in Vault for the tenant namespace:

    $ vault write "auth/kubernetes/role/csi-kubernetes" bound_service_account_names="ceph-csi-vault-sa" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>

    csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa. These default values can be overridden by creating a ConfigMap in the tenant namespace.

    For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap.
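
    To confirm that the Kubernetes authentication configuration and the role were written as expected, they can be read back from Vault, for example:

    $ vault read auth/kubernetes/config
    $ vault read auth/kubernetes/role/csi-kubernetes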

Sample YAML

  • To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault.

    The sample YAML given below can be used to update or create the csi-kms-connection-details ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    data:
      vault-tenant-sa: |-
        {
          "encryptionKMSType": "vaulttenantsa",
          "vaultAddress": "<https://hostname_or_ip_of_vault_server:port>",
          "vaultTLSServerName": "<vault TLS server name>",
          "vaultAuthPath": "/v1/auth/kubernetes/login",
          "vaultAuthNamespace": "<vault auth namespace name>"
          "vaultNamespace": "<vault namespace name>",
          "vaultBackendPath": "<vault backend path name>",
          "vaultCAFromSecret": "<secret containing CA cert>",
          "vaultClientCertFromSecret": "<secret containing client cert>",
          "vaultClientCertKeyFromSecret": "<secret containing client private key>",
          "tenantSAName": "<service account name in the tenant namespace>"
        }
    metadata:
      name: csi-kms-connection-details

    encryptionKMSType

    Set to vaulttenantsa to use service accounts for authentication with vault.

    vaultAddress

    The hostname or IP address of the vault server with the port number.

    vaultTLSServerName

    (Optional) The vault TLS server name.

    vaultAuthPath

    (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes. If the auth method is enabled in a different path other than kubernetes, this variable needs to be set as "/v1/auth/<path>/login".

    vaultAuthNamespace

    (Optional) The Vault namespace where kubernetes auth method is enabled.

    vaultNamespace

    (Optional) The Vault namespace where the backend path being used to store the keys exists.

    vaultBackendPath

    The backend path in Vault where the encryption keys will be stored.

    vaultCAFromSecret

    The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault.

    vaultClientCertFromSecret

    The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault.

    vaultClientCertKeyFromSecret

    The secret in the OpenShift Data Foundation cluster containing the client private key from Vault.

    tenantSAName

    (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa. If a different name is to be used, this variable has to be set accordingly.

2.2.2. Creating a storage class for persistent volume encryption

Prerequisites

  • Based on your use case, ensure that access to the KMS is configured using one of the methods described in Access configuration for Key Management System (KMS).

Procedure

  1. In the OpenShift Web Console, navigate to Storage → StorageClasses.
  2. Click Create Storage Class.
  3. Enter the storage class Name and Description.
  4. Select either Delete or Retain for the Reclaim Policy. By default, Delete is selected.
  5. Select either Immediate or WaitForFirstConsumer as the Volume binding mode. WaitForFirstConsumer is set as the default option.
  6. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com, which is the plugin used for provisioning the persistent volumes.
  7. Select an existing Storage Pool where the volume data is stored from the list, or create a new pool.
  8. Select the Enable encryption checkbox. There are two options available to set the KMS connection details:

    • Select existing KMS connection: Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap.

      1. Select the Provider from the drop down.
      2. Select the Key service for the given provider from the list.
    • Create new KMS connection: This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only.

      1. Select the Key Management Service Provider.
      2. If Vault is selected as the Key Management Service Provider, follow these steps:

        1. Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
        2. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

          1. Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
          2. Optional: Enter TLS Server Name and Vault Enterprise Namespace.
          3. Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
          4. Click Save.
      3. If Thales CipherTrust Manager (using KMIP) is selected as the Key Management Service Provider, follow these steps:

        1. Enter a unique Connection Name.
        2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address: 123.34.3.2, Port: 5696.
        3. Upload the Client Certificate, CA certificate, and Client Private Key.
        4. Enter the Unique Identifier for the key to be used for encryption and decryption, generated above.
        5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
      4. Click Save.
      5. Click Create.
  9. Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path.

    Note

    vaultBackend is an optional parameter that is added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation.

    1. Identify the encryptionKMSID being used by the newly created storage class.

      1. On the OpenShift Web Console, navigate to Storage → StorageClasses.
      2. Click the Storage class name → YAML tab.
      3. Capture the encryptionKMSID being used by the storage class.

        Example:

        encryptionKMSID: 1-vault
    2. On the OpenShift Web Console, navigate to Workloads → ConfigMaps.
    3. To view the KMS connection details, click csi-kms-connection-details.
    4. Edit the ConfigMap.

      1. Click Action menu (⋮) → Edit ConfigMap.
      2. Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID.

        You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2.

        Example:

         kind: ConfigMap
         apiVersion: v1
         metadata:
           name: csi-kms-connection-details
         [...]
         data:
           1-vault: |-
             {
               "encryptionKMSType": "vaulttokens",
               "kmsServiceName": "1-vault",
               [...]
               "vaultBackend": "kv-v2"
             }
           2-vault: |-
             {
               "encryptionKMSType": "vaulttenantsa",
               [...]
               "vaultBackend": "kv"
             }
      3. Click Save.
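
For reference, an encryption-enabled storage class created by this procedure typically carries the encryption parameters shown in the following sketch. The storage class name and pool are illustrative, and the encryptionKMSID value must match an entry in the csi-kms-connection-details ConfigMap (1-vault in the example above).

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: encrypted-rbd-sc                          # illustrative name
    provisioner: openshift-storage.rbd.csi.ceph.com
    parameters:
      clusterID: openshift-storage                    # set by the operator (illustrative)
      pool: ocs-storagecluster-cephblockpool          # pool selected in step 7
      encrypted: "true"                               # set when Enable encryption is selected
      encryptionKMSID: 1-vault                        # KMS connection chosen in step 8
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer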

Next steps

  • The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims.

    Important

    Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp.

2.2.2.1. Overriding Vault connection details using tenant ConfigMap

The Vault connection details can be reconfigured per tenant by creating a ConfigMap in the tenant namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The values in the tenant ConfigMap override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace.

Procedure

  1. Ensure that you are in the tenant namespace.
  2. Click Workloads → ConfigMaps.
  3. Click on Create ConfigMap.
  4. The following is a sample YAML. The values to be overridden for the given tenant namespace can be specified under the data section as shown below:

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ceph-csi-kms-config
    data:
      vaultAddress: "<vault_address:port>"
      vaultBackendPath: "<backend_path>"
      vaultTLSServerName: "<vault_tls_server_name>"
      vaultNamespace: "<vault_namespace>"
  5. After the YAML is edited, click Create.
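
The same ConfigMap can also be applied from the command line while in the tenant namespace, for example by saving the sample YAML above to a file (the file name here is just a placeholder):

    $ oc -n <tenant_namespace> create -f ceph-csi-kms-config.yaml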

2.3. Storage class with single replica

You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management at the application level.

Warning

Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSD is lost, recovering from it requires very disruptive steps. All applications can lose their data and must be recreated if an OSD fails.

Procedure

  1. Enable the single replica feature using the following command:

    $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
  2. Verify storagecluster is in Ready state:

    $ oc get storagecluster

    Example output:

    NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
    ocs-storagecluster   10m   Ready              2024-02-05T13:56:15Z   4.15.0
  3. New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state:

    $ oc get cephblockpools

    Example output:

    NAME                                          PHASE
    ocs-storagecluster-cephblockpool              Ready
    ocs-storagecluster-cephblockpool-us-east-1a   Ready
    ocs-storagecluster-cephblockpool-us-east-1b   Ready
    ocs-storagecluster-cephblockpool-us-east-1c   Ready
  4. Verify new storage classes have been created:

    $ oc get storageclass

    Example output:

    NAME                                        PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp2 (default)                               kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   104m
    gp2-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
    gp3-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
    ocs-storagecluster-ceph-non-resilient-rbd   openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   46m
    ocs-storagecluster-ceph-rbd                 openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   52m
    ocs-storagecluster-cephfs                   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   52m
    openshift-storage.noobaa.io                 openshift-storage.noobaa.io/obc         Delete          Immediate              false                  50m
  5. New OSD pods are created: 3 osd-prepare pods and 3 additional OSD pods. Verify that the new OSD pods are in Running state:

    $ oc get pods | grep osd

    Example output:

    rook-ceph-osd-0-6dc76777bc-snhnm                                  2/2     Running     0               9m50s
    rook-ceph-osd-1-768bdfdc4-h5n7k                                   2/2     Running     0               9m48s
    rook-ceph-osd-2-69878645c4-bkdlq                                  2/2     Running     0               9m37s
    rook-ceph-osd-3-64c44d7d76-zfxq9                                  2/2     Running     0               5m23s
    rook-ceph-osd-4-654445b78f-nsgjb                                  2/2     Running     0               5m23s
    rook-ceph-osd-5-5775949f57-vz6jp                                  2/2     Running     0               5m22s
    rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf       0/1     Completed   0               10m
    rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t       0/1     Completed   0               10m
    rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv       0/1     Completed   0               10m
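
With the feature enabled, applications that manage their own redundancy can request single-replica storage through the new storage class. A minimal sketch of such a PersistentVolumeClaim follows; the claim name, namespace, and size are illustrative:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: single-replica-pvc        # illustrative name
      namespace: my-app               # illustrative application namespace
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-non-resilient-rbd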

2.3.1. Recovering after OSD lost from single replica

When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss.

Procedure

Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the failure domain where the failing OSD is located.

  1. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the next steps. If you do not know where the failing OSD is, skip to step 2.

    $ oc get cephblockpools

    Example output:

    NAME                                          PHASE
    ocs-storagecluster-cephblockpool              Ready
    ocs-storagecluster-cephblockpool-us-south-1   Ready
    ocs-storagecluster-cephblockpool-us-south-2   Ready
    ocs-storagecluster-cephblockpool-us-south-3   Ready

    Copy the corresponding failure domain name for use in next steps, then skip to step 4.

  2. Find the OSD pod that is in Error state or CrashLoopBackoff state to find the failing OSD:

    $ oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\|Error'
  3. Identify the replica-1 pool that had the failed OSD.

    1. Identify the node where the failed OSD was running:

      failed_osd_id=0 #replace with the ID of the failed OSD
    2. Identify the failureDomainLabel for the node where the failed OSD was running:

      failure_domain_label=$(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print $2}')
      failure_domain_value=$(oc get pods $failed_osd_id -o yaml | grep topology-location-zone | awk '{print $2}')

      The output shows the replica-1 pool name whose OSD is failing, for example:

      replica1-pool-name= "ocs-storagecluster-cephblockpool-$failure_domain_value”

      where $failure_domain_value is the failureDomainName.

  4. Delete the replica-1 pool.

    1. Connect to the toolbox pod:

      toolbox=$(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}')
      
      oc rsh $toolbox -n openshift-storage
    2. Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example:

      ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it

      Replace <replica1-pool-name> with the pool name identified earlier.

  5. Purge the failing OSD by following the steps in the section "Replacing operational or failed storage devices" for your platform in the Replacing devices guide.
  6. Restart the rook-ceph operator:

    $ oc delete pod -l app=rook-ceph-operator -n openshift-storage
  7. Recreate any affected applications in that availability zone to start using the new pool, which has the same name.
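
After the operator restarts, the replica-1 pool for that failure domain is recreated with the same name. The commands used earlier in this chapter can be reused to confirm that the pool is back in Ready state and that the OSD pods are running, for example:

    $ oc get cephblockpools
    $ oc get pods -n openshift-storage | grep osd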