Chapter 14. Creating exports using NFS

This chapter describes how to create NFS exports and access them from outside the OpenShift cluster.

Follow the instructions below to enable the NFS feature, create exports, and consume them both in-cluster and externally from the OpenShift cluster:

14.1. Enabling the NFS feature

To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface.

Prerequisites

  • OpenShift Data Foundation is installed and running in the openshift-storage namespace.
  • The OpenShift Data Foundation installation includes a CephFilesystem.

Procedure

  • Run the following command to enable the NFS feature from CLI:
$ oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{"spec": {"nfs":{"enable": true}}}'
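
You can optionally confirm that the flag was applied by reading the same field back from the StorageCluster spec. This convenience check is not part of the official procedure; the command should print true:

$ oc --namespace openshift-storage get storageclusters.ocs.openshift.io ocs-storagecluster --output jsonpath='{.spec.nfs.enable}'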

Verification steps

NFS installation and configuration is complete when the following conditions are met:

  • The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready:

    oc -n openshift-storage describe cephnfs ocs-storagecluster-cephnfs

  • All the csi-nfsplugin-* pods are running:

    oc -n openshift-storage get pod | grep csi-nfsplugin

    The output has multiple pods. For example:

    csi-nfsplugin-47qwq                                          2/2     Running  0  10s
    csi-nfsplugin-77947                                          2/2     Running  0  10s
    csi-nfsplugin-ct2pm                                          2/2     Running  0  10s
    csi-nfsplugin-provisioner-f85b75fbb-2rm2w                    2/2     Running  0  10s
    csi-nfsplugin-provisioner-f85b75fbb-8nj5h                    2/2     Running  0  10s
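
If you prefer a single value over the full describe output, the CephNFS phase can also be read directly; the status.phase field name is an assumption based on the Rook CephNFS resource and may differ between versions:

$ oc -n openshift-storage get cephnfs ocs-storagecluster-cephnfs --output jsonpath='{.status.phase}'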

14.2. Creating NFS exports

NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass.

You can create NFS PVCs in two ways:

Create an NFS PVC using a YAML file.

The following is an example PVC.

Note

volumeMode: Block will not work for NFS volumes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: <desired_name>
spec:
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 1Gi
 storageClassName: ocs-storagecluster-ceph-nfs
<desired_name>
Specify a name for the PVC, for example, my-nfs-export.

The export is created once the PVC reaches the Bound state.
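
If you saved the example above to a file, a minimal sequence to create the claim and wait for it to bind looks like the following; the file name and the my-nfs-export PVC name are illustrative, the openshift-storage namespace is assumed as in the web console procedure below, and the wait form requires a recent oc client:

$ oc -n openshift-storage apply -f nfs-pvc.yaml
$ oc -n openshift-storage wait --for=jsonpath='{.status.phase}'=Bound pvc/my-nfs-export --timeout=60s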

Create NFS PVCs from the OpenShift Container Platform web console.

Prerequisites

  • Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster.

Procedure

  1. In the OpenShift Web Console, click Storage → Persistent Volume Claims.
  2. Set the Project to openshift-storage.
  3. Click Create PersistentVolumeClaim.

    1. Specify Storage Class, ocs-storagecluster-ceph-nfs.
    2. Specify the PVC Name, for example, my-nfs-export.
    3. Select the required Access Mode.
    4. Specify a Size as per application requirement.
    5. Select Volume mode as Filesystem.

      Note: Block mode is not supported for NFS PVCs.

    6. Click Create and wait until the PVC is in Bound status.
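
Regardless of how the PVC was created, you can confirm from the CLI that the claim reached the Bound state and that a volume was provisioned for it; the PVC name and namespace below are the example values used in this chapter:

$ oc -n openshift-storage get pvc my-nfs-export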

14.3. Consuming NFS exports in-cluster

Kubernetes application pods can consume NFS exports by mounting a previously created PVC.

You can mount the PVC in one of two ways:

Using a YAML:

Below is an example pod that uses the example PVC created in Section 14.2, “Creating NFS exports”:

apiVersion: v1
kind: Pod
metadata:
 name: nfs-export-example
spec:
 containers:
   - name: web-server
     image: nginx
     volumeMounts:
       - name: nfs-export-pvc
         mountPath: /var/lib/www/html
 volumes:
   - name: nfs-export-pvc
     persistentVolumeClaim:
       claimName: <pvc_name>
       readOnly: false
<pvc_name>
Specify the PVC you have previously created, for example, my-nfs-export.
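
Assuming the pod manifest above is saved to a file and the PVC lives in the openshift-storage namespace, you can create the pod and confirm that the export is mounted inside the container; the file name is illustrative:

$ oc -n openshift-storage apply -f nfs-export-example-pod.yaml
$ oc -n openshift-storage exec nfs-export-example -- df -h /var/lib/www/html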

Using the OpenShift Container Platform web console.

Procedure

  1. On the OpenShift Container Platform web console, navigate to Workloads → Pods.
  2. Click Create Pod to create a new application pod.
  3. Under the metadata section, add a name, for example, nfs-export-example, and set the namespace to openshift-storage.
  4. Under the spec: section, add a containers: section with the image and volumeMounts sections:

    apiVersion: v1
    kind: Pod
    metadata:
     name: nfs-export-example
     namespace: openshift-storage
    spec:
     containers:
       - name: web-server
         image: nginx
         volumeMounts:
           - name: <volume_name>
             mountPath: /var/lib/www/html

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
     name: nfs-export-example
     namespace: openshift-storage
    spec:
     containers:
       - name: web-server
         image: nginx
         volumeMounts:
           - name: nfs-export-pvc
             mountPath: /var/lib/www/html
  5. Under the spec: section, add a volumes: section to add the NFS PVC as a volume for the application pod:

    volumes:
      - name: <volume_name>
        persistentVolumeClaim:
          claimName: <pvc_name>

    For example:

    volumes:
      - name: nfs-export-pvc
        persistentVolumeClaim:
          claimName: my-nfs-export

14.4. Consuming NFS exports externally from the OpenShift cluster

NFS clients outside of the OpenShift cluster can mount the NFS exports created from a previously created PVC.

Procedure

  1. After the nfs flag is enabled, a single-server CephNFS is deployed by Rook. You need to fetch the value of the ceph_nfs label of the NFS-Ganesha server pod to use in the next step:

    $ oc get pods -n openshift-storage | grep rook-ceph-nfs
    $ oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs

    For example:

    $ oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs
      ceph_nfs=my-nfs
  2. Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation.

    apiVersion: v1
    kind: Service
    metadata:
     name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer
     namespace: openshift-storage
    spec:
     ports:
       - name: nfs
         port: 2049
     type: LoadBalancer
     externalTrafficPolicy: Local
     selector:
       app: rook-ceph-nfs
       ceph_nfs: <my-nfs>
       instance: a

    Replace <my-nfs> with the value you got in step 1.
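
    Assuming the Service manifest above is saved to a file, you can create it and check that it has been assigned an external address; the file name is illustrative:

    $ oc -n openshift-storage apply -f nfs-load-balancer.yaml
    $ oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer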

  3. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step.

    1. Get the share path from the PV.

      1. Get the name of the PV associated with the NFS export’s PVC:

        $ oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}'
        pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d

        Replace <pvc_name> with your own PVC name. For example:

        $ oc get pvc my-nfs-export --output jsonpath='{.spec.volumeName}'
        pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d
      2. Use the PV name obtained previously to get the NFS export’s share path:

        $ oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}'
        /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215
    2. Get an ingress address for the NFS server. A service’s ingress status may have multiple addresses. Choose the one you want external clients to use. In the example below, there is only a single address: the host name ingress-id.somedomain.com.

      $ oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}'
      [{"hostname":"ingress-id.somedomain.com"}]
  4. Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client’s directory path /export/mount/path:

    $ mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path

    If the mount does not work immediately, the Kubernetes environment might still be configuring the network resources that allow ingress to the NFS server.
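
For convenience, the lookups in step 3 and the mount in step 4 can be combined into a short shell snippet, run from a host that has both oc access and an NFS client. The PVC name, Service name, and mount point are the example values used in this chapter:

$ PVC=my-nfs-export
$ PV=$(oc -n openshift-storage get pvc "$PVC" --output jsonpath='{.spec.volumeName}')
$ SHARE=$(oc get pv "$PV" --output jsonpath='{.spec.csi.volumeAttributes.share}')
$ HOST=$(oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ mount -t nfs4 -o proto=tcp "$HOST:$SHARE" /export/mount/path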
