
Chapter 13. Creating exports using NFS [Technology Preview]


This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster.

Important

Using NFS to create exports is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information, see Technology Preview Features Support Scope.

Follow the instructions below to create exports and access them externally from the OpenShift cluster:

13.1. Enabling the NFS feature

To use the NFS feature, you must first enable it in the storage cluster.

Prerequisites

  • OpenShift Data Foundation is installed and running in the openshift-storage namespace.
  • The OpenShift Data Foundation installation includes a CephFilesystem.

Procedure

Run the following commands to enable the NFS feature:

$ oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{"spec": {"nfs":{"enable": true}}}'
$ oc --namespace openshift-storage patch configmap rook-ceph-operator-config --type merge --patch '{"data":{"ROOK_CSI_ENABLE_NFS": "true"}}'

Verification steps

NFS installation and configuration is complete when the following conditions are met:

  • The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready:

    oc -n openshift-storage describe cephnfs ocs-storagecluster-cephnfs

  • All csi-nfsplugin-* pods are running:

    oc -n openshift-storage get pod | grep csi-nfsplugin

    The output lists multiple pods. For example:

    csi-nfsplugin-47qwq                                          2/2     Running  0  10s
    csi-nfsplugin-77947                                          2/2     Running  0  10s
    csi-nfsplugin-ct2pm                                          2/2     Running  0  10s
    csi-nfsplugin-provisioner-f85b75fbb-2rm2w                    2/2     Running  0  10s
    csi-nfsplugin-provisioner-f85b75fbb-8nj5h                    2/2     Running  0  10s
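
Optionally, you can also query the CephNFS status directly. The following is a minimal sketch, assuming the CephNFS resource reports its state in the status.phase field:

$ oc -n openshift-storage get cephnfs ocs-storagecluster-cephnfs --output jsonpath='{.status.phase}'
Ready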

13.2. Creating NFS exports

You create an NFS export by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass.

You can create NFS PVCs in one of two ways:

Create an NFS PVC using a YAML file.

The following is an example PVC.

Note

volumeMode: Block will not work for NFS volumes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: <desired_name>
spec:
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 1Gi
 storageClassName: ocs-storagecluster-ceph-nfs
<desired_name>
Specify a name for the PVC, for example, my-nfs-export.

The export is created once the PVC reaches the Bound state.
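
For example, here is a minimal sketch of creating the PVC from the CLI, assuming the manifest above is saved as nfs-export-pvc.yaml (a hypothetical filename) and that the PVC is created in the openshift-storage project, as in the web console procedure below:

$ oc -n openshift-storage apply -f nfs-export-pvc.yaml
$ oc -n openshift-storage get pvc my-nfs-export --watch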

Create NFS PVCs from the OpenShift Container Platform web console.

Prerequisites

  • Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster.

Procedure

  1. In the OpenShift Web Console, click Storage → Persistent Volume Claims.
  2. Set the Project to openshift-storage.
  3. Click Create PersistentVolumeClaim.

    1. Specify Storage Class, ocs-storagecluster-ceph-nfs.
    2. Specify the PVC Name, for example, my-nfs-export.
    3. Select the required Access Mode.
    4. Specify a Size as per application requirement.
    5. Select Volume mode as Filesystem.

      Note: Block mode is not supported for NFS PVCs.

    6. Click Create and wait until the PVC is in Bound status.
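
After the PVC is created, you can optionally confirm from the CLI that it has reached the Bound state, using the example name my-nfs-export:

$ oc -n openshift-storage get pvc my-nfs-export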

13.3. Consuming NFS exports in-cluster

Kubernetes application pods can consume NFS exports by mounting a previously created PVC.

You can mount the PVC in one of two ways:

Using a YAML:

Below is an example pod that uses the example PVC created in Section 13.2, “Creating NFS exports”:

apiVersion: v1
kind: Pod
metadata:
 name: nfs-export-example
spec:
 containers:
   - name: web-server
     image: nginx
     volumeMounts:
       - name: nfs-export-pvc
         mountPath: /var/lib/www/html
 volumes:
   - name: nfs-export-pvc
     persistentVolumeClaim:
       claimName: <pvc_name>
       readOnly: false
<pvc_name>
Specify the PVC you have previously created, for example, my-nfs-export.
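
Once the pod is running, you can optionally verify that the export is mounted inside the container. This is a minimal sketch, assuming the example pod above was created in the openshift-storage project:

$ oc -n openshift-storage exec nfs-export-example -- df -h /var/lib/www/html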

Using the OpenShift Container Platform web console.

Procedure

  1. On the OpenShift Container Platform web console, navigate to Workloads → Pods.
  2. Click Create Pod to create a new application pod.
  3. Under the metadata section, add a name, for example, nfs-export-example, and set the namespace to openshift-storage.
  4. Under the spec: section, add a containers: section with image and volumeMounts entries:

    apiVersion: v1
    kind: Pod
    metadata:
     name: nfs-export-example
     namespace: openshift-storage
    spec:
     containers:
       - name: web-server
         image: nginx
         volumeMounts:
           - name: <volume_name>
             mountPath: /var/lib/www/html

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
     name: nfs-export-example
     namespace: openshift-storage
    spec:
     containers:
       - name: web-server
         image: nginx
         volumeMounts:
           - name: nfs-export-pvc
             mountPath: /var/lib/www/html
  5. Under the spec: section, add a volumes: section to attach the NFS PVC as a volume for the application pod:

    volumes:
      - name: <volume_name>
        persistentVolumeClaim:
          claimName: <pvc_name>

    For example:

    volumes:
      - name: nfs-export-pvc
        persistentVolumeClaim:
          claimName: my-nfs-export
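
Optionally, you can confirm from the CLI that the pod references the NFS PVC. This is a minimal sketch using the example names above:

$ oc -n openshift-storage get pod nfs-export-example --output jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'
my-nfs-export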

13.4. Consuming NFS exports externally from the OpenShift cluster

NFS clients outside of the OpenShift cluster can mount NFS exports created from a previously created PVC.

Procedure

  1. After the nfs flag is enabled, Rook deploys a single-server CephNFS. Fetch the value of the ceph_nfs field for the NFS-Ganesha server to use in the next step:

    $ oc get pods -n openshift-storage | grep rook-ceph-nfs
    $ oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs

    For example:

    $ oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs
      ceph_nfs=my-nfs
  2. Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation.

    apiVersion: v1
    kind: Service
    metadata:
     name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer
     namespace: openshift-storage
    spec:
     ports:
       - name: nfs
         port: 2049
     type: LoadBalancer
     externalTrafficPolicy: Local
     selector:
       app: rook-ceph-nfs
       ceph_nfs: <my-nfs>
       instance: a

    Replace <my-nfs> with the value you got in step 1.

  3. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step.

    1. Get the share path from the PV.

      1. Get the name of the PV associated with the NFS export’s PVC:

        $ oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}'
        pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d

        Replace <pvc_name> with your own PVC name. For example:

        oc get pvc my-nfs-export --output jsonpath='{.spec.volumeName}'
        pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d
      2. Use the PV name obtained previously to get the NFS export’s share path:

        $ oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}'
        /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215
    2. Get an ingress address for the NFS server. A service’s ingress status may have multiple addresses; choose the one that external clients will use. In the example below, there is only a single address: the host name ingress-id.somedomain.com.

      $ oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}'
      [{"hostname":"ingress-id.somedomain.com"}]
  4. Connect the external client using the share path and ingress address from the previous steps (a combined shell sketch of these lookups follows this procedure). The following example mounts the export to the client’s directory path /export/mount/path:

    $ mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path

    If the mount does not work immediately, the Kubernetes environment might still be configuring the network resources that allow ingress to the NFS server.
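
For convenience, the lookups in steps 3 and 4 can be combined into a short shell sketch. This assumes the example PVC name my-nfs-export and mount path /export/mount/path used above, and that the LoadBalancer Service reports a hostname (use the ip field instead if your provider assigns an IP address). Run the oc commands from a host with cluster access and the mount command on the external client:

$ PV_NAME=$(oc get pvc my-nfs-export --output jsonpath='{.spec.volumeName}')
$ SHARE_PATH=$(oc get pv "$PV_NAME" --output jsonpath='{.spec.csi.volumeAttributes.share}')
$ NFS_HOST=$(oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ mount -t nfs4 -o proto=tcp "$NFS_HOST:$SHARE_PATH" /export/mount/path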
