Chapter 8. Post-installation storage configuration

After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration.

8.1. Dynamic provisioning

8.1.1. About dynamic provisioning

The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources.

The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs.
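
For example, a user requests dynamically provisioned storage by creating a persistent volume claim (PVC) that names a storage class. The following is a minimal sketch; the claim name, requested size, and storage class name placeholder are illustrative values rather than defaults:

Example PVC requesting a storage class

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storage-class-name>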

8.1.2. Available dynamic provisioning plugins

OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:

Storage type: Red Hat OpenStack Platform (RHOSP) Cinder
Provisioner plugin name: kubernetes.io/cinder

Storage type: RHOSP Manila Container Storage Interface (CSI)
Provisioner plugin name: manila.csi.openstack.org
Notes: Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning.

Storage type: AWS Elastic Block Store (EBS)
Provisioner plugin name: kubernetes.io/aws-ebs
Notes: For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. An example tagging command follows the note below.

Storage type: Azure Disk
Provisioner plugin name: kubernetes.io/azure-disk

Storage type: Azure File
Provisioner plugin name: kubernetes.io/azure-file
Notes: The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys.

Storage type: GCE Persistent Disk (gcePD)
Provisioner plugin name: kubernetes.io/gce-pd
Notes: In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to prevent PVs from being created in zones where no node in the current cluster exists.

Storage type: VMware vSphere
Provisioner plugin name: kubernetes.io/vsphere-volume

Important

Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.
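
For example, the AWS EBS tag described in the table can be applied to a node's underlying EC2 instance with the AWS CLI. This is a hedged sketch; the instance ID is a placeholder, and <cluster_name> and <cluster_id> must be replaced with values unique to your cluster:

$ aws ec2 create-tags \
    --resources <instance_id> \
    --tags Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id>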

8.2. Defining a storage class

StorageClass objects are currently globally scoped and must be created by cluster-admin or storage-admin users.

Important

The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class.

The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types.

8.2.1. Basic StorageClass object definition

The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS Elastic Block Store (EBS) object definition.

Sample StorageClass definition

kind: StorageClass 1
apiVersion: storage.k8s.io/v1 2
metadata:
  name: <storage-class-name> 3
  annotations: 4
    storageclass.kubernetes.io/is-default-class: 'true'
    ...
provisioner: kubernetes.io/aws-ebs 5
parameters: 6
  type: gp2
...

1
(required) The API object type.
2
(required) The current apiVersion.
3
(required) The name of the storage class.
4
(optional) Annotations for the storage class.
5
(required) The type of provisioner associated with this storage class.
6
(optional) The parameters required for the specific provisioner; these vary from plugin to plugin.
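
After saving a definition like the preceding sample to a file, you can create the storage class and confirm that it exists. The file name storage-class.yaml is illustrative:

$ oc create -f storage-class.yaml

$ oc get storageclass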

8.2.2. Storage class annotations

To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata:

storageclass.kubernetes.io/is-default-class: "true"

For example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
...

This annotation enables any persistent volume claim (PVC) that does not specify a storage class to be provisioned automatically through the default storage class. Your cluster can have more than one storage class, but only one of them can be the default.

Note

The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release.
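
For example, a PVC that omits storageClassName, such as the following minimal sketch with an illustrative name and size, is provisioned through whichever storage class carries the default annotation:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-class-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi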

To set a storage class description, add the following annotation to your storage class metadata:

kubernetes.io/description: My Storage Class Description

For example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubernetes.io/description: My Storage Class Description
...

8.2.3. RHOSP Cinder object definition

cinder-storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/cinder
parameters:
  type: fast  2
  availability: nova 3
  fsType: ext4 4

1
Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2
Volume type created in Cinder. Default is empty.
3
Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node.
4
File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.

8.2.4. AWS Elastic Block Store (EBS) object definition

aws-ebs-storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1 2
  iopsPerGB: "10" 3
  encrypted: "true" 4
  kmsKeyId: keyvalue 5
  fsType: ext4 6

1
(required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2
(required) Select from io1, gp2, sc1, st1. The default is gp2.
3
Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. A worked example follows these callouts.
4
Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false.
5
Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true, then AWS generates a key. See the AWS documentation for a valid ARN value.
6
Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.
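
As a worked example of the iopsPerGB calculation: with type: io1 and iopsPerGB: "10" as in the sample above, a PVC that requests 100Gi from this storage class is provisioned as a volume with 100 x 10 = 1,000 IOPS, and a request of 3,000Gi would be capped at 20,000 IOPS rather than 30,000. The request sizes here are illustrative.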

8.2.5. Azure Disk object definition

azure-advanced-disk-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/azure-disk
volumeBindingMode: WaitForFirstConsumer 2
allowVolumeExpansion: true
parameters:
  kind: Managed 3
  storageaccounttype: Premium_LRS 4
reclaimPolicy: Delete

1
Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2
Using WaitForFirstConsumer is strongly recommended. This delays binding and provisioning of the volume until a pod that uses the PVC is scheduled, so the volume is created in a zone where a worker node with enough available storage exists.
3
Possible values are Shared (default), Managed, and Dedicated.
  1. If kind is set to Shared, Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster.
  2. If kind is set to Managed, Azure creates new managed disks.
  3. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work:

    • The specified storage account must be in the same region.
    • Azure Cloud Provider must have write access to the storage account.
  4. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster.
Important

Red Hat only supports the use of kind: Managed in the storage class.

With Shared and Dedicated, Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. However, Azure Disk does not allow using both managed and unmanaged disks on the same node, so unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes.

4
Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks.

8.2.6. Azure File object definition

The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.

Procedure

  1. Define a ClusterRole object that allows access to create and view secrets:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
    #  name: system:azure-cloud-provider
      name: <persistent-volume-binder-role> 1
    rules:
    - apiGroups: ['']
      resources: ['secrets']
      verbs:     ['get','create']
    1
    The name of the cluster role to view and create secrets.
  2. Add the cluster role to the service account:

    $ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder
  3. Create the Azure File StorageClass object:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: <azure-file> 1
    provisioner: kubernetes.io/azure-file
    parameters:
      location: eastus 2
      skuName: Standard_LRS 3
      storageAccount: <storage-account> 4
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    1
    Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
    2
    Location of the Azure storage account, such as eastus. Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster’s location.
    3
    SKU tier of the Azure storage account, such as Standard_LRS. Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU.
    4
    Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location.
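
A PVC that requests this Azure File storage class might look like the following minimal sketch; the claim name and size are illustrative, and ReadWriteMany is shown because Azure File supports shared access:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: <azure-file>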

8.2.6.1. Considerations when using Azure File

The following file system features are not supported by the default Azure File storage class:

  • Symlinks
  • Hard links
  • Extended attributes
  • Sparse files
  • Named pipes

Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory.

The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
mountOptions:
  - uid=1500 1
  - gid=1500 2
  - mfsymlinks 3
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus
  skuName: Standard_LRS
reclaimPolicy: Delete
volumeBindingMode: Immediate
1
Specifies the user identifier to use for the mounted directory.
2
Specifies the group identifier to use for the mounted directory.
3
Enables symlinks.
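
To illustrate how these mount options take effect, the following hypothetical pod mounts a PVC that is bound through this storage class; files under the mount path are then owned by UID and GID 1500. The pod name, image, and claim name are assumptions made for this sketch:

apiVersion: v1
kind: Pod
metadata:
  name: azure-file-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: azure-file-claim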

8.2.7. GCE PersistentDisk (gcePD) object definition

gce-pd-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard 2
  replication-type: none
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete

1
Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2
Select either pd-standard or pd-ssd. The default is pd-standard.

8.2.8. VMware vSphere object definition

vsphere-storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <storage-class-name> 1
provisioner: kubernetes.io/vsphere-volume 2
parameters:
  diskformat: thin 3

1
Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
2
For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation.
3
thin, zeroedthick, and eagerzeroedthick are all valid values for the diskformat parameter. See the vSphere documentation for additional details regarding the disk format types. The default value is thin.

8.2.9. Red Hat Virtualization (RHV) object definition

OpenShift Container Platform creates a default object of type StorageClass named ovirt-csi-sc, which is used for creating dynamically provisioned persistent volumes.

To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML:

ovirt-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage_class_name>  1
  annotations:
    storageclass.kubernetes.io/is-default-class: "<boolean>"  2
provisioner: csi.ovirt.org
allowVolumeExpansion: <boolean> 3
reclaimPolicy: Delete 4
volumeBindingMode: Immediate 5
parameters:
  storageDomainName: <rhv-storage-domain-name> 6
  thinProvisioning: "<boolean>"  7
  csi.storage.k8s.io/fstype: <file_system_type> 8

1
Name of the storage class.
2
Set to true if you want this storage class to become the default storage class in the cluster. If set to true, the existing default storage class must be edited and set to false.
3
true enables dynamic volume expansion, false prevents it. true is recommended.
4
Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. The default policy is Delete.
5
Indicates how to provision and bind PersistentVolumeClaims. When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature.
6
The RHV storage domain name to use.
7
If true, the disk is thin provisioned. If false, the disk is preallocated. Thin provisioning is recommended.
8
Optional: File system type to be created. Possible values: ext4 (default) or xfs.

8.3. Changing the default storage class

Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp2 and standard, you can change the default storage class from gp2 to standard.

  1. List the storage classes:

    $ oc get storageclass

    Example output

    NAME                 TYPE
    gp2 (default)        kubernetes.io/aws-ebs 1
    standard             kubernetes.io/aws-ebs

    1
    (default) denotes the default storage class.
  2. Change the value of the storageclass.kubernetes.io/is-default-class annotation to false for the default storage class:

    $ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  3. Make another storage class the default by setting the storageclass.kubernetes.io/is-default-class annotation to true:

    $ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
  4. Verify the changes:

    $ oc get storageclass

    Example output

    NAME                 TYPE
    gp2                  kubernetes.io/aws-ebs
    standard (default)   kubernetes.io/aws-ebs

8.4. Optimizing storage

Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner.

8.5. Available persistent storage options

Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment.

Table 8.1. Available storage options
Storage type: Block

Description:
  • Presented to the operating system (OS) as a block device
  • Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system
  • Also referred to as a Storage Area Network (SAN)
  • Non-shareable, which means that only one client at a time can mount an endpoint of this type

Examples: AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform.

Storage type: File

Description:
  • Presented to the OS as a file system export to be mounted
  • Also referred to as Network Attached Storage (NAS)
  • Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.

Examples: RHEL NFS, NetApp NFS [1], and Vendor NFS

Storage type: Object

Description:
  • Accessible through a REST API endpoint
  • Configurable for use in the OpenShift image registry
  • Applications must build their drivers into the application and/or container.

Examples: AWS S3

  1. NetApp NFS supports dynamic PV provisioning when using the Trident plugin.
Important

Currently, Container-Native Storage (CNS) is not supported in OpenShift Container Platform 4.10.

8.6. Recommended configurable storage technology

The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.

Table 8.2. Recommended and configurable storage technology
Storage type   ROX [1]   RWX [2]   Registry       Scaled registry    Metrics [3]        Logging            Apps
Block          Yes [4]   No        Configurable   Not configurable   Recommended        Recommended        Recommended
File           Yes [4]   Yes       Configurable   Configurable       Configurable [5]   Configurable [6]   Recommended
Object         Yes       Yes       Recommended    Recommended        Not configurable   Not configurable   Not configurable [7]

1 ReadOnlyMany

2 ReadWriteMany

3 Prometheus is the underlying technology used for metrics.

4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.

5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics.

6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required.

7 Object storage is not consumed through OpenShift Container Platform’s PVs or PVCs. Apps must integrate with the object storage REST API.

Note

A scaled registry is an OpenShift image registry where two or more pod replicas are running.

8.6.1. Specific application storage recommendations

Important

Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OpenShift Container Platform core components.

8.6.1.1. Registry

In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment:

  • The storage technology does not have to support RWX access mode.
  • The storage technology must ensure read-after-write consistency.
  • The preferred storage technology is object storage followed by block storage.
  • File storage is not recommended for OpenShift image registry cluster deployment with production workloads.

8.6.1.2. Scaled registry

In a scaled/HA OpenShift image registry cluster deployment:

  • The storage technology must support RWX access mode.
  • The storage technology must ensure read-after-write consistency.
  • The preferred storage technology is object storage.
  • Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.
  • Object storage should be S3 or Swift compliant.
  • For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.
  • Block storage is not configurable.

8.6.1.3. Metrics

In an OpenShift Container Platform hosted metrics cluster deployment:

  • The preferred storage technology is block storage.
  • Object storage is not configurable.
Important

It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.

8.6.1.4. Logging

In an OpenShift Container Platform hosted logging cluster deployment:

  • The preferred storage technology is block storage.
  • Object storage is not configurable.

8.6.1.5. Applications

Application use cases vary from application to application, as described in the following examples:

  • Storage technologies that support dynamic PV provisioning have low mount time latencies and are not tied to nodes, which supports a healthy cluster.
  • Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.

8.6.2. Other specific application storage recommendations

Important

It is not recommended to use RAID configurations for write-intensive workloads, such as etcd. If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads.

  • Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases.
  • Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.
  • The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices.

8.7. Deploy Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation provides provider-agnostic persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring.

If you are looking for Red Hat OpenShift Data Foundation information about any of the following topics, see the corresponding Red Hat OpenShift Data Foundation documentation:

  • What’s new, known issues, notable bug fixes, and Technology Previews: OpenShift Data Foundation 4.9 Release Notes
  • Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations: Planning your OpenShift Data Foundation 4.9 deployment
  • Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster: Deploying OpenShift Data Foundation 4.9 in external mode
  • Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure: Deploying OpenShift Data Foundation 4.9 using bare metal infrastructure
  • Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters: Deploying OpenShift Data Foundation 4.9 on VMware vSphere
  • Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage: Deploying OpenShift Data Foundation 4.9 using Amazon Web Services
  • Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters: Deploying and managing OpenShift Data Foundation 4.9 using Google Cloud
  • Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters: Deploying and managing OpenShift Data Foundation 4.9 using Microsoft Azure
  • Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power infrastructure: Deploying OpenShift Data Foundation on IBM Power
  • Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z infrastructure: Deploying OpenShift Data Foundation on IBM Z infrastructure
  • Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone: Managing and allocating resources
  • Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa): Managing hybrid and multicloud resources
  • Safely replacing storage devices for Red Hat OpenShift Data Foundation: Replacing devices
  • Safely replacing a node in a Red Hat OpenShift Data Foundation cluster: Replacing nodes
  • Scaling operations in Red Hat OpenShift Data Foundation: Scaling storage
  • Monitoring a Red Hat OpenShift Data Foundation 4.9 cluster: Monitoring Red Hat OpenShift Data Foundation 4.9
  • Resolving issues encountered during operations: Troubleshooting OpenShift Data Foundation 4.9
  • Migrating your OpenShift Container Platform cluster from version 3 to version 4: Migration
