Chapter 7. Optimizing storage
Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner.
7.1. Available persistent storage options
Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment.
Storage type | Description | Examples |
---|---|---|
Block | Presented to the operating system (OS) as a block device. Suitable for applications that need full control of storage and operate at a low level on files, bypassing the file system. Also referred to as a Storage Area Network (SAN). Non-shareable, which means that only one client at a time can mount an endpoint of this type. | AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. |
File | Presented to the OS as a file system export to be mounted. Also referred to as Network Attached Storage (NAS). Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. | RHEL NFS, NetApp NFS (supports dynamic PV provisioning when using the Trident plug-in), and Vendor NFS |
Object | Accessible through a REST API endpoint. Configurable for use in the OpenShift Container Platform Registry. Applications must build their drivers into the application and/or container. | AWS S3 |
Currently, container-native storage (CNS) is not supported in OpenShift Container Platform 4.2.
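For example, on platforms where a dynamic provisioner is available natively, such as AWS EBS, a PersistentVolumeClaim that references a storage class is enough for a PV to be provisioned on demand. The following is a minimal sketch; the claim name, size, and the gp2 storage class name are assumptions that depend on your environment.

```yaml
# Minimal sketch: dynamically provisioned block storage (for example, AWS EBS).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-claim        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                # block storage is non-shareable
  storageClassName: gp2            # assumed AWS EBS-backed storage class
  resources:
    requests:
      storage: 10Gi
```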
7.2. Recommended configurable storage technology
The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.
Storage type | ROX [a] | RWX [b] | Registry | Scaled registry | Metrics [c] | Logging | Apps |
---|---|---|---|---|---|---|---|
Block | Yes [d] | No | Configurable | Not configurable | Recommended | Recommended | Recommended |
File | Yes [d] | Yes | Configurable | Configurable | Configurable [e] | Configurable [f] | Recommended |
Object | Yes | Yes | Recommended | Recommended | Not configurable | Not configurable | Not configurable [g] |
[a] ReadOnlyMany
[b] ReadWriteMany
[c] Prometheus is the underlying technology used for metrics.
[d] This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.
[e] For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any PersistentVolumeClaims that are configured for use with metrics.
[f] For logging, using any shared storage would be an anti-pattern. One volume per Elasticsearch is required.
[g] Object storage is not consumed through OpenShift Container Platform’s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API.
A scaled registry is an OpenShift Container Platform registry where three or more pod replicas are running.
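The ROX and RWX columns map to the accessModes field of a PersistentVolumeClaim. As a minimal sketch, a claim for shared file storage that several pods mount concurrently would request the ReadWriteMany mode; the claim and storage class names below are assumptions.

```yaml
# Minimal sketch: a ReadWriteMany (RWX) claim backed by file storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                # RWX: multiple nodes can mount read-write
  storageClassName: example-nfs    # hypothetical file storage class
  resources:
    requests:
      storage: 50Gi
```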
7.2.1. Specific application storage recommendations
Testing shows issues with using the NFS server on RHEL as a storage backend for the container image registry (OpenShift Container Registry and Quay), for Prometheus monitoring storage, and for Elasticsearch logging storage. Therefore, using NFS to back PVs used by core services is not recommended.
Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OpenShift core components.
7.2.1.1. Registry
In a non-scaled/high-availability (HA) OpenShift Container Platform registry cluster deployment:
- The preferred storage technology is object storage, followed by block storage. The storage technology does not have to support RWX access mode. See the configuration sketch after this list.
- The storage technology must ensure read-after-write consistency. NAS storage is not recommended for an OpenShift Container Platform registry cluster deployment with production workloads.
- While hostPath volumes are configurable for a non-scaled/HA OpenShift Container Platform registry, they are not recommended for cluster deployment.
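As a sketch of the object storage recommendation, the internal image registry can be pointed at an S3 bucket through the Image Registry Operator configuration resource. The bucket and region values are placeholders, and the available storage fields depend on your platform and credentials setup.

```yaml
# Sketch: internal registry backed by S3 object storage.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    s3:
      bucket: example-registry-bucket   # placeholder bucket name
      region: us-east-1                 # placeholder region
```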
7.2.1.2. Scaled registry
In a scaled/HA OpenShift Container Platform registry cluster deployment:
- The preferred storage technology is object storage. The storage technology must support RWX access mode and must ensure read-after-write consistency. See the configuration sketch after this list.
- File storage and block storage are not recommended for a scaled/HA OpenShift Container Platform registry cluster deployment with production workloads.
- NAS storage is not recommended for an OpenShift Container Platform registry cluster deployment with production workloads.
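A scaled registry combines object storage with three or more pod replicas. A minimal sketch of such a deployment, assuming the same Image Registry Operator configuration resource and placeholder S3 values as in the previous example:

```yaml
# Sketch: three registry replicas on top of shared object storage.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  replicas: 3                           # three or more replicas = scaled registry
  storage:
    s3:
      bucket: example-registry-bucket   # placeholder bucket name
      region: us-east-1                 # placeholder region
```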
7.2.1.3. Metrics
In an OpenShift Container Platform hosted metrics cluster deployment:
- The preferred storage technology is block storage. See the configuration sketch at the end of this section.
Testing shows significant unrecoverable corruption when using file storage and, therefore, file storage is not recommended for use with metrics.
There are file storage implementations in the marketplace that might not have these issues. Contact the individual storage vendor for more information on any testing that might have been completed against these OpenShift core components.
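As a sketch of the block storage recommendation, cluster monitoring can request a persistent block volume for Prometheus through the cluster-monitoring-config ConfigMap; the storage class name and size below are assumptions for your environment.

```yaml
# Sketch: persistent block storage for Prometheus.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2    # assumed block storage class
          resources:
            requests:
              storage: 40Gi
```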
7.2.1.4. Logging
In an OpenShift Container Platform hosted logging cluster deployment:
- The preferred storage technology is block storage. See the configuration sketch at the end of this section.
- NAS storage is not recommended for a hosted logging cluster deployment with production workloads.
Testing shows issues with using the NFS server on RHEL as a storage backend for Elasticsearch, which provides logging storage. Therefore, using NFS to back PVs used by core services is not recommended.
Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OpenShift core components.
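As a sketch of the block storage recommendation, the cluster logging Elasticsearch nodes can each be given a dedicated block volume through the ClusterLogging custom resource. The storage class name, node count, and size are assumptions, and the resource is abbreviated to its storage-related fields.

```yaml
# Sketch: dedicated block storage per Elasticsearch node (abbreviated).
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2      # assumed block storage class
        size: 200Gi
```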
7.2.1.5. Applications
Application use cases vary from application to application, as described in the following examples:
- Storage technologies that support dynamic PV provisioning have low mount time latencies and are not tied to specific nodes, which supports a healthy cluster. See the sketch after this list.
- Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.
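For application workloads, the general pattern is the same regardless of the backend: the application declares a claim and mounts it, and the storage class determines how the volume is provisioned. A minimal sketch with hypothetical names and a placeholder image:

```yaml
# Sketch: an application pod consuming a dynamically provisioned volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2            # assumed block storage class
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app                # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/example-app:latest   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app-data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```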
7.2.2. Other specific application storage recommendations
- OpenShift Container Platform Internal etcd: For the best etcd reliability, the lowest consistent latency storage technology is preferable.
- It is highly recommended that you use etcd with storage that handles serial writes (fsync) quickly, such as NVMe or SSD. Ceph, NFS, and spinning disks are not recommended.
- Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases.
- Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. A sketch of one common pattern follows this list.
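As a sketch of the dedicated block storage recommendation for databases, a StatefulSet can give each database pod its own block-backed volume through volumeClaimTemplates. All names, the image, and the sizes here are placeholders.

```yaml
# Sketch: one dedicated block volume per database pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db                 # hypothetical name
spec:
  serviceName: example-db
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: registry.example.com/example-db:latest   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce            # block storage is non-shareable
      storageClassName: gp2        # assumed block storage class
      resources:
        requests:
          storage: 100Gi
```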