Chapter 8. Postinstallation storage configuration
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration.
By default, containers operate using ephemeral storage, that is, transient local storage. Ephemeral storage has a limited lifetime: its contents do not outlive the pod that uses them. To store data for the long term, you must configure persistent storage. You can configure storage by using one of the following methods:
- Dynamic provisioning: You can dynamically provision storage on demand by defining and creating storage classes that control different levels of storage, including storage access. See the storage class sketch in Section 8.1.
- Static provisioning: You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. A persistent volume sketch follows this list.
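As an illustration of static provisioning, the following is a minimal sketch of a pre-created persistent volume and a claim that binds to it, assuming an existing NFS export. The server address, export path, and resource names are hypothetical placeholders; substitute values for your environment.

```yaml
# Hypothetical pre-provisioned NFS volume; the server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/data
---
# Claim that binds to the volume above by matching capacity and access mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-static-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""  # an empty class disables dynamic provisioning for this claim
```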
8.1. Dynamic provisioning
Dynamic provisioning allows you to create storage volumes on demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning.
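For example, a storage class for dynamically provisioned block storage might look like the following. This is a minimal sketch that assumes a cluster running on AWS with the EBS CSI driver; the class name and parameters are illustrative, not prescriptive.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-example            # hypothetical class name
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver
parameters:
  type: gp3                    # EBS volume type
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod is scheduled
```

A persistent volume claim that names this class, or that omits the class to use the cluster default, triggers on-demand volume creation.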
8.2. Recommended configurable storage technology
The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.
| Storage type | Block | File | Object |
|---|---|---|---|
| ROX [1] | Yes [4] | Yes [4] | Yes |
| RWX [2] | No | Yes | Yes |
| Registry | Configurable | Configurable | Recommended |
| Scaled registry | Not configurable | Configurable | Recommended |
| Metrics [3] | Recommended | Configurable [5] | Not configurable |
| Elasticsearch Logging | Recommended | Configurable [6] | Not supported [6] |
| Loki Logging | Not configurable | Not configurable | Recommended |
| Apps | Recommended | Recommended | Not configurable [7] |

1. ReadOnlyMany (ROX) access mode.
2. ReadWriteMany (RWX) access mode.
3. Prometheus is the underlying technology used for metrics.
4. This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.
5. For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims that are configured for use with metrics.
6. For logging, review the recommended storage solution in the Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store.
7. Object storage is not consumed through OpenShift Container Platform’s PVs or PVCs. Apps must integrate with the object storage REST API.
A scaled registry is an OpenShift image registry where two or more pod replicas are running.
8.2.1. Specific application storage recommendations
Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.
Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
8.2.1.1. Registry
In a non-scaled, non-high-availability (HA) OpenShift image registry cluster deployment:
- The storage technology does not have to support RWX access mode.
- The storage technology must ensure read-after-write consistency.
- The preferred storage technology is object storage followed by block storage. A configuration sketch follows this list.
- File storage is not recommended for OpenShift image registry cluster deployment with production workloads.
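For example, pointing the image registry at object storage is done through the image registry Operator configuration. The following is a minimal sketch assuming AWS S3; the bucket name and region are placeholders, and the cluster must hold credentials that can access the bucket.

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  storage:
    s3:
      bucket: example-registry-bucket  # hypothetical bucket name
      region: us-east-1                # hypothetical region
```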
8.2.1.2. Scaled registry
In a scaled/HA OpenShift image registry cluster deployment:
- The storage technology must support RWX access mode.
- The storage technology must ensure read-after-write consistency.
- The preferred storage technology is object storage; a configuration sketch follows this list.
- Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.
- Object storage should be S3 or Swift compliant.
- For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.
- Block storage is not configurable.
- The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production?.
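As a sketch, a scaled registry combines two or more replicas with a shared backend such as object storage. The bucket and region below are again placeholders; on non-cloud platforms, the storage section would instead reference an RWX file-storage persistent volume claim.

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  replicas: 2          # two or more pod replicas make this a scaled registry
  storage:
    s3:
      bucket: example-registry-bucket  # hypothetical bucket name
      region: us-east-1                # hypothetical region
```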
8.2.1.3. Metrics
In an OpenShift Container Platform hosted metrics cluster deployment:
- The preferred storage technology is block storage.
- Object storage is not configurable.
It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.
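Persistent block storage for Prometheus is typically configured through the cluster monitoring ConfigMap. The following is a minimal sketch; the storage class name and size are placeholders to adapt to your environment.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: gp3-example  # hypothetical block storage class
          resources:
            requests:
              storage: 40Gi              # placeholder size
```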
8.2.1.4. Logging
In an OpenShift Container Platform hosted logging cluster deployment:
Loki Operator:
- The preferred storage technology is S3-compatible object storage; see the LokiStack sketch at the end of this section.
- Block storage is not configurable.
OpenShift Elasticsearch Operator:
- The preferred storage technology is block storage.
- Object storage is not supported.
As of logging version 5.4.3, the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
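As an illustration of the Loki path, a LokiStack resource references a secret that holds the S3-compatible bucket endpoint and credentials. This is a minimal sketch; the size tier, secret name, and storage class are placeholders.

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                 # sizing tier; choose one appropriate to your scale
  storage:
    secret:
      name: logging-loki-s3      # hypothetical secret holding bucket credentials
      type: s3
  storageClassName: gp3-example  # block storage class for Loki's local volumes
  tenants:
    mode: openshift-logging
```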
8.2.1.5. Applications
Application use cases vary from application to application, as described in the following examples:
- Storage technologies that support dynamic PV provisioning have low mount-time latencies and are not tied to specific nodes, which supports a healthy cluster.
- Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. A claim-and-mount sketch follows this list.
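For example, an application typically requests dynamically provisioned storage through a claim and mounts it into a pod, as in the following minimal sketch. All names, the image, and the storage class are hypothetical.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp3-example  # hypothetical dynamic storage class
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```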
8.2.2. Other specific application storage recommendations
It is not recommended to use RAID configurations on write-intensive workloads, such as etcd. If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads.
- Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases.
- Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.
- The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices.
8.3. Deploy Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation is a platform-agnostic provider of persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.
OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.
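As a rough sketch of what a deployment produces, installing the ODF operator leads to a StorageCluster resource similar to the following. The device set size, count, and backing storage class are placeholders; consult the ODF documentation in the table below for supported values for your platform.

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset          # hypothetical device set name
    count: 1                     # number of device sets
    replica: 3                   # three-way replication across nodes
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 512Gi             # placeholder size per device
        storageClassName: gp3-example  # placeholder backing storage class
        volumeMode: Block
```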
| If you are looking for Red Hat OpenShift Data Foundation information about… | See the following Red Hat OpenShift Data Foundation documentation: |
|---|---|
| What’s new, known issues, notable bug fixes, and Technology Previews | |
| Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations | |
| Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster | |
| Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure | Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure |
| Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters | |
| Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage | Deploying OpenShift Data Foundation 4.12 using Amazon Web Services |
| Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters | Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud |
| Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters | Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure |
| Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power® infrastructure | |
| Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z® infrastructure | Deploying OpenShift Data Foundation on IBM Z® infrastructure |
| Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone | |
| Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) | |
| Safely replacing storage devices for Red Hat OpenShift Data Foundation | |
| Safely replacing a node in a Red Hat OpenShift Data Foundation cluster | |
| Scaling operations in Red Hat OpenShift Data Foundation | |
| Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster | |
| Resolving issues encountered during operations | |
| Migrating your OpenShift Container Platform cluster from version 3 to version 4 | |