OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 4. Deploying Containerized Storage in Converged Mode
Before following the deployment workflow for your preferred solution, make sure to review Section 4.1, “Specify Advanced Installer Variables” to understand the Ansible variable and playbook recommendations and requirements.
To set up storage for containers on top of an OpenShift cluster, select the workflow that meets your objectives.
Deployment workflow | Registry | Metrics | Logging | Applications |
---|---|---|---|---|
Section 4.2, “Deploying Red Hat Openshift Container Storage in Converged Mode” | | | | ✔ |
Section 4.3, “Deploying Red Hat Openshift Container Storage in Converged Mode with Registry” | ✔ | | | |
Section 4.4, “Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics” | | ✔ | ✔ | |
Section 4.5, “Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics” | ✔ | ✔ | ✔ | ✔ |
- Red Hat Openshift Container Storage does not support a simultaneous deployment of converged and independent mode with the Ansible workflow. Therefore, you must deploy either converged mode or independent mode: you cannot mix both modes during deployment.
- s3 is deployed manually and not through Ansible installer. For more information on manual deployment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#S3_Object_Store
The new registry name registry.redhat.io is used throughout this guide. However, if you have not yet migrated to the new registry, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable.
4.1. Specify Advanced Installer Variables
The cluster installation process, as documented in https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/installing_clusters/#install-planning, can be used to install one or both of the GlusterFS node groups:
- glusterfs: A general storage cluster for use by user applications.
- glusterfs-registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.
It is recommended to deploy both groups to avoid potential impacts on performance in I/O and volume creation. Both of these are defined in the inventory hosts file.
The definition of the clusters is done by including the relevant names in the [OSEv3:children] group, creating similarly named groups, and then populating the groups with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_ and glusterfs-registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
It is recommended to specify image names and version tags for all containerized components. This is to prevent components such as the Red Hat Gluster Storage pods from upgrading after an outage, which might lead to a cluster of widely disparate software versions. The relevant variables are as follows:
- openshift_storage_glusterfs_image
- openshift_storage_glusterfs_block_image
- openshift_storage_glusterfs_heketi_image
The following are the recommended values for this release of Red Hat Openshift Container Storage:
- openshift_storage_glusterfs_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
- openshift_storage_glusterfs_block_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- openshift_storage_glusterfs_heketi_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- openshift_storage_glusterfs_s3_server_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.8
For a complete list of variables, see https://github.com/openshift/openshift-ansible/tree/release-3.11/roles/openshift_storage_glusterfs on GitHub.
Once the variables are configured, there are several playbooks available depending on the circumstances of the installation:
- The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OpenShift Container Platform. This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/registry.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation. In addition, this will deploy an integrated OpenShift Container Registry, which uses GlusterFS storage.
  Important: There must not be a pre-existing registry in the OpenShift Container Platform cluster.
- playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the Red Hat Openshift Container Storage environment in the case of a failed deployment due to configuration errors.
  Note: The GlusterFS playbooks are not guaranteed to be idempotent. Running the playbooks more than once for a given installation is currently not supported without deleting the entire GlusterFS installation (including disk data) and starting over.
4.2. Deploying Red Hat Openshift Container Storage in Converged Mode
In your inventory file, include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration.
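The rendered variable block did not survive the page extraction. The following is a representative sketch only, built from the openshift_storage_glusterfs_ variables of the openshift-ansible role; the namespace and size values (app-storage, 100) are assumptions to adjust for your environment:

[OSEv3:vars]
...
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false

Also include the image variables listed in Section 4.1, “Specify Advanced Installer Variables”.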
Note: openshift_storage_glusterfs_block_host_vol_size takes an integer, which is the size of the volume in Gi.

In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

[OSEv3:children]
masters
etcd
nodes
glusterfs
Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:

[glusterfs]
node103.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node104.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node105.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
Add the hosts listed under [glusterfs] to the [nodes] group:

[nodes]
...
node103.example.com openshift_node_group_name="node-config-infra"
node104.example.com openshift_node_group_name="node-config-infra"
node105.example.com openshift_node_group_name="node-config-infra"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.3. Deploying Red Hat Openshift Container Storage in Converged Mode with Registry
In your inventory file, include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:
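The rendered variable block is missing from this page. The following sketch uses the openshift_storage_glusterfs_registry_ variants of the role variables; the namespace and size values (infra-storage, 100) are assumptions to adjust for your environment:

[OSEv3:vars]
...
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false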
In your inventory file, set the following variables under [OSEv3:vars]:

[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:

[glusterfs_registry]
node106.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node107.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node108.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
Add the hosts listed under [glusterfs_registry] to the [nodes] group:

[nodes]
...
node106.example.com openshift_node_group_name="node-config-compute"
node107.example.com openshift_node_group_name="node-config-compute"
node108.example.com openshift_node_group_name="node-config-compute"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.4. Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics
In your inventory file, set the following variables under [OSEv3:vars]:

Note: For more details about all the variables, see https://github.com/openshift/openshift-ansible/tree/release-3.11/roles/openshift_storage_glusterfs.
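The variable block itself is missing from this rendering. The following sketch combines standard openshift_metrics_, openshift_logging_, and openshift_storage_glusterfs_registry_ variables; the storage class name, namespace, and sizes are assumptions to adjust for your environment:

[OSEv3:vars]
...
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false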
Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:

[glusterfs_registry]
node106.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node107.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node108.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Add the hosts listed under
[glusterfs_registry]
to the[nodes]
group:
[nodes] ... node106.example.com openshift_node_group_name="node-config-compute" node107.example.com openshift_node_group_name="node-config-compute" node108.example.com openshift_node_group_name="node-config-compute"
[nodes]
...
node106.example.com openshift_node_group_name="node-config-compute"
node107.example.com openshift_node_group_name="node-config-compute"
node108.example.com openshift_node_group_name="node-config-compute"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.5. Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics
In your inventory file, set the following variables under [OSEv3:vars]:
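The variable block is missing from this rendering. The following sketch combines the registry, logging, metrics, and both GlusterFS cluster variable groups; apart from openshift_storage_glusterfs_block_deploy=false (required by the note that follows), all names and sizes are assumptions to adjust for your environment:

[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=false
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false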
Note: Ensure that openshift_storage_glusterfs_block_deploy=false is set in this deployment scenario.

Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:

[OSEv3:children]
...
glusterfs
glusterfs_registry
Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
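The example block itself is missing from this page; the following sketch reuses the placeholder hostnames from Sections 4.2 and 4.3 and is illustrative only:

[glusterfs]
node103.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node104.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node105.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'

[glusterfs_registry]
node106.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node107.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node108.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'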
Add the hosts listed under [glusterfs] and [glusterfs_registry] to the [nodes] group, following the same pattern shown in Section 4.2 and Section 4.3.

The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.6. Single OCS cluster installation
It is possible to support both general-application storage and infrastructure storage in a single OCS cluster. To do this, the inventory file options will change slightly for logging and metrics. This is because when there is only one cluster, the gluster-block StorageClass would be glusterfs-storage-block.
The registry PV will be created on this single cluster if the second cluster, [glusterfs_registry], does not exist. For high availability, it is very important to have four nodes for this cluster. Special attention should be given to choosing the size for openshift_storage_glusterfs_block_host_vol_size.
This is the hosting volume for gluster-block devices that will be created for logging and metrics. Make sure that the size can accommodate all these block volumes and have sufficient storage if another hosting volume must be created.
Note: openshift_storage_glusterfs_block_host_vol_size takes an integer, which is the size of the volume in Gi.
4.7. Configure Heketi to Place Bricks Across Zones
Heketi uses node zones as a hint for brick placement. To force Heketi to strictly place replica bricks in different zones, the "strict zone checking" feature of Heketi must be enabled. When this feature is enabled, a volume is created successfully only if each brick set is spread across sufficiently many zones.
Ensure that the OCS nodes are labeled with the correct zones before configuring StorageClass to use heketi’s strict zoning.
You can configure this feature by adding the "volumeoptions" field with the desired setting in the parameters section of the StorageClass. For example:
volumeoptions: "user.heketi.zone-checking strict"
OR
volumeoptions: "user.heketi.zone-checking none"
The settings are as follows:
- strict: Requires at least 3 nodes to be present in different zones (assuming replica 3).
- none: Previous (and current default) behavior.
A sample StorageClass file with "strict zone checking" feature configured is shown below:
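The sample file itself did not survive the page rendering. The sketch below assumes a class similar to the installer-created glusterfs-storage class; the resturl, secret name, and namespace values are placeholders for your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
parameters:
  resturl: "http://heketi-storage.app-storage.svc:8080"
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: app-storage
  volumeoptions: "user.heketi.zone-checking strict"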
Existing storage class specifications are not editable. You can create a new storage class with the required volume options for all future applications. However, if you need to change the settings of an existing storage class then the existing storage class must first be deleted and then a new storage class with the same name as the previous class has to be re-created.
Execute the following commands to delete and re-create the glusterfs-storage storage class with the new settings:
Export the storage class object to a yaml file:
# oc get sc glusterfs-storage --export=true -o yaml > glusterfs-storage.yaml
Use your preferred editor to add the new parameters.
Delete and re-create the storage class object:
# oc delete sc glusterfs-storage
# oc create -f glusterfs-storage.yaml
4.8. Verify your Deployment
Execute the following steps to verify the deployment.
Installation Verification for converged mode
Examine the installation for the app-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the oc CLI installed.
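The command block is missing from this rendering; a typical check (pod names and counts vary by environment) is:

oc get pods -n app-storage -o wide

You should see the glusterfs and heketi pods for the app-storage cluster in the Running state.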
Examine the installation for the infra-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the oc CLI installed.
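Again, the command block is missing; a typical check is:

oc get pods -n infra-storage -o wide

You should see the glusterfs and heketi pods for the infra-storage cluster (and, if gluster-block was deployed, a glusterblock provisioner pod) in the Running state.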
Check the existence of the registry PVC backed by OCP infrastructure Red Hat Openshift Container Storage. This volume was statically provisioned by the openshift-ansible deployment.
oc get pvc -n default
NAME             STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
registry-claim   Bound    pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284   25Gi       RWX                          1h
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the registry DeploymentConfig to verify it’s using this glusterfs volume.
oc describe dc/docker-registry -n default | grep -A3 Volumes
Volumes:
  registry-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  registry-claim
Storage Provisioning Verification for Converged Mode
The Storage Class resources can be used to create new PV claims for verification of the RHOCS deployment. Validate PV provisioning using the following OCP Storage Class created during the RHOCS deployment:
- Use the glusterfs-storage-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using Section 4.2, “Deploying Red Hat Openshift Container Storage in Converged Mode”.
Use the glusterfs-registry-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using one of the following workflows:
- Section 4.3, “Deploying Red Hat Openshift Container Storage in Converged Mode with Registry”
- Section 4.4, “Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics”
- Section 4.5, “Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics”
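The sample claim files referenced below are missing from this rendering. The following sketches use assumed claim names and sizes, and the installer-created storage class names discussed above (swap in glusterfs-registry-block where that is the class your workflow created). Save them as pvc-file.yaml and pvc-block.yaml:

pvc-file.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhocs-file-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: glusterfs-storage

pvc-block.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhocs-block-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: glusterfs-storage-block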
Create the claims:

# oc create -f pvc-file.yaml
# oc create -f pvc-block.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Validate that the two PVCs and respective PVs are created correctly:
# oc get pvc
Using the heketi-client for Verification
The heketi-client package needs to be installed on the ansible deploy host or on an OCP master. Once it is installed, two new files should be created to easily export the required environment variables to run the heketi-client commands (or heketi-cli). The content of each file as well as useful heketi-cli commands are detailed here.
Create a new file (e.g. "heketi-exports-app") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
export HEKETI_CLI_USER=admin
Source the file to create the HEKETI app-storage environment variables:
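For example (the file name matches the one created above; the heketi-cli subcommands shown are standard ones you can use to inspect the cluster):

source heketi-exports-app
heketi-cli cluster list
heketi-cli volume list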
Create a new file (e.g. "heketi-exports-infra") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-registry-pod -n infra-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-registry -n infra-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n infra-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
Source the file to create the HEKETI infra-storage environment variables:
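For example (same standard subcommands as above):

source heketi-exports-infra
heketi-cli cluster list
heketi-cli volume list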
4.9. Creating an Arbiter Volume (optional)
Arbiter volumes support all persistent volume types with similar consistency and less disk space requirements. An arbitrated replicated volume, or arbiter volume, acts like a three-way replicated volume where every third brick is a special type of brick called an arbiter. Arbiter bricks do not store file data; they only store file names, structure, and metadata. The arbiter uses client quorum to compare this metadata with the metadata of the other nodes to ensure consistency in the volume and prevent split-brain conditions.
Advantages of arbitrated replicated volumes:
- Similar consistency: When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions.
- Less disk space required: Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume.
For more information about Arbitrated Replicated Volumes, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Creating_Arbitrated_Replicated_Volumes
Before creating the arbiter volume, make sure heketi-client packages are installed.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install heketi-client
If you want to upgrade your existing Heketi server, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/index#upgrade_heketi_rhgs
Arbiter volumes may not be appropriate for small-file or unpredictable file-size workloads, because they could fill up the arbiter bricks faster than the data bricks. If you want to use an arbiter volume, we recommend choosing a conservative average file size based on the size of the data brick and the number of files, so that the arbiter brick can accommodate your workload.
4.9.1. Creating an Arbiter Volume
An arbiter volume can be created using the Heketi CLI or by updating the storageclass file.
4.9.1.1. Creating an Arbiter Volume using Heketi CLI
To create an Arbiter volume using the Heketi CLI one must request a replica 3 volume as well as provide the Heketi-specific volume option “user.heketi.arbiter true” that will instruct the system to create the Arbiter variant of replica 3.
For example:
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true'
4.9.1.2. Creating an Arbiter Volume using the Storageclass file
To create an arbiter volume using the storageclass file, ensure that the following two parameters are included in the storageclass file:
- user.heketi.arbiter true
- (Optional) user.heketi.average-file-size 1024
Following is a sample storageclass file:
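The sample file is missing from this rendering. The sketch below assumes a storage class similar to the installer-created one; the resturl, secret, and namespace values are placeholders, and the optional average-file-size setting is shown as a comment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-arbiter
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
parameters:
  resturl: "http://heketi-storage.app-storage.svc:8080"
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: app-storage
  volumeoptions: "user.heketi.arbiter true"
  # Optionally also set user.heketi.average-file-size 1024 in volumeoptions.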
4.9.2. Creating Block Hosting Volume as an Arbiter Volume
There are no changes to the storageclass file.
To create a block hosting volume as an arbiter volume, execute the following:
Edit the configuration file under the Glusterfs section in the Heketi deployment configuration by adding the following environment variable and value:
HEKETI_BLOCK_HOSTING_VOLUME_OPTIONS: group gluster-block,user.heketi.arbiter true
Create a block volume using the Heketi CLI:
# heketi-cli blockvolume create --size=100
Ensure that the block hosting volume is an arbiter volume:
# gluster v info
Note: For information about managing arbiter volumes, see Chapter 10, Managing Arbitrated Replicated Volumes.