Chapter 4. Deploying Containerized Storage in Converged Mode
Before following the deployment workflow for your preferred solution, make sure to review Section 4.1, “Specify Advanced Installer Variables” to understand the Ansible variable and playbook recommendations and requirements.
To set up storage for containers on top of an OpenShift cluster, select the workflow that meets your objectives.
Deployment workflow | Registry | Metrics | Logging | Applications
---|---|---|---|---
Section 4.2, “Deploying Red Hat Openshift Container Storage in Converged Mode” | | | | ✔
Section 4.3, “Deploying Red Hat Openshift Container Storage in Converged Mode with Registry” | ✔ | | |
Section 4.4, “Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics” | | ✔ | ✔ |
Section 4.5, “Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics” | ✔ | ✔ | ✔ | ✔
- Red Hat Openshift Container Storage does not support simultaneous deployment of converged and independent mode with the Ansible workflow. Therefore, you must deploy either converged mode or independent mode; you cannot mix both modes during deployment.
- S3 is deployed manually and not through the Ansible installer. For more information on manual deployment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#S3_Object_Store
The new registry name registry.redhat.io is used throughout this guide. However, if you have not yet migrated to the new registry, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable.
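If your image names are maintained in the inventory file, one way to switch back to the old registry is a simple in-place substitution (a minimal sketch; adjust the file path to your environment and review the result before running the playbooks):

# sed -i 's/registry\.redhat\.io/registry.access.redhat.com/g' <path_to_inventory_file>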
4.1. Specify Advanced Installer Variables
The cluster installation process, as documented in https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/installing_clusters/#install-planning, can be used to install one or both of the GlusterFS node groups:
- glusterfs: A general storage cluster for use by user applications.
- glusterfs-registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.
It is recommended to deploy both groups to avoid potential impacts on performance in I/O and volume creation. Both of these are defined in the inventory hosts file.
The definition of the clusters is done by including the relevant names in the `[OSEv3:children]` group, creating similarly named groups, and then populating the groups with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_ and glusterfs-registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
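For example, the same option can be set independently for each cluster by using the two prefixes; the namespace values below match the ones used later in this chapter:

openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_registry_namespace=infra-storage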
It is recommended to specify image names and version tags for all containerized components. This is to prevent components such as the Red Hat Gluster Storage pods from upgrading after an outage, which might lead to a cluster of widely disparate software versions. The relevant variables are as follows:
- openshift_storage_glusterfs_image
- openshift_storage_glusterfs_block_image
- openshift_storage_glusterfs_heketi_image
The following are the recommended values for this release of Red Hat Openshift Container Storage:
- openshift_storage_glusterfs_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
- openshift_storage_glusterfs_block_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- openshift_storage_glusterfs_heketi_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- openshift_storage_glusterfs_s3_server_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.8
For a complete list of variables, see https://github.com/openshift/openshift-ansible/tree/release-3.11/roles/openshift_storage_glusterfs on GitHub.
Once the variables are configured, there are several playbooks available depending on the circumstances of the installation:
- The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OpenShift Container Platform. This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/registry.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation. In addition, this will deploy an integrated OpenShift Container Registry, which uses GlusterFS storage.

  Important: There must not be a pre-existing registry in the OpenShift Container Platform cluster.
- playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the Red Hat Openshift Container Storage environment in the case of a failed deployment due to configuration errors, as shown in the sketch after this list.

  Note: The GlusterFS playbooks are not guaranteed to be idempotent. Running the playbooks more than once for a given installation is currently not supported without deleting the entire GlusterFS installation (including disk data) and starting over.
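For example, a cleanup run against the same inventory file follows the same invocation pattern as the other playbooks. The openshift_storage_glusterfs_wipe variable in the second command is described in the upstream openshift-ansible role documentation as also wiping data from the storage devices; treat it as an assumption and verify it against your installed version before relying on it:

ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml

ansible-playbook -i <path_to_inventory_file> -e "openshift_storage_glusterfs_wipe=true" /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml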
4.2. Deploying Red Hat Openshift Container Storage in Converged Mode
In your inventory file, include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:

[OSEv3:vars]
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
Note: openshift_storage_glusterfs_block_host_vol_size takes an integer, which is the size of the volume in Gi.

In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

[OSEv3:children]
masters
etcd
nodes
glusterfs
Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs (a quick check is shown after the example below). Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node103.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node104.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node105.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
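Before listing a device, you can confirm it is bare (a quick sanity check, not part of the documented procedure): lsblk should show no partitions under the device, and wipefs run without options should report no existing signatures:

# lsblk /dev/sdd
# wipefs /dev/sdd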
Add the hosts listed under [glusterfs] to the [nodes] group:

[nodes]
...
node103.example.com openshift_node_group_name="node-config-infra"
node104.example.com openshift_node_group_name="node-config-infra"
node105.example.com openshift_node_group_name="node-config-infra"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.3. Deploying Red Hat Openshift Container Storage in Converged Mode with Registry
In your inventory file, include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:
[OSEv3:vars]
openshift_storage_glusterfs_registry_namespace=app-storage
openshift_storage_glusterfs_registry_storageclass=true
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
In your inventory file, set the following variables under [OSEv3:vars]:

[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node106.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node107.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node108.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
Add the hosts listed under [glusterfs_registry] to the [nodes] group:

[nodes]
...
node106.example.com openshift_node_group_name="node-config-compute"
node107.example.com openshift_node_group_name="node-config-compute"
node108.example.com openshift_node_group_name="node-config-compute"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.4. Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics
In your inventory file, set the following variables under [OSEv3:vars]:

[OSEv3:vars]
...
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=pv
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_storage_volume_size=20Gi
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_storage_kind=dynamic
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
Note: For more details about all the variables, see https://github.com/openshift/openshift-ansible/tree/release-3.11/roles/openshift_storage_glusterfs.
Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node106.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node107.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node108.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
Add the hosts listed under [glusterfs_registry] to the [nodes] group:

[nodes]
...
node106.example.com openshift_node_group_name="node-config-compute"
node107.example.com openshift_node_group_name="node-config-compute"
node108.example.com openshift_node_group_name="node-config-compute"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.5. Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics
In your inventory file, set the following variables under [OSEv3:vars]:

[OSEv3:vars]
...
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_storage_kind=glusterfs

[OSEv3:vars]
...
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=pv
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_storage_volume_size=20Gi
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_storage_kind=dynamic
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=false
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
Note: Ensure that openshift_storage_glusterfs_block_deploy=false is set in this deployment scenario.

Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:

[OSEv3:children]
...
glusterfs
glusterfs_registry
Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node103.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node104.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node105.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'

[glusterfs_registry]
node106.example.com glusterfs_zone=1 glusterfs_devices='["/dev/sdd"]'
node107.example.com glusterfs_zone=2 glusterfs_devices='["/dev/sdd"]'
node108.example.com glusterfs_zone=3 glusterfs_devices='["/dev/sdd"]'
Add the hosts listed under [glusterfs] and [glusterfs_registry] to the [nodes] group:

[nodes]
...
node103.example.com openshift_node_group_name="node-config-compute"
node104.example.com openshift_node_group_name="node-config-compute"
node105.example.com openshift_node_group_name="node-config-compute"
node106.example.com openshift_node_group_name="node-config-infra"
node107.example.com openshift_node_group_name="node-config-infra"
node108.example.com openshift_node_group_name="node-config-infra"
The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy Red Hat Openshift Container Storage, provide the file path as an option to the following playbooks:
For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.8, “Verify your Deployment”.
4.6. Single OCS cluster installation
It is possible to support both general-application storage and infrastructure storage in a single OCS cluster. To do this, the inventory file options change slightly for logging and metrics because, when there is only one cluster, the gluster-block StorageClass is glusterfs-storage-block.

The registry PV is created on this single cluster if the second cluster, [glusterfs_registry], does not exist. For high availability, it is very important to have four nodes for this cluster. Special attention should be given to choosing the size for openshift_storage_glusterfs_block_host_vol_size. This is the hosting volume for the gluster-block devices that will be created for logging and metrics. Make sure that the size can accommodate all these block volumes and that there is sufficient storage if another hosting volume must be created.
[OSEv3:children]
...
nodes
glusterfs

[OSEv3:vars]
...
# registry
...
# logging
openshift_logging_install_logging=true
...
openshift_logging_es_pvc_storage_class_name='glusterfs-storage-block'
...
# metrics
openshift_metrics_install_metrics=true
...
openshift_metrics_cassandra_pvc_storage_class_name='glusterfs-storage-block'
...
# glusterfs_registry_storage
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=20Gi
openshift_hosted_registry_selector="node-role.kubernetes.io/infra=true"

# OCS storage cluster for applications
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
...

[nodes]
...
ose-app-node01.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node02.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node03.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node04.ocpgluster.com openshift_node_group_name="node-config-compute"

[glusterfs]
ose-app-node01.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node02.ocpgluster.com glusterfs_zone=2 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node03.ocpgluster.com glusterfs_zone=3 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node04.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'
Note: openshift_storage_glusterfs_block_host_vol_size takes an integer, which is the size of the volume in Gi.
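As a rough sizing sketch (the replica counts are assumptions; adjust to your own logging and metrics settings): with openshift_metrics_storage_volume_size=20Gi, openshift_logging_es_pvc_size=20Gi, and a single Elasticsearch instance, the logging and metrics block volumes alone consume about 40 Gi of the 100 Gi hosting volume, leaving roughly 60 Gi for additional block PVCs before another hosting volume has to be created.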
4.7. Configure Heketi to Place Bricks Across Zones
Heketi uses node zones as a hint for brick placement. To force Heketi to strictly place replica bricks in different zones, the "strict zone checking" feature of Heketi must be enabled. When this feature is enabled, a volume is created successfully only if each brick set is spread across sufficiently many zones.
Ensure that the OCS nodes are labeled with the correct zones before configuring a StorageClass to use Heketi's strict zoning.
You can configure this feature by adding the "volumeoptions" field with the desired setting in the parameters section of the StorageClass. For example:
volumeoptions: "user.heketi.zone-checking strict"
OR
volumeoptions: "user.heketi.zone-checking none"
The settings are as follows:
- strict: Requires at least 3 nodes to be present in different zones (assuming replica 3).
- none: Previous (and current default) behavior.
A sample StorageClass file with "strict zone checking" feature configured is shown below:
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.zone-checking strict"
  volumenameprefix: "test-vol"
allowVolumeExpansion: true
Existing storage class specifications are not editable. You can create a new storage class with the required volume options for all future applications. However, if you need to change the settings of an existing storage class then the existing storage class must first be deleted and then a new storage class with the same name as the previous class has to be re-created.
Execute the following commands to delete and re-create the glusterfs-storage storage class with the new settings:
Export the storage class object to a yaml file:
# oc get sc glusterfs-storage --export=true -o yaml > glusterfs-storage.yaml
- Use your preferred editor to add the new parameters.
Delete and re-create the storage class object:
# oc delete sc glusterfs-storage
# oc create -f glusterfs-storage.yaml
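As a quick check (a sketch, not part of the documented procedure), you can confirm that the re-created storage class carries the new parameter, for example the volumeoptions value you added:

# oc get sc glusterfs-storage -o yaml | grep volumeoptions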
4.8. Verify your Deployment
Execute the following steps to verify the deployment.
Installation Verification for converged mode
Examine the installation for the app-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the OC CLI installed.
# switch to the app-storage namespace
oc project app-storage
# get the list of pods here (3 gluster pods + 1 heketi pod + 1 gluster block provisioner pod)
oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-mphfp   1/1       Running   0          1h
glusterfs-storage-6tlzx                       1/1       Running   0          1h
glusterfs-storage-lksps                       1/1       Running   0          1h
glusterfs-storage-nf7qk                       1/1       Running   0          1h
glusterfs-storage-tcnd8                       1/1       Running   0          1h
heketi-storage-1-5m6cl                        1/1       Running   0          1h
Examine the installation for the infra-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the OC CLI installed.
# switch to the infra-storage namespace
oc project infra-storage
# list the pods here (3 gluster pods, 1 heketi pod and 1 glusterblock-provisioner pod)
oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-28sfc   1/1       Running   0          1h
glusterfs-registry-cjp49                       1/1       Running   0          1h
glusterfs-registry-lhgjj                       1/1       Running   0          1h
glusterfs-registry-v4vqx                       1/1       Running   0          1h
heketi-registry-5-lht6s                        1/1       Running   0          1h
Check the existence of the registry PVC backed by the OCP infrastructure Red Hat Openshift Container Storage cluster. This volume was statically provisioned by the openshift-ansible deployment.
oc get pvc -n default
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
registry-claim   Bound     pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284   25Gi       RWX                          1h
Check the registry DeploymentConfig to verify it’s using this glusterfs volume.
oc describe dc/docker-registry -n default | grep -A3 Volumes
  Volumes:
   registry-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  registry-claim
Storage Provisioning Verification for Converged Mode
The Storage Class resources can be used to create new PV claims for verification of the RHOCS deployment. Validate PV provisioning using the following OCP Storage Class created during the RHOCS deployment:
- Use the glusterfs-storage-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using Section 4.2, “Deploying Red Hat Openshift Container Storage in Converged Mode”.
Use the glusterfs-registry-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using one of the following workflows:
- Section 4.3, “Deploying Red Hat Openshift Container Storage in Converged Mode with Registry”
- Section 4.4, “Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics”
- Section 4.5, “Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics”
# oc get storageclass
NAME                      TYPE
glusterfs-storage         kubernetes.io/glusterfs
glusterfs-storage-block   gluster.org/glusterblock

$ cat pvc-file.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rhocs-file-claim1
spec:
  storageClassName: glusterfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
# cat pvc-block.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rhocs-block-claim1
spec:
  storageClassName: glusterfs-storage-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# oc create -f pvc-file.yaml
# oc create -f pvc-block.yaml
Validate that the two PVCs and respective PVs are created correctly:
# oc get pvc
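Both claims should reach the Bound status once their PVs are dynamically provisioned. Output roughly like the following is expected (volume names and ages are illustrative and will differ):

NAME                 STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS              AGE
rhocs-block-claim1   Bound     pvc-...   5Gi        RWO           glusterfs-storage-block   1m
rhocs-file-claim1    Bound     pvc-...   5Gi        RWX           glusterfs-storage         1m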
Using the heketi-client for Verification
The heketi-client package needs to be installed on the ansible deploy host or on an OCP master. Once it is installed, create two new files to easily export the required environment variables for running the heketi-client commands (or heketi-cli). The content of each file, as well as useful heketi-cli commands, is detailed here.
Create a new file (e.g. "heketi-exports-app") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
export HEKETI_CLI_USER=admin
Source the file to create the HEKETI app-storage environment variables:
source heketi-exports-app

# see if heketi is alive
curl -w '\n' ${HEKETI_CLI_SERVER}/hello
Hello from Heketi

# ask heketi about the cluster it knows about
heketi-cli cluster list
Clusters:
Id:56ed234a384cef7dbef6c4aa106d4477 [file][block]

# ask heketi about the topology of the RHOCS cluster for apps
heketi-cli topology info

# ask heketi about the volumes already created (one for the heketi db should exist after the OCP initial installation)
heketi-cli volume list
Id:d71a4cbea22af3453615a9020f261b5c Cluster:56ed234a384cef7dbef6c4aa106d4477 Name:heketidbstorage
Create a new file (e.g. "heketi-exports-infra") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-registry-pod -n infra-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-registry -n infra-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n infra-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
Source the file to create the HEKETI infra-storage environment variables:
source heketi-exports-infra

# see if heketi is alive
curl -w '\n' ${HEKETI_CLI_SERVER}/hello
Hello from Heketi

# ask heketi about the cluster it knows about (the RHOCS cluster for infrastructure)
heketi-cli cluster list
Clusters:
Id:baf91b261cbca2bb4b62caece63f60d0 [file][block]

# ask heketi about the volumes already created
heketi-cli volume list
Id:77baed02f79f4518326d8cc1db6c7af8 Cluster:baf91b261cbca2bb4b62caece63f60d0 Name:heketidbstorage
4.9. Creating an Arbiter Volume (optional)
Arbiter volumes support all persistent volume types with similar consistency and less disk space requirements. An arbitrated replicated volume, or arbiter volume, acts like a three-way replicated volume where every third brick is a special type of brick called an arbiter. Arbiter bricks do not store file data; they only store file names, structure, and metadata. The arbiter uses client quorum to compare this metadata with the metadata of the other nodes to ensure consistency in the volume and prevent split-brain conditions.
Advantages of arbitrated replicated volumes:
- Similar consistency: When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions.
- Less disk space required: Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume.
For more information about Arbitrated Replicated Volumes, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Creating_Arbitrated_Replicated_Volumes
Before creating the arbiter volume, make sure heketi-client packages are installed.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install heketi-client
If you want to upgrade your existing Heketi server, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/index#upgrade_heketi_rhgs
Arbiter volumes may not be appropriate for small-file or unpredictable file-size workloads, because such workloads can fill up the arbiter bricks faster than the data bricks. If you want to use an arbiter volume, we recommend that you choose a conservative average file size, based on the size of the data brick and the number of files, so that the arbiter brick can accommodate your workload.
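As a rough illustration (the 4 KB-per-file figure is an assumption based on the sizing guidance in the Red Hat Gluster Storage administration guide; verify it for your release): a 1 TiB data brick holding files that average 1 MiB in size stores about one million files, so the arbiter brick should have on the order of 4 GiB available for metadata.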
4.9.1. Creating an Arbiter Volume
An arbiter volume can be created using the Heketi CLI or by updating the storageclass file.
4.9.1.1. Creating an Arbiter Volume using Heketi CLI
To create an arbiter volume using the Heketi CLI, request a replica 3 volume and provide the Heketi-specific volume option “user.heketi.arbiter true”, which instructs the system to create the arbiter variant of replica 3.
For example:
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true'
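If you also want to set a conservative average file size at creation time (the same option shown in the storageclass example in the next section), both Heketi volume options can be combined in a single --gluster-volume-options string; the value 1024 here mirrors the storageclass example and is illustrative only:

# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true,user.heketi.average-file-size 1024'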
4.9.1.2. Creating an Arbiter Volume using the Storageclass file
To create an arbiter volume using the storageclass file, include the following two parameters in the storageclass file:
- user.heketi.arbiter true
- (Optional) user.heketi.average-file-size 1024
Following is a sample storageclass file:
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.arbiter true,user.heketi.average-file-size 1024"
  volumenameprefix: "test-vol"
4.9.2. Creating Block Hosting Volume as an Arbiter Volume
There are no changes to the storageclass file.
To create a block hosting volume as an arbiter volume, execute the following:
Edit the configuration file under the glusterfs section in the Heketi deployment configuration by adding the following environment variable and value:
HEKETI_BLOCK_HOSTING_VOLUME_OPTIONS: group gluster-block,user.heketi.arbiter true
Create a block volume using the Heketi CLI.
# heketi-cli blockvolume create --size=100
Ensure that the block hosting volume is an arbiter volume.
# gluster v info
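In the output, an arbiter volume is identifiable by the brick count and the arbiter brick marker; lines along these lines are expected (illustrative only, hostnames and brick paths will differ):

Number of Bricks: 1 x (2 + 1) = 3
Brick3: <hostname>:<brick_path> (arbiter)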
Note: For information about managing arbiter volumes, see Chapter 10, Managing Arbitrated Replicated Volumes.