Add-ons
Abstract
Read more to learn how to use add-ons for your cluster.
Chapter 1. Add-ons overview
Red Hat Advanced Cluster Management for Kubernetes add-ons can improve some areas of performance and add functionality to enhance your applications. The following sections provide a summary of the add-ons that are available for Red Hat Advanced Cluster Management:
1.1. Submariner multicluster networking and service discovery
Submariner is an open source tool that can be used with Red Hat Advanced Cluster Management for Kubernetes to provide direct networking and service discovery between two or more managed clusters in your environment, either on-premises or in the cloud. Submariner is compatible with Multi-Cluster Services API (Kubernetes Enhancements Proposal #1645). For more information about Submariner, see the Submariner site.
Make sure to see the Red Hat Advanced Cluster Management support matrix for more details about the support levels of infrastructure providers, including which providers support automated console deployments or require manual deployment.
See the following topics to learn more about how to use Submariner:
1.1.1. Deploying Submariner on disconnected clusters
Deploying Submariner on disconnected clusters can help with security concerns by reducing the risk of external attacks on clusters. To deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes on disconnected clusters, you must first complete the steps outlined in Install in disconnected network environments.
1.1.1.1. Configuring Submariner on disconnected clusters
After following the steps outlined in Install in disconnected network environments, you must configure Submariner during the installation to support deployment on disconnected clusters. See the following topics:
1.1.1.1.1. Mirroring images in the local registry
Make sure to mirror the Submariner Operator bundle image in the local registry before deploying Submariner on disconnected clusters.
Note: If you are using Red Hat Advanced Cluster Management 2.7.2 or older, you must also mirror the nettest-rhel8 image.
1.1.1.1.2. Customizing catalogSource names
By default, submariner-addon searches for a catalogSource with the name redhat-operators. When using a catalogSource with a different name, you must update the value of the SubmarinerConfig.Spec.subscriptionConfig.Source parameter in the SubmarinerConfig associated with your managed cluster with the custom name of the catalogSource.
1.1.1.1.3. Enabling airGappedDeployment in SubmarinerConfig
When installing submariner-addon on a managed cluster from the Red Hat Advanced Cluster Management for Kubernetes console, you can select the Disconnected cluster option so that Submariner does not make API queries to external servers.
If you are installing Submariner by using the APIs, you must set the airGappedDeployment parameter to true in the SubmarinerConfig associated with your managed cluster.
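The following is a minimal sketch of how these two settings might appear together in a SubmarinerConfig. The apiVersion and the exact field casing are assumptions based on the submariner-addon API, so verify them against the SubmarinerConfig custom resource definition for your version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  # Prevents Submariner from querying external servers for public IP resolution
  airGappedDeployment: true
  subscriptionConfig:
    # Custom catalogSource name, if it is not redhat-operators
    source: <custom-catalog-source-name>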
1.1.2. Configuring Submariner
Red Hat Advanced Cluster Management for Kubernetes provides Submariner as an add-on for your hub cluster. You can find more information about Submariner in the Submariner open source project documentation.
1.1.2.1. Prerequisites
Ensure that you have the following prerequisites before using Submariner:
- A credential to access the hub cluster with cluster-admin permissions.
- IP connectivity must be configured between the gateway nodes. When connecting two clusters, at least one of the clusters must be accessible to the gateway node using its public or private IP address designated to the gateway node. See Submariner NAT Traversal for more information.
- If you are using OVN Kubernetes, clusters must be at Red Hat OpenShift Container Platform version 4.11 or later.
- If your Red Hat OpenShift Container Platform clusters use OpenShift SDN CNI, the firewall configuration across all nodes in each of the managed clusters must allow 4800/UDP in both directions.
- The firewall configuration must allow 4500/UDP and 4490/UDP on the gateway nodes for establishing tunnels between the managed clusters.
If the gateway nodes are directly reachable over their private IPs without any NAT in between, make sure that the firewall configuration allows the ESP protocol on the gateway nodes.
Note: This is configured automatically when your clusters are deployed in an Amazon Web Services, Google Cloud Platform, Microsoft Azure, or Red Hat OpenStack environment, but must be configured manually for clusters on other environments and for the firewalls that protect private clouds.
- The managedcluster name must follow the DNS label standard as defined in RFC 1123 and meet the following requirements:
  - Contain 63 characters or fewer
  - Contain only lowercase alphanumeric characters or '-'
  - Start with an alphanumeric character
  - End with an alphanumeric character
1.1.2.2. Submariner ports table
View the following table to see which Submariner ports you need to enable:
| Name | Default value | Customizable | Optional or required |
|---|---|---|---|
| IPsec NATT | 4500/UDP | Yes | Required |
| VXLAN | 4800/UDP | No | Required |
| NAT discovery port | 4490/UDP | No | Required |
See the Submariner upstream prerequisites documentation for more detailed information about the prerequisites.
1.1.2.3. Globalnet
Globalnet is a feature included with the Submariner add-on which supports connectivity between clusters with overlapping CIDRs. Globalnet is a cluster set wide configuration, and can be selected when the first managed cluster is added to the cluster set. When Globalnet is enabled, each managed cluster is allocated a global CIDR from the virtual Global Private Network. The global CIDR is used for supporting inter-cluster communication.
If there is a chance that your clusters running Submariner might have overlapping CIDRs, consider enabling Globalnet. When using the console, the ClusterAdmin can enable Globalnet for a cluster set by selecting the option Enable Globalnet when enabling the Submariner add-on for clusters in the cluster set. After you enable Globalnet, you cannot disable it without removing Submariner.
When using the Red Hat Advanced Cluster Management APIs, the ClusterAdmin can enable Globalnet by creating a submariner-broker object in the <ManagedClusterSet>-broker namespace.
The ClusterAdmin role has the required permissions to create this object in the broker namespace. The ManagedClusterSetAdmin role, which is sometimes created to act as a proxy administrator for the cluster set, does not have the required permissions. To provide the required permissions, the ClusterAdmin must associate the role permissions for the access-to-brokers-submariner-crd to the ManagedClusterSetAdmin user.
Complete the following steps to create the submariner-broker object:
1. Retrieve the <broker-namespace> by running the following command:

   oc get ManagedClusterSet <cluster-set-name> -o jsonpath="{.metadata.annotations['cluster\.open-cluster-management\.io/submariner-broker-ns']}"

2. Create a submariner-broker object that specifies the Globalnet configuration by creating a YAML file named submariner-broker.yaml. Add content that resembles the following lines to the YAML file:
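The following is a minimal sketch of the submariner-broker object. The apiVersion and kind values are assumptions based on the Submariner Broker API, so verify them against your installed version:

apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: <broker-namespace>
spec:
  globalnetEnabled: <true-or-false>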
   Replace broker-namespace with the name of your broker namespace.

   Replace true-or-false with true to enable Globalnet.

   Note: The metadata name parameter must be submariner-broker.

3. Apply the file by entering the following command:
oc apply -f submariner-broker.yaml
For more information about Globalnet, see Globalnet controller in the Submariner documentation.
1.1.3. Installing the subctl command utility
The subctl utility is shipped in a container image. Complete the following steps to install the subctl utility locally:
1. Log in to the registry by running the following command and entering your credentials when prompted:

   oc registry login --registry registry.redhat.io

2. Download the subctl container and extract a compressed version of the subctl binary to /tmp by entering the following command:

   oc image extract registry.redhat.io/rhacm2/subctl-rhel8:v0.14 --path="/dist/subctl-v0.14*-linux-amd64.tar.xz":/tmp/ --confirm

3. Decompress the subctl utility by entering the following command:

   tar -C /tmp/ -xf /tmp/subctl-v0.14*-linux-amd64.tar.xz

4. Install the subctl utility by entering the following command:

   install -m744 /tmp/subctl-v0.14*/subctl-v0.14*-linux-amd64 $HOME/.local/bin/subctl
1.1.3.1. Using the subctl commands
After adding the utility to your path, view the following table for a brief description of the available commands:
| Command | Description |
|---|---|
| export | Creates a ServiceExport resource for the specified service, which enables other clusters in the Submariner deployment to discover it. |
| unexport | Removes the ServiceExport resource for the specified service so that it is no longer discoverable by other clusters. |
| show | Provides information about Submariner resources. |
| verify | Verifies connectivity, service discovery, and other Submariner features when Submariner is configured across a pair of clusters. |
| benchmark | Benchmarks throughput and latency across a pair of clusters that are enabled with Submariner or within a single cluster. |
| diagnose | Runs checks to identify issues that prevent the Submariner deployment from working correctly. |
| gather | Collects information from the clusters to help troubleshoot a Submariner deployment. |
| version | Displays the version details of the subctl binary tool. |
For more information about the subctl utility and its commands, see subctl in the Submariner documentation.
1.1.4. Deploying Submariner by using the console
Before you deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes, you must prepare the clusters on the hosting environment. You can use the SubmarinerConfig API or the Red Hat Advanced Cluster Management for Kubernetes console to automatically prepare Red Hat OpenShift Container Platform clusters on the following providers:
- Amazon Web Services
- Google Cloud Platform
- Red Hat OpenStack Platform
- Microsoft Azure
- VMware vSphere
Note: Only non-NSX deployments are supported on VMware vSphere.
To deploy Submariner on other providers, follow the instructions in Deploying Submariner manually.
Complete the following steps to deploy Submariner with the Red Hat Advanced Cluster Management for Kubernetes console:
Required access: Cluster administrator
- From the console, select Infrastructure > Clusters.
- On the Clusters page, select the Cluster sets tab. The clusters that you want to enable with Submariner must be in the same cluster set.
- If the clusters on which you want to deploy Submariner are already in the same cluster set, skip to step 5.
If the clusters on which you want to deploy Submariner are not in the same cluster set, create a cluster set for them by completing the following steps:
- Select Create cluster set.
- Name the cluster set, and select Create.
- Select Manage resource assignments to assign clusters to the cluster set.
- Select the managed clusters that you want to connect with Submariner to add them to the cluster set.
- Select Review to view and confirm the clusters that you selected.
- Select Save to save the cluster set, and view the resulting cluster set page.
- On the cluster set page, select the Submariner add-ons tab.
- Select Install Submariner add-ons.
- Select the clusters on which you want to deploy Submariner.
See the fields in the following table and enter the required information in the Install Submariner add-ons editor:
| Field | Notes |
|---|---|
| AWS Access Key ID | Only visible when you import an AWS cluster. |
| AWS Secret Access Key | Only visible when you import an AWS cluster. |
| Base domain resource group name | Only visible when you import an Azure cluster. |
| Client ID | Only visible when you import an Azure cluster. |
| Client secret | Only visible when you import an Azure cluster. |
| Subscription ID | Only visible when you import an Azure cluster. |
| Tenant ID | Only visible when you import an Azure cluster. |
| Google Cloud Platform service account JSON key | Only visible when you import a Google Cloud Platform cluster. |
| Instance type | The instance type of the gateway node that is created on the managed cluster. |
| IPsec NAT-T port | The default value for the IPsec NAT traversal port is port 4500. If your managed cluster environment is VMware vSphere, ensure that this port is open on your firewall. |
| Gateway count | The number of gateway nodes to be deployed on the managed cluster. For AWS, GCP, Azure, and OpenStack clusters, dedicated gateway nodes are deployed. For VMware clusters, existing worker nodes are tagged as gateway nodes. The default value is 1. If the value is greater than 1, Submariner gateway high availability (HA) is automatically enabled. |
| Cable driver | The Submariner gateway cable engine component that maintains the cross-cluster tunnels. The default value is Libreswan IPsec. |
| Disconnected cluster | If enabled, tells Submariner to not access any external servers for public IP resolution. |
| Globalnet CIDR | Only visible when the Globalnet configuration is selected on the cluster set. The Globalnet CIDR to be used for the managed cluster. If left blank, a CIDR is allocated from the cluster set pool. |
- Select Next at the end of the editor to move to the editor for the next cluster, and complete the editor for each of the remaining clusters that you selected.
- Verify your configuration for each managed cluster.
- Click Install to deploy Submariner on the selected managed clusters.
It might take several minutes for the installation and configuration to complete. You can check the Submariner status in the list on the Submariner add-ons tab:
- Connection status indicates how many Submariner connections are established on the managed cluster.
- Agent status indicates whether Submariner is successfully deployed on the managed cluster. The console might report a status of Degraded until it is installed and configured.
- Gateway nodes labeled indicates the number of gateway nodes on the managed cluster.

Submariner is now deployed on the selected clusters.
1.1.5. Deploying Submariner manually
Before you deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes, you must prepare the clusters on the hosting environment for the connection. See Deploying Submariner by using the console to learn how to automatically deploy Submariner on supported clusters by using the console.
If your cluster is hosted on a provider that does not support automatic Submariner deployment, see the following sections to prepare the infrastructure manually. Each provider has unique steps for preparation, so make sure to select the correct provider.
1.1.5.1. Preparing bare metal for Submariner
To prepare bare metal clusters for deploying Submariner, complete the following steps:
- Ensure that the firewall allows inbound/outbound traffic for external clients on the 4500/UDP and 4490/UDP ports for the Gateway nodes. Also, if the cluster is deployed with the OpenShift SDN CNI, allow inbound/outbound UDP/4800 traffic within the local cluster nodes.
- Customize and apply YAML content that is similar to the following example:
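The following is a minimal sketch of such a SubmarinerConfig resource. The apiVersion and the gatewayConfig field shown are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  gatewayConfig:
    # Number of worker nodes to label as Submariner gateways
    gateways: 1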
  Replace managed-cluster-namespace with the name of your managed cluster. The name of the SubmarinerConfig must be submariner, as shown in the example.

  This configuration labels one of the worker nodes as the Submariner gateway on your bare metal cluster.
By default, Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port, or you can specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, 4500/UDP is used for the connections.
- Identify the Gateway node configured by Submariner and enable firewall configurations to allow the IPsec NATT (UDP/4500) and NatDiscovery (UDP/4490) ports for external traffic.
See Customizing Submariner deployments for information about the customization options.
1.1.5.2. Preparing Microsoft Azure Red Hat OpenShift for Submariner by using the console (Technology Preview)
The Microsoft Azure Red Hat OpenShift service combines various tools and resources to help simplify the process of building container-based applications. To prepare Azure Red Hat OpenShift clusters for deploying Submariner by using the console, complete the following steps:
- Download the Python wheel and CLI extension.
- From the Azure CLI, run the following command to install the extension:

  az extension add --upgrade -s <path-to-extension>

  Replace path-to-extension with the path to where you downloaded the .whl extension file.

- Run the following command to verify that the CLI extension is being used:

  az extension list

  If the extension is being used, it is listed in the command output.

- From the Azure CLI, register the preview feature by running the following command:

  az feature registration create --namespace Microsoft.RedHatOpenShift --name AdminKubeconfig

- Retrieve the administrator kubeconfig by running the following command:

  az aro get-admin-kubeconfig -g <resource group> -n <cluster resource name>

  Note: The az aro command saves the kubeconfig to the local directory and uses the name kubeconfig. To use it, set the environment variable KUBECONFIG to match the path of the file. See the following example:

  export KUBECONFIG=<path-to-kubeconfig>
  oc get nodes

- Import your Azure Red Hat OpenShift cluster to your cluster list by selecting Infrastructure > Clusters > Import cluster from the Red Hat Advanced Cluster Management console.
- Select the Kubeconfig import mode and enter the content from your kubeconfig file in the Kubeconfig window. Follow the instructions in the console to complete the import.

  You can verify that your Azure Red Hat OpenShift cluster was imported successfully by navigating to Infrastructure > Clusters.
- Navigate to Infrastructure > Clusters > Cluster sets and select the name of the cluster set that you want to add. Then, click the Submariner add-ons tab.
- Click the Install Submariner add-ons button and set your Azure Red Hat OpenShift cluster as your Target clusters. Follow the instructions in the console to complete the install.
- Navigate to Infrastructure > Clusters > Cluster sets > Submariner add-ons to verify that your Azure Red Hat OpenShift cluster Connection status is Healthy.
1.1.5.2.1. Preparing Microsoft Azure Red Hat OpenShift for Submariner by using the API (Technology Preview)
To prepare Azure Red Hat OpenShift clusters for deploying Submariner by using the API, customize and apply YAML content that is similar to the following example:
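The following is a minimal sketch of the SubmarinerConfig resource for this scenario. The apiVersion and the spec fields shown, including loadBalancerEnable, are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  gatewayConfig:
    # Number of worker nodes to label as Submariner gateways
    gateways: 1
  # Assumption: exposes the gateway through a load balancer on the managed OpenShift service
  loadBalancerEnable: true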
Replace managed-cluster-namespace with the name of your managed cluster.
The name of the SubmarinerConfig must be submariner, as shown in the example.
This configuration labels one of the worker nodes as the Submariner gateway on your Azure Red Hat OpenShift cluster.
By default, Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port, or you can specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, port 4500/UDP is used for the connections.
See Customizing Submariner deployments for information about the customization options.
1.1.5.3. Preparing Red Hat OpenShift Service on AWS for Submariner by using the console (Technology Preview)
Red Hat OpenShift Service on AWS provides a stable and flexible platform for application development and modernization. To prepare OpenShift Service on AWS clusters for deploying Submariner, complete the following steps:
- Create a new node to run the Submariner gateway by running the following command:

  rosa create machinepool --cluster=<cluster_name> --name=sm-gw-mp --replicas=<number-of-gateway-nodes> --labels='submariner.io/gateway=true'

- Log in to OpenShift Service on AWS by running the following commands:

  rosa login
  oc login <rosa-cluster-url>:6443 --username cluster-admin --password <password>

- Create a kubeconfig for your OpenShift Service on AWS cluster by running the following command:

  oc config view --flatten=true > rosa_kube/kubeconfig

- Import your OpenShift Service on AWS cluster to your cluster list by selecting Infrastructure > Clusters > Import cluster from the Red Hat Advanced Cluster Management console.
- Select the Kubeconfig import mode and enter the content from your kubeconfig file in the Kubeconfig window. Follow the instructions in the console to complete the import.

  You can verify that your OpenShift Service on AWS cluster was imported successfully by navigating to Infrastructure > Clusters.
- Navigate to Infrastructure > Clusters > Cluster sets and select the name of the cluster set that you want to add. Then, click the Submariner add-ons tab.
- Click the Install Submariner add-ons button and set your OpenShift Service on AWS cluster as your Target clusters. Follow the instructions in the console to complete the installation.
- Navigate to Infrastructure > Clusters > Cluster sets > Submariner add-ons to verify that your OpenShift Service on AWS cluster Connection status is Healthy.
1.1.5.3.1. Preparing Red Hat OpenShift Service on AWS for Submariner by using the API (Technology Preview)
To prepare OpenShift Service on AWS clusters for deploying Submariner by using the API, complete the following steps:
- Create a new node to run the Submariner gateway by running the following command:

  rosa create machinepool --cluster=<cluster_name> --name=sm-gw-mp --replicas=<number-of-gateway-nodes> --labels='submariner.io/gateway=true'

- Customize and apply YAML content that is similar to the following example:
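The following is a minimal sketch of the SubmarinerConfig resource for this scenario. The apiVersion and the loadBalancerEnable field are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  # Assumption: exposes the gateway through a load balancer on the managed OpenShift service
  loadBalancerEnable: true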
  Replace managed-cluster-namespace with the name of your managed cluster.

  The name of the SubmarinerConfig must be submariner, as shown in the example.

  By default, Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port, or you can specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, port 4500/UDP is used for the connections.
See Customizing Submariner deployments for information about the customization options.
1.1.5.4. Deploy Submariner with the ManagedClusterAddOn API
After manually preparing your selected hosting environment, you can deploy Submariner with the ManagedClusterAddOn API by completing the following steps:
1. Create a ManagedClusterSet resource on the hub cluster by using the instructions provided in the Creating a ManagedClusterSet documentation. Make sure your entry for the ManagedClusterSet resembles the following content:

   apiVersion: cluster.open-cluster-management.io/v1beta2
   kind: ManagedClusterSet
   metadata:
     name: <managed-cluster-set-name>

   Replace managed-cluster-set-name with a name for the ManagedClusterSet that you are creating.

   Important: The maximum character length of a Kubernetes namespace is 63 characters, so the maximum character length that you can use for <managed-cluster-set-name> is 56 characters. If the character length of <managed-cluster-set-name> exceeds 56 characters, the <managed-cluster-set-name> is truncated from the head.

   After the ManagedClusterSet is created, the submariner-addon creates a namespace called <managed-cluster-set-name>-broker and deploys the Submariner broker to it.
2. Create the Broker configuration on the hub cluster in the <managed-cluster-set-name>-broker namespace by customizing and applying YAML content that is similar to the following example:
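The following is a minimal sketch of the Broker configuration. The apiVersion and kind values are assumptions based on the Submariner Broker API, so verify them against your installed version:

apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: <managed-cluster-set-name>-broker
spec:
  globalnetEnabled: <true-or-false>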
   Replace managed-cluster-set-name with the name of your managed cluster set.

   Set the value of globalnetEnabled to true if you want to enable Submariner Globalnet in the ManagedClusterSet.

3. Add one managed cluster to the ManagedClusterSet by running the following command:

   oc label managedclusters <managed-cluster-name> "cluster.open-cluster-management.io/clusterset=<managed-cluster-set-name>" --overwrite

   Replace <managed-cluster-name> with the name of the managed cluster that you want to add to the ManagedClusterSet.

   Replace <managed-cluster-set-name> with the name of the ManagedClusterSet to which you want to add the managed cluster.

4. Customize and apply YAML content that is similar to the following example:
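The following is a minimal sketch of the SubmarinerConfig resource. The apiVersion is an assumption based on the submariner-addon API, and provider-specific fields, such as a credentialsSecret, might also be required for your environment:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec: {}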
   Replace managed-cluster-namespace with the namespace of your managed cluster.

   Note: The name of the SubmarinerConfig must be submariner, as shown in the example.

5. Deploy Submariner on the managed cluster by customizing and applying YAML content that is similar to the following example:
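The following is a minimal sketch of the ManagedClusterAddOn resource. The apiVersion is an assumption based on the open-cluster-management add-on API, so verify it against your installed version:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: <managed-cluster-name>
spec:
  installNamespace: submariner-operator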
   Replace managed-cluster-name with the name of the managed cluster that you want to use with Submariner.

   The installNamespace field in the spec of the ManagedClusterAddOn is the namespace on the managed cluster where Submariner is installed. Currently, Submariner must be installed in the submariner-operator namespace.

   After the ManagedClusterAddOn is created, the submariner-addon deploys Submariner to the submariner-operator namespace on the managed cluster. You can view the deployment status of Submariner from the status of this ManagedClusterAddOn.

   Note: The name of the ManagedClusterAddOn must be submariner.

6. Repeat steps 3, 4, and 5 for all of the managed clusters that you want to enable Submariner on.
7. After Submariner is deployed on the managed cluster, verify the Submariner deployment status by checking the status of the submariner ManagedClusterAddOn by running the following command:

   oc -n <managed-cluster-name> get managedclusteraddons submariner -oyaml

   Replace managed-cluster-name with the name of the managed cluster.

   In the status of the Submariner ManagedClusterAddOn, three conditions indicate the deployment status of Submariner:

   - The SubmarinerGatewayNodesLabeled condition indicates whether there are labeled Submariner gateway nodes on the managed cluster.
   - The SubmarinerAgentDegraded condition indicates whether Submariner is successfully deployed on the managed cluster.
   - The SubmarinerConnectionDegraded condition indicates how many connections are established on the managed cluster with Submariner.
1.1.6. Customizing Submariner deployments
You can customize some of the settings of your Submariner deployments, including your Network Address Translation-Traversal (NATT) port, number of gateway nodes, and instance type of your gateway nodes. These customizations are consistent across all of the providers.
1.1.6.1. NATT port
If you want to customize your NATT port, customize and apply the following YAML content for your provider environment:
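The following is a minimal sketch of a SubmarinerConfig that sets the NATT port. The apiVersion and the IPSecNATTPort and credentialsSecret field names are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  credentialsSecret:
    name: <managed-cluster-name>-<provider>-creds
  IPSecNATTPort: <NATTPort>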
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace managed-cluster-name with the name of your managed cluster.
  - AWS: Replace provider with aws. The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster.
  - GCP: Replace provider with gcp. The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
  - OpenStack: Replace provider with osp. The value of <managed-cluster-name>-osp-creds is your Red Hat OpenStack Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
  - Azure: Replace provider with azure. The value of <managed-cluster-name>-azure-creds is your Microsoft Azure credential secret name, which you can find in the cluster namespace of your hub cluster.
- Replace NATTPort with the NATT port that you want to use.
Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
1.1.6.2. Number of gateway nodes
If you want to customize the number of your gateway nodes, customize and apply YAML content that is similar to the following example:
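The following is a minimal sketch of a SubmarinerConfig that sets the gateway count. The apiVersion and the gatewayConfig and credentialsSecret field names are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  credentialsSecret:
    name: <managed-cluster-name>-<provider>-creds
  gatewayConfig:
    gateways: <gateways>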
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace managed-cluster-name with the name of your managed cluster.
  - AWS: Replace provider with aws. The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster.
  - GCP: Replace provider with gcp. The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
  - OpenStack: Replace provider with osp. The value of <managed-cluster-name>-osp-creds is your Red Hat OpenStack Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
  - Azure: Replace provider with azure. The value of <managed-cluster-name>-azure-creds is your Microsoft Azure credential secret name, which you can find in the cluster namespace of your hub cluster.
- Replace gateways with the number of gateways that you want to use. If the value is greater than 1, the Submariner gateway automatically enables high availability.
Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
1.1.6.3. Instance types of gateway nodes
If you want to customize the instance type of your gateway node, customize and apply YAML content that is similar to the following example:
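The following is a minimal sketch of a SubmarinerConfig that sets the gateway instance type. The apiVersion and the gatewayConfig and credentialsSecret field names are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  credentialsSecret:
    name: <managed-cluster-name>-<provider>-creds
  gatewayConfig:
    instanceType: <instance-type>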
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace managed-cluster-name with the name of your managed cluster.
  - AWS: Replace provider with aws. The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster.
  - GCP: Replace provider with gcp. The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
  - OpenStack: Replace provider with osp. The value of <managed-cluster-name>-osp-creds is your Red Hat OpenStack Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
  - Azure: Replace provider with azure. The value of <managed-cluster-name>-azure-creds is your Microsoft Azure credential secret name, which you can find in the cluster namespace of your hub cluster.
- Replace instance-type with the AWS instance type that you want to use.
Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
1.1.6.4. Cable driver
The Submariner Gateway Engine component creates secure tunnels to other clusters. The cable driver component maintains the tunnels by using a pluggable architecture in the Gateway Engine component. You can use the Libreswan or VXLAN implementations for the cableDriver configuration of the cable engine component. See the following example:
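The following is a minimal sketch that selects the VXLAN cable driver. The apiVersion and the cableDriver field name are assumptions based on the submariner-addon API, so verify them against your installed version:

apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  # Use vxlan for an unencrypted tunnel, or libreswan for IPsec
  cableDriver: vxlan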
Best practice: Do not use the VXLAN cable driver on public networks. The VXLAN cable driver is unencrypted. Only use VXLAN to avoid unnecessary double encryption on private networks. For example, some on-premise environments might handle the tunnel’s encryption with a dedicated line-level hardware device.
1.1.7. Managing service discovery for Submariner
After Submariner is deployed into the same environment as your managed clusters, the routes are configured for secure IP routing between the pods and services across the clusters in the managed cluster set.
1.1.7.1. Enabling service discovery for Submariner
To make a service from a cluster visible and discoverable to other clusters in the managed cluster set, you must create a ServiceExport object. After a service is exported with a ServiceExport object, you can access the service by the following format: <service>.<namespace>.svc.clusterset.local. If multiple clusters export a service with the same name, and from the same namespace, they are recognized by other clusters as a single logical service.
This example uses the nginx service in the default namespace, but you can discover any Kubernetes ClusterIP service or headless service:
1. Apply an instance of the nginx service on a managed cluster that is in the ManagedClusterSet by entering the following commands:

   oc -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
   oc -n default expose deployment nginx --port=8080

2. Export the service by creating a ServiceExport entry. Enter a command with the subctl tool that is similar to the following command:

   subctl export service --namespace <service-namespace> <service-name>

   Replace service-namespace with the name of the namespace where the service is located. In this example, it is default.

   Replace service-name with the name of the service that you are exporting. In this example, it is nginx.

   See export in the Submariner documentation for more information about other available flags.

3. Run the following commands from a different managed cluster to confirm that it can access the nginx service:

   oc -n default run --generator=run-pod/v1 tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
   curl nginx.default.svc.clusterset.local:8080
The nginx service discovery is now configured for Submariner.
1.1.7.2. Disabling service discovery for Submariner
To disable a service from being exported to other clusters, enter a command similar to the following example for nginx:
subctl unexport service --namespace <service-namespace> <service-name>
Replace service-namespace with the name of the namespace where the service is located.
Replace service-name with the name of the service that you are exporting.
See unexport in the Submariner documentation for more information about other available flags.
The service is no longer available for discovery by clusters.
1.1.8. Uninstalling Submariner
You can uninstall the Submariner components from your clusters using the Red Hat Advanced Cluster Management for Kubernetes console or the command-line. For Submariner versions earlier than 0.12, additional steps are needed to completely remove all data plane components. The Submariner uninstall is idempotent, so you can repeat steps without any issues.
1.1.8.1. Uninstalling Submariner by using the console
To uninstall Submariner from a cluster by using the console, complete the following steps:
- From the console navigation, select Infrastructure > Clusters, and select the Cluster sets tab.
- Select the cluster set that contains the clusters from which you want to remove the Submariner components.
- Select the Submariner Add-ons tab to view the clusters in the cluster set that have Submariner deployed.
- In the Actions menu for the cluster from which you want to uninstall Submariner, select Uninstall Add-on.
- In the Actions menu for the cluster from which you want to uninstall Submariner, select Delete cluster sets.
Repeat those steps for other clusters from which you are removing Submariner.
Tip: You can remove the Submariner add-on from multiple clusters in the same cluster set by selecting multiple clusters and clicking Actions. Select Uninstall Submariner add-ons.
If the version of Submariner that you are removing is earlier than version 0.12, continue with Uninstalling Submariner manually. If the Submariner version is 0.12, or later, Submariner is removed.
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information.
1.1.8.2. Uninstalling Submariner by using the CLI
To uninstall Submariner by using the command line, complete the following steps:
1. Remove the Submariner deployment for the cluster by running the following command:

   oc -n <managed-cluster-namespace> delete managedclusteraddon submariner

   Replace managed-cluster-namespace with the namespace of your managed cluster.

2. Remove the cloud resources of the cluster by running the following command:

   oc -n <managed-cluster-namespace> delete submarinerconfig submariner

   Replace managed-cluster-namespace with the namespace of your managed cluster.

3. Delete the cluster set to remove the broker details by running the following command:

   oc delete managedclusterset <managedclusterset>

   Replace managedclusterset with the name of your managed cluster set.
If the version of Submariner that you are removing is earlier than version 0.12, continue with Uninstalling Submariner manually. If the Submariner version is 0.12, or later, Submariner is removed.
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information.
1.1.8.3. Uninstalling Submariner manually
When uninstalling versions of Submariner that are earlier than version 0.12, complete steps 5-8 in the Manual Uninstall section in the Submariner documentation.
After completing those steps, your Submariner components are removed from the cluster.
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information.
1.1.8.4. Verifying Submariner resource removal
After uninstalling Submariner, verify that all of the Submariner resources are removed from your clusters. If they remain on your clusters, some resources continue to accrue charges from infrastructure providers. Ensure that you have no additional Submariner resources on your cluster by completing the following steps:
1. Run the following command to list any Submariner resources that remain on the cluster:

   oc get cluster <CLUSTER_NAME> | grep submariner

   Replace CLUSTER_NAME with the name of your cluster.

2. Remove any resources on the list by entering the following command:

   oc delete resource <RESOURCE_NAME> cluster <CLUSTER_NAME>

   Replace RESOURCE_NAME with the name of the Submariner resource that you want to remove.

3. Repeat steps 1-2 for each of the clusters until your search does not identify any resources.
The Submariner resources are removed from your cluster.
1.2. VolSync persistent volume replication service
VolSync is a Kubernetes operator that enables asynchronous replication of persistent volumes within a cluster, or across clusters with storage types that are not otherwise compatible for replication. It uses the Container Storage Interface (CSI) to overcome the compatibility limitation. After deploying the VolSync operator in your environment, you can leverage it to create and maintain copies of your persistent data. VolSync can only replicate persistent volume claims on Red Hat OpenShift Container Platform clusters that are at version 4.8 or later.
Important: VolSync only supports replicating persistent volume claims with the volumeMode of Filesystem. If you do not select the volumeMode, it defaults to Filesystem.
1.2.1. Replicating persistent volumes with VolSync
You can use three supported methods to replicate persistent volumes with VolSync, which depend on the number of synchronization locations that you have: Rsync, restic, or Rclone.
1.2.1.1. Prerequisites
Before installing VolSync on your clusters, you must have the following requirements:
- A configured Red Hat OpenShift Container Platform environment running a Red Hat Advanced Cluster Management version 2.4, or later, hub cluster
- At least two configured clusters that are managed by the same Red Hat Advanced Cluster Management hub cluster
- Network connectivity between the clusters that you are configuring with VolSync. If the clusters are not on the same network, you can configure the Submariner multicluster networking and service discovery and use the ClusterIP value for ServiceType to network the clusters, or use a load balancer with the LoadBalancer value for ServiceType.
- The storage driver that you use for your source persistent volume must be CSI-compatible and able to support snapshots.
1.2.1.2. Installing VolSync on the managed clusters
To enable VolSync to replicate the persistent volume claim on one cluster to the persistent volume claim of another cluster, you must install VolSync on both the source and the target managed clusters.
VolSync does not create its own namespace, so it is in the same namespace as other OpenShift Container Platform all-namespace operators. Any changes that you make to the operator settings for VolSync also affect the other operators in the same namespace, such as if you change to manual approval for channel updates.
You can use either of two methods to install VolSync on two clusters in your environment. You can either add a label to each of the managed clusters in the hub cluster, or you can manually create and apply a ManagedClusterAddOn, as described in the following sections:
1.2.1.2.1. Installing VolSync using labels
You can install VolSync on the managed cluster by adding a label, either from the console or from the command-line interface.
Complete the following steps from the Red Hat Advanced Cluster Management console:
- Select one of the managed clusters from the Clusters page in the hub cluster console to view its details.
- In the Labels field, add the following label:

  addons.open-cluster-management.io/volsync=true

  The VolSync service pod is installed on the managed cluster.

- Add the same label to the other managed cluster.
- Run the following command on each managed cluster to confirm that the VolSync operator is installed:

  oc get csv -n openshift-operators

  There is an operator listed for VolSync when it is installed.
Complete the following steps from the command-line interface:
- Start a command-line session on the hub cluster.
- Enter the following command to add the label to the first cluster:

  oc label managedcluster <managed-cluster-1> "addons.open-cluster-management.io/volsync"="true"

  Replace managed-cluster-1 with the name of one of your managed clusters.

- Enter the following command to add the label to the second cluster:

  oc label managedcluster <managed-cluster-2> "addons.open-cluster-management.io/volsync"="true"

  Replace managed-cluster-2 with the name of your other managed cluster.

A ManagedClusterAddOn resource should be created automatically on your hub cluster in the namespace of each corresponding managed cluster.
1.2.1.2.2. Installing VolSync using a ManagedClusterAddOn
To install VolSync on the managed cluster by adding a ManagedClusterAddOn manually, complete the following steps:
1. On the hub cluster, create a YAML file called volsync-mcao.yaml that contains content that is similar to the following example:
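The following is a minimal sketch of the volsync ManagedClusterAddOn. The apiVersion is an assumption based on the open-cluster-management add-on API, so verify it against your installed version:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: volsync
  namespace: <managed-cluster-1-namespace>
spec: {}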
   Replace managed-cluster-1-namespace with the namespace of one of your managed clusters. This namespace is the same as the name of the managed cluster.

   Note: The name must be volsync.

2. Apply the file to your configuration by entering a command similar to the following example:

   oc apply -f volsync-mcao.yaml

3. Repeat the procedure for the other managed cluster.

A ManagedClusterAddOn resource should be created automatically on your hub cluster in the namespace of each corresponding managed cluster.
1.2.1.3. Configuring an Rsync replication
You can create a 1:1 asynchronous replication of persistent volumes by using an Rsync replication. You can use Rsync-based replication for disaster recovery or sending data to a remote site.
The following example shows how to configure by using the Rsync method. For additional information about Rsync, see Usage in the VolSync documentation.
1.2.1.3.1. Configuring Rsync replication across managed clusters
For Rsync-based replication, configure custom resources on the source and destination clusters. The custom resources use the address value to connect the source to the destination, and the sshKeys to ensure that the transferred data is secure.
Note: You must copy the values for address and sshKeys from the destination to the source, so configure the destination before you configure the source.
This example provides the steps to configure an Rsync replication from a persistent volume claim on the source cluster in the source-ns namespace to a persistent volume claim on a destination cluster in the destination-ns namespace. You can replace those values with other values, if necessary.
1. Configure your destination cluster.
   i. Run the following command on the destination cluster to create the namespace:

      oc create ns <destination-ns>

      Replace destination-ns with a name for the namespace that will contain your destination persistent volume claim.

   ii. Copy the following YAML content to create a new file called replication_destination.yaml:
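The following is a minimal sketch of the ReplicationDestination resource. The apiVersion and the rsync field names are assumptions based on the VolSync API, so verify them against your installed version:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: <destination>
  namespace: <destination-ns>
spec:
  rsync:
    serviceType: LoadBalancer
    copyMethod: Snapshot
    # Match the capacity of the persistent volume claim that is being replicated
    capacity: 2Gi
    accessModes: [ReadWriteOnce]
    storageClassName: <storage-class-name>
    volumeSnapshotClassName: <volume-snapshot-class-name>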
      Note: The capacity value should match the capacity of the persistent volume claim that is being replicated.

      Replace destination with the name of your replication destination CR.

      Replace destination-ns with the name of the namespace where your destination is located.

      For this example, the ServiceType value of LoadBalancer is used. The load balancer service is created by the source cluster to enable your source managed cluster to transfer information to a different destination managed cluster. You can use ClusterIP as the service type if your source and destination are on the same cluster, or if you have the Submariner network service configured. Note the address and the name of the secret to refer to when you configure the source cluster.

      The storageClassName and volumeSnapshotClassName are optional parameters. Specify the values for your environment, particularly if you are using a storage class and volume snapshot class name that are different than the default values for your environment.

   iii. Run the following command on the destination cluster to create the replicationdestination resource:
      oc create -n <destination-ns> -f replication_destination.yaml

      Replace destination-ns with the name of the namespace where your destination is located.

      After the replicationdestination resource is created, the following parameters and values are added to the resource:

      | Parameter | Value |
      |---|---|
      | .status.rsync.address | IP address of the destination cluster that is used to enable the source and destination clusters to communicate. |
      | .status.rsync.sshKeys | Name of the SSH key file that enables secure data transfer from the source cluster to the destination cluster. |
   iv. Run the following command to copy the value of .status.rsync.address to use on the source cluster:

      ADDRESS=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsync.address}}`
      echo $ADDRESS

      Replace destination with the name of your replication destination custom resource.

      Replace destination-ns with the name of the namespace where your destination is located.

      The output should appear similar to the following output, which is for an Amazon Web Services environment:

      a831264645yhrjrjyer6f9e4a02eb2-5592c0b3d94dd376.elb.us-east-1.amazonaws.com

   v. Run the following command to copy the name of the secret and the contents of the secret that are provided as the value of .status.rsync.sshKeys:

      SSHKEYS=`oc get replicationdestination <destination> -n <destination-ns> --template={{.status.rsync.sshKeys}}`
      echo $SSHKEYS

      Replace destination with the name of your replication destination custom resource.

      Replace destination-ns with the name of the namespace where your destination is located.

      You will have to enter it on the source cluster when you configure the source. The output should be the name of your SSH keys secret file, which might resemble the following name:

      volsync-rsync-dst-src-destination-name
2. Identify the source persistent volume claim that you want to replicate.
Note: The source persistent volume claim must be on a CSI storage class.
3. Create the ReplicationSource items.

   i. Copy the following YAML content to create a new file called replication_source.yaml on the source cluster:
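The following is a minimal sketch of the ReplicationSource resource. The apiVersion and the rsync field names are assumptions based on the VolSync API, so verify them against your installed version:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: <source>
  namespace: <source-ns>
spec:
  sourcePVC: <persistent_volume_claim>
  trigger:
    # Synchronization schedule in cron format
    schedule: "*/10 * * * *"
  rsync:
    sshKeys: <mysshkeys>
    address: <my.host.com>
    copyMethod: Snapshot
    storageClassName: <storage-class-name>
    volumeSnapshotClassName: <volume-snapshot-class-name>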
      Replace source with the name for your replication source custom resource. See step 3-vi of this procedure for instructions on how to replace this automatically.

      Replace source-ns with the namespace of the persistent volume claim where your source is located. See step 3-vi of this procedure for instructions on how to replace this automatically.

      Replace persistent_volume_claim with the name of your source persistent volume claim.

      Replace mysshkeys with the keys that you copied from the .status.rsync.sshKeys field of the ReplicationDestination when you configured it.

      Replace my.host.com with the host address that you copied from the .status.rsync.address field of the ReplicationDestination when you configured it.

      If your storage driver supports cloning, using Clone as the value for copyMethod might be a more streamlined process for the replication.

      StorageClassName and volumeSnapshotClassName are optional parameters. If you are using a storage class and volume snapshot class name that are different than the defaults for your environment, specify those values.

      You can now set up the synchronization method of the persistent volume.
   ii. Copy the SSH secret from the destination cluster by entering the following command against the destination cluster:

      oc get secret -n <destination-ns> $SSHKEYS -o yaml > /tmp/secret.yaml

      Replace destination-ns with the namespace of the persistent volume claim where your destination is located.

   iii. Open the secret file in the vi editor by entering the following command:

      vi /tmp/secret.yaml

   iv. In the open secret file on the destination cluster, make the following changes:

      - Change the namespace to the namespace of your source cluster. For this example, it is source-ns.
      - Remove the owner references (.metadata.ownerReferences).

   v. On the source cluster, create the secret file by entering the following command:
oc create -f /tmp/secret.yaml
oc create -f /tmp/secret.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow On the source cluster, modify the
replication_source.yamlfile by replacing the value of theaddressandsshKeysin theReplicationSourceobject with the values that you noted from the destination cluster by entering the following commands:sed -i "s/<my.host.com>/$ADDRESS/g" replication_source.yaml sed -i "s/<mysshkeys>/$SSHKEYS/g" replication_source.yaml oc create -n <source> -f replication_source.yaml
sed -i "s/<my.host.com>/$ADDRESS/g" replication_source.yaml sed -i "s/<mysshkeys>/$SSHKEYS/g" replication_source.yaml oc create -n <source> -f replication_source.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
Replace my.host.com with the host address that you copied from the .status.rsync.address field of the ReplicationDestination when you configured it.
Replace mysshkeys with the keys that you copied from the .status.rsync.sshKeys field of the ReplicationDestination when you configured it.
Replace source with the namespace of the persistent volume claim where your source is located.
Note: You must create the ReplicationSource in the same namespace as the persistent volume claim that you want to replicate.
Verify that the replication completed by running the following command on the ReplicationSource object:

oc describe ReplicationSource -n <source-ns> <source>

Replace source-ns with the namespace of the persistent volume claim where your source is located.
Replace source with the name of your replication source custom resource.
If the replication was successful, the Status section of the output lists a Last Sync Time. If the Last Sync Time has no time listed, then the replication is not complete.
You have a replica of your original persistent volume claim.
1.2.1.4. Configuring a restic backup
A restic-based backup copies the persistent volume to a location that is specified in your restic-config.yaml secret file. A restic backup does not synchronize data between clusters, but provides data backup.
Complete the following steps to configure a restic-based backup:
Specify a repository where your backup images are stored by creating a secret that resembles the following YAML content:
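A minimal sketch of the secret, assuming an S3-compatible repository and the restic environment variable names that VolSync reads (RESTIC_REPOSITORY, RESTIC_PASSWORD, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY), might resemble the following content:

apiVersion: v1
kind: Secret
metadata:
  name: restic-config
type: Opaque
stringData:
  # Location of the repository where the backup files are stored
  RESTIC_REPOSITORY: <my-restic-repository>
  # Encryption key that is required to access the repository
  RESTIC_PASSWORD: <my-restic-password>
  # Provider credentials, if your provider requires them
  AWS_ACCESS_KEY_ID: access
  AWS_SECRET_ACCESS_KEY: password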
Replace my-restic-repository with the location of the S3 bucket repository where you want to store your backup files.
Replace my-restic-password with the encryption key that is required to access the repository.
Replace access and password with the credentials for your provider, if required.
If you need to prepare a new repository, see Preparing a new repository for the procedure. If you use that procedure, skip the step that requires you to run the restic init command to initialize the repository. VolSync automatically initializes the repository during the first backup.
Important: When backing up multiple persistent volume claims to the same S3 bucket, the path to the bucket must be unique for each persistent volume claim. Each persistent volume claim is backed up with a separate ReplicationSource, and each requires a separate restic-config secret. By sharing the same S3 bucket, each ReplicationSource has write access to the entire S3 bucket.
Configure your backup policy by creating a ReplicationSource object that resembles the following YAML content:
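The following minimal sketch assumes the volsync.backube/v1alpha1 API; the resource name mydata-backup, the namespace source, the 30-minute schedule, and the retention counts are illustrative values only:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: mydata-backup
  namespace: source
spec:
  sourcePVC: <source>
  trigger:
    # Run the backup every 30 minutes
    schedule: "*/30 * * * *"
  restic:
    # Days that elapse between repacking operations
    pruneIntervalDays: 14
    # Name of the secret that you created in step 1
    repository: restic-config
    # Retention policy for the backed up images
    retain:
      hourly: 6
      daily: 5
      weekly: 4
      monthly: 2
      yearly: 1
    copyMethod: Clone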
Replace source with the persistent volume claim that you are backing up.
Replace the value for schedule with how often to run the backup. This example has the schedule for every 30 minutes. See Scheduling your synchronization for more information.
Replace the value of pruneIntervalDays with the number of days that elapse between instances of repacking the data to save space. The prune operation can generate significant I/O traffic while it is running.
Replace restic-config with the name of the secret that you created in step 1.
Set the values for retain to your retention policy for the backed up images.
Best practice: Use Clone for the value of copyMethod to ensure that a point-in-time image is saved.
For additional information about the backup options, see Backup options in the VolSync documentation.
Note: Restic movers run without root permissions by default. If you want to run restic movers as root, run the following command to add the elevated permissions annotation to your namespace.
oc annotate namespace <namespace> volsync.backube/privileged-movers=true
Replace <namespace> with the name of your namespace.
1.2.1.4.1. Restoring a restic backup
You can restore the copied data from a restic backup into a new persistent volume claim. Best practice: Restore only one backup into a new persistent volume claim. To restore the restic backup, complete the following steps:
Create a new persistent volume claim to contain the new data similar to the following example:
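A minimal sketch of the persistent volume claim follows; the namespace and the 3Gi size are illustrative values that you must adjust for your environment:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <pvc-name>
  namespace: <destination-ns>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Must be large enough to hold the restored data
      storage: 3Gi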
Replace pvc-name with the name of the new persistent volume claim.
Create a ReplicationDestination custom resource that resembles the following example to specify where to restore the data:
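The following minimal sketch assumes the volsync.backube/v1alpha1 API; the manual trigger value restore-once and the Direct copy method are illustrative assumptions for a one-time restore into an existing claim:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: <destination>
spec:
  trigger:
    # A manual trigger runs the restore one time
    manual: restore-once
  restic:
    # Repository where the source data is stored
    repository: <restic-repo>
    # Existing persistent volume claim to restore the data into
    destinationPVC: <pvc-name>
    copyMethod: Direct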
Replace destination with the name of your replication destination custom resource.
Replace restic-repo with the path to your repository where the source is stored.
Replace pvc-name with the name of the new persistent volume claim where you want to restore the data. Use an existing persistent volume claim for this, rather than provisioning a new one.
The restore process only needs to be completed once, and this example restores the most recent backup. For more information about restore options, see Restore options in the VolSync documentation.
1.2.1.5. Configuring an Rclone replication
An Rclone replication copies a single persistent volume to multiple locations by using an intermediate object storage location, such as AWS S3. It can be helpful when distributing data to multiple locations.
Complete the following steps to configure an Rclone replication:
Create a ReplicationSource custom resource that resembles the following example:
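The following minimal sketch assumes the volsync.backube/v1alpha1 API and the Rclone mover fields rcloneConfigSection, rcloneDestPath, and rcloneConfig; the placeholders are described after the example:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: <source-pvc>
  namespace: <source-ns>
spec:
  sourcePVC: <source>
  trigger:
    # Run the replication every 6 minutes
    schedule: "*/6 * * * *"
  rclone:
    # Configuration section of the Rclone configuration file
    rcloneConfigSection: <intermediate-s3-bucket>
    # Object bucket where the replicated files are copied
    rcloneDestPath: <destination-bucket>
    # Secret that contains your Rclone configuration information
    rcloneConfig: <rclone-secret>
    copyMethod: Snapshot
    storageClassName: <my-sc-name>
    volumeSnapshotClassName: <my-vsc>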
Replace source-pvc with the name for your replication source custom resource.
Replace source-ns with the namespace of the persistent volume claim where your source is located.
Replace source with the persistent volume claim that you are replicating.
Replace the value of schedule with how often to run the replication. This example has the schedule for every 6 minutes. This value must be within quotation marks. See Scheduling your synchronization for more information.
Replace intermediate-s3-bucket with the path to the configuration section of the Rclone configuration file.
Replace destination-bucket with the path to the object bucket where you want your replicated files copied.
Replace rclone-secret with the name of the secret that contains your Rclone configuration information.
Set the value for copyMethod to Clone, Direct, or Snapshot. This value specifies whether the point-in-time copy is generated, and if so, which method is used for generating it.
Replace my-sc-name with the name of the storage class that you want to use for your point-in-time copy. If not specified, the storage class of the source volume is used.
Replace my-vsc with the name of the VolumeSnapshotClass to use, if you specified Snapshot as your copyMethod. This is not required for other types of copyMethod.
Create a ReplicationDestination custom resource that resembles the following example:
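The following minimal sketch assumes the volsync.backube/v1alpha1 API; the resource name, the namespace, the offset schedule, and the 10Gi capacity are illustrative values only:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: database-destination
  namespace: dest
spec:
  trigger:
    # Every 6 minutes, offset by 3 minutes from the source schedule
    schedule: "3,9,15,21,27,33,39,45,51,57 * * * *"
  rclone:
    rcloneConfigSection: <intermediate-s3-bucket>
    rcloneDestPath: <destination-bucket>
    rcloneConfig: <rclone-secret>
    copyMethod: Snapshot
    accessModes: [ReadWriteOnce]
    # Must be large enough to contain the incoming data
    capacity: 10Gi
    storageClassName: <my-sc>
    volumeSnapshotClassName: <my-vsc>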
Replace the value for schedule with how often to move the replication to the destination. The schedules for the source and destination must be offset to allow the data to finish replicating before it is pulled from the destination. This example has the schedule for every 6 minutes, offset by 3 minutes. This value must be within quotation marks. See Scheduling your synchronization for more information.
Replace intermediate-s3-bucket with the path to the configuration section of the Rclone configuration file.
Replace destination-bucket with the path to the object bucket where you want your replicated files copied.
Replace rclone-secret with the name of the secret that contains your Rclone configuration information.
Set the value for copyMethod to Clone, Direct, or Snapshot. This value specifies whether the point-in-time copy is generated, and if so, which method is used for generating it.
The value for accessModes specifies the access modes for the persistent volume claim. Valid values are ReadWriteOnce or ReadWriteMany.
The capacity specifies the size of the destination volume, which must be large enough to contain the incoming data.
Replace my-sc with the name of the storage class that you want to use as the destination for your point-in-time copy. If not specified, the system storage class is used.
Replace my-vsc with the name of the VolumeSnapshotClass to use, if you specified Snapshot as your copyMethod. This is not required for other types of copyMethod. If not included, the system default VolumeSnapshotClass is used.
Note: Rclone movers run without root permissions by default. If you want to run Rclone movers as root, run the following command to add the elevated permissions annotation to your namespace.
oc annotate namespace <namespace> volsync.backube/privileged-movers=true
Replace <namespace> with the name of your namespace.
1.2.2. Converting a replicated image to a usable persistent volume claim
You might need to use the replicated image to recover data, or create a new instance of a persistent volume claim. The copy of the image must be converted to a persistent volume claim before it can be used. To convert a replicated image to a persistent volume claim, complete the following steps:
When the replication is complete, identify the latest snapshot from the ReplicationDestination object by entering the following command:

kubectl get replicationdestination <destination> -n <destination-ns> --template={{.status.latestImage.name}}

Note the value of the latest snapshot for when you create your persistent volume claim.
Replace destination with the name of your replication destination.
Replace destination-ns with the namespace of your destination.
Create a pvc.yaml file that resembles the following example:
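A minimal sketch of the pvc.yaml file follows; the 2Gi size is an illustrative value, and the dataSource fields assume the snapshot.storage.k8s.io VolumeSnapshot API:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc-name>
  namespace: <destination-ns>
spec:
  accessModes:
    - ReadWriteOnce
  dataSource:
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
    name: <snapshot_to_replace>
  resources:
    requests:
      # At least the same size as the initial source persistent volume claim
      storage: 2Gi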
Replace pvc-name with a name for your new persistent volume claim.
Replace destination-ns with the namespace where the persistent volume claim is located.
Replace snapshot_to_replace with the VolumeSnapshot name that you found in the previous step.
Best practice: You can update resources.requests.storage with a different value when the value is at least the same size as the initial source persistent volume claim.
Validate that your persistent volume claim is running in the environment by entering the following command:

kubectl get pvc -n <destination-ns>
Your original backup image is running as the main persistent volume claim.
1.2.3. Scheduling your synchronization
Select from three options when determining how you start your replications: always running, on a schedule, or manually. Scheduling your replications is an option that is often selected.
The Schedule option runs replications at scheduled times. A schedule is defined by a cronspec, so the schedule can be configured as intervals of time or as specific times. The order of the schedule values is:
"minute (0-59) hour (0-23) day-of-month (1-31) month (1-12) day-of-week (0-6)"
The replication starts when the scheduled time occurs. Your setting for this replication option might resemble the following content:
spec:
  trigger:
    schedule: "*/6 * * * *"
After enabling one of these methods, your synchronization schedule runs according to the method that you configured.
See the VolSync documentation for additional information and options.
1.3. Enabling klusterlet add-ons on clusters for cluster management
After you install Red Hat Advanced Cluster Management for Kubernetes and then create or import clusters with the multicluster engine operator, you can enable the klusterlet add-ons for those managed clusters. The klusterlet add-ons are not enabled by default unless you create or import the clusters with the Red Hat Advanced Cluster Management console. See the following available klusterlet add-ons:
- application-manager
- cert-policy-controller
- config-policy-controller
- iam-policy-controller
- governance-policy-framework
- search-collector
Complete the following steps to enable the klusterlet add-ons for the managed clusters after Red Hat Advanced Cluster Management is installed:
Create a YAML file that is similar to the following KlusterletAddonConfig, with the spec value that represents the add-ons:
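The following minimal sketch assumes the agent.open-cluster-management.io/v1 API, and that the resource is named after the managed cluster and created in the cluster namespace; replace <cluster_name> with the name of your managed cluster:

apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: <cluster_name>
  namespace: <cluster_name>
spec:
  applicationManager:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true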
Note: The policy-controller add-on is divided into two add-ons: the governance-policy-framework and the config-policy-controller. As a result, the policyController controls the governance-policy-framework and the config-policy-controller managedClusterAddons.
- Save the file as klusterlet-addon-config.yaml.
- Apply the YAML by running the following command on the hub cluster:

oc apply -f klusterlet-addon-config.yaml

To verify whether the enabled managedClusterAddons are created after the KlusterletAddonConfig is created, run the following command:

oc get managedclusteraddons -n <cluster namespace>
1.4. Enabling cluster-wide proxy on existing cluster add-ons
You can configure the KlusterletAddonConfig in the cluster namespace to add the proxy environment variables to all the klusterlet add-on pods of the managed Red Hat OpenShift Container Platform clusters. Complete the following steps to configure the KlusterletAddonConfig to add the three proxy environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY) to the pods of the klusterlet add-ons:
Edit the KlusterletAddonConfig file that is in the namespace of the cluster that needs the proxy. You can use the console to find the resource, or you can edit from the terminal with the following command:

oc -n <my-cluster-name> edit klusterletaddonconfig <my-cluster-name>

Note: If you are working with only one cluster, you do not need <my-cluster-name> at the end of your command. See the following command:

oc -n <my-cluster-name> edit klusterletaddonconfig

Edit the .spec.proxyConfig section of the file so it resembles the following example. The spec.proxyConfig is an optional section:

spec:
  proxyConfig:
    httpProxy: "<proxy_not_secure>"
    httpsProxy: "<proxy_secure>"
    noProxy: "<no_proxy>"

Replace proxy_not_secure with the address of the proxy server for http requests. For example, use http://192.168.123.145:3128.
Replace proxy_secure with the address of the proxy server for https requests. For example, use https://192.168.123.145:3128.
Replace no_proxy with a comma delimited list of IP addresses, hostnames, and domain names where traffic will not be routed through the proxy. For example, use .cluster.local,.svc,10.128.0.0/14,example.com.
If the OpenShift Container Platform cluster is created with a cluster-wide proxy configured on the hub cluster, the cluster-wide proxy configuration values are added to the pods of the klusterlet add-ons as environment variables when the following conditions are met:
- The .spec.policyController.proxyPolicy in the addon section is enabled and set to OCPGlobalProxy.
- The .spec.applicationManager.proxyPolicy is enabled and set to CustomProxy.

Note: The default value of proxyPolicy in the addon section is Disabled.
See the following examples of proxyPolicy entries:
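The following minimal sketch assumes the agent.open-cluster-management.io/v1 KlusterletAddonConfig API and shows the three proxyPolicy values as an illustration only:

apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: <my-cluster-name>
  namespace: <my-cluster-name>
spec:
  proxyConfig:
    httpProxy: "<proxy_not_secure>"
    httpsProxy: "<proxy_secure>"
    noProxy: "<no_proxy>"
  applicationManager:
    enabled: true
    # Use the custom proxy values from spec.proxyConfig
    proxyPolicy: CustomProxy
  policyController:
    enabled: true
    # Use the cluster-wide proxy configuration of the OpenShift Container Platform cluster
    proxyPolicy: OCPGlobalProxy
  searchCollector:
    enabled: true
    # Default value: no proxy environment variables are added
    proxyPolicy: Disabled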
Important: Global proxy settings do not impact alert forwarding. To set up alert forwarding for Red Hat Advanced Cluster Management hub clusters with a cluster-wide proxy, see Forwarding alerts for more details.