Add-ons
Abstract
Read more to learn how to use add-ons for your cluster.
Chapter 1. Add-ons overview
Red Hat Advanced Cluster Management for Kubernetes add-ons can improve some areas of performance and add function to enhance your applications. The following sections provide a summary of the add-ons that are available for Red Hat Advanced Cluster Management:
1.1. Submariner multicluster networking and service discovery
Submariner is an open source tool that can be used with Red Hat Advanced Cluster Management for Kubernetes to provide direct networking and service discovery between two or more managed clusters in your environment, either on-premises or in the cloud. Submariner is compatible with Multi-Cluster Services API (Kubernetes Enhancements Proposal #1645). For more information about Submariner, see the Submariner site.
Red Hat Advanced Cluster Management for Kubernetes provides Submariner as an add-on for your hub cluster. You can find more information about Submariner in the Submariner open source project documentation.
See the Red Hat Advanced Cluster Management Support Matrix for more details about which infrastructure providers are supported by automated console deployments and which infrastructure providers require manual deployment.
1.1.1. Prerequisites
Ensure that you have the following prerequisites before using Submariner:
- A credential to access the hub cluster with cluster-admin permissions.
- IP connectivity must be configured between the gateway nodes. When connecting two clusters, at least one of the clusters must be accessible to the gateway node using its public or private IP address designated to the gateway node. See Submariner NAT Traversal for more information.
- Firewall configuration across all nodes in each of the managed clusters must allow 4800/UDP in both directions.
- Firewall configuration on the gateway nodes must allow ingress 8080/TCP so the other nodes in the cluster can access it.
- Firewall configuration open for 4500/UDP and any other ports that are used for IPsec traffic on the gateway nodes.
- If the gateway nodes are directly reachable over their private IPs without any NAT in between, make sure that the firewall configuration allows the ESP protocol on the gateway nodes. Note: This is configured automatically when your clusters are deployed in an AWS or GCP environment, but must be configured manually for clusters on other environments and for the firewalls that protect private clouds.
- The managed cluster name must follow the DNS label standard as defined in RFC 1123, which means the name must meet the following criteria:
  - Contain at most 63 characters
  - Contain only lowercase alphanumeric characters or '-'
  - Start with an alphanumeric character
  - End with an alphanumeric character
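The DNS label criteria above can be checked with a quick shell helper before you create the cluster. This helper is illustrative only and is not part of any product CLI:

```shell
# Hedged sketch: a small helper (not part of any product tooling) that checks
# whether a managed cluster name is a valid RFC 1123 DNS label.
# First char alphanumeric, up to 61 middle chars (alphanumeric or '-'),
# last char alphanumeric, 63 characters maximum.
is_valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_cluster_name "cluster-east-1" && echo "valid"   # prints "valid"
is_valid_cluster_name "Cluster_East" || echo "invalid"   # prints "invalid"
```

Names that fail this check are rejected when the managed cluster is created, so it can save a round trip to validate them up front.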
 
The following table shows the ports that Submariner uses and whether they can be customized:
| Name | Default value | Customizable | 
|---|---|---|
| IPsec NATT | 4500/UDP | Yes | 
| VXLAN | 4800/UDP | No | 
| Submariner metrics port | 8080/TCP | No | 
See the Submariner upstream prerequisites documentation for more detailed information about the prerequisites.
1.1.2. subctl command utility
					Submariner contains the subctl utility that provides additional commands that simplify running tasks on your Submariner environment.
				
1.1.2.1. Installing the subctl command utility
						The subctl utility is shipped in a container image. Complete the following steps to install the subctl utility locally:
					
- Log in to the registry by running the following command and entering your credentials when prompted:

  ```
  oc registry login --registry registry.redhat.io
  ```
- Download the subctl container and extract a compressed version of the subctl binary to /tmp by entering the following command:

  ```
  oc image extract registry.redhat.io/rhacm2/subctl-rhel8:v0.12 --path=/dist/subctl-v0.12.1-linux-amd64.tar.xz:/tmp/ --confirm
  ```

  Note: You might have to change subctl-v0.12.1-linux-amd64.tar.xz to match the version of Submariner that you are using.
- Decompress the subctl utility by entering the following command:

  ```
  tar -C /tmp/ -xf /tmp/subctl-v0.12.1-linux-amd64.tar.xz
  ```
- Install the subctl utility by entering the following command:

  ```
  install -m744 /tmp/subctl-v0.12.1/subctl-v0.12.1-linux-amd64 $HOME/.local/bin/subctl
  ```
1.1.2.2. Using the subctl commands
After adding the utility to your path, view the following table for a brief description of the available commands:
| Command | Description |
|---|---|
| export | Creates a ServiceExport resource for the specified service, which makes the service discoverable by other clusters in the cluster set. |
| unexport | Removes the ServiceExport resource for the specified service, so it is no longer discoverable by other clusters. |
| show | Provides information about Submariner resources. |
| verify | Verifies connectivity, service discovery, and other Submariner features when Submariner is configured across a pair of clusters. |
| benchmark | Benchmarks throughput and latency across a pair of clusters that are enabled with Submariner or within a single cluster. |
| diagnose | Runs checks to identify issues that prevent the Submariner deployment from working correctly. |
| gather | Collects information from the clusters to help troubleshoot a Submariner deployment. |
| version | Displays the version details of the subctl binary tool. |
						For more information about the subctl utility and its commands, see subctl in the Submariner documentation.
					
1.1.3. Globalnet
Globalnet is a feature included with the Submariner add-on which supports connectivity between clusters with overlapping CIDRs. Globalnet is a cluster set wide configuration, and can be selected when the first managed cluster is added to the cluster set. When Globalnet is enabled, each managed cluster is allocated a global CIDR from the virtual Global Private Network. The global CIDR is used for supporting inter-cluster communication.
					If there is a chance that your clusters running Submariner might have overlapping CIDRs, consider enabling Globalnet. When using the Red Hat Advanced Cluster Management console, the ClusterAdmin can enable Globalnet for a cluster set by selecting the option Enable Globalnet when enabling the Submariner add-on for clusters in the cluster set. After you enable Globalnet, you cannot disable it without removing Submariner.
				
					When using the Red Hat Advanced Cluster Management APIs, the ClusterAdmin can enable Globalnet by creating a submariner-broker object in the <ManagedClusterSet>-broker namespace.
				
					The ClusterAdmin role has the required permissions to create this object in the broker namespace. The ManagedClusterSetAdmin role, which is sometimes created to act as a proxy administrator for the cluster set, does not have the required permissions. To provide the required permissions, the ClusterAdmin must associate the role permissions for the access-to-brokers-submariner-crd to the ManagedClusterSetAdmin user.
				
					Complete the following steps to create the submariner-broker object:
				
- Retrieve the <broker-namespace> by running the following command:

  ```
  oc get ManagedClusterSet <cluster-set-name> -o jsonpath="{.metadata.annotations['cluster\.open-cluster-management\.io/submariner-broker-ns']}"
  ```
- Create a submariner-broker object that specifies the Globalnet configuration by creating a YAML file named submariner-broker.yaml:
  - Replace broker-namespace with the name of your broker namespace.
  - Replace true-or-false with true to enable Globalnet.

  Note: The metadata name parameter must be submariner-broker.
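The YAML content for this object is not reproduced above; a sketch that is consistent with the field descriptions in this step might resemble the following lines (the apiVersion value is an assumption based on the Broker resource used elsewhere in this chapter):

```yaml
apiVersion: submariner.io/v1alpha1   # assumed API group/version
kind: Broker
metadata:
  name: submariner-broker            # the name must be submariner-broker
  namespace: <broker-namespace>      # replace with your broker namespace
spec:
  globalnetEnabled: true             # replace with false to disable Globalnet
```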
- Apply the YAML file by entering the following command:

  ```
  oc apply -f submariner-broker.yaml
  ```
For more information about Globalnet, see Globalnet controller in the Submariner documentation.
1.1.4. Deploying Submariner
You can deploy Submariner to network clusters on the following providers:
Automatic deployment process:
- Amazon Web Services
- Google Cloud Platform
- Red Hat OpenStack Platform
Manual deployment process:
- Microsoft Azure
- IBM Cloud
- VMware vSphere
- Bare metal
1.1.4.1. Deploying Submariner with the console
You can deploy Submariner on Red Hat OpenShift Container Platform managed clusters that are deployed on Amazon Web Services, Google Cloud Platform, and VMware vSphere by using the Red Hat Advanced Cluster Management for Kubernetes console. To deploy Submariner on other providers, follow the instructions in Deploying Submariner manually. Complete the following steps to deploy Submariner with the Red Hat Advanced Cluster Management for Kubernetes console:
Required access: Cluster administrator
- From the console navigation menu, select Infrastructure > Clusters.
- On the Clusters page, select the Cluster sets tab. The clusters that you want to enable with Submariner must be in the same cluster set.
- If the clusters on which you want to deploy Submariner are already in the same cluster set, skip to step 5 to deploy Submariner.
- If the clusters on which you want to deploy Submariner are not in the same cluster set, create a cluster set for them by completing the following steps: - Select Create cluster set.
- Name the cluster set, and select Create.
- Select Manage resource assignments to assign clusters to the cluster set.
- Select the managed clusters that you want to connect with Submariner to add them to the cluster set.
- Select Review to view and confirm the clusters that you selected.
- Select Save to save the cluster set, and view the resulting cluster set page.
 
- On the cluster set page, select the Submariner add-ons tab.
- Select Install Submariner add-ons.
- Select the clusters on which you want to deploy Submariner.
- Enter the following information in the Install Submariner add-ons editor:
  - AWS Access Key ID: This field is only visible when you import an AWS cluster.
  - AWS Secret Access Key: This field is only visible when you import an AWS cluster.
  - Google Cloud Platform service account JSON key: This field is only visible when you import a Google Cloud Platform cluster.
  - Instance type: The Amazon Web Services EC2 instance type of the gateway node that is created on the managed cluster. The default value is c5d.large. This field is only visible when your managed cluster environment is AWS.
  - IPsec NAT-T port: The default value for the IPsec NAT traversal port is port 4500. If your managed cluster environment is VMware vSphere, ensure that this port is opened on your firewalls.
  - Gateway count: The number of worker nodes that are used to deploy the Submariner gateway component on your managed cluster. The default value is 1. If the value is greater than 1, the Submariner gateway High Availability (HA) is automatically enabled.
  - Cable driver: The Submariner gateway cable engine component that maintains the cross-cluster tunnels. The default value is Libreswan IPsec.
- Select Next at the end of the editor to move to the editor for the next cluster, and complete the editor for each of the remaining clusters that you selected.
- Verify your configuration for each managed cluster.
- Click Install to deploy Submariner on the selected managed clusters. It might take several minutes for the installation and configuration to complete. You can check the Submariner status in the list on the Submariner add-ons tab:
  - Connection status indicates how many Submariner connections are established on the managed cluster.
  - Agent status indicates whether Submariner is successfully deployed on the managed cluster. The console might report a status of Degraded until Submariner is installed and configured.
  - Gateway nodes labeled indicates how many worker nodes on the managed cluster are labeled with the Submariner gateway label submariner.io/gateway=true.
Submariner is now deployed on the clusters.
1.1.4.2. Deploying Submariner manually
Before you deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes, you must prepare the clusters on the hosting environment for the connection. Currently, you can use the SubmarinerConfig API to automatically prepare the clusters on Amazon Web Services, Google Cloud Platform, and VMware vSphere. You must prepare other platforms manually; see Preparing selected hosts to deploy Submariner for the steps.
					
1.1.4.2.1. Preparing selected hosts to deploy Submariner
Before you deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes, you must manually prepare the clusters on the hosting environment for the connection. The requirements are different for different hosting environments, so follow the instructions for your hosting environment.
1.1.4.2.1.1. Preparing Microsoft Azure for Submariner
To prepare the clusters on Microsoft Azure for deploying the Submariner component, complete the following steps:
- Tag a node as a gateway node by running the following command:

  ```
  kubectl label nodes <worker-node-name> "submariner.io/gateway=true" --overwrite
  ```
- Create a public IP address and assign it to the VM of the node that you tagged as the gateway node by running the following commands:

  ```
  az network public-ip create --name <public-ip-name> --resource-group <res-group> --sku Standard
  az network nic ip-config update --name <name> --nic-name <gw-vm-nic> --resource-group <res-group> --public-ip-address <public-ip-name>
  ```

  Replace res-group with the resource group of the cluster.

  Replace gw-vm-nic with the name of the network interface of the gateway VM.
- Create a network security group for the Submariner gateway by running the following command:

  ```
  az network nsg create --name <gw-nsg-name> --resource-group <res-group>
  ```
- Create network security group rules in your Azure environment to open the tunnel port (4500/UDP by default), the NAT discovery port (4490/UDP by default), and the metrics ports (8080/TCP and 8081/TCP by default) for Submariner. These rules must be created in both the inbound and outbound directions for each of the ports.
- Create network security group rules to allow communication by using the Encapsulated Security Payload (ESP) and Authentication Header (AH) protocols. These rules must be created in both the inbound and outbound directions for both of the protocols.
- Attach the security group to the gateway VM interface by entering the following command:

  ```
  az network nic update -g <res-group> -n <gw-vm-nic> --network-security-group <gw-nsg-name>
  ```
- Create network security group rules in your Azure environment to open the VXLAN port (4800/UDP by default) on the existing security groups (<resource-group-name>-nsg by default) that are associated with the worker and the main nodes.
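The az commands for the security group rules described in the previous steps are not shown above. A hedged sketch of what they might look like follows; the rule names and priorities are hypothetical, the ports are the defaults named in the steps, and the exact az CLI flags can vary by version:

```shell
# Hedged sketch only: rule names and priorities below are hypothetical.
# Nothing runs against Azure until you call the function with your
# resource group and gateway NSG name.
create_submariner_nsg_rules() {
  local res_group="$1" gw_nsg_name="$2" dir
  for dir in Inbound Outbound; do
    # Tunnel port (4500/UDP) and NAT discovery port (4490/UDP)
    az network nsg rule create --resource-group "$res_group" --nsg-name "$gw_nsg_name" \
      --name "submariner-ipsec-$dir" --priority 100 --direction "$dir" \
      --access Allow --protocol Udp --destination-port-ranges 4500 4490
    # Metrics ports (8080/TCP and 8081/TCP)
    az network nsg rule create --resource-group "$res_group" --nsg-name "$gw_nsg_name" \
      --name "submariner-metrics-$dir" --priority 101 --direction "$dir" \
      --access Allow --protocol Tcp --destination-port-ranges 8080 8081
    # ESP and AH protocols for non-NAT IPsec traffic
    az network nsg rule create --resource-group "$res_group" --nsg-name "$gw_nsg_name" \
      --name "submariner-esp-$dir" --priority 102 --direction "$dir" \
      --access Allow --protocol Esp
    az network nsg rule create --resource-group "$res_group" --nsg-name "$gw_nsg_name" \
      --name "submariner-ah-$dir" --priority 103 --direction "$dir" \
      --access Allow --protocol Ah
  done
  # The VXLAN port (4800/UDP) is opened with a similar rule on the existing
  # <resource-group-name>-nsg that covers the worker and main nodes.
}
```

Because priorities must be unique per direction within a network security group, adjust them if they collide with existing rules.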
Important: Ensure that a new gateway node is tagged as a gateway node when you reinstall Submariner. Reusing the previous gateway node after uninstalling Submariner results in the connections displaying an error state. This requirement only applies when you are using Red Hat Advanced Cluster Management for Kubernetes with manual cloud preparation steps.
1.1.4.2.1.2. Preparing VMware vSphere for Submariner
Submariner uses IPsec to establish the secure tunnels between the clusters on the gateway nodes. You can use the default port or specify a custom port. When you run this procedure without specifying an IPsec NATT port, the default port is automatically used for the communication. The default port is 4500/UDP.
Submariner uses virtual extensible LAN (VXLAN) to encapsulate traffic when it moves from the worker and master nodes to the gateway nodes. The VXLAN port cannot be customized, and is always port 4800/UDP.
Submariner uses 8080/TCP to send its metrics information among nodes in the cluster. This port cannot be customized.
The following ports must be opened by your VMware vSphere administrator before you can enable Submariner:
| Name | Default value | Customizable | 
|---|---|---|
| IPsec NATT | 4500/UDP | Yes | 
| VXLAN | 4800/UDP | No | 
| Submariner metrics | 8080/TCP | No | 
To prepare VMware vSphere clusters for deploying Submariner, complete the following steps:
- Ensure that the IPsec NATT, VXLAN, and metrics ports are open.
- Customize and apply YAML content that is similar to the following example:
  - Replace managed-cluster-namespace with the namespace of your managed cluster.

  Note: The name of the SubmarinerConfig must be submariner, as shown in the example.

  This configuration uses the default network address translation-traversal (NATT) port (4500/UDP) for your Submariner, and one worker node is labeled as the Submariner gateway on your vSphere cluster.

  Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port or specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, the default port of 4500/UDP is automatically used for the communication.
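The SubmarinerConfig YAML for this step is not reproduced above; a minimal sketch, assuming the submarineraddon.open-cluster-management.io/v1alpha1 API group, might resemble the following content:

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>  # replace with your managed cluster namespace
spec: {}                                  # empty spec: default NATT port, one gateway node
```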
1.1.4.2.1.3. Preparing bare metal for Submariner
To prepare bare metal clusters for deploying Submariner, complete the following steps:
- Ensure that the IPsec NATT, VXLAN, and metrics ports are open.
- Customize and apply YAML content that is similar to the following example:
  - Replace managed-cluster-namespace with the namespace of your managed cluster.

  Note: The name of the SubmarinerConfig must be submariner, as shown in the example.

  This configuration uses the default network address translation-traversal (NATT) port (4500/UDP) for your Submariner, and one worker node is labeled as the Submariner gateway on your bare metal cluster.

  Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port or specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, the default port of 4500/UDP is automatically used for the communication.
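The SubmarinerConfig YAML for the bare metal case is not reproduced above either; a minimal sketch, assuming the same submarineraddon.open-cluster-management.io/v1alpha1 API group, might resemble:

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>  # replace with your managed cluster namespace
spec: {}                                  # empty spec: default NATT port, one gateway node
```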
See Customizing Submariner deployments for information about the customization options.
1.1.4.2.2. Deploying Submariner with the ManagedClusterAddOn API
							To deploy Submariner by using the ManagedClusterAddOn API, you must first prepare the clusters on the hosting environment. See Preparing selected hosts to deploy Submariner for more details.
						
After preparing the clusters, complete the following steps:
- Create a ManagedClusterSet resource on the hub cluster by using the instructions provided in the Creating and managing ManagedClusterSets topic of the Managing your clusters documentation. Your entry for the ManagedClusterSet should resemble the following content:

  ```
  apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: ManagedClusterSet
  metadata:
    name: <managed-cluster-set-name>
  ```

  Replace managed-cluster-set-name with a name for the ManagedClusterSet that you are creating.

  Note: The maximum length of a Kubernetes namespace name is 63 characters, so the maximum length of <managed-cluster-set-name> is 56 characters. If <managed-cluster-set-name> is longer than 56 characters, it is truncated from the head.

  After the ManagedClusterSet is created, the submariner-addon creates a namespace called <managed-cluster-set-name>-broker and deploys the Submariner broker to it.
- Create the Broker configuration on the hub cluster in the <managed-cluster-set-name>-broker namespace by customizing and applying YAML content that is similar to the following example:
  - Replace managed-cluster-set-name with the name of the managed cluster set.
  - Set the value of globalnetEnabled to true if you want to enable Submariner Globalnet in the ManagedClusterSet.
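A sketch of the Broker configuration that is consistent with the replacements described in this step (the apiVersion value is an assumption and can vary by product version):

```yaml
apiVersion: submariner.io/v1alpha1               # assumed group/version
kind: Broker
metadata:
  name: submariner-broker
  namespace: <managed-cluster-set-name>-broker   # the broker namespace
spec:
  globalnetEnabled: false                        # set to true to enable Globalnet
```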
- Add one managed cluster to the ManagedClusterSet by entering the following command:

  ```
  oc label managedclusters <managed-cluster-name> "cluster.open-cluster-management.io/clusterset=<managed-cluster-set-name>" --overwrite
  ```

  Replace <managed-cluster-name> with the name of the managed cluster that you want to add to the ManagedClusterSet.

  Replace <managed-cluster-set-name> with the name of the ManagedClusterSet to which you want to add the managed cluster.
- Deploy Submariner on the managed cluster by customizing and applying YAML content that is similar to the following example:
  - Replace managed-cluster-name with the name of the managed cluster that you want to use with Submariner.

  The installNamespace field in the spec of the ManagedClusterAddOn is the namespace on the managed cluster where Submariner is installed. Currently, Submariner must be installed in the submariner-operator namespace.

  After the ManagedClusterAddOn is created, the submariner-addon deploys Submariner to the submariner-operator namespace on the managed cluster. You can view the deployment status of Submariner from the status of this ManagedClusterAddOn.

  Note: The name of the ManagedClusterAddOn must be submariner.
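A sketch of the ManagedClusterAddOn that is consistent with the notes in this step:

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner                    # the name must be submariner
  namespace: <managed-cluster-name>   # the managed cluster namespace on the hub
spec:
  installNamespace: submariner-operator   # currently the required namespace
```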
- Repeat steps three and four for all of the managed clusters that you want to enable Submariner on.
- After Submariner is deployed on the managed cluster, you can verify the Submariner deployment status by checking the status of the submariner ManagedClusterAddOn with the following command:

  ```
  oc -n <managed-cluster-name> get managedclusteraddons submariner -oyaml
  ```

  Replace managed-cluster-name with the name of the managed cluster.

  In the status of the Submariner ManagedClusterAddOn, three conditions indicate the deployment status of Submariner:
  - SubmarinerGatewayNodesLabeled indicates whether there are labeled Submariner gateway nodes on the managed cluster.
  - SubmarinerAgentDegraded indicates whether Submariner is successfully deployed on the managed cluster.
  - SubmarinerConnectionDegraded indicates how many connections are established on the managed cluster with Submariner.
1.1.4.2.3. Customizing Submariner deployments
You can customize some of the settings of your Submariner deployments, including your Network Address Translation-Traversal (NATT) port, number of gateway nodes, and instance type of your gateway nodes. These customizations are consistent across all of the providers.
1.1.4.2.3.1. NATT port
If you want to customize your NATT port, customize and apply the following YAML content for your provider environment:
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace managed-cluster-name with the name of your managed cluster.
  - AWS: Replace provider with aws. The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster.
  - GCP: Replace provider with gcp. The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
- Replace NATTPort with the NATT port that you want to use.

Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
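The YAML example that these replacements describe is not reproduced here; a hedged sketch follows, in which the IPSecNATTPort field name and the API group are assumptions that can vary by product version:

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>
spec:
  credentialsSecret:
    name: <managed-cluster-name>-<provider>-creds   # aws or gcp credential secret
  IPSecNATTPort: <NATTPort>                         # assumed field name
```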
							
To customize your NATT port in the VMware vSphere environment, customize and apply the following YAML content:
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace NATTPort with the NATT port that you want to use.

Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
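A hedged sketch of the vSphere variant, with no cloud credential secret (the IPSecNATTPort field name and the API group are assumptions that can vary by product version):

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>
spec:
  IPSecNATTPort: <NATTPort>               # assumed field name
```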
							
1.1.4.2.3.2. Number of gateway nodes
If you want to customize the number of your gateway nodes, customize and apply YAML content that is similar to the following example:
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace managed-cluster-name with the name of your managed cluster.
  - AWS: Replace provider with aws. The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster.
  - GCP: Replace provider with gcp. The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
- Replace gateways with the number of gateways that you want to use. If the value is greater than 1, the Submariner gateway automatically enables high availability.

Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
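A hedged sketch of what the gateway-count customization might look like (the gatewayConfig field layout and API group are assumptions that can vary by product version):

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>
spec:
  credentialsSecret:
    name: <managed-cluster-name>-<provider>-creds   # aws or gcp credential secret
  gatewayConfig:
    gateways: <gateways>                  # values greater than 1 enable gateway HA
```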
							
If you want to customize the number of your gateway nodes in the VMware vSphere environment, customize and apply YAML content that is similar to the following example:
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace gateways with the number of gateways that you want to use. If the value is greater than 1, the Submariner gateway automatically enables high availability.
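A hedged sketch of the vSphere variant, with no cloud credential secret (the gatewayConfig field layout and API group are assumptions that can vary by product version):

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>
spec:
  gatewayConfig:
    gateways: <gateways>                  # values greater than 1 enable gateway HA
```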
1.1.4.2.3.3. Instance types of gateway nodes
If you want to customize the instance type of your gateway node, customize and apply YAML content that is similar to the following example:
- Replace managed-cluster-namespace with the namespace of your managed cluster.
- Replace managed-cluster-name with the name of your managed cluster.
  - AWS: Replace provider with aws. The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster.
  - GCP: Replace provider with gcp. The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster.
- Replace instance-type with the AWS instance type that you want to use.

Note: The name of the SubmarinerConfig must be submariner, as shown in the example.
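A hedged sketch of what the instance-type customization might look like; the nesting of instanceType under a provider-specific block is an assumption here and can differ by product version, so check the SubmarinerConfig API for your release:

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>
spec:
  credentialsSecret:
    name: <managed-cluster-name>-aws-creds
  gatewayConfig:
    aws:
      instanceType: <instance-type>       # assumed nesting; the AWS EC2 instance type
```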
							
1.1.4.2.3.4. Cable driver
								The Submariner Gateway Engine component creates secure tunnels to other clusters. The cable driver component maintains the tunnels by using a pluggable architecture in the Gateway Engine component. You can use the Libreswan or VXLAN implementations for the cableDriver configuration of the cable engine component. See the following example:
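The example that the previous sentence refers to is not reproduced above; a minimal sketch, assuming the cableDriver field name and API group, might resemble:

```yaml
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1   # assumed group/version
kind: SubmarinerConfig
metadata:
  name: submariner                        # the name must be submariner
  namespace: <managed-cluster-namespace>
spec:
  cableDriver: vxlan                      # or libreswan, the default
```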
							
Best practice: Do not use the VXLAN cable driver on public networks, because the VXLAN cable driver is unencrypted. Use VXLAN only on private networks to avoid unnecessary double encryption; for example, some on-premises environments might handle the tunnel's encryption with a dedicated line-level hardware device.
1.1.4.3. Managing service discovery for Submariner
After Submariner is deployed into the same environment as your managed clusters, the routes are configured for secure IP routing between the pods and services across the clusters in the managed cluster set.
1.1.4.3.1. Enabling service discovery for Submariner
							To make a service from a cluster visible and discoverable to other clusters in the managed cluster set, you must create a ServiceExport object. After a service is exported with a ServiceExport object, you can access the service by the following format: <service>.<namespace>.svc.clusterset.local. If multiple clusters export a service with the same name, and from the same namespace, they are recognized by other clusters as a single logical service.
						
							This example uses the nginx service in the default namespace, but you can discover any Kubernetes ClusterIP service or headless service:
						
- Apply an instance of the nginx service on a managed cluster that is in the ManagedClusterSet by entering the following commands:

  ```
  oc -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
  oc -n default expose deployment nginx --port=8080
  ```
- Export the service by creating a ServiceExport entry with the subctl tool by entering a command that is similar to the following example:

  ```
  subctl export service --namespace <service-namespace> <service-name>
  ```

  Replace service-namespace with the name of the namespace where the service is located. In this example, it is default.

  Replace service-name with the name of the service that you are exporting. In this example, it is nginx.

  See export in the Submariner documentation for more information about other available flags.
- Run the following commands from a different managed cluster to confirm that it can access the nginx service:

  ```
  oc -n default run --generator=run-pod/v1 tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
  curl nginx.default.svc.clusterset.local:8080
  ```

  Run the curl command from within the shell that the first command opens.
							The nginx service discovery is now configured for Submariner.
						
1.1.4.3.2. Disabling service discovery for Submariner
							To disable a service from being exported to other clusters, enter a command similar to the following example for nginx:
						
subctl unexport service --namespace <service-namespace> <service-name>
							Replace service-namespace with the name of the namespace where the service is located.
						
							Replace service-name with the name of the service that you no longer want to export.
						
							See unexport in the Submariner documentation for more information about other available flags.
						
The service is no longer available for discovery by clusters.
1.1.4.4. Uninstalling Submariner
You can uninstall the Submariner components from your clusters by using the Red Hat Advanced Cluster Management for Kubernetes console or the command line. For Submariner versions earlier than 0.12, additional steps are needed to completely remove all data plane components. The Submariner uninstall process is idempotent, so you can repeat steps without any issues.
1.1.4.4.1. Console method
To uninstall Submariner from a cluster by using the Red Hat Advanced Cluster Management console, complete the following steps:
- From the Red Hat Advanced Cluster Management console navigation, select Infrastructure > Clusters, and select the Cluster sets tab.
- Select the cluster set that contains the clusters from which you want to remove the Submariner components.
- Select the Submariner Add-ons tab to view the clusters in the cluster set that have Submariner deployed.
- In the Actions menu for the cluster that you want to uninstall Submariner, select Uninstall Add-on.
- Repeat those steps for other clusters from which you are removing Submariner. - Tip: You can remove the Submariner add-on from multiple clusters in the same cluster set by selecting multiple clusters and clicking Actions. Select Uninstall Submariner add-ons. 
If the version of Submariner that you are removing is earlier than version 0.12, continue with Manual removal steps for early versions of Submariner. If the Submariner version is 0.12, or later, Submariner is removed.
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information.
1.1.4.4.2. Command-line method
To uninstall Submariner by using the command line, complete the following steps:
- Locate the clusters that contain the Submariner add-on by entering the following command:

  oc get managedclusteraddons -A | grep submariner
- Run a command similar to the following example to uninstall Submariner from the cluster:

  oc delete managedclusteraddon submariner -n <CLUSTER_NAME>

  Replace CLUSTER_NAME with the name of the cluster.
- Confirm that you want to remove all of the Submariner components from the cluster.
- Repeat the steps for each cluster to remove Submariner.
If the version of Submariner that you are removing is earlier than version 0.12, continue with Manual removal steps for early versions of Submariner. If the Submariner version is 0.12, or later, Submariner is removed.
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information.
1.1.4.4.3. Manual removal steps for early versions of Submariner
When uninstalling versions of Submariner that are earlier than version 0.12, complete steps 5-8 in the Manual Uninstall section in the Submariner documentation.
After completing those steps, your Submariner components are removed from the cluster.
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information.
1.1.4.4.4. Verifying Submariner resource removal
After uninstalling Submariner, verify that all of the Submariner resources are removed from your clusters. If they remain on your clusters, some resources continue to accrue charges from infrastructure providers. Ensure that you have no additional Submariner resources on your cluster by completing the following steps:
- Run the following command to list any Submariner resources that remain on the cluster:

  oc get cluster <CLUSTER_NAME> | grep submariner

  Replace CLUSTER_NAME with the name of your cluster.
- Remove any resources on the list by entering the following command:

  oc delete resource <RESOURCE_NAME> cluster <CLUSTER_NAME>

  Replace RESOURCE_NAME with the name of the Submariner resource that you want to remove.
- Repeat steps 1-2 for each of the clusters until your search does not identify any resources.
The Submariner resources are removed from your cluster.
1.2. VolSync persistent volume replication service (Technology Preview)
VolSync is a Kubernetes operator that enables asynchronous replication of persistent volumes within a cluster, or across clusters with storage types that are not otherwise compatible for replication. It uses the Container Storage Interface (CSI) to overcome the compatibility limitation. After deploying the VolSync operator in your environment, you can leverage it to create and maintain copies of your persistent data. VolSync can only replicate persistent volume claims on Red Hat OpenShift Container Platform clusters that are at version 4.8, or later.
1.2.1. Replicating persistent volumes with VolSync
You can use three methods to replicate persistent volumes with VolSync, which depend on the number of synchronization locations that you have. The Rsync method is used for this example. For information about the other methods and more information about Rsync, see Usage in the VolSync documentation.
Rsync replication is a commonly used, one-to-one replication of persistent volumes. This is used for replicating data to a remote site.
VolSync does not create its own namespace, so it is in the same namespace as other OpenShift Container Platform all-namespace operators. Any changes that you make to the operator settings for VolSync also affect the other operators in the same namespace, for example, if you change to manual approval for channel updates.
1.2.1.1. Prerequisites
Before installing VolSync on your clusters, you must have the following requirements:
- A configured Red Hat OpenShift Container Platform environment running a Red Hat Advanced Cluster Management version 2.4, or later, hub cluster
- At least two configured clusters that are managed by the same Red Hat Advanced Cluster Management hub cluster
- 
								Network connectivity between the clusters that you are configuring with VolSync; if the clusters are not on the same network, you can configure the Submariner multicluster networking and service discovery and use the ClusterIP value for ServiceType to network the clusters, or use a load balancer with the LoadBalancer value for ServiceType.
- The storage driver that you use for your source persistent volume must be CSI-compatible and able to support snapshots.
1.2.1.2. Installing VolSync on the managed clusters
To enable VolSync to replicate the persistent volume claim on one cluster to the persistent volume claim of another cluster, you must install VolSync on both the source and the target managed clusters.
						You can use either of two methods to install VolSync on two clusters in your environment. You can either add a label to each of the managed clusters in the hub cluster, or you can manually create and apply a ManagedClusterAddOn, as described in the following sections:
					
1.2.1.2.1. Installing VolSync using labels
To install VolSync on the managed clusters by adding a label, use either the console or the command-line interface:
- Complete the following steps from the Red Hat Advanced Cluster Management console: - 
											Select one of the managed clusters from the Clusters page in the hub cluster console to view its details.
- In the Labels field, add the following label:

  addons.open-cluster-management.io/volsync=true

  The VolSync service pod is installed on the managed cluster.
- Add the same label to the other managed cluster.
- Run the following command on each managed cluster to confirm that the VolSync operator is installed:

  oc get csv -n openshift-operators

  There is an operator listed for VolSync when it is installed.
 
- Complete the following steps from the command-line interface: - Start a command-line session on the hub cluster.
- Enter the following command to add the label to the first cluster:

  oc label managedcluster <managed-cluster-1> "addons.open-cluster-management.io/volsync"="true"

  Replace managed-cluster-1 with the name of one of your managed clusters.
- Enter the following command to add the label to the second cluster:

  oc label managedcluster <managed-cluster-2> "addons.open-cluster-management.io/volsync"="true"

  Replace managed-cluster-2 with the name of your other managed cluster.

  A ManagedClusterAddOn resource should be created automatically on your hub cluster in the namespace of each corresponding managed cluster.
 
1.2.1.2.2. Installing VolSync using a ManagedClusterAddOn
							To install VolSync on the managed cluster by adding a ManagedClusterAddOn manually, complete the following steps:
						
- On the hub cluster, create a YAML file called volsync-mcao.yaml that contains content that is similar to the following example:

  Replace managed-cluster-1-namespace with the namespace of one of your managed clusters. This namespace is the same as the name of the managed cluster.

  Note: The name must be volsync.
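A minimal sketch of the volsync-mcao.yaml content, assuming the addon.open-cluster-management.io/v1alpha1 API group that Red Hat Advanced Cluster Management add-on resources use:

```yaml
# Sketch of volsync-mcao.yaml; the apiVersion is an assumption based on the
# ManagedClusterAddOn API. The name must be volsync.
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: volsync
  namespace: <managed-cluster-1-namespace>
spec: {}
```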
- Apply the file to your configuration by entering a command similar to the following example:

  oc apply -f volsync-mcao.yaml
- Repeat the procedure for the other managed cluster. - A - ManagedClusterAddOnresource should be created automatically on your hub cluster in the namespace of each corresponding managed cluster.
1.2.1.3. Configuring Rsync replication across managed clusters
						For Rsync-based replication, configure custom resources on the source and destination clusters. The custom resources use the address value to connect the source to the destination, and the sshKeys to ensure that the transferred data is secure.
					
						Note: You must copy the values for address and sshKeys from the destination to the source, so configure the destination before you configure the source.
					
						This example provides the steps to configure an Rsync replication from a persistent volume claim on the source cluster in the source-ns namespace to a persistent volume claim on a destination cluster in the destination-ns namespace. You can replace those values with other values, if necessary.
					
- Configure your destination cluster.
  - Run the following command on the destination cluster to create the namespace:

    kubectl create ns <destination-ns>

    Replace destination-ns with a name for the namespace that will contain your destination persistent volume claim.
- Copy the following YAML content to create a new file called replication_destination.yaml:

  Note: The capacity value should match the capacity of the persistent volume claim that is being replicated.

  Replace destination with the name of your replication destination CR.

  Replace destination-ns with the name of the namespace where your destination is located.

  For this example, the ServiceType value of LoadBalancer is used. The load balancer service is created by the source cluster to enable your source managed cluster to transfer information to a different destination managed cluster. You can use ClusterIP as the service type if your source and destination are on the same cluster, or if you have the Submariner network service configured. Note the address and the name of the secret to refer to when you configure the source cluster.

  The storageClassName and volumeSnapshotClassName are optional parameters. Specify the values for your environment, particularly if you are using a storage class and volume snapshot class name that are different than the default values for your environment.
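The replication_destination.yaml file that the previous step references might resemble the following sketch, assuming the volsync.backube/v1alpha1 API group; the capacity, storage class, and volume snapshot class values are placeholders for your environment:

```yaml
# Sketch of replication_destination.yaml; class names and capacity are assumptions.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: <destination>
  namespace: <destination-ns>
spec:
  rsync:
    serviceType: LoadBalancer      # use ClusterIP with Submariner or on the same cluster
    copyMethod: Snapshot
    capacity: 2Gi                  # match the capacity of the replicated PVC
    accessModes: [ReadWriteOnce]
    storageClassName: gp2-csi              # optional; environment-specific
    volumeSnapshotClassName: csi-aws-vsc   # optional; environment-specific
```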
- Run the following command on the destination cluster to create the replicationdestination resource:

  kubectl create -n <destination-ns> -f replication_destination.yaml

  Replace destination-ns with the name of the namespace where your destination is located.

  After the replicationdestination resource is created, the following parameters and values are added to the resource:

  .status.rsync.address: IP address of the destination cluster that is used to enable the source and destination clusters to communicate.

  .status.rsync.sshKeys: Name of the SSH key file that enables secure data transfer from the source cluster to the destination cluster.
- Run the following command to copy the value of .status.rsync.address to use on the source cluster:

  ADDRESS=`kubectl get replicationdestination <destination> -n <destination-ns> --template={{.status.rsync.address}}`
  echo $ADDRESS

  Replace destination with the name of your replication destination CR.

  Replace destination-ns with the name of the namespace where your destination is located.

  The output should appear similar to the following output, which is for an Amazon Web Services environment:

  a831264645yhrjrjyer6f9e4a02eb2-5592c0b3d94dd376.elb.us-east-1.amazonaws.com
- Run the following command to copy the name of the secret and the contents of the secret that are provided as the value of .status.rsync.sshKeys:

  SSHKEYS=`kubectl get replicationdestination <destination> -n <destination-ns> --template={{.status.rsync.sshKeys}}`
  echo $SSHKEYS

  Replace destination with the name of your replication destination CR.

  Replace destination-ns with the name of the namespace where your destination is located.

  You will have to enter it on the source cluster when you configure the source. The output should be the name of your SSH keys secret file, which might resemble the following name:

  volsync-rsync-dst-src-destination-name
 
- Identify the source persistent volume claim that you want to replicate. - Note: The source persistent volume claim must be on a CSI storage class. 
- Create the ReplicationSource items.
  - Copy the following YAML content to create a new file called replication_source.yaml on the source cluster:

    Replace source with the name for your replication source CR. See step 3-vi of this procedure for instructions on how to replace this automatically.

    Replace source-ns with the namespace of the persistent volume claim where your source is located. See step 3-vi of this procedure for instructions on how to replace this automatically.

    Replace persistent_volume_claim with the name of your source persistent volume claim.

    Replace mysshkeys with the keys that you copied from the .status.rsync.sshKeys field of the ReplicationDestination when you configured it.

    Replace my.host.com with the host address that you copied from the .status.rsync.address field of the ReplicationDestination when you configured it.

    If your storage driver supports cloning, using Clone as the value for copyMethod might be a more streamlined process for the replication.

    storageClassName and volumeSnapshotClassName are optional parameters. If you are using a storage class and volume snapshot class name that are different than the defaults for your environment, specify those values.

    You can now set up the synchronization method of the persistent volume.
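The replication_source.yaml file that the previous step references might resemble the following sketch, again assuming the volsync.backube/v1alpha1 API group; the cron schedule and class names are placeholders:

```yaml
# Sketch of replication_source.yaml; schedule and class names are assumptions.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: <source>
  namespace: <source-ns>
spec:
  sourcePVC: <persistent_volume_claim>
  trigger:
    schedule: "*/3 * * * *"        # replicate every 3 minutes
  rsync:
    sshKeys: <mysshkeys>           # from .status.rsync.sshKeys on the destination
    address: <my.host.com>         # from .status.rsync.address on the destination
    copyMethod: Snapshot           # or Clone, if your storage driver supports it
    storageClassName: gp2-csi              # optional; environment-specific
    volumeSnapshotClassName: csi-aws-vsc   # optional; environment-specific
```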
- Copy the SSH secret from the destination cluster by entering the following command against the destination cluster:

  kubectl get secret -n <destination-ns> $SSHKEYS -o yaml > /tmp/secret.yaml

  Replace destination-ns with the namespace of the persistent volume claim where your destination is located.
- Open the secret file in the vi editor by entering the following command:

  vi /tmp/secret.yaml
- In the open secret file on the destination cluster, make the following changes: - 
												Change the namespace to the namespace of your source cluster. For this example, it is source-ns.
- 
												Remove the owner references (.metadata.ownerReferences).
 
- Create the secret file by entering the following command on the source cluster:

  kubectl create -f /tmp/secret.yaml
- On the source cluster, modify the replication_source.yaml file by replacing the values of the address and sshKeys in the ReplicationSource object with the values that you noted from the destination cluster by entering the following commands:

  sed -i "s/<my.host.com>/$ADDRESS/g" replication_source.yaml
  sed -i "s/<mysshkeys>/$SSHKEYS/g" replication_source.yaml
  kubectl create -n <source> -f replication_source.yaml

  Replace my.host.com with the host address that you copied from the .status.rsync.address field of the ReplicationDestination when you configured it.

  Replace mysshkeys with the keys that you copied from the .status.rsync.sshKeys field of the ReplicationDestination when you configured it.

  Replace source with the namespace of the persistent volume claim where your source is located.

  Note: You must create the file in the same namespace as the persistent volume claim that you want to replicate.
- Verify that the replication completed by running the following command on the ReplicationSource object:

  kubectl describe ReplicationSource -n <source-ns> <source>

  Replace source-ns with the namespace of the persistent volume claim where your source is located.

  Replace source with the name of your replication source CR.

  If the replication was successful, the Status section of the output includes a Last Sync Time entry. If the Last Sync Time has no time listed, then the replication is not complete.
 
You have a replica of your original persistent volume claim.
1.2.2. Converting a replicated image to a usable persistent volume claim
You might need to use the replicated image to recover data, or create a new instance of a persistent volume claim. The copy of the image must be converted to a persistent volume claim before it can be used. To convert a replicated image to a persistent volume claim, complete the following steps:
- When the replication is complete, identify the latest snapshot from the ReplicationDestination object by entering the following command:

  kubectl get replicationdestination <destination> -n <destination-ns> --template={{.status.latestImage.name}}

  Note the value of the latest snapshot for when you create your persistent volume claim.

  Replace destination with the name of your replication destination.

  Replace destination-ns with the namespace of your destination.
- Create a pvc.yaml file that resembles the following example:

  Replace pvc-name with a name for your new persistent volume claim.

  Replace destination-ns with the namespace where the persistent volume claim is located.

  Replace snapshot_to_replace with the VolumeSnapshot name that you found in the previous step.

  Best practice: You can update resources.requests.storage with a different value when the value is at least the same size as the initial source persistent volume claim.
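The pvc.yaml file might resemble the following sketch, which restores the latest VolumeSnapshot as the data source of the new persistent volume claim; the snapshot.storage.k8s.io apiGroup is the standard CSI snapshot API group:

```yaml
# Sketch of pvc.yaml; the storage size is a placeholder and must be at least
# the size of the initial source persistent volume claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc-name>
  namespace: <destination-ns>
spec:
  accessModes:
    - ReadWriteOnce
  dataSource:
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
    name: <snapshot_to_replace>
  resources:
    requests:
      storage: 2Gi
```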
- Validate that your persistent volume claim is running in the environment by entering the following command:

  kubectl get pvc -n <destination-ns>
Your original backup image is running as the main persistent volume claim.
1.2.3. Scheduling your synchronization
Select from three options when determining how you start your replications: always running, on a schedule, or manually. Scheduling your replications is an option that is often selected.
					The Schedule option runs replications at scheduled times. A schedule is defined by a cronspec, so the schedule can be configured as intervals of time or as specific times. The order of the schedule values are:
				
					"minute (0-59) hour (0-23) day-of-month (1-31) month (1-12) day-of-week (0-6)"
				
The replication starts when the scheduled time occurs. Your setting for this replication option might resemble the following content:
spec:
  trigger:
    schedule: "*/6 * * * *"

After enabling one of these methods, your synchronization schedule runs according to the method that you configured.
See the VolSync documentation for additional information and options.
1.3. Enabling klusterlet add-ons on clusters from the multicluster engine for Kubernetes operator
After you install Red Hat Advanced Cluster Management for Kubernetes and then create or import clusters with the multicluster engine for Kubernetes operator, you can enable the klusterlet add-ons for those managed clusters.
The klusterlet add-ons are not enabled by default if you created or imported clusters with the multicluster engine for Kubernetes operator. Additionally, klusterlet add-ons are not enabled by default after Red Hat Advanced Cluster Management is installed.
See the following available klusterlet add-ons:
- application-manager
- cert-policy-controller
- config-policy-controller
- iam-policy-controller
- governance-policy-framework
- search-collector
Complete the following steps to enable the klusterlet add-ons for the managed clusters after Red Hat Advanced Cluster Management is installed:
- Create a YAML file that is similar to the following KlusterletAddonConfig, with the spec value that represents the add-ons:

  Note: The policy-controller add-on is divided into two add-ons: the governance-policy-framework and the config-policy-controller. As a result, the policyController setting controls both the governance-policy-framework and the config-policy-controller managedClusterAddons.
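A sketch of such a KlusterletAddonConfig, assuming the agent.open-cluster-management.io/v1 API group; the metadata name and namespace are both the managed cluster name:

```yaml
# Sketch of klusterlet-addon-config.yaml; set enabled to false for any
# add-on that you do not want on the managed cluster.
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: <cluster_name>
  namespace: <cluster_name>
spec:
  applicationManager:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
```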
- 
						Save the file as klusterlet-addon-config.yaml.
- Apply the YAML by running the following command on the hub cluster:

  oc apply -f klusterlet-addon-config.yaml
- To verify whether the enabled managedClusterAddons are created after the KlusterletAddonConfig is created, run the following command:

  oc get managedclusteraddons -n <cluster namespace>