Chapter 6. Multi-cluster topologies
Multi-cluster topologies are useful for organizations whose distributed systems or environments require enhanced scalability, fault tolerance, and regional redundancy.
6.1. About multi-cluster mesh topologies
In a multi-cluster mesh topology, you install and manage a single Istio mesh across multiple OpenShift Container Platform clusters, enabling communication and service discovery between the services. Two factors determine the multi-cluster mesh topology: control plane topology and network topology. There are two options for each topology. Therefore, there are four possible multi-cluster mesh topology configurations.
- Multi-Primary Single Network: Combines the multi-primary control plane topology with the single network topology.
- Multi-Primary Multi-Network: Combines the multi-primary control plane topology with the multi-network topology.
- Primary-Remote Single Network: Combines the primary-remote control plane topology with the single network topology.
- Primary-Remote Multi-Network: Combines the primary-remote control plane topology with the multi-network topology.
6.1.1. Control plane topology models
A multi-cluster mesh must use one of the following control plane topologies:
- Multi-Primary: In this configuration, a control plane resides on every cluster. Each control plane observes the API servers in all of the other clusters for services and endpoints.
- Primary-Remote: In this configuration, the control plane resides only on one cluster, called the primary cluster. No control plane runs on any of the other clusters, called remote clusters. The control plane on the primary cluster discovers services and endpoints and configures the sidecar proxies for the workloads in all clusters.
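The two control plane models correspond to different shapes of the Istio resource used later in this chapter. The following sketch is illustrative only (names and the discovery address are placeholders); the full procedures appear in the installation sections below.

```yaml
# Multi-primary: every cluster runs its own control plane.
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
---
# Primary-remote: the primary sets externalIstiod, and each remote uses the
# remote profile and points at the primary's discovery address.
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  profile: remote
  values:
    global:
      remotePilotAddress: <discovery_address_of_primary>
```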
6.1.2. Network topology models
A multi-cluster mesh must use one of the following network topologies:
- Single Network: All clusters reside on the same network and there is direct connectivity between the services in all the clusters. There is no need to use gateways for communication between the services across cluster boundaries.
- Multi-Network: Clusters reside on different networks and there is no direct connectivity between services. Gateways must be used to enable communication across network boundaries.
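In the procedures that follow, the network a cluster belongs to is declared in two places: as a topology.istio.io/network label on the istio-system namespace and as the network value in the Istio resource. An illustrative fragment (the network name is a placeholder):

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    global:
      network: network1   # clusters on the other network use a different name
```

In the multi-network model, each cluster additionally runs an East-West gateway to carry cross-network traffic; the single network model needs no such gateway.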
6.2. Multi-cluster configuration overview
To configure a multi-cluster topology, you must perform the following actions:
- Install the OpenShift Service Mesh Operator for each cluster.
- Create or have access to root and intermediate certificates for each cluster.
- Apply the security certificates for each cluster.
- Install Istio for each cluster.
6.2.1. Creating certificates for a multi-cluster topology
Create the root and intermediate certificate authority (CA) certificates for two clusters.
Prerequisites
- You have OpenSSL installed locally.
Procedure
Create the root CA certificate:
Create a key for the root certificate by running the following command:
$ openssl genrsa -out root-key.pem 4096

Create an OpenSSL configuration file named root-ca.conf for the root CA certificate:

Example root certificate configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign

[ req_dn ]
O = Istio
CN = Root CA

Create the certificate signing request by running the following command:
$ openssl req -sha256 -new -key root-key.pem \
  -config root-ca.conf \
  -out root-cert.csr

Create a shared root certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -signkey root-key.pem \
  -extensions req_ext -extfile root-ca.conf \
  -in root-cert.csr \
  -out root-cert.pem
Create the intermediate CA certificate for the East cluster:
Create a directory named east by running the following command:

$ mkdir east

Create a key for the intermediate certificate for the East cluster by running the following command:
$ openssl genrsa -out east/ca-key.pem 4096

Create an OpenSSL configuration file named intermediate.conf in the east/ directory for the intermediate certificate of the East cluster. Copy the following example file and save it locally:

Example configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san

[ san ]
DNS.1 = istiod.istio-system.svc

[ req_dn ]
O = Istio
CN = Intermediate CA
L = east

Create a certificate signing request by running the following command:
$ openssl req -new -config east/intermediate.conf \
  -key east/ca-key.pem \
  -out east/cluster-ca.csr

Create the intermediate CA certificate for the East cluster by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile east/intermediate.conf \
  -in east/cluster-ca.csr \
  -out east/ca-cert.pem

Create a certificate chain from the intermediate and root CA certificates for the East cluster by running the following command:
$ cat east/ca-cert.pem root-cert.pem > east/cert-chain.pem && cp root-cert.pem east
Create the intermediate CA certificate for the West cluster:
Create a directory named west by running the following command:

$ mkdir west

Create a key for the intermediate certificate for the West cluster by running the following command:
$ openssl genrsa -out west/ca-key.pem 4096

Create an OpenSSL configuration file named intermediate.conf in the west/ directory for the intermediate certificate of the West cluster. Copy the following example file and save it locally:

Example configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san

[ san ]
DNS.1 = istiod.istio-system.svc

[ req_dn ]
O = Istio
CN = Intermediate CA
L = west

Create a certificate signing request by running the following command:
$ openssl req -new -config west/intermediate.conf \
  -key west/ca-key.pem \
  -out west/cluster-ca.csr

Create the certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile west/intermediate.conf \
  -in west/cluster-ca.csr \
  -out west/ca-cert.pem

Create the certificate chain by running the following command:
$ cat west/ca-cert.pem root-cert.pem > west/cert-chain.pem && cp root-cert.pem west
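Before applying the certificates to the clusters, you can sanity-check the CA layout locally. The following sketch is illustrative and not part of the official procedure: it rebuilds a throwaway root and intermediate CA in a temporary directory and confirms that the intermediate verifies against the root, mirroring the east/ layout above.

```shell
# Illustrative sanity check: throwaway root and intermediate CA.
# 2048-bit keys are used here for speed; the procedure above uses 4096.
set -eu
tmp=$(mktemp -d)
cd "$tmp"

# Self-signed throwaway root CA.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
  -subj "/O=Istio/CN=Root CA" \
  -keyout root-key.pem -out root-cert.pem 2>/dev/null

# Throwaway intermediate CSR, signed by the root.
openssl req -newkey rsa:2048 -nodes -sha256 \
  -subj "/O=Istio/L=east/CN=Intermediate CA" \
  -keyout ca-key.pem -out cluster-ca.csr 2>/dev/null
openssl x509 -req -sha256 -days 1 \
  -CA root-cert.pem -CAkey root-key.pem -CAcreateserial \
  -in cluster-ca.csr -out ca-cert.pem 2>/dev/null

# The intermediate must chain back to the root.
openssl verify -CAfile root-cert.pem ca-cert.pem   # prints: ca-cert.pem: OK

# Assemble the chain file the same way the procedure does.
cat ca-cert.pem root-cert.pem > cert-chain.pem
```

Running the same verify command against your real east/ and west/ files (for example, openssl verify -CAfile root-cert.pem east/ca-cert.pem) catches a mismatched root before the certificates reach the clusters.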
6.2.2. Applying certificates to a multi-cluster topology
Apply root and intermediate certificate authority (CA) certificates to the clusters in a multi-cluster topology.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have access to two OpenShift Container Platform clusters with external load balancer support.
- You have created the root CA certificate and intermediate CA certificates for each cluster, or they have been made available to you.
Procedure
Apply the certificates to the East cluster of the multi-cluster topology:
Log in to the East cluster by running the following command:

$ oc login -u <user> https://<east_cluster_api_server_url>

Set up the environment variable that contains the oc command context for the East cluster by running the following command:

$ export CTX_CLUSTER1=$(oc config current-context)

Create a project called istio-system by running the following command:

$ oc get project istio-system --context "${CTX_CLUSTER1}" || oc new-project istio-system --context "${CTX_CLUSTER1}"

Configure Istio to use network1 as the default network for the pods on the East cluster by running the following command:

$ oc --context "${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1

Create the CA certificates, certificate chain, and the private key for Istio on the East cluster by running the following command:

$ oc get secret -n istio-system --context "${CTX_CLUSTER1}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER1}" \
  --from-file=east/ca-cert.pem \
  --from-file=east/ca-key.pem \
  --from-file=east/root-cert.pem \
  --from-file=east/cert-chain.pem

Note: If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates reside in the east/ directory. If your certificates reside in a different directory, modify the syntax accordingly.
Apply the certificates to the West cluster of the multi-cluster topology:
Log in to the West cluster by running the following command:

$ oc login -u <user> https://<west_cluster_api_server_url>

Set up the environment variable that contains the oc command context for the West cluster by running the following command:

$ export CTX_CLUSTER2=$(oc config current-context)

Create a project called istio-system by running the following command:

$ oc get project istio-system --context "${CTX_CLUSTER2}" || oc new-project istio-system --context "${CTX_CLUSTER2}"

Configure Istio to use network2 as the default network for the pods on the West cluster by running the following command:

$ oc --context "${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2

Create the CA certificate secret for Istio on the West cluster by running the following command:

$ oc get secret -n istio-system --context "${CTX_CLUSTER2}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER2}" \
  --from-file=west/ca-cert.pem \
  --from-file=west/ca-key.pem \
  --from-file=west/root-cert.pem \
  --from-file=west/cert-chain.pem

Note: If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates reside in the west/ directory. If the certificates reside in a different directory, modify the syntax accordingly.
Next steps
Install Istio on all the clusters comprising the mesh topology.
6.3. Installing a multi-primary multi-network mesh
Install Istio in the multi-primary multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have created certificates for the multi-cluster mesh.
- You have applied certificates to the multi-cluster topology.
- You have created an Istio Container Network Interface (CNI) resource.
- You have installed istioctl.
In on-premise environments, such as those running on bare metal, OpenShift Container Platform clusters often do not include a native load-balancer capability. A service of type LoadBalancer, such as the istio-eastwestgateway service used for cross-cluster traffic, does not receive an external IP address unless the cluster provides a load balancer. You can use the MetalLB Operator to add load-balancer support on the following platforms:
- Bare metal
- VMware vSphere
- IBM Z® and IBM® LinuxONE
- IBM Z® and IBM® LinuxONE for Red Hat Enterprise Linux (RHEL) KVM
- IBM Power®
For more information, see MetalLB Operator.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:

$ export ISTIO_VERSION=1.24.3

Install Istio on the East cluster:
Create an Istio resource on the East cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF

Wait for the control plane to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml

Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Create an Istio resource on the West cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF

Wait for the control plane to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml

Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Create the istio-reader-service-account service account for the East cluster by running the following command:

$ oc --context="${CTX_CLUSTER1}" create serviceaccount istio-reader-service-account -n istio-system

Create the istio-reader-service-account service account for the West cluster by running the following command:

$ oc --context="${CTX_CLUSTER2}" create serviceaccount istio-reader-service-account -n istio-system

Add the cluster-reader role to the service account on the East cluster by running the following command:

$ oc --context="${CTX_CLUSTER1}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system

Add the cluster-reader role to the service account on the West cluster by running the following command:

$ oc --context="${CTX_CLUSTER2}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system

Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:

$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 \
  --create-service-account=false | \
  oc --context="${CTX_CLUSTER1}" apply -f -

Install a remote secret on the West cluster that provides access to the API server on the East cluster by running the following command:

$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 \
  --create-service-account=false | \
  oc --context="${CTX_CLUSTER2}" apply -f -
6.3.1. Verifying a multi-cluster topology
Deploy sample applications and verify traffic on a multi-cluster topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have installed istioctl on the laptop you will use to run these instructions.
- You have installed a multi-cluster topology.
Procedure
Deploy sample applications on the East cluster:
Create a sample application namespace on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" get project sample || oc --context="${CTX_CLUSTER1}" new-project sample

Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled

Deploy the helloworld application:

Create the helloworld service by running the following command:

$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample

Create the helloworld-v1 deployment by running the following command:

$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l version=v1 -n sample
Deploy the sleep application by running the following command:

$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample

Wait for the helloworld application on the East cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1

Wait for the sleep application on the East cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
Deploy the sample applications on the West cluster:
Create a sample application namespace on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" get project sample || oc --context="${CTX_CLUSTER2}" new-project sample

Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled

Deploy the helloworld application:

Create the helloworld service by running the following command:

$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample

Create the helloworld-v2 deployment by running the following command:

$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l version=v2 -n sample
Deploy the sleep application by running the following command:

$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample

Wait for the helloworld application on the West cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2

Wait for the sleep application on the West cluster to return the Ready status condition by running the following command:

$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep
Verifying traffic flows between clusters
From the East cluster, send 10 requests to the helloworld service by running the following command:

$ for i in {0..9}; do \
  oc --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
done

Verify that you see responses from both clusters; that is, both version 1 and version 2 of the service appear in the responses.
From the West cluster, send 10 requests to the helloworld service by running the following command:

$ for i in {0..9}; do \
  oc --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
done

Verify that you see responses from both clusters; that is, both version 1 and version 2 of the service appear in the responses.
6.3.2. Removing a multi-cluster topology from a development environment
After experimenting with the multi-cluster functionality in a development environment, remove the multi-cluster topology from all the clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed a multi-cluster topology.
Procedure
Remove Istio and the sample applications from the East cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER1}" delete istio/default ns/istio-system ns/sample ns/istio-cni

Remove Istio and the sample applications from the West cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER2}" delete istio/default ns/istio-system ns/sample ns/istio-cni
6.4. Installing a primary-remote multi-network mesh
Install Istio in a primary-remote multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have installed istioctl on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:

$ export ISTIO_VERSION=1.24.3

Install Istio on the East cluster:
Set the default network for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1

Create an Istio resource on the East cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      externalIstiod: true
EOF

Note: Setting externalIstiod: true enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.
Wait for the control plane to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml

Expose the control plane through the gateway so that services in the West cluster can access it by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-istiod.yaml

Expose the application services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Save the IP address of the East-West gateway running in the East cluster by running the following command:
$ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
  -n istio-system get svc istio-eastwestgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Create an Istio resource on the West cluster by running the following command:

$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF

Annotate the istio-system namespace in the West cluster so that it is managed by the control plane in the East cluster by running the following command:

$ oc --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1

Set the default network for the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2

Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 | \
  oc --context="${CTX_CLUSTER1}" apply -f -

Wait for the Istio resource to return the Ready status condition by running the following command:

$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m

Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml

Note: Because the West cluster is installed with a remote profile, exposing the application services on the East cluster exposes them on the East-West gateways of both clusters.
6.5. Installing Kiali in a multi-cluster mesh
Install Kiali in a multi-cluster mesh configuration on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the latest Kiali Operator on each cluster.
- You have installed Istio in a multi-cluster configuration on each cluster.
- You have installed istioctl on the laptop you will use to run these instructions.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have configured a metrics store so that Kiali can query metrics from all the clusters. Kiali queries metrics and traces from their respective endpoints.
Procedure
Install Kiali on the East cluster:
Create a YAML file named kiali.yaml that defines the Kiali deployment in the istio-system namespace.

Example configuration

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  external_services:
    prometheus:
      auth:
        type: bearer
        use_kiali_token: true
      thanos_proxy:
        enabled: true
      url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091

Note: The endpoint in this example uses OpenShift Monitoring to provide metrics. For more information, see "Configuring OpenShift Monitoring with Kiali".
Apply the YAML file on the East cluster by running the following command:

$ oc --context cluster1 apply -f kiali.yaml

Ensure that the Kiali custom resource (CR) is ready by running the following command:

$ oc wait --context cluster1 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m

Example output

kiali.kiali.io/kiali condition met

Display your Kiali route hostname by running the following command:

$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'

Example output

kiali-istio-system.apps.example.com

Create a YAML file named kiali-remote.yaml that contains a Kiali CR for the West cluster.
Example configuration

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  auth:
    openshift:
      redirect_uris:
        # Replace kiali-route-hostname with the hostname from the previous step.
        - "https://{kiali-route-hostname}/api/auth/callback/cluster2"
  deployment:
    remote_cluster_resources_only: true

The Kiali Operator creates the resources necessary for the Kiali server on the East cluster to connect to the West cluster. The Kiali server is not installed on the West cluster.
Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-remote.yaml

Ensure that the Kiali CR is ready by running the following command:
$ oc wait --context cluster2 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m

Create a remote cluster secret so that the Kiali installation in the East cluster can access the West cluster.
Create a long-lived API token bound to the kiali-service-account service account in the West cluster. Kiali uses this token to authenticate to the West cluster.
Example configuration
apiVersion: v1
kind: Secret
metadata:
  name: "kiali-service-account"
  namespace: "istio-system"
  annotations:
    kubernetes.io/service-account.name: "kiali-service-account"
type: kubernetes.io/service-account-token

Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-svc-account-token.yaml

Create a kubeconfig file and save it as a secret in the namespace on the East cluster where the Kiali deployment resides. To simplify this process, use the kiali-prepare-remote-cluster.sh script to generate the kubeconfig file by running the following curl command:

$ curl -L -o kiali-prepare-remote-cluster.sh https://raw.githubusercontent.com/kiali/kiali/master/hack/istio/multicluster/kiali-prepare-remote-cluster.sh

Make the script executable by running the following command:

$ chmod +x kiali-prepare-remote-cluster.sh

Execute the script so that it passes the East and West cluster contexts to the kubeconfig file by running the following command:

$ ./kiali-prepare-remote-cluster.sh --kiali-cluster-context cluster1 --remote-cluster-context cluster2 --view-only false --kiali-resource-name kiali-service-account --remote-cluster-namespace istio-system --process-kiali-secret true --process-remote-resources false --remote-cluster-name cluster2

Note: Use the --help option to display additional details about how to use the script.
Trigger the reconciliation loop so that the Kiali Operator registers the remote secret that the CR contains by running the following command:
$ oc --context cluster1 annotate kiali kiali -n istio-system --overwrite kiali.io/reconcile="$(date)"

Wait for the Kiali resource to become ready by running the following command:
$ oc --context cluster1 wait --for=condition=Successful --timeout=2m kialis/kiali -n istio-system

Wait for the Kiali server to become ready by running the following command:
$ oc --context cluster1 rollout status deployments/kiali -n istio-system

Log in to Kiali:
- When you first access Kiali, log in to the cluster that contains the Kiali deployment. In this example, access the East cluster.
- Display the hostname of the Kiali route by running the following command:

$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'

- Navigate to the Kiali URL in your browser: https://<your-kiali-route-hostname>.
Log in to the West cluster through Kiali.
To see other clusters in the Kiali UI, you must first log in to those clusters through Kiali.
- Click the user profile dropdown in the top right menu.
- Select Login to West. You are redirected to an OpenShift login page and prompted for credentials for the West cluster.
Verify that Kiali shows information from both clusters.
- Click Overview and verify that you can see namespaces from both clusters.
- Click Navigate and verify that you see both clusters on the mesh graph.