OpenShift Service Mesh 3.0 is a Technology Preview feature only
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. This documentation is a work in progress and might not be complete or fully tested.
Installing OpenShift Service Mesh
Chapter 1. Installing OpenShift Service Mesh
Installing OpenShift Service Mesh consists of three main tasks: installing the Red Hat OpenShift Service Mesh Operator, deploying Istio, and customizing the Istio configuration. You can then optionally install the sample bookinfo application to push data through the mesh and explore mesh functionality.
1.1. About deploying Istio using the Red Hat OpenShift Service Mesh Operator
To deploy Istio using the Red Hat OpenShift Service Mesh Operator, you must create an Istio resource. The Operator then creates an IstioRevision resource, which represents one revision of the Istio control plane. Based on the IstioRevision resource, the Operator deploys the Istio control plane, which includes the istiod Deployment resource and other resources.
The Red Hat OpenShift Service Mesh Operator might create additional instances of the IstioRevision resource, depending on the update strategy defined in the Istio resource.
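You can inspect this chain of resources on a running cluster; a quick check, assuming the control plane was deployed to the istio-system namespace:
$ oc get istio
$ oc get istiorevisions
$ oc get deployment istiod -n istio-system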
1.1.1. About update strategies
The update strategy affects how the update process is performed. For each mesh, you select one of two strategies:
- InPlace
- RevisionBased
The default strategy is the InPlace strategy. For more information, see the following documentation located in "Updating OpenShift Service Mesh":
- "About InPlace strategy"
- "About RevisionBased strategy"
1.2. Installing the Service Mesh Operator
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.14 or later.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
Procedure
- In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
- Search for the Red Hat OpenShift Service Mesh 3 Operator.
- Locate the Service Mesh Operator, and click to select it.
- When the prompt that discusses the community operator appears, click Continue.
- Verify the Service Mesh Operator is version 3.0, and click Install.
- Use the default installation settings presented, and click Install to continue.
- Click Operators → Installed Operators to verify that the Service Mesh Operator is installed. Succeeded should appear in the Status column.
1.2.1. About Service Mesh custom resource definitions
Installing the Red Hat OpenShift Service Mesh Operator also installs custom resource definitions (CRDs) that administrators can use to configure Istio for Service Mesh installations. The Operator Lifecycle Manager (OLM) installs two categories of CRDs: Sail Operator CRDs and Istio CRDs.
Sail Operator CRDs define custom resources for installing and maintaining the Istio components required to operate a service mesh. These custom resources belong to the sailoperator.io API group and include the Istio, IstioRevision, IstioCNI, and ZTunnel resource kinds. For more information on how to configure these resources, see the sailoperator.io API reference documentation.
Istio CRDs are associated with mesh configuration and service management. These CRDs define custom resources in several istio.io API groups, such as networking.istio.io and security.istio.io. The CRDs also include various resource kinds, such as AuthorizationPolicy, DestinationRule, and VirtualService, that administrators use to configure a service mesh.
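For instance, an AuthorizationPolicy belongs to the security.istio.io API group; a minimal sketch (the policy name and namespace are illustrative only):
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: bookinfo
spec: {} # an empty spec denies all traffic to workloads in the namespace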
1.3. About Istio deployment
To deploy Istio, you must create two resources: Istio and IstioCNI. The Istio resource deploys and configures the Istio control plane. The IstioCNI resource deploys and configures the Istio Container Network Interface (CNI) plugin. You should create these resources in separate projects; therefore, you must create two projects as part of the Istio deployment process.
You can use the OpenShift web console or the OpenShift CLI (oc) to create a project or a resource in your cluster.
In the OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the terms are essentially synonymous.
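For example, both of the following commands produce a namespace named istio-system; the first also applies the project annotations that the web console would add:
$ oc new-project istio-system
$ oc create namespace istio-system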
1.3.1. Creating the Istio project using the web console
The Service Mesh Operator deploys the Istio control plane to a project that you create. In this example, istio-system is the name of the project.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-system. The other fields provide supplementary information to the Istio resource definition and are optional.
- Click Create. The Service Mesh Operator deploys Istio to the project you specified.
1.3.2. Creating the Istio resource using the web console
Create the Istio resource that will contain the YAML configuration file for your Istio deployment. The Red Hat OpenShift Service Mesh Operator uses information in the YAML file to create an instance of the Istio control plane.
Prerequisites
- The Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-system in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click Istio.
- Click Create Istio.
- Select the istio-system project from the Namespace drop-down menu.
- Click Create. This action deploys the Istio control plane.
When State: Healthy appears in the Status column, Istio is successfully deployed.
1.3.3. Creating the IstioCNI project using the web console
The Service Mesh Operator deploys the Istio CNI plugin to a project that you create. In this example, istio-cni is the name of the project.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-cni. The other fields provide supplementary information and are optional.
- Click Create.
1.3.4. Creating the IstioCNI resource using the web console
Create an Istio Container Network Interface (CNI) resource, which contains the configuration file for the Istio CNI plugin. The Service Mesh Operator uses the configuration specified by this resource to deploy the CNI pod.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-cni in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click IstioCNI.
- Click Create IstioCNI.
- Ensure that the name is default.
- Click Create. This action deploys the Istio CNI plugin.
When State: Healthy appears in the Status column, the Istio CNI plugin is successfully deployed.
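If you prefer the CLI, the equivalent resource is small; a minimal sketch, assuming the istio-cni project already exists:
apiVersion: sailoperator.io/v1alpha1
kind: IstioCNI
metadata:
  name: default
spec:
  namespace: istio-cni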
1.4. Scoping the Service Mesh with discovery selectors
Service Mesh includes workloads that meet the following criteria:
- The control plane has discovered the workload.
- The workload has an Envoy proxy sidecar injected.
By default, the control plane discovers workloads in all namespaces across the cluster, with the following results:
- Each proxy instance receives configuration for all namespaces, including workloads not enrolled in the mesh.
- Any workload with the appropriate pod or namespace injection label receives a proxy sidecar.
In shared clusters, you might want to limit the scope of Service Mesh to only certain namespaces. This approach is especially useful if multiple service meshes run in the same cluster.
1.4.1. About discovery selectors
With discovery selectors, the mesh administrator can control which namespaces the control plane can access. By using a Kubernetes label selector, the administrator sets the criteria for the namespaces visible to the control plane, excluding any namespaces that do not match the specified criteria.
Istiod always opens a watch to OpenShift for all namespaces. However, when discovery selectors are configured, istiod discards objects from unselected namespaces early in processing, which minimizes cost.
The discoverySelectors field accepts an array of Kubernetes selectors, which apply to labels on namespaces. You can configure each selector for different use cases, as shown in the sketch after this list:
- Custom label names and values. For example, configure all namespaces with the label istio-discovery=enabled.
- A list of namespace labels by using set-based selectors with OR logic. For instance, configure namespaces with istio-discovery=enabled OR region=us-east1.
- Inclusion and exclusion of namespaces. For example, configure namespaces with istio-discovery=enabled AND the label app=helloworld.
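A minimal sketch of these selector patterns (the label names are the ones used in the examples above; separate selectors combine with OR, while multiple labels within one selector combine with AND):
meshConfig:
  discoverySelectors:
  - matchLabels:
      istio-discovery: enabled # custom label name and value
  - matchLabels:
      region: us-east1 # a second selector adds OR logic
  - matchLabels: # both labels must match: AND logic
      istio-discovery: enabled
      app: helloworld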
Discovery selectors are not a security boundary. Istiod continues to have access to all namespaces even when you have configured the discoverySelectors field.
1.4.2. Scoping a Service Mesh by using discovery selectors
If you know which namespaces to include in the Service Mesh, configure discoverySelectors during or after installation by adding the required selectors to the meshConfig.discoverySelectors section of the Istio resource. For example, configure Istio to discover only namespaces labeled istio-discovery=enabled.
Prerequisites
- The OpenShift Service Mesh Operator is installed.
- An Istio CNI resource is created.
Procedure
Add a label to the namespace containing the Istio control plane, for example, the istio-system namespace:
$ oc label namespace istio-system istio-discovery=enabled
Modify the Istio control plane resource to include a discoverySelectors section with the same label:
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
      - matchLabels:
          istio-discovery: enabled
Apply the Istio CR:
$ oc apply -f istio.yaml
- Ensure that all namespaces that will contain workloads that are to be part of the Service Mesh have both the label matched by discoverySelectors and, if needed, the appropriate Istio injection label.
Discovery selectors help restrict the scope of a single Service Mesh and are essential for limiting the control plane scope when you deploy multiple Istio control planes in a single cluster.
1.5. About the Bookinfo application
Installing the bookinfo example application consists of two main tasks: deploying the application and creating a gateway so the application is accessible outside the cluster.
You can use the bookinfo application to explore service mesh features. Using the bookinfo application, you can easily confirm that requests from a web browser pass through the mesh and reach the application.
The bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, lists book details (ISBN, number of pages, and other information), and book reviews.
The bookinfo application is exposed through the mesh, and the mesh configuration determines how the microservices comprising the application are used to serve requests. The review information comes from one of three services: reviews-v1, reviews-v2, or reviews-v3. If you deploy the bookinfo application without defining the reviews virtual service, then the mesh uses a round robin rule to route requests to a service.
By deploying the reviews virtual service, you can specify a different behavior. For example, you can specify that if a user logs into the bookinfo application, then the mesh routes requests to the reviews-v2 service, and the application displays reviews with black stars. If a user does not log into the bookinfo application, then the mesh routes requests to the reviews-v3 service, and the application displays reviews with red stars.
For more information, see Bookinfo Application in the upstream Istio documentation.
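A sketch of such a reviews virtual service, modeled on the upstream bookinfo samples (the end-user header value and the subset names are assumptions based on those samples, and the subsets must be defined in a matching DestinationRule):
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user: # set by productpage when a user is logged in
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2 # black stars
  - route:
    - destination:
        host: reviews
        subset: v3 # red stars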
1.5.1. Deploying the Bookinfo application
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.15 or later.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
Procedure
- In the OpenShift Container Platform web console, navigate to the Home → Projects page.
- Click Create Project.
- Enter bookinfo in the Project name field. The Display name and Description fields provide supplementary information and are not required.
- Click Create.
- Apply the Istio discovery selector and injection label to the bookinfo namespace by entering the following command:
$ oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled
Note: In this example, the name of the Istio resource is default. If the Istio resource name is different, you must set the istio.io/rev label to the name of the Istio resource instead of adding the istio-injection=enabled label.
- Apply the bookinfo YAML file to deploy the bookinfo application by entering the following command:
$ oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
Verification
Verify that the bookinfo service is available by running the following command:
$ oc get services -n bookinfo
Example output
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   172.30.137.21   <none>        9080/TCP   44s
productpage   ClusterIP   172.30.2.246    <none>        9080/TCP   43s
ratings       ClusterIP   172.30.33.85    <none>        9080/TCP   44s
reviews       ClusterIP   172.30.175.88   <none>        9080/TCP   44s
Verify that the bookinfo pods are available by running the following command:
$ oc get pods -n bookinfo
Example output
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-698d88b-km2jg         2/2     Running   0          66s
productpage-v1-675fc69cf-cvxv9   2/2     Running   0          65s
ratings-v1-6484c4d9bb-tpx7d      2/2     Running   0          65s
reviews-v1-5b5d6494f4-wsrwp     2/2     Running   0          65s
reviews-v2-5b667bcbf8-4lsfd      2/2     Running   0          65s
reviews-v3-5b9bd44f4-44hr6       2/2     Running   0          65s
When the READY column displays 2/2, the proxy sidecar was successfully injected. Confirm that Running appears in the STATUS column for each pod.
Verify that the bookinfo application is running by sending a request to the bookinfo page. Run the following command:
$ oc exec "$(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
1.5.2. About accessing the Bookinfo application using a gateway
The Red Hat OpenShift Service Mesh Operator does not deploy gateways. Gateways are not part of the control plane. As a security best practice, ingress and egress gateways should be deployed in a different namespace than the namespace that contains the control plane.
You can deploy gateways using either the Gateway API or the gateway injection method.
1.5.3. Accessing the Bookinfo application by using Istio gateway injection
Gateway injection uses the same mechanisms as Istio sidecar injection to create a gateway from a Deployment resource that is paired with a Service resource. The Service resource can be made accessible from outside an OpenShift Container Platform cluster.
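A condensed sketch of the pattern, modeled on the community ingress-gateway.yaml sample (the names, labels, and ports are assumptions based on that sample):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway # use the gateway injection template
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true" # opt this pod in to injection
    spec:
      containers:
      - name: istio-proxy
        image: auto # placeholder replaced at injection time
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
spec:
  type: ClusterIP
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080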
Prerequisites
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
- The Red Hat OpenShift Service Mesh Operator must be installed.
- The Istio resource must be deployed.
Procedure
Create the istio-ingressgateway deployment and service by running the following command:
$ oc apply -n bookinfo -f ingress-gateway.yaml
Note: This example uses a sample ingress-gateway.yaml file that is available in the Istio community repository.
Configure the bookinfo application to use the new gateway. Apply the gateway configuration by running the following command:
$ oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo
Note: To configure gateway injection with the bookinfo application, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.
Use a route to expose the gateway outside the cluster by running the following command:
$ oc expose service istio-ingressgateway -n bookinfo
Modify the YAML file to automatically scale the pod when ingress traffic increases.
Example configuration
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    istio: ingressgateway
    release: istio
  name: ingressgatewayhpa
  namespace: bookinfo
spec:
  maxReplicas: 5 1
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
1 This example sets the maximum replicas to 5 and the minimum replicas to 2. It also creates another replica when utilization reaches 80%.
Specify the minimum number of pods that must be running on the node.
Example configuration
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  labels:
    istio: ingressgateway
    release: istio
  name: ingressgatewaypdb
  namespace: bookinfo
spec:
  minAvailable: 1 1
  selector:
    matchLabels:
      istio: ingressgateway
1 This example ensures one replica is running if a pod gets restarted on a new node.
Obtain the gateway host name and the URL for the product page by running the following command:
$ HOST=$(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}')
Verify that the productpage is accessible from a web browser by running the following command:
$ echo productpage URL: http://$HOST/productpage
1.5.4. Accessing the Bookinfo application by using Gateway API
The Kubernetes Gateway API deploys a gateway by creating a Gateway resource. In OpenShift Container Platform 4.15 and later versions, the Gateway API CRDs are disabled by default; if you want your cluster to use the Gateway API, you must enable the CRDs.
Red Hat provides support for using the Kubernetes Gateway API with Red Hat OpenShift Service Mesh. Red Hat does not provide support for the Kubernetes Gateway API custom resource definitions (CRDs). In this procedure, the use of community Gateway API CRDs is shown for demonstration purposes only.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
- The Red Hat OpenShift Service Mesh Operator must be installed.
- The Istio resource must be deployed.
Procedure
Enable the Gateway API CRDs:
$ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; }
Create and configure a gateway using a Gateway resource and an HTTPRoute resource:
$ oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo
Note: To configure a gateway with the bookinfo application by using the Gateway API, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.
Ensure that the Gateway API service is ready and has an address allocated:
$ oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo
Retrieve the host, port, and gateway URL:
$ export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}')
$ export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
Obtain the gateway host name and the URL of the product page:
$ echo "http://${GATEWAY_URL}/productpage"
- Verify that the productpage is accessible from a web browser.
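You can also check from the command line; a quick check, assuming the gateway address is reachable from your terminal:
$ curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"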
1.6. Customizing Istio configuration
Use the values field of the Istio custom resource, which was created when the control plane was deployed, to customize Istio configuration using Istio's Helm configuration values. When you create this resource using the OpenShift Container Platform web console, it is pre-populated with configuration settings to enable Istio to run on OpenShift.
Procedure
- Click Operators → Installed Operators.
- Click Istio in the Provided APIs column.
- Click the Istio instance, named default, in the Name column.
- Click YAML to view the Istio configuration and make modifications.
For a list of available configuration options for the values field, refer to Istio's artifacthub chart documentation.
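For example, a hedged sketch that tunes the istiod container resources through the values field (the resource figures are illustrative only):
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.23.0
  values:
    pilot:
      resources: # Helm value passed through to the istiod Deployment
        requests:
          cpu: 500m
          memory: 2Gi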
Chapter 2. Sidecar injection
To use Istio’s capabilities within a service mesh, each pod needs a sidecar proxy, configured and managed by the Istio control plane.
2.1. About sidecar injection
Sidecar injection is enabled using labels at the namespace or pod level. These labels also indicate the specific control plane managing the proxy. When you apply a valid injection label to the pod template defined in a deployment, any new pods created by that deployment automatically receive a sidecar. Similarly, applying a pod injection label at the namespace level ensures any new pods in that namespace include a sidecar.
Injection happens at pod creation through an admission controller, so changes appear on individual pods rather than the deployment resources. To confirm sidecar injection, check the pod details directly using oc describe, where you can see the injected Istio proxy container.
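For example (the pod name is a placeholder):
$ oc describe pod <pod_name> -n bookinfo | grep istio-proxy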
2.2. Identifying the revision name
The label required to enable sidecar injection is determined by the specific control plane instance, known as a revision. Each revision is managed by an IstioRevision resource, which is automatically created and managed by the Istio resource, so manual creation or modification of IstioRevision resources is generally unnecessary.
The naming of an IstioRevision depends on the spec.updateStrategy.type setting in the Istio resource. If set to InPlace, the revision shares the Istio resource name. If set to RevisionBased, the revision name follows the format <Istio resource name>-v<version>. Typically, each Istio resource corresponds to a single IstioRevision. However, during a revision-based upgrade, multiple IstioRevision resources may exist, each representing a distinct control plane instance.
To see available revision names, use the following command:
$ oc get istiorevisions
You should see output similar to the following example:
Example output
NAME              READY   STATUS    IN USE   VERSION   AGE
my-mesh-v1-23-0   True    Healthy   False    v1.23.0   114s
2.2.1. Enabling sidecar injection with default revision
When the service mesh's IstioRevision name is default, you can use the following labels on a namespace or a pod to enable sidecar injection:
Resource | Label | Enabled value | Disabled value
---|---|---|---
Namespace | istio-injection | enabled | disabled
Pod | sidecar.istio.io/inject | "true" | "false"
You can also enable injection by setting the istio.io/rev: default label in the namespace or pod.
2.2.2. Enabling sidecar injection with other revisions
When the IstioRevision name is not default, use the specific IstioRevision name with the istio.io/rev label to map the pod to the desired control plane and enable sidecar injection. To enable injection, set the istio.io/rev: <revision_name> label on either the namespace or the pod; adding it to both is not required.
For example, with the revision shown above, the following labels would enable or disable sidecar injection:
Resource | Enabled label | Disabled label
---|---|---
Namespace | istio.io/rev=my-mesh-v1-23-0 | istio-injection=disabled
Pod | istio.io/rev=my-mesh-v1-23-0 | sidecar.istio.io/inject="false"
When both the istio-injection and istio.io/rev labels are applied, the istio-injection label takes precedence and treats the namespace as part of the default revision.
2.3. Enabling sidecar injection
To demonstrate different approaches for configuring sidecar injection, the following procedures use the Bookinfo application.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection has not been configured.
2.3.1. Enabling sidecar injection with namespace labels
In this example, all workloads within a namespace receive a sidecar proxy injection, making it the best approach when the majority of workloads in the namespace should be included in the mesh.
Procedure
Verify the revision name of the Istio control plane using the following command:
$ oc get istiorevisions
You should see output similar to the following example:
Example output
NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
default   Local   True    Healthy   False    v1.23.0   4m57s
Since the revision name is default, you can use the default injection labels without referencing the exact revision name.
Verify that workloads already running in the desired namespace show 1/1 containers as READY by using the following command. This confirms that the pods are running without sidecars.
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
To apply the injection label to the bookinfo namespace, run the following command at the CLI:
$ oc label namespace bookinfo istio-injection=enabled
Example output
namespace/bookinfo labeled
To ensure sidecar injection is applied, redeploy the existing workloads in the bookinfo namespace. Use the following command to perform a rolling update of all workloads:
$ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by checking that the new pods display 2/2 containers as READY, confirming successful sidecar injection, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s
2.3.2. Excluding a workload from the mesh
You can exclude specific workloads from sidecar injection within a namespace where injection is enabled for all workloads.
This example is for demonstration purposes only. The bookinfo application requires all workloads to be part of the mesh for proper functionality.
Procedure
- Open the application's Deployment resource in an editor. In this case, exclude the ratings-v1 service.
- Modify the spec.template.metadata.labels section of your Deployment resource to include the label sidecar.istio.io/inject: false to disable sidecar injection:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: 'false'
Note: Adding the label to the top-level labels section of the Deployment does not affect sidecar injection.
Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pods.
Verification
Verify that the updated pods do not contain a sidecar container and show 1/1 containers as Running by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6bc7b69776-7f6wz       2/2     Running   0          29m
productpage-v1-54f48db985-gd5q9   2/2     Running   0          29m
ratings-v1-5d645c985f-xsw7p       1/1     Running   0          7s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          29m
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          29m
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          29m
2.3.3. Enabling sidecar injection with pod labels
This approach allows you to include individual workloads for sidecar injection instead of applying it to all workloads within a namespace, making it ideal for scenarios where only a few workloads need to be part of a service mesh. This example also demonstrates the use of a revision label for sidecar injection, where the Istio resource is created with the name my-mesh. A unique Istio resource name is required when multiple Istio control planes are present in the same cluster or during a revision-based control plane upgrade.
Procedure
Verify the revision name of the Istio control plane by running the following command:
$ oc get istiorevisions
You should see output similar to the following example:
Example output
NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
my-mesh   Local   True    Healthy   False    v1.23.0   47s
Since the revision name is my-mesh, use the revision label istio.io/rev=my-mesh to enable sidecar injection.
Verify that workloads already running show 1/1 containers as READY, indicating that the pods are running without sidecars, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
- Open the application's Deployment resource in an editor. In this case, update the ratings-v1 service.
- Update the spec.template.metadata.labels section of your Deployment to include the appropriate pod injection or revision label. In this case, istio.io/rev: my-mesh:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        istio.io/rev: my-mesh
Note: Adding the label to the top-level labels section of the Deployment does not impact sidecar injection.
Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pods.
Verification
Verify that only the ratings-v1 pod now shows 2/2 containers as READY, indicating that the sidecar has been successfully injected, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-559cd49f6c-b89hw       1/1     Running   0          42m
productpage-v1-5f48cdcb85-8ppz5   1/1     Running   0          42m
ratings-v1-848bf79888-krdch       2/2     Running   0          9s
reviews-v1-6b7444ffbd-7m5wp       1/1     Running   0          42m
reviews-v2-67876d7b7-9nmw5        1/1     Running   0          42m
reviews-v3-84b55b667c-x5t8s       1/1     Running   0          42m
- Repeat for other workloads that you wish to include in the mesh.
Chapter 3. Running OpenShift Service Mesh 2.6 in the same cluster as OpenShift Service Mesh 3
If you are moving from Red Hat OpenShift Service Mesh v2.6, you can run OpenShift Service Mesh v2.6 side-by-side with OpenShift Service Mesh v3.0, in one cluster, without them interfering with each other.
3.1. Running OpenShift Service Mesh 2.6 and OpenShift Service Mesh 3 using multi-tenant deployment model
If you are moving from Red Hat OpenShift Service Mesh 2.6 with the default multi-tenant deployment model, you can run OpenShift Service Mesh 2.6 side-by-side with OpenShift Service Mesh 3.0, in one cluster, without them interfering with each other.
In OpenShift Service Mesh 2.6, you can check your deployment model from the ServiceMeshControlPlane resource under spec.mode:
Example ServiceMeshControlPlane
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  mode: MultiTenant
Prerequisites
- You are running OpenShift Container Platform 4.14 or later.
- You are running OpenShift Service Mesh 2.6.
Important: If you are not running OpenShift Service Mesh 2.6, you must upgrade to 2.6 before following this procedure. To upgrade to OpenShift Service Mesh 2.6, see "Upgrading Service Mesh 2.x".
Procedure
- Install the OpenShift Service Mesh 3 Operator.
- Create an IstioCNI resource in the istio-cni namespace.
- Create an Istio resource in a different namespace than the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace:
Example Istio resource with istio-system3
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: ossm3 1
spec:
  namespace: istio-system3 2
  values:
    meshConfig:
      discoverySelectors: 3
      - matchExpressions:
        - key: maistra.io/member-of
          operator: DoesNotExist
  updateStrategy:
    type: InPlace
  version: v1.23.0
1 Do not use default as the name.
2 Must be different from the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace.
3 To ignore OpenShift Service Mesh 2.6 namespaces, configure the discoverySelectors section as shown. All other namespaces will be part of the OpenShift Service Mesh 3.0 mesh.
Deploy your workloads and label the namespaces with the istio.io/rev=ossm3 label by running the following command:
$ oc label namespace <namespace-name> istio.io/rev=ossm3
Note: If you have changed spec.memberSelectors in the ServiceMeshMemberRoll resource in OpenShift Service Mesh 2.6, then use the istio-injection=enabled label for your OpenShift Service Mesh 3.0 workload namespaces.
Confirm that the application workloads are managed by their respective control planes by running the following command:
$ istioctl ps -i istio-system
Sample output for istio-system
$ istioctl ps -i istio-system
NAME                                        CLUSTER      CDS      LDS      EDS      RDS      ECDS       ISTIOD                                        VERSION
details-v1-7f46897b-88x4l.bookinfo          Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
mongodb-v1-6cf7dc9885-7nlmq.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
mysqldb-v1-7c4c44b9b4-22b57.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
productpage-v1-6f9c6589cb-l6rvg.bookinfo    Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
ratings-v1-559b64556-f6b4l.bookinfo         Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
ratings-v2-8ddc4d65c-bztrg.bookinfo         Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
ratings-v2-mysql-cbc957476-m5j7w.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
reviews-v1-847fb7c54d-7dwt7.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
reviews-v2-5c7ff5b77b-5bpc4.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
reviews-v3-5c5d764c9b-mk8vn.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
Sample output for istio-system3
$ istioctl ps -i istio-system3
NAME                                        CLUSTER      CDS              LDS              EDS              RDS              ECDS      ISTIOD                          VERSION
details-v1-57f6466bdc-5krth.bookinfo2       Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
productpage-v1-5b84ccdddf-f8d9t.bookinfo2   Kubernetes   SYNCED (2m39s)   SYNCED (2m39s)   SYNCED (2m34s)   SYNCED (2m39s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
ratings-v1-fb764cb99-kx2dr.bookinfo2        Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
reviews-v1-8bd5549cf-xqqmd.bookinfo2        Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
reviews-v2-7f7cc8bf5c-5rvln.bookinfo2       Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
reviews-v3-84f674b88c-ftcqg.bookinfo2       Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
3.2. Running Red Hat OpenShift Service Mesh 2.6 and Red Hat OpenShift Service Mesh 3 using cluster-wide deployment model
If you are moving from Red Hat OpenShift Service Mesh 2.6 in a cluster-wide deployment model, you can run OpenShift Service Mesh 2.6 side-by-side with OpenShift Service Mesh 3.0, in one cluster, without them interfering with each other.
In OpenShift Service Mesh 2.6, you can check your deployment model from the ServiceMeshControlPlane resource under spec.mode:
Example ServiceMeshControlPlane
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  mode: ClusterWide
To prevent conflicts with OpenShift Service Mesh 3.0 when using the OpenShift Service Mesh 2.6 cluster-wide deployment model, you need to configure the ServiceMeshControlPlane resource to restrict namespaces to only those belonging to OpenShift Service Mesh 2.6.
Prerequisites
- You are running OpenShift Container Platform 4.14 or later.
- You are running OpenShift Service Mesh 2.6.
Important: If you are not running OpenShift Service Mesh 2.6, you must upgrade to 2.6 before following this procedure. To upgrade to OpenShift Service Mesh 2.6, see "Upgrading Service Mesh 2.x".
Procedure
Configure discoverySelectors, and set the ENABLE_ENHANCED_RESOURCE_SCOPING environment variable on the pilot container to true in your OpenShift Service Mesh 2.6 ServiceMeshControlPlane custom resource (CR):
Example ServiceMeshControlPlane CR
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.6
  mode: ClusterWide
  meshConfig:
    discoverySelectors:
    - matchExpressions:
      - key: maistra.io/member-of
        operator: Exists
  runtime:
    components:
      pilot:
        container:
          env:
            ENABLE_ENHANCED_RESOURCE_SCOPING: 'true'
- Install the OpenShift Service Mesh 3 Operator.
- Create an IstioCNI resource in the istio-cni namespace.
- Create an Istio resource in a different namespace than the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace:
Example Istio resource with istio-system3
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: ossm3 1
spec:
  namespace: istio-system3 2
  values:
    meshConfig:
      discoverySelectors: 3
      - matchExpressions:
        - key: maistra.io/member-of
          operator: DoesNotExist
  updateStrategy:
    type: InPlace
  version: v1.23.0
1 Do not use default as the name.
2 Must be different from the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace.
3 To ignore OpenShift Service Mesh 2.6 namespaces, configure the discoverySelectors section as shown. All other namespaces will be part of the OpenShift Service Mesh 3.0 mesh.
Deploy your workloads and label the namespaces with the istio.io/rev=ossm3 label by running the following command:
$ oc label namespace <namespace-name> istio.io/rev=ossm3
Note: If you have changed spec.memberSelectors in the ServiceMeshMemberRoll resource in OpenShift Service Mesh 2.6, then use the istio-injection=enabled label for your OpenShift Service Mesh 3.0 workload namespaces.
Confirm that the application workloads are managed by their respective control planes by running the following command:
$ istioctl ps -i istio-system
Sample output for istio-system
$ istioctl ps -i istio-system
NAME                                        CLUSTER      CDS      LDS      EDS      RDS      ECDS       ISTIOD                                        VERSION
details-v1-7f46897b-88x4l.bookinfo          Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
mongodb-v1-6cf7dc9885-7nlmq.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
mysqldb-v1-7c4c44b9b4-22b57.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
productpage-v1-6f9c6589cb-l6rvg.bookinfo    Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
ratings-v1-559b64556-f6b4l.bookinfo         Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
ratings-v2-8ddc4d65c-bztrg.bookinfo         Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
ratings-v2-mysql-cbc957476-m5j7w.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
reviews-v1-847fb7c54d-7dwt7.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
reviews-v2-5c7ff5b77b-5bpc4.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
reviews-v3-5c5d764c9b-mk8vn.bookinfo        Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   NOT SENT   istiod-install-istio-system-bd58bdcd5-2htkf   1.20.8
Sample output for istio-system3
$ istioctl ps -i istio-system3
NAME                                        CLUSTER      CDS              LDS              EDS              RDS              ECDS      ISTIOD                          VERSION
details-v1-57f6466bdc-5krth.bookinfo2       Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
productpage-v1-5b84ccdddf-f8d9t.bookinfo2   Kubernetes   SYNCED (2m39s)   SYNCED (2m39s)   SYNCED (2m34s)   SYNCED (2m39s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
ratings-v1-fb764cb99-kx2dr.bookinfo2        Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
reviews-v1-8bd5549cf-xqqmd.bookinfo2        Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
reviews-v2-7f7cc8bf5c-5rvln.bookinfo2       Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
reviews-v3-84f674b88c-ftcqg.bookinfo2       Kubernetes   SYNCED (2m40s)   SYNCED (2m40s)   SYNCED (2m34s)   SYNCED (2m40s)   IGNORED   istiod-ossm3-5b46b6b8cb-gbjx6   1.23.0
Chapter 4. OpenShift Service Mesh and cert-manager
The cert-manager tool is a solution for X.509 certificate management on Kubernetes. It delivers a unified API to integrate applications with private or public key infrastructure (PKI), such as Vault, Google Cloud Certificate Authority Service, Let’s Encrypt, and other providers.
The cert-manager tool must be installed before you create and install your Istio resource.
The cert-manager tool ensures the certificates are valid and up-to-date by attempting to renew certificates at a configured time before they expire.
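Renewal timing is driven by fields on the Certificate resource; a small sketch with illustrative durations (the certificate name and issuer are assumptions):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
spec:
  secretName: example-cert-tls
  duration: 2160h # 90-day certificate lifetime
  renewBefore: 360h # begin renewal 15 days before expiry
  commonName: example.internal
  issuerRef:
    name: selfsigned
    kind: Issuer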
4.1. About integrating Service Mesh with cert-manager and istio-csr
The cert-manager tool provides integration with Istio through an external agent called istio-csr. The istio-csr agent handles certificate signing requests (CSR) from Istio proxies and the control plane in the following ways:
- Verifying the identity of the workload.
- Creating a CSR through cert-manager for the workload.
The cert-manager tool then submits the CSR to the configured CA Issuer, which signs the certificate.
Red Hat provides support for integrating with istio-csr and cert-manager. Red Hat does not provide direct support for the istio-csr or the community cert-manager components. The use of community cert-manager shown here is for demonstration purposes only.
Prerequisites
One of these versions of cert-manager:
- Red Hat cert-manager Operator 1.10 or later
- community cert-manager Operator 1.11 or later
- cert-manager 1.11 or later
- Red Hat OpenShift Service Mesh 3.0 or later
- An IstioCNI instance is running in the cluster
- Istio CLI (istioctl) tool is installed
- jq is installed
- Helm is installed
4.2. Installing cert-manager
You can integrate cert-manager with OpenShift Service Mesh by deploying istio-csr and then creating an Istio resource that uses the istio-csr agent to process workload and control plane certificate signing requests. This example creates a self-signed Issuer, but any other Issuer can be used instead.
You must install cert-manager before installing your Istio resource.
Procedure
Create the istio-system namespace by running the following command:
$ oc create namespace istio-system
Create the root issuer by creating an Issuer object in a YAML file. Create an Issuer object similar to the following example:
Example issuer.yaml file
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  isCA: true
  duration: 87600h # 10 years
  secretName: istio-ca
  commonName: istio-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  subject:
    organizations:
    - cluster.local
    - cert-manager
  issuerRef:
    name: selfsigned
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  ca:
    secretName: istio-ca
Create the objects by running the following command:
$ oc apply -f issuer.yaml
Wait for the istio-ca certificate to contain the "Ready" status condition by running the following command:
$ oc wait --for=condition=Ready certificates/istio-ca -n istio-system
Copy the istio-ca certificate to the cert-manager namespace so it can be used by istio-csr:
Copy the secret to a local file by running the following command:
$ oc get -n istio-system secret istio-ca -o jsonpath='{.data.tls\.crt}' | base64 -d > ca.pem
Create a secret from the local certificate file in the cert-manager namespace by running the following command:
$ oc create secret generic -n cert-manager istio-root-ca --from-file=ca.pem=ca.pem
Next steps
To install istio-csr, you must follow the istio-csr installation instructions for the type of update strategy you want. By default, spec.updateStrategy is set to InPlace when you create and install your Istio resource. You create and install your Istio resource after you install istio-csr.
4.2.1. Installing the istio-csr agent by using the in place update strategy
Istio resources use the in place update strategy by default. Follow this procedure if you plan to leave spec.updateStrategy as InPlace when you create and install your Istio resource.
Procedure
Add the Jetstack charts repository to your local Helm repository by running the following command:
$ helm repo add jetstack https://charts.jetstack.io --force-update
Install the istio-csr chart by running the following command:
$ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --install \
    --namespace cert-manager \
    --wait \
    --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
    --set "volumeMounts[0].name=root-ca" \
    --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
    --set "volumes[0].name=root-ca" \
    --set "volumes[0].secret.secretName=istio-root-ca" \
    --set "app.istio.namespace=istio-system"
4.2.2. Installing the istio-csr agent by using the revision based update strategy
Istio resources use the in place update strategy by default. Follow this procedure if you plan to change spec.updateStrategy to RevisionBased when you create and install your Istio resource.
Procedure
- Specify all the Istio revisions to your istio-csr deployment. See "istio-csr deployment".
Add the Jetstack charts to your local Helm repository by running the following command:
$ helm repo add jetstack https://charts.jetstack.io --force-update
Install the istio-csr chart with your revision name by running the following command:
$ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --install \
    --namespace cert-manager \
    --wait \
    --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
    --set "volumeMounts[0].name=root-ca" \
    --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
    --set "volumes[0].name=root-ca" \
    --set "volumes[0].secret.secretName=istio-root-ca" \
    --set "app.istio.namespace=istio-system" \
    --set "app.istio.revisions={default-v1-23-0}"
Note: Revision names use the following format: <istio-name>-v<major_version>-<minor_version>-<patch_version>. For example: default-v1-23-0.
4.2.3. Installing your Istio resource
After you have installed istio-csr by following the procedure for either an in place or revision based update strategy, you can install the Istio resource.
You need to disable Istio's built-in CA server and tell istiod to use the istio-csr CA server. The istio-csr CA server issues certificates for both istiod and user workloads.
Procedure
Create the Istio object as shown in the following example:
Example istio.yaml object
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v1.23.0
  namespace: istio-system
  values:
    global:
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
    pilot:
      env:
        ENABLE_CA_SERVER: "false"
      volumeMounts:
      - mountPath: /tmp/var/run/secrets/istiod/tls
        name: istio-csr-dns-cert
        readOnly: true
Note: If you installed your CSR agent with a revision based update strategy, then you need to add the following to your Istio object YAML:
kind: Istio
metadata:
  name: default
spec:
  updateStrategy:
    type: RevisionBased
Create the Istio resource by running the following command:
$ oc apply -f istio.yaml
Wait for the Istio object to become ready by running the following command:
$ oc wait --for=condition=Ready istios/default -n istio-system
4.2.4. Verifying cert-manager installation
You can use the sample httpbin service and sleep application to check communication between the workloads. You can also check the workload certificate of the proxy to verify that the cert-manager tool is installed correctly.
Procedure
Create the sample namespace by running the following command:
$ oc new-project sample
Find your active Istio revision by running the following command:
$ oc get istiorevisions
Add the injection label for your active revision to the sample namespace by running the following command:
$ oc label namespace sample istio.io/rev=<your-active-revision-name> --overwrite=true
Deploy the sample httpbin service by running the following command:
$ oc apply -n sample -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/httpbin/httpbin.yaml
Deploy the sample sleep application by running the following command:
$ oc apply -n sample -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml
Wait for both applications to become ready by running the following command:
$ oc rollout status -n sample deployment httpbin sleep
Verify that the sleep application can access the httpbin service by running the following command:
$ oc exec "$(oc get pod -l app=sleep -n sample \
    -o jsonpath={.items..metadata.name})" -c sleep -n sample -- \
    curl http://httpbin.sample:8000/ip -s -o /dev/null \
    -w "%{http_code}\n"
Example of a successful output
200
Print the workload certificate for the httpbin service and verify the output by running the following command:
$ istioctl proxy-config secret -n sample $(oc get pods -n sample -o jsonpath='{.items..metadata.name}' --selector app=httpbin) -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode | openssl x509 -text -noout
Example output
...
Issuer: O = cert-manager + O = cluster.local, CN = istio-ca
...
X509v3 Subject Alternative Name:
    URI:spiffe://cluster.local/ns/sample/sa/httpbin
4.3. Updating istio-csr agents with revision-based update strategies
If you deployed your Istio resource using the revision based update strategy, you must pass all revisions each time you update your control plane. You must perform the update in the following order:
- Update the istio-csr deployment with the new revision.
- Update the value of the Istio.spec.version parameter.
Example update for a RevisionBased control plane
In this example, the control plane is being updated from v1.23.0 to v1.23.1.
Update the istio-csr deployment with the new revision by running the following command:
$ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --wait \
    --reuse-values \
    --set "app.istio.revisions={<old_revision>,<new_revision>}"
where:
<old_revision>
- Specifies the old revision in the <istio-name>-v<major_version>-<minor_version>-<patch_version> format. For example: default-v1-23-0.
<new_revision>
- Specifies the new revision in the <istio-name>-v<major_version>-<minor_version>-<patch_version> format. For example: default-v1-23-1.
Update the istio.spec.version in the Istio object similar to the following example:
Example istio.yaml file
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: <new_revision> 1
1 Update to the new version prefixed with the letter v, such as v1.23.1.
Remove the old revision from your istio-csr deployment by running the following command:
$ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --install \
    --namespace cert-manager \
    --wait \
    --reuse-values \
    --set "app.istio.revisions={default-v1-23-1}"
Chapter 5. Multi-Cluster topologies
Multi-Cluster topologies are useful for organizations with distributed systems or environments seeking enhanced scalability, fault tolerance, and regional redundancy.
5.1. About multi-cluster mesh topologies
In a multi-cluster mesh topology, you install and manage a single Istio mesh across multiple OpenShift Container Platform clusters, enabling communication and service discovery between the services. Two factors determine the multi-cluster mesh topology: control plane topology and network topology. There are two options for each topology. Therefore, there are four possible multi-cluster mesh topology configurations.
- Multi-Primary Single Network: Combines the multi-primary control plane topology and the single network network topology models.
- Multi-Primary Multi-Network: Combines the multi-primary control plane topology and the multi-network network topology models.
- Primary-Remote Single Network: Combines the primary-remote control plane topology and the single network network topology models.
- Primary-Remote Multi-Network: Combines the primary-remote control plane topology and the multi-network network topology models.
5.1.1. Control plane topology models
A multi-cluster mesh must use one of the following control plane topologies:
- Multi-Primary: In this configuration, a control plane resides on every cluster. Each control plane observes the API servers in all of the other clusters for services and endpoints.
- Primary-Remote: In this configuration, the control plane resides only on one cluster, called the primary cluster. No control plane runs on any of the other clusters, called remote clusters. The control plane on the primary cluster discovers services and endpoints and configures the sidecar proxies for the workloads in all clusters.
5.1.2. Network topology models
A multi-cluster mesh must use one of the following network topologies:
- Single Network: All clusters reside on the same network and there is direct connectivity between the services in all the clusters. There is no need to use gateways for communication between the services across cluster boundaries.
- Multi-Network: Clusters reside on different networks and there is no direct connectivity between services. Gateways must be used to enable communication across network boundaries.
5.2. Multi-Cluster configuration overview
To configure a multi-cluster topology, you must perform the following actions:
- Install the OpenShift Service Mesh Operator for each cluster.
- Create or have access to root and intermediate certificates for each cluster.
- Apply the security certificates for each cluster.
- Install Istio for each cluster.
5.2.1. Creating certificates for a multi-cluster topology
Create the root and intermediate certificate authority (CA) certificates for two clusters.
Prerequisites
- You have OpenSSL installed locally.
Procedure
Create the root CA certificate:
Create a key for the root certificate by running the following command:
$ openssl genrsa -out root-key.pem 4096
Create an OpenSSL configuration file named root-ca.conf for the root CA certificate:
Example root certificate configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
[ req_dn ]
O = Istio
CN = Root CA
Create the certificate signing request by running the following command:
$ openssl req -sha256 -new -key root-key.pem \
  -config root-ca.conf \
  -out root-cert.csr
Create a shared root certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -signkey root-key.pem \
  -extensions req_ext -extfile root-ca.conf \
  -in root-cert.csr \
  -out root-cert.pem
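Optionally, inspect the generated root certificate to confirm its subject and validity period by running the following command:
$ openssl x509 -in root-cert.pem -noout -subject -dates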
Create the intermediate CA certificate for the East cluster:
Create a directory named east by running the following command:
$ mkdir east
Create a key for the intermediate certificate for the East cluster by running the following command:
$ openssl genrsa -out east/ca-key.pem 4096
Create an OpenSSL configuration file named intermediate.conf in the east/ directory for the intermediate certificate of the East cluster. Copy the following example file and save it locally:
Example configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san
[ san ]
DNS.1 = istiod.istio-system.svc
[ req_dn ]
O = Istio
CN = Intermediate CA
L = east
Create a certificate signing request by running the following command:
$ openssl req -new -config east/intermediate.conf \
  -key east/ca-key.pem \
  -out east/cluster-ca.csr
Create the intermediate CA certificate for the East cluster by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile east/intermediate.conf \
  -in east/cluster-ca.csr \
  -out east/ca-cert.pem
Create a certificate chain from the intermediate and root CA certificates for the East cluster by running the following command:
$ cat east/ca-cert.pem root-cert.pem > east/cert-chain.pem && cp root-cert.pem east
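To verify that the intermediate certificate chains back to the shared root, run the following command. The output should report east/ca-cert.pem: OK:
$ openssl verify -CAfile root-cert.pem east/ca-cert.pem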
Create the intermediate CA certificate for the West cluster:
Create a directory named west by running the following command:
$ mkdir west
Create a key for the intermediate certificate for the West cluster by running the following command:
$ openssl genrsa -out west/ca-key.pem 4096
Create an OpenSSL configuration file named intermediate.conf in the west/ directory for the intermediate certificate of the West cluster. Copy the following example file and save it locally:
Example configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san
[ san ]
DNS.1 = istiod.istio-system.svc
[ req_dn ]
O = Istio
CN = Intermediate CA
L = west
Create a certificate signing request by running the following command:
$ openssl req -new -config west/intermediate.conf \
  -key west/ca-key.pem \
  -out west/cluster-ca.csr
Create the certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile west/intermediate.conf \
  -in west/cluster-ca.csr \
  -out west/ca-cert.pem
Create the certificate chain by running the following command:
$ cat west/ca-cert.pem root-cert.pem > west/cert-chain.pem && cp root-cert.pem west
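As with the East cluster, you can verify that the West intermediate certificate chains back to the shared root by running the following command:
$ openssl verify -CAfile root-cert.pem west/ca-cert.pem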
5.2.2. Applying certificates to a multi-cluster topology
Apply root and intermediate certificate authority (CA) certificates to the clusters in a multi-cluster topology.
In this procedure, CLUSTER1
is the East cluster and CLUSTER2
is the West cluster.
Prerequisites
- You have access to two OpenShift Container Platform clusters with external load balancer support.
- You have created the root CA certificate and intermediate CA certificates for each cluster, or they have been made available to you.
Procedure
Apply the certificates to the East cluster of the multi-cluster topology:
Log in to the East cluster by running the following command:
$ oc login https://<east_cluster_api_server_url>
Set up the environment variable that contains the oc command context for the East cluster by running the following command:
$ export CTX_CLUSTER1=$(oc config current-context)
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CLUSTER1}" || oc new-project istio-system --context "${CTX_CLUSTER1}"
Configure Istio to use network1 as the default network for the pods on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Create the cacerts secret, which contains the CA certificate, certificate chain, root certificate, and private key, for Istio on the East cluster by running the following command:
$ oc get secret -n istio-system --context "${CTX_CLUSTER1}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER1}" \
  --from-file=east/ca-cert.pem \
  --from-file=east/ca-key.pem \
  --from-file=east/root-cert.pem \
  --from-file=east/cert-chain.pem
Note: If you followed the instructions in "Creating certificates for a multi-cluster topology", your certificates will reside in the east/ directory. If your certificates reside in a different directory, modify the syntax accordingly.
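You can confirm that the secret contains all four files by describing it. The Data section should list ca-cert.pem, ca-key.pem, root-cert.pem, and cert-chain.pem:
$ oc describe secret cacerts -n istio-system --context "${CTX_CLUSTER1}"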
Apply the certificates to the West cluster of the multi-cluster topology:
Log in to the West cluster by running the following command:
$ oc login https://<west_cluster_api_server_url>
Set up the environment variable that contains the oc command context for the West cluster by running the following command:
$ export CTX_CLUSTER2=$(oc config current-context)
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CLUSTER2}" || oc new-project istio-system --context "${CTX_CLUSTER2}"
Configure Istio to use network2 as the default network for the pods on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Create the CA certificate secret for Istio on the West cluster by running the following command:
$ oc get secret -n istio-system --context "${CTX_CLUSTER2}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER2}" \
  --from-file=west/ca-cert.pem \
  --from-file=west/ca-key.pem \
  --from-file=west/root-cert.pem \
  --from-file=west/cert-chain.pem
Note: If you followed the instructions in "Creating certificates for a multi-cluster topology", your certificates will reside in the west/ directory. If the certificates reside in a different directory, modify the syntax accordingly.
Next steps
Install Istio on all the clusters comprising the mesh topology.
5.3. Installing a multi-primary multi-network mesh
Install Istio in the multi-primary multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1
is the East cluster and CLUSTER2
is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:
$ export ISTIO_VERSION=1.24.1
Install Istio on the East cluster:
Create an Istio resource on the East cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF
Wait for the control plane to return the Ready status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
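While you wait, you can also watch the control plane pods start. An istiod pod should reach the Running state:
$ oc --context "${CTX_CLUSTER1}" get pods -n istio-system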
Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/east-west-gateway-net1.yaml
Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/expose-services.yaml
Install Istio on the West cluster:
Create an Istio resource on the West cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF
Wait for the control plane to return the Ready status condition by running the following command:
$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/east-west-gateway-net2.yaml
Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/expose-services.yaml
Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 | \
  oc --context="${CTX_CLUSTER1}" apply -f -
Install a remote secret on the West cluster that provides access to the API server on the East cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 | \
  oc --context="${CTX_CLUSTER2}" apply -f -
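To confirm that each control plane can reach the other cluster, you can typically list the registered remote clusters from each context. Both clusters should report the other as synced:
$ istioctl remote-clusters --context="${CTX_CLUSTER1}"
$ istioctl remote-clusters --context="${CTX_CLUSTER2}"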
5.3.1. Verifying a multi-cluster topology
Deploy sample applications and verify traffic on a multi-cluster topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1
is the East cluster and CLUSTER2
is the West cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the laptop you will use to run these instructions.
- You have installed a multi-cluster topology.
Procedure
Deploy sample applications on the East cluster:
Create a sample application namespace on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" get project sample || oc --context="${CTX_CLUSTER1}" new-project sample
Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample
Create the helloworld-v1 deployment by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
  -l version=v1 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/sleep/sleep.yaml -n sample
Wait for the helloworld deployment on the East cluster to report the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1
Wait for the sleep deployment on the East cluster to report the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
Deploy the sample applications on the West cluster:
Create a sample application namespace on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" get project sample || oc --context="${CTX_CLUSTER2}" new-project sample
Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample
Create the helloworld-v2 deployment by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
  -l version=v2 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/sleep/sleep.yaml -n sample
Wait for the helloworld deployment on the West cluster to report the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2
Wait for the sleep deployment on the West cluster to report the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep
Verify that traffic flows between the clusters:
From the East cluster, send 10 requests to the helloworld service by running the following command:
$ for i in {0..9}; do \
  oc --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
done
Verify that you see responses from both clusters; both version 1 and version 2 of the helloworld service should appear in the responses.
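The output should look similar to the following example, with the instance suffixes varying per pod:
Hello version: v1, instance: helloworld-v1-<pod_suffix>
Hello version: v2, instance: helloworld-v2-<pod_suffix>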
From the West cluster, send 10 requests to the helloworld service by running the following command:
$ for i in {0..9}; do \
  oc --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
done
Verify that you see responses from both clusters; both version 1 and version 2 of the helloworld service should appear in the responses.
5.3.2. Removing a multi-cluster topology from a development environment
After experimenting with the multi-cluster functionality in a development environment, remove the multi-cluster topology from all the clusters.
In this procedure, CLUSTER1
is the East cluster and CLUSTER2
is the West cluster.
Prerequisites
- You have installed a multi-cluster topology.
Procedure
Remove Istio and the sample applications from the East cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER1}" delete istio/default ns/istio-system ns/sample ns/istio-cni
Remove Istio and the sample applications from the West cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER2}" delete istio/default ns/istio-system ns/sample ns/istio-cni
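You can confirm the cleanup by checking that the namespaces no longer exist. Each query should eventually return a NotFound error:
$ oc --context="${CTX_CLUSTER1}" get ns istio-system sample istio-cni
$ oc --context="${CTX_CLUSTER2}" get ns istio-system sample istio-cni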
5.4. Installing a primary-remote multi-network mesh
Install Istio in a primary-remote multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1
is the East cluster and CLUSTER2
is the West cluster. The East cluster is the primary cluster and the West cluster is the remote cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:
$ export ISTIO_VERSION=1.24.1
Install Istio on the East cluster:
Set the default network for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Create an Istio resource on the East cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      externalIstiod: true 1
EOF
- 1
- This enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.
Wait for the control plane to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/east-west-gateway-net1.yaml
Expose the control plane through the gateway so that services in the West cluster can access the control plane by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/expose-istiod.yaml
Expose the application services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/expose-services.yaml
Install Istio on the West cluster:
Save the IP address of the East-West gateway running in the East cluster by running the following command:
$ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
  -n istio-system get svc istio-eastwestgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
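Note: On providers where the load balancer publishes a hostname rather than an IP address (for example, AWS), the ip field is empty. In that case, read the hostname field instead:
$ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
  -n istio-system get svc istio-eastwestgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')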
Create an Istio resource on the West cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
Annotate the istio-system namespace in the West cluster so that it is managed by the control plane in the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
Set the default network for the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 | \
  oc --context="${CTX_CLUSTER1}" apply -f -
Wait for the Istio resource to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/multicluster/east-west-gateway-net2.yaml
Note: Since the West cluster is installed with a remote profile, exposing the application services on the East cluster exposes them on the East-West gateways of both clusters.