Installing OpenShift Service Mesh
Chapter 1. Supported platforms and configurations
Before you can install Red Hat OpenShift Service Mesh 3.3.0, you must subscribe to OpenShift Container Platform and install OpenShift Container Platform in a supported configuration. If you do not have a subscription on your Red Hat account, contact your sales representative for more information.
1.1. Supported platforms for Service Mesh
The following platform versions support Service Mesh control plane version 3.3.0:
- Red Hat OpenShift Container Platform version 4.18 or later
- Red Hat OpenShift Dedicated version 4
- Azure Red Hat OpenShift (ARO) version 4
- Red Hat OpenShift Service on AWS (ROSA)
The Red Hat OpenShift Service Mesh Operator supports multiple versions of Istio.
If you are installing Red Hat OpenShift Service Mesh on a restricted network, follow the instructions for your chosen OpenShift Container Platform infrastructure.
For additional information about Red Hat OpenShift Service Mesh lifecycle and supported platforms, refer to the Support Policy.
1.2. Supported configurations for Service Mesh
Red Hat OpenShift Service Mesh supports the following configurations:
- This release of Red Hat OpenShift Service Mesh is supported on OpenShift Container Platform x86_64, IBM Z®, IBM Power®, and Advanced RISC Machine (ARM).
- Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster.
- Configurations that do not integrate external services such as virtual machines.
Red Hat OpenShift Service Mesh does not support the EnvoyFilter resource.
1.3. Supported network configurations for Service Mesh
You can use the following OpenShift networking plugins for the Red Hat OpenShift Service Mesh:
- OpenShift-SDN.
- OVN-Kubernetes. See About the OVN-Kubernetes network plugin for more information.
- Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information.
1.3.1. Supported configurations for Kiali
- The Kiali console is supported on Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers.
- The openshift authentication strategy is the only supported authentication configuration when Kiali is deployed with Red Hat OpenShift Service Mesh (OSSM). The openshift strategy controls access based on the user's role-based access control (RBAC) roles of the OpenShift Container Platform.
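As a sketch of where this strategy is set, assuming the Kiali Operator's Kiali resource and its usual API group, the configuration looks like the following. Verify the field names against the Kiali Operator version you have installed.

```yaml
# Minimal Kiali resource sketch; namespace and metadata.name are example values.
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  auth:
    strategy: openshift   # the only supported strategy with OSSM
```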
Chapter 2. Installing OpenShift Service Mesh
Installing OpenShift Service Mesh consists of three main tasks: installing the Red Hat OpenShift Service Mesh Operator, deploying Istio, and customizing the Istio configuration. Then, you can also choose to install the sample bookinfo application.
Before installing OpenShift Service Mesh 3, make sure that you are not running OpenShift Service Mesh 2 and OpenShift Service Mesh 3 in the same cluster; running both versions together causes conflicts unless they are configured correctly. To migrate from OpenShift Service Mesh 2, see Migrating from OpenShift Service Mesh 2.6.
2.1. About deploying Istio using the Red Hat OpenShift Service Mesh Operator
To deploy Istio using the Red Hat OpenShift Service Mesh Operator, you must create an Istio resource. The Operator then creates an IstioRevision resource, which represents one revision of the Istio control plane. Based on the IstioRevision resource, the Operator creates the istiod Deployment and the other resources that make up the control plane.
The Red Hat OpenShift Service Mesh Operator may create additional instances of the IstioRevision resource, depending on the update strategy that is defined in the Istio resource.
2.1.1. About Istio control plane update strategies
The update strategy affects how the update process is performed. The spec.updateStrategy field of the Istio resource defines the strategy, and you trigger an update by changing the spec.version field of the Istio resource. You can set the version to an alias such as vX.Y-latest to track the latest patch release of a minor version. There are two update strategies:
- InPlace - The existing control plane is replaced by the new version, and workloads connect to the new control plane as soon as it is ready. InPlace is the default strategy.
- RevisionBased - A new control plane revision is created for the new version, and workloads move to the new revision when their revision labels are updated and the workloads are restarted.
If you use ambient mode, you must update the Istio Container Network Interface (CNI) and ZTunnel components separately from the control plane.
The InPlace strategy runs only one control plane revision at a time, while the RevisionBased strategy runs the old and new revisions side by side, enabling a gradual migration of workloads.
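The strategy choice described above is expressed in the Istio resource itself. The following sketch shows where spec.updateStrategy and spec.version are set; the version alias is an example value, not a recommendation.

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24-latest     # example alias; tracks the latest 1.24 patch release
  updateStrategy:
    type: RevisionBased     # or InPlace (the default)
```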
2.2. Installing the Service Mesh Operator
For clusters without OpenShift Service Mesh instances, install the Service Mesh Operator. OpenShift Service Mesh operates cluster-wide and needs a scope configuration to prevent conflicts between Istio control planes. For clusters with OpenShift Service Mesh 3 or later, see "Deploying multiple service meshes on a single cluster".
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.14 or later.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
Procedure
- In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
- Search for the Red Hat OpenShift Service Mesh 3 Operator.
- Locate the Service Mesh Operator, and click to select it.
- When the prompt that discusses the community operator opens, click Continue.
- Click Install.
On the Install Operator page, perform the following steps:
- Select All namespaces on the cluster (default) as the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
- Select Automatic as the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) handles the future upgrades to the Operator automatically. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
- Select an Update Channel.
  - Choose the stable channel to install the latest stable version of the Red Hat OpenShift Service Mesh 3 Operator. It is the default channel for installing the Operator.
  - To install a specific version of the Red Hat OpenShift Service Mesh 3 Operator, choose the corresponding stable-<version> channel. For example, to install the Red Hat OpenShift Service Mesh Operator version 3.0.x, use the stable-3.0 channel.
- Click Install to install the Operator.
Verification
- Click Operators → Installed Operators to verify that the Service Mesh Operator is installed. Succeeded should show in the Status column.
2.2.1. About Service Mesh custom resource definitions
Installing the Red Hat OpenShift Service Mesh Operator also installs custom resource definitions (CRD) that administrators can use to configure Istio for Service Mesh installations. The Operator Lifecycle Manager (OLM) installs two categories of CRDs: Sail Operator CRDs and Istio CRDs.
Sail Operator CRDs define custom resources for installing and maintaining the Istio components required to operate a service mesh. These custom resources belong to the sailoperator.io API group and include the Istio, IstioRevision, IstioCNI, and ZTunnel resources.
Istio CRDs are associated with mesh configuration and service management. These CRDs define custom resources in several istio.io API groups, such as networking.istio.io and security.istio.io, including resources such as AuthorizationPolicy, DestinationRule, and VirtualService.
2.3. About Istio deployment
To deploy Istio, you must create two resources: Istio and IstioCNI. The Istio resource deploys and configures the Istio control plane, while the IstioCNI resource deploys and configures the Istio Container Network Interface (CNI) plugin.
You can use the OpenShift web console or the OpenShift CLI (oc) to create a project or a resource in your cluster.
In OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the two terms are synonymous for most purposes.
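Because a project is backed by a namespace, you can also create one declaratively with a manifest. The following is a minimal sketch; the display-name annotation is optional and shown only as an example.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  annotations:
    openshift.io/display-name: "Istio System"   # optional project display name
```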
2.3.1. Creating the Istio project using the web console
The Service Mesh Operator deploys the Istio control plane to a project that you create. In this example, istio-system is the name of the project.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-system. The other fields provide supplementary information to the Istio resource definition and are optional.
- Click Create. The Service Mesh Operator deploys Istio to the project you specified.
2.3.2. Creating the Istio resource using the web console
Create the Istio resource that will contain the YAML configuration file for your Istio deployment. The Red Hat OpenShift Service Mesh Operator uses information in the YAML file to create an instance of the Istio control plane.
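If you want to review the YAML before creating it in the console, a minimal Istio resource looks like the following sketch. The version shown is an example value.

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system   # the project created in the previous step
  version: v1.24.6          # example version
```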
Prerequisites
- The Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-system in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click Istio.
- Click Create Istio.
- Select the istio-system project from the Namespace drop-down menu.
- Click Create. This action deploys the Istio control plane.
When State: Healthy appears in the Status column, Istio is successfully deployed.
2.3.3. Creating the IstioCNI project using the web console
The Service Mesh Operator deploys the Istio CNI plugin to a project that you create. In this example, istio-cni is the name of the project.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-cni. The other fields provide supplementary information and are optional.
- Click Create.
2.3.4. Creating the IstioCNI resource using the web console
Create an Istio Container Network Interface (CNI) resource, which contains the configuration file for the Istio CNI plugin. The Service Mesh Operator uses the configuration specified by this resource to deploy the CNI pod.
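A minimal IstioCNI resource, following the same API group used for the Istio resource elsewhere in this chapter, looks like this sketch:

```yaml
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default        # the resource must be named default
spec:
  namespace: istio-cni
```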
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-cni in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click IstioCNI.
- Click Create IstioCNI.
- Ensure that the name is default.
- Click Create. This action deploys the Istio CNI plugin.
When State: Healthy appears in the Status column, the Istio CNI plugin is successfully deployed.
2.4. Scoping the Service Mesh with discovery selectors
Service Mesh includes workloads that meet the following criteria:
- The control plane has discovered the workload.
- The workload has an Envoy proxy sidecar injected.
By default, the control plane discovers workloads in all namespaces across the cluster, with the following results:
- Each proxy instance receives configuration for all namespaces, including workloads not enrolled in the mesh.
- Any workload with the appropriate pod or namespace injection label receives a proxy sidecar.
In shared clusters, you might want to limit the scope of Service Mesh to only certain namespaces. This approach is especially useful if multiple service meshes run in the same cluster.
2.4.1. About discovery selectors
With discovery selectors, the mesh administrator can control which namespaces the control plane can access. By using a Kubernetes label selector, the administrator sets the criteria for the namespaces visible to the control plane, excluding any namespaces that do not match the specified criteria.
Istiod always opens a watch to OpenShift for all namespaces. However, discovery selectors cause objects in unselected namespaces to be ignored early in processing, which minimizes costs.
The discoverySelectors field accepts an array of Kubernetes label selectors. The selectors can match:
- Custom label names and values. For example, configure all namespaces with the label istio-discovery=enabled.
- A list of namespace labels by using set-based selectors with OR logic. For instance, configure namespaces with istio-discovery=enabled OR region=us-east1.
- Inclusion and exclusion of namespaces. For example, configure namespaces with istio-discovery=enabled AND the label app=helloworld.
Discovery selectors are not a security boundary. Istiod continues to have access to all namespaces even when you have configured the discoverySelectors field.
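As a sketch of the selector semantics described above: entries in the discoverySelectors array are combined with OR logic, while conditions inside a single selector are combined with AND logic. The label keys and values below are illustrative examples.

```yaml
values:
  meshConfig:
    discoverySelectors:
    # Selector 1: namespaces labeled istio-discovery=enabled ...
    - matchLabels:
        istio-discovery: enabled
    # ... OR Selector 2: namespaces whose region label is us-east1
    - matchExpressions:
      - key: region
        operator: In
        values:
        - us-east1
```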
2.4.2. Scoping a Service Mesh by using discovery selectors
If you know which namespaces to include in the Service Mesh, configure discoverySelectors by adding a meshConfig.discoverySelectors section to the Istio resource that you create during installation. In the following example, all namespaces that are part of the mesh carry the label istio-discovery=enabled.
Prerequisites
- The OpenShift Service Mesh operator is installed.
- An Istio CNI resource is created.
Procedure
- Add a label to the namespace containing the Istio control plane, for example, the istio-system system namespace:
  $ oc label namespace istio-system istio-discovery=enabled
- Modify the Istio control plane resource to include a discoverySelectors section with the same label:
  kind: Istio
  apiVersion: sailoperator.io/v1
  metadata:
    name: default
  spec:
    namespace: istio-system
    values:
      meshConfig:
        discoverySelectors:
          - matchLabels:
              istio-discovery: enabled
- Apply the Istio CR:
  $ oc apply -f istio.yaml
- Ensure that all namespaces that will contain workloads that are to be part of the Service Mesh have both the discoverySelector label and, if needed, the appropriate Istio injection label.
Discovery selectors help restrict the scope of a single Service Mesh and are essential for limiting the control plane scope when you deploy multiple Istio control planes in a single cluster.
2.5. About the Bookinfo application
Installing the bookinfo example application consists of two main tasks: deploying the application and creating a gateway so that the application is accessible outside the cluster.
You can use the bookinfo application to explore service mesh features. The bookinfo application displays information about a book, similar to a single catalog entry of an online book store.
The bookinfo application consists of four separate microservices: productpage, details, reviews, and ratings. The reviews microservice has three versions: reviews-v1 does not call the ratings service; reviews-v2 calls the ratings service and displays each rating as one to five black stars; reviews-v3 calls the ratings service and displays each rating as one to five red stars.
By deploying all three versions of the reviews microservice, the bookinfo application demonstrates routing traffic between multiple versions of a service.
For more information, see Bookinfo Application in the upstream Istio documentation.
2.5.1. Deploying the Bookinfo application
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.15 or later.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
cluster-admin - You have access to the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
Procedure
- In the OpenShift Container Platform web console, navigate to the Home → Projects page.
- Click Create Project.
- Enter bookinfo in the Project name field. The Display name and Description fields provide supplementary information and are not required.
- Click Create.
- Apply the Istio discovery selector and injection label to the bookinfo namespace by entering the following command:
  $ oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled
  Note: In this example, the name of the Istio resource is default. If the Istio resource name is different, you must set the istio.io/rev label to the name of the Istio resource instead of adding the istio-injection=enabled label.
- Apply the bookinfo YAML file to deploy the bookinfo application by entering the following command:
  $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
Verification
- Verify that the bookinfo services are available by running the following command:
  $ oc get services -n bookinfo
  Example output
  NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
  details       ClusterIP   172.30.137.21   <none>        9080/TCP   44s
  productpage   ClusterIP   172.30.2.246    <none>        9080/TCP   43s
  ratings       ClusterIP   172.30.33.85    <none>        9080/TCP   44s
  reviews       ClusterIP   172.30.175.88   <none>        9080/TCP   44s
- Verify that the bookinfo pods are available by running the following command:
  $ oc get pods -n bookinfo
  Example output
  NAME                             READY   STATUS    RESTARTS   AGE
  details-v1-698d88b-km2jg         2/2     Running   0          66s
  productpage-v1-675fc69cf-cvxv9   2/2     Running   0          65s
  ratings-v1-6484c4d9bb-tpx7d      2/2     Running   0          65s
  reviews-v1-5b5d6494f4-wsrwp      2/2     Running   0          65s
  reviews-v2-5b667bcbf8-4lsfd      2/2     Running   0          65s
  reviews-v3-5b9bd44f4-44hr6       2/2     Running   0          65s
  When the READY column displays 2/2, the proxy sidecar was successfully injected. Confirm that Running appears in the STATUS column for each pod.
- Verify that the bookinfo application is running by sending a request to the bookinfo page. Run the following command:
  $ oc exec "$(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
2.5.2. About accessing the Bookinfo application using a gateway
The Red Hat OpenShift Service Mesh Operator does not deploy gateways. Gateways are not part of the control plane. As a security best practice, deploy ingress and egress gateways in a different namespace than the namespace that contains the control plane.
You can deploy gateways using either the Gateway API or the gateway injection method.
2.5.3. Accessing the Bookinfo application by using Istio gateway injection
Gateway injection uses the same mechanisms as Istio sidecar injection to create a gateway from a Deployment resource that is paired with a Service resource. The Service can then be made accessible from outside of the OpenShift Container Platform cluster.
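The Deployment-plus-Service pairing described above can be sketched as follows. This sketch follows the upstream gateway-injection sample; the inject.istio.io/templates annotation and the image: auto placeholder are upstream conventions, so check the sample file referenced in the procedure for the authoritative content.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway   # use the gateway injection template
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"      # opt in to injection
    spec:
      containers:
      - name: istio-proxy
        image: auto                          # replaced by the injector
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  type: ClusterIP
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
```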
Prerequisites
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
cluster-admin - The Red Hat OpenShift Service Mesh Operator must be installed.
- The Istio resource must be deployed.
Procedure
- Create the istio-ingressgateway deployment and service by running the following command:
  $ oc apply -n bookinfo -f ingress-gateway.yaml
  Note: This example uses a sample ingress-gateway.yaml file that is available in the Istio community repository.
- Configure the bookinfo application to use the new gateway. Apply the gateway configuration by running the following command:
  $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo
  Note: To configure gateway injection with the bookinfo application, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.
- Use a route to expose the gateway external to the cluster by running the following command:
  $ oc expose service istio-ingressgateway -n bookinfo
- Modify the YAML file to automatically scale the pod when ingress traffic increases.
Example configuration
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    istio: ingressgateway
    release: istio
  name: ingressgatewayhpa
  namespace: bookinfo
spec:
  maxReplicas: 5 1
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
1 This example sets the maximum replicas to 5 and the minimum replicas to 2. It also creates another replica when utilization reaches 80%.
Specify the minimum number of gateway pods that must remain available during disruptions.
Example configuration
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  labels:
    istio: ingressgateway
    release: istio
  name: ingressgatewaypdb
  namespace: bookinfo
spec:
  minAvailable: 1 1
  selector:
    matchLabels:
      istio: ingressgateway
1 This example ensures that one replica keeps running if a pod gets restarted on a new node.
- Obtain the gateway host name and the URL for the product page by running the following command:
  $ HOST=$(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}')
- Verify that the productpage is accessible from a web browser by running the following command:
  $ echo productpage URL: http://$HOST/productpage
2.5.4. Accessing the Bookinfo application by using Gateway API
The Kubernetes Gateway API deploys a gateway by creating a Gateway resource.
For details about enabling Gateway API for Ingress in OpenShift Container Platform 4.19 and later, see "Configuring ingress cluster traffic" in the OpenShift Container Platform documentation.
Red Hat provides support for using the Kubernetes Gateway API with Red Hat OpenShift Service Mesh. Red Hat does not provide support for the Kubernetes Gateway API custom resource definitions (CRDs). In this procedure, the use of community Gateway API CRDs is shown for demonstration purposes only.
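The sample file used in the procedure below defines roughly the following resources. This sketch is based on the upstream bookinfo Gateway API sample; check the referenced file for the authoritative content.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: bookinfo
spec:
  gatewayClassName: istio      # handled by the Istio control plane
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  parentRefs:
  - name: bookinfo-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /productpage
    backendRefs:
    - name: productpage
      port: 9080
```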
Prerequisites
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
cluster-admin - The Red Hat OpenShift Service Mesh Operator must be installed.
- The Istio resource must be deployed.
Procedure
- Enable the Gateway API CRDs, which is required for OpenShift Container Platform 4.18 and earlier, by running the following command:
  $ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; }
- Create and configure a gateway by using the Gateway and HTTPRoute resources by running the following command:
  $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo
  Note: To configure a gateway with the bookinfo application by using the Gateway API, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.
- Ensure that the Gateway API service is ready and has an address allocated by running the following command:
  $ oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo
- Retrieve the host by running the following command:
  $ export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}')
- Retrieve the port by running the following command:
  $ export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
- Retrieve the gateway URL by running the following command:
  $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
- Obtain the URL of the product page by running the following command:
  $ echo "http://${GATEWAY_URL}/productpage"
Verification
- Verify that the productpage is accessible from a web browser.
2.6. Customizing Istio configuration
The values field of the Istio custom resource maps to the configuration values of Istio's Helm charts. Modify this field to customize your Istio deployment.
Procedure
- Click Operators → Installed Operators.
- Click Istio in the Provided APIs column.
- Click the Istio instance, named default, in the Name column.
- Click YAML to view the Istio configuration and make modifications.
For a list of the available configuration options for the values field, see the Istio documentation.
2.7. About Istio High Availability
Running the Istio control plane in High Availability (HA) mode prevents single points of failure and ensures continuous mesh operation even if an istiod pod fails. In HA mode, multiple istiod replicas run simultaneously.
There are two ways for a system administrator to configure HA for the Istio deployment:
- Defining a static replica count: This approach involves setting a fixed number of istiod pods, providing a consistent level of redundancy.
- Using autoscaling: This approach dynamically adjusts the number of istiod pods based on resource utilization or custom metrics, providing more efficient resource consumption for fluctuating workloads.
2.7.1. Configuring Istio HA by using autoscaling
Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure and ensure continuous mesh operation even if one of the istiod pods fails.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
cluster-admin - You have installed the Red Hat OpenShift Service Mesh Operator.
- You have deployed the Istio resource.
Procedure
- In the OpenShift Container Platform web console, click Installed Operators.
- Click Red Hat OpenShift Service Mesh 3 Operator.
- Click Istio.
- Click the name of the Istio installation. For example, default.
default - Click YAML.
- Modify the Istio custom resource (CR) similar to the following example:
Example configuration
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    pilot:
      autoscaleMin: 2 1
      autoscaleMax: 5 2
      cpu:
        targetAverageUtilization: 80 3
      memory:
        targetAverageUtilization: 80 4
1 Specifies the minimum number of Istio control plane replicas that always run.
2 Specifies the maximum number of Istio control plane replicas, allowing for scaling based on load. To support HA, there must be at least two replicas.
3 Specifies the target CPU utilization for autoscaling to 80%. If the average CPU usage exceeds this threshold, the Horizontal Pod Autoscaler (HPA) automatically increases the number of replicas.
4 Specifies the target memory utilization for autoscaling to 80%. If the average memory usage exceeds this threshold, the HPA automatically increases the number of replicas.
Verification
- Verify the status of the istiod pods by running the following command:
  $ oc get pods -n istio-system -l app=istiod
  Example output
  NAME                      READY   STATUS    RESTARTS   AGE
  istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
  istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s
  Two istiod pods are running. Two pods, the minimum requirement for an HA Istio control plane, indicate that a basic HA setup is in place.
2.7.1.1. API settings for Service Mesh HA autoscaling mode
Use the following istio custom resource parameters to configure HA autoscaling for the Istio control plane.
| Parameter | Description |
|---|---|
| pilot.autoscaleMin | Defines the minimum number of istiod pods that always run. OpenShift only uses this parameter when the Horizontal Pod Autoscaler (HPA) is enabled for the Istio deployment. This is the default behavior. |
| pilot.autoscaleMax | Defines the maximum number of istiod pods. For OpenShift to automatically scale the number of istiod pods, this value must be greater than the minimum. You must also configure metrics for autoscaling to work properly. If no metrics are configured, the autoscaler does not scale up or down. OpenShift only uses this parameter when Horizontal Pod Autoscaler (HPA) is enabled for the Istio deployment. This is the default behavior. |
| pilot.cpu.targetAverageUtilization | Defines the target CPU utilization for the istiod pods as a percentage. |
| pilot.memory.targetAverageUtilization | Defines the target memory utilization for the istiod pods as a percentage. |
| pilot.autoscaleBehavior | You can use the autoscaleBehavior parameter to fine-tune the scaling behavior of the HPA. For more information, see Configurable Scaling Behavior. |
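As a sketch of the last parameter, the field names follow the Kubernetes HorizontalPodAutoscaler behavior API, and the values below are illustrative, not recommendations:

```yaml
values:
  pilot:
    autoscaleBehavior:
      scaleDown:
        stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
      scaleUp:
        policies:
        - type: Pods
          value: 2                        # add at most 2 pods per period
          periodSeconds: 60
```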
2.7.2. Configuring Istio HA by using replica count
Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure and ensure continuous mesh operation even if one of the istiod pods fails. In this configuration, a fixed number of istiod replicas always run.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
cluster-admin - You have installed the Red Hat OpenShift Service Mesh Operator.
- You have deployed the Istio resource.
Procedure
- Obtain the name of the Istio resource by running the following command:
  $ oc get istio -n istio-system
  Example output
  NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
  default   1           1       0        default           Healthy   v1.24.6   24m
  The name of the Istio resource is default.
- Update the Istio custom resource (CR) by adding the autoscaleEnabled and replicaCount parameters. Run the following command:
  $ oc patch istio default -n istio-system --type merge -p '
  spec:
    values:
      pilot:
        autoscaleEnabled: false 1
        replicaCount: 2 2
  '
1 Disables the Horizontal Pod Autoscaler (HPA) so that a static replica count is used.
2 Sets the number of istiod replicas to 2.
Verification
- Verify the status of the istiod pods by running the following command:
  $ oc get pods -n istio-system -l app=istiod
  Example output
  NAME                      READY   STATUS    RESTARTS   AGE
  istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
  istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s
  Two istiod pods are running, which is the minimum requirement for an HA Istio control plane and indicates that a basic HA setup is in place.
Chapter 3. Sidecar injection
Sidecar proxies are deployed into each application pod to intercept network traffic and enable service mesh features like security, observability, and traffic management.
3.1. About sidecar injection
Sidecar injection is enabled using labels at the namespace or pod level. These labels also indicate the specific control plane managing the proxy. When you apply a valid injection label to the pod template defined in a deployment, any new pods created by that deployment automatically receive a sidecar. Similarly, applying a pod injection label at the namespace level ensures any new pods in that namespace include a sidecar.
Injection happens at pod creation through an admission controller, so changes appear on individual pods rather than on the deployment resources. To confirm sidecar injection, check the pod details directly by using the oc describe command.
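For illustration, an injected pod reports an extra container alongside the application container. The following abbreviated sketch shows the shape you would see when inspecting the pod; container names are the upstream defaults for injection with the Istio CNI plugin, not output from a real cluster.

```yaml
# Abbreviated view of an injected pod (sketch; not a complete Pod spec)
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: ratings
spec:
  initContainers:
  - name: istio-validation   # added by injection to validate traffic redirection
  containers:
  - name: ratings            # the application container
  - name: istio-proxy        # the injected Envoy sidecar
```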
3.2. Identifying the revision name
The label required to enable sidecar injection is determined by the specific control plane instance, known as a revision. Each revision is managed by an IstioRevision resource. You do not create IstioRevision resources directly; the Service Mesh Operator creates them based on the Istio resource.
The naming of an IstioRevision resource depends on the spec.updateStrategy.type setting of the Istio resource. If the update strategy is InPlace, the IstioRevision shares the name of the Istio resource. If the update strategy is RevisionBased, the IstioRevision name follows the format <Istio resource name>-v<version>.
To see available revision names, use the following command:
$ oc get istiorevisions
You should see output similar to the following example:
Example output
NAME READY STATUS IN USE VERSION AGE
my-mesh-v1-23-0 True Healthy False v1.23.0 114s
3.2.1. Enabling sidecar injection with default revision
When the service mesh's IstioRevision name is default, you can use the following labels on a namespace or a pod to enable sidecar injection:
| Resource | Label | Enabled value | Disabled value |
|---|---|---|---|
| Namespace | istio-injection | enabled | disabled |
| Pod | sidecar.istio.io/inject | "true" | "false" |
You can also enable injection by setting the istio.io/rev: default label on the namespace or pod.
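For example, the namespace-level label from the table can be applied declaratively; the namespace name here is an example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    istio-injection: enabled   # enables injection for new pods in this namespace
```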
3.2.2. Enabling sidecar injection with other revisions
When the IstioRevision name is not default, use the istio.io/rev label with the specific IstioRevision name instead of istio.io/rev: default.
For example, with the my-mesh-v1-23-0 revision shown above, the following labels would enable sidecar injection:
| Resource | Enabled label | Disabled label |
|---|---|---|
| Namespace | istio.io/rev=my-mesh-v1-23-0 | istio-injection=disabled |
| Pod | istio.io/rev=my-mesh-v1-23-0 | sidecar.istio.io/inject="false" |
When both the istio-injection and istio.io/rev labels are present, the istio-injection label takes precedence.
3.3. Enabling sidecar injection
To demonstrate different approaches for configuring sidecar injection, the following procedures use the Bookinfo application.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection has not been configured. For more information, see "Deploying the Bookinfo application".
3.3.1. Enabling sidecar injection with namespace labels
In this example, all workloads within a namespace receive a sidecar proxy injection, making it the best approach when the majority of workloads in the namespace should be included in the mesh.
Procedure
- Verify the revision name of the Istio control plane by using the following command:
  $ oc get istiorevisions
  You should see output similar to the following example:
  Example output
  NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
  default   Local   True    Healthy   False    v1.23.0   4m57s
  Since the revision name is default, you can use the default injection labels without referencing the exact revision name.
- Verify that workloads already running in the desired namespace show READY containers as 1/1 by using the following command. This confirms that the pods are running without sidecars.
  $ oc get pods -n bookinfo
  You should see output similar to the following example:
  Example output
  NAME                             READY   STATUS    RESTARTS   AGE
  details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
  productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
  ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
  reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
  reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
  reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
- To apply the injection label to the bookinfo namespace, run the following command at the CLI:
  $ oc label namespace bookinfo istio-injection=enabled
  namespace/bookinfo labeled
- To ensure sidecar injection is applied, redeploy the existing workloads in the bookinfo namespace. Use the following command to perform a rolling update of all workloads:
  $ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by checking that the new pods display READY containers as 2/2, confirming successful sidecar injection, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s
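The 2/2 readiness check can also be scripted. The sketch below parses a captured pod listing with awk; sample data is inlined so the snippet runs anywhere, and in a cluster you would pipe the output of `oc get pods -n bookinfo` instead.

```shell
# Count pods whose READY column is not 2/2 (hypothetical sample data inlined;
# in practice, replace the printf with: oc get pods -n bookinfo --no-headers).
pods='details-v1 2/2
productpage-v1 2/2
ratings-v1 1/2'
not_ready=$(printf '%s\n' "$pods" | awk '$2 != "2/2" { n++ } END { print n+0 }')
echo "pods without a ready sidecar: $not_ready"
```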
3.3.2. Exclude a workload from the mesh
You can exclude specific workloads from sidecar injection within a namespace where injection is enabled for all workloads.
This example is for demonstration purposes only. The bookinfo application requires all workloads to be part of the mesh for proper functionality.
Procedure
- Open the application's Deployment resource in an editor. In this case, exclude the ratings-v1 service.
Modify the spec.template.metadata.labels section of your Deployment resource to include the label sidecar.istio.io/inject: 'false' to disable sidecar injection.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: 'false'
Note: Adding the label to the top-level labels section of the Deployment does not affect sidecar injection.
Updating the deployment triggers a rollout, creating a new ReplicaSet with updated pod(s).
Verification
Verify that the updated pod(s) do not contain a sidecar container and show 1/1 containers in the READY column by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6bc7b69776-7f6wz       2/2     Running   0          29m
productpage-v1-54f48db985-gd5q9   2/2     Running   0          29m
ratings-v1-5d645c985f-xsw7p       1/1     Running   0          7s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          29m
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          29m
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          29m
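Instead of opening an editor, the same pod-template label can be applied as a patch file. This is a sketch that assumes the ratings-v1 deployment and bookinfo namespace from the example above; the file name is illustrative.

```yaml
# disable-injection-patch.yaml — hypothetical strategic merge patch that adds
# the injection-disabling label to the pod template only (not the top-level
# Deployment labels, which have no effect on injection).
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: "false"
```

You could then apply it with a command such as `oc -n bookinfo patch deployment ratings-v1 --patch-file disable-injection-patch.yaml`. Because the patch changes the pod template, it also triggers a rollout of a new ReplicaSet.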
3.3.3. Enabling sidecar injection with pod labels
This approach allows you to include individual workloads for sidecar injection instead of applying it to all workloads within a namespace, making it ideal for scenarios where only a few workloads need to be part of a service mesh. This example also demonstrates the use of a revision label for sidecar injection, where the Istio resource has been created with the name my-mesh. Because the revision is not named default, workloads must reference it with a revision label.
Procedure
Verify the revision name of the Istio control plane by running the following command:
$ oc get istiorevisions
You should see output similar to the following example:
Example output
NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
my-mesh   Local   True    Healthy   False    v1.23.0   47s
Since the revision name is my-mesh, use the revision label istio.io/rev=my-mesh to enable sidecar injection.
Verify that workloads already running show READY containers as 1/1, indicating that the pods are running without sidecars, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
- Open the application's Deployment resource in an editor. In this case, update the ratings-v1 service.
Update the spec.template.metadata.labels section of your Deployment to include the appropriate pod injection or revision label. In this case, istio.io/rev: my-mesh:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      labels:
        istio.io/rev: my-mesh
Note: Adding the label to the top-level labels section of the Deployment resource does not impact sidecar injection.
Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pod(s).
Verification
Verify that only the ratings-v1 pod now shows READY containers as 2/2, indicating that the sidecar has been successfully injected, by running the following command:
$ oc get pods -n bookinfo
You should see output similar to the following example:
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-559cd49f6c-b89hw       1/1     Running   0          42m
productpage-v1-5f48cdcb85-8ppz5   1/1     Running   0          42m
ratings-v1-848bf79888-krdch       2/2     Running   0          9s
reviews-v1-6b7444ffbd-7m5wp       1/1     Running   0          42m
reviews-v2-67876d7b7-9nmw5        1/1     Running   0          42m
reviews-v3-84b55b667c-x5t8s       1/1     Running   0          42m
- Repeat for other workloads that you wish to include in the mesh.
3.4. Enabling sidecar injection with namespace labels and an IstioRevisionTag resource
To use the istio-injection=enabled namespace label when the name of your active revision is not default, create an IstioRevisionTag resource named default that references your Istio resource.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection (step 5 in the "Deploying the Bookinfo application" procedure) has not been configured. For more information, see "Deploying the Bookinfo application".
Procedure
Find the name of your Istio resource by running the following command:
$ oc get istio
Example output
NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
default   1           1       1        default-v1-24-3   Healthy   v1.24.3   11s
In this example, the Istio resource has the name default, but the underlying revision is called default-v1-24-3.
Create the IstioRevisionTag resource in a YAML file:
Example IstioRevisionTag resource YAML file
apiVersion: sailoperator.io/v1
kind: IstioRevisionTag
metadata:
  name: default
spec:
  targetRef:
    kind: Istio
    name: default
Apply the IstioRevisionTag resource by running the following command:
$ oc apply -f istioRevisionTag.yaml
Verify that the IstioRevisionTag resource has been created successfully by running the following command:
$ oc get istiorevisiontags.sailoperator.io
Example output
NAME      STATUS    IN USE   REVISION          AGE
default   Healthy   True     default-v1-24-3   4m23s
In this example, the new tag is referencing your active revision, default-v1-24-3. Now you can use the istio-injection=enabled label as if your revision was called default.
Confirm that the pods are running without sidecars by running the following command. Any workloads that are already running in the desired namespace should show 1/1 containers in the READY column.
$ oc get pods -n bookinfo
Example output
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s
Apply the injection label to the bookinfo namespace by running the following command:
$ oc label namespace bookinfo istio-injection=enabled
namespace/bookinfo labeled
To ensure sidecar injection is applied, redeploy the workloads in the bookinfo namespace by running the following command:
$ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by running the following command and confirming that the new pods display 2/2 containers in the READY column:
$ oc get pods -n bookinfo
Example output
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s
Chapter 4. Istio ambient mode
Istio ambient mode provides a sidecar-less architecture for Red Hat OpenShift Service Mesh that reduces operational complexity and resource overhead by using node-level Layer 4 (L4) proxies and optional Layer 7 proxies.
4.1. About Istio ambient mode
To understand the Istio ambient mode architecture, see the following definitions:
- ZTunnel proxy
- A per-node proxy that manages secure, transparent Transmission Control Protocol (TCP) connections for all workloads on the node. It operates at Layer 4 (L4), offloading mutual Transport Layer Security (mTLS) and L4 policy enforcement from application pods.
- Waypoint proxy
- An optional proxy that runs per service account or namespace to provide advanced Layer 7 (L7) features such as traffic management, policy enforcement, and observability. You can apply L7 features selectively to avoid the overhead of sidecars for every service.
- Istio CNI plugin
- Redirects traffic to the Ztunnel proxy on each node, enabling transparent interception without requiring modifications to application pods.
Istio ambient mode offers the following benefits:
- Simplified operations that remove the need to manage sidecar injection, reducing the complexity of mesh adoption and operations.
- Reduced resource consumption with a per-node Ztunnel proxy that provides L4 service mesh features and an optional waypoint proxy that reduces resource overhead per pod.
- Incremental adoption that enables workloads to join the mesh with L4 features like mutual Transport Layer Security (mTLS) and basic policies, with optional waypoint proxies added later to use L7 service mesh features, such as HTTP (L7) traffic management.
Note: The L7 features require deploying waypoint proxies, which introduces minimal additional overhead for the selected services.
Ambient mode is a newer architecture and may involve different operational considerations than traditional sidecar models.
While well-defined discovery selectors allow a service mesh deployed in ambient mode alongside a mesh in sidecar mode, this scenario has not been thoroughly validated. To avoid potential conflicts, install Istio ambient mode only on clusters that do not have an existing Red Hat OpenShift Service Mesh installation. Ambient mode remains a Technology Preview feature.
Istio ambient mode is not compatible with clusters that use Red Hat OpenShift Service Mesh 2.6 or earlier. You must not install or use them together.
4.2. Installing Istio ambient mode
You can install Istio ambient mode on OpenShift Container Platform 4.19 or later and Red Hat OpenShift Service Mesh 3.1.0 or later with the required Gateway API custom resource definitions (CRDs).
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.19 or later.
- You have installed the OpenShift Service Mesh Operator 3.1.0 or later in the OpenShift Container Platform cluster.
- You are logged in to the OpenShift Container Platform cluster either through the web console as a user with the cluster-admin role, or with the oc login command, depending on the installation method.
- You have configured the OVN-Kubernetes Container Network Interface (CNI) to use local gateway mode by setting the routingViaHost field to true in the gatewayConfig specification for the Cluster Network Operator. For more information, see "Configuring gateway mode".
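The local gateway mode prerequisite corresponds to a cluster Network operator configuration like the following sketch. The surrounding fields are shown only to locate gatewayConfig within a default OVN-Kubernetes setup; treat the exact shape as an assumption and consult "Configuring gateway mode" for the supported procedure.

```yaml
# Sketch: cluster Network operator configuration with local gateway mode.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true   # enables local gateway mode
```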
Procedure
Install the Istio control plane:
Create the istio-system namespace by running the following command:
$ oc create namespace istio-system
Create an Istio resource named istio.yaml similar to the following example:
Example configuration
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  profile: ambient
  values:
    pilot:
      trustedZtunnelNamespace: ztunnel
Important: You must set the profile field to ambient, and configure the .spec.values.pilot.trustedZtunnelNamespace value to match the namespace where the ZTunnel resource will be installed.
Apply the Istio custom resource (CR) by running the following command:
$ oc apply -f istio.yaml
Wait for the Istio control plane to contain the Ready status condition by running the following command:
$ oc wait --for=condition=Ready istios/default --timeout=3m
Install the Istio Container Network Interface (CNI):
Create the istio-cni namespace by running the following command:
$ oc create namespace istio-cni
Create the IstioCNI resource named istio-cni.yaml similar to the following example:
Example configuration
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  namespace: istio-cni
  profile: ambient
Set the profile field to ambient.
Apply the IstioCNI CR by running the following command:
$ oc apply -f istio-cni.yaml
Wait for the IstioCNI resource to contain the Ready status condition by running the following command:
$ oc wait --for=condition=Ready istiocni/default --timeout=3m
Install the Ztunnel proxy:
Create the ztunnel namespace for the Ztunnel proxy by running the following command:
$ oc create namespace ztunnel
The namespace name for the ztunnel project must match the trustedZtunnelNamespace parameter in the Istio configuration.
Create the ZTunnel resource named ztunnel.yaml similar to the following example:
Example configuration
apiVersion: sailoperator.io/v1alpha1
kind: ZTunnel
metadata:
  name: default
spec:
  namespace: ztunnel
  profile: ambient
Apply the ZTunnel CR by running the following command:
$ oc apply -f ztunnel.yaml
Wait for the ZTunnel pods to contain the Ready status condition by running the following command:
$ oc wait --for=condition=Ready ztunnel/default --timeout=3m
4.3. About discovery selectors and Istio ambient mode
Istio ambient mode includes workloads when the control plane discovers each workload and the appropriate label enables traffic redirection through the Ztunnel proxy. By default, the control plane discovers workloads in all namespaces across the cluster. As a result, each proxy receives configuration for every namespace, including workloads that are not enrolled in the mesh. In shared or multi-tenant clusters, limiting mesh participation to specific namespaces helps reduce configuration overhead and supports multiple service meshes within the same cluster.
For more information on discovery selectors, see "Scoping the Service Mesh with discovery selectors".
4.3.1. Scoping the Service Mesh with discovery selectors in Istio ambient mode
To limit the scope of the OpenShift Service Mesh in Istio ambient mode, you can configure discoverySelectors in the meshConfig section of the Istio resource.
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.19 or later.
- You have created an Istio control plane resource.
- You have created an IstioCNI resource.
- You have created a ZTunnel resource.
Procedure
Add a label to the namespace containing the Istio control plane resource, for example, the istio-system namespace, by running the following command:
$ oc label namespace istio-system istio-discovery=enabled
Add a label to the namespace containing the IstioCNI resource, for example, the istio-cni namespace, by running the following command:
$ oc label namespace istio-cni istio-discovery=enabled
Add a label to the namespace containing the ZTunnel resource, for example, the ztunnel namespace, by running the following command:
$ oc label namespace ztunnel istio-discovery=enabled
Modify the Istio control plane resource to include a discoverySelectors section with the same label:
Create a YAML file with the name istio-discovery-selectors.yaml similar to the following example:
Example configuration
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  profile: ambient
  values:
    pilot:
      trustedZtunnelNamespace: ztunnel
    meshConfig:
      discoverySelectors:
      - matchLabels:
          istio-discovery: enabled
Apply the YAML file to the Istio control plane resource by running the following command:
$ oc apply -f istio-discovery-selectors.yaml
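The same discovery label can also be declared on a namespace manifest instead of being applied imperatively with `oc label`. This is a sketch using the istio-discovery=enabled convention from this procedure; declaring the label in the manifest keeps it from being lost if the namespace is recreated.

```yaml
# Sketch: namespace manifest carrying the discovery label declaratively.
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    istio-discovery: enabled
```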
4.4. Deploying the Bookinfo application in Istio ambient mode
You can deploy the bookinfo application in Istio ambient mode and verify that the ZTunnel proxy handles traffic for the workloads in the bookinfo namespace.
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.15 or later, which includes the supported Kubernetes Gateway API custom resource definitions (CRDs) required for Istio ambient mode.
- You are logged in to the OpenShift Container Platform cluster either through the web console as a user with the cluster-admin role, or with the oc login command, depending on the installation method.
- You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
- You have created an IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created a ZTunnel resource, and the Operator has deployed the necessary ZTunnel pods.
Procedure
Create the bookinfo namespace by running the following command:
$ oc create namespace bookinfo
Add the istio-discovery=enabled label to the bookinfo namespace by running the following command:
$ oc label namespace bookinfo istio-discovery=enabled
Apply the bookinfo YAML file to deploy the bookinfo application by running the following command:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo.yaml
Apply the bookinfo-versions YAML file to deploy the bookinfo-versions application by running the following command:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo-versions.yaml
Verify that the bookinfo pods are running by entering the following command:
$ oc -n bookinfo get pods
Example output
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-54ffdd5947-8gk5h      1/1     Running   0          5m9s
productpage-v1-d49bb79b4-cb9sl   1/1     Running   0          5m3s
ratings-v1-856f65bcff-h6kkf      1/1     Running   0          5m7s
reviews-v1-848b8749df-wl5br      1/1     Running   0          5m6s
reviews-v2-5fdf9886c7-8xprg      1/1     Running   0          5m5s
reviews-v3-bb6b8ddc7-bvcm5       1/1     Running   0          5m5s
Verify that the bookinfo application is running by entering the following command:
$ oc exec "$(oc get pod -l app=ratings -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings -n bookinfo \
  -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
Add the bookinfo application to the Istio ambient mesh by labeling either the entire namespace or the individual pods:
To include all workloads in the bookinfo namespace, apply the istio.io/dataplane-mode=ambient label to the bookinfo namespace by running the following command:
$ oc label namespace bookinfo istio.io/dataplane-mode=ambient
- To include only specific workloads, apply the istio.io/dataplane-mode=ambient label directly to individual pods. See the "Additional resources" section for more details on the labels used to add or exclude workloads in a mesh.
Note: Adding workloads to the ambient mesh does not require restarting or redeploying application pods. Unlike sidecar mode, the number of containers in each pod remains unchanged.
Confirm that the Ztunnel proxy has successfully opened listening sockets in the pod network namespace by running the following command:
$ istioctl ztunnel-config workloads --namespace ztunnel
Example output
NAMESPACE      POD NAME                         ADDRESS        NODE                          WAYPOINT   PROTOCOL
bookinfo       details-v1-54ffdd5947-cflng      10.131.0.69    ip-10-0-47-239.ec2.internal   None       HBONE
bookinfo       productpage-v1-d49bb79b4-8sgwx   10.128.2.80    ip-10-0-24-198.ec2.internal   None       HBONE
bookinfo       ratings-v1-856f65bcff-c6ldn      10.131.0.70    ip-10-0-47-239.ec2.internal   None       HBONE
bookinfo       reviews-v1-848b8749df-45hfd      10.131.0.72    ip-10-0-47-239.ec2.internal   None       HBONE
bookinfo       reviews-v2-5fdf9886c7-mvwft      10.128.2.78    ip-10-0-24-198.ec2.internal   None       HBONE
bookinfo       reviews-v3-bb6b8ddc7-fl8q2       10.128.2.79    ip-10-0-24-198.ec2.internal   None       HBONE
istio-cni      istio-cni-node-7hwd2             10.0.61.108    ip-10-0-61-108.ec2.internal   None       TCP
istio-cni      istio-cni-node-bfqmb             10.0.30.129    ip-10-0-30-129.ec2.internal   None       TCP
istio-cni      istio-cni-node-cv8cw             10.0.75.71     ip-10-0-75-71.ec2.internal    None       TCP
istio-cni      istio-cni-node-hj9cz             10.0.47.239    ip-10-0-47-239.ec2.internal   None       TCP
istio-cni      istio-cni-node-p8wrg             10.0.24.198    ip-10-0-24-198.ec2.internal   None       TCP
istio-system   istiod-6bd6b8664b-r74js          10.131.0.80    ip-10-0-47-239.ec2.internal   None       TCP
ztunnel        ztunnel-2w5mj                    10.128.2.61    ip-10-0-24-198.ec2.internal   None       TCP
ztunnel        ztunnel-6njq8                    10.129.0.131   ip-10-0-75-71.ec2.internal    None       TCP
ztunnel        ztunnel-96j7k                    10.130.0.146   ip-10-0-61-108.ec2.internal   None       TCP
ztunnel        ztunnel-98mrk                    10.131.0.50    ip-10-0-47-239.ec2.internal   None       TCP
ztunnel        ztunnel-jqcxn                    10.128.0.98    ip-10-0-30-129.ec2.internal   None       TCP
4.5. About waypoint proxies in Istio ambient mode
After setting up Istio ambient mode with ztunnel proxies, you can add waypoint proxies to enable advanced Layer 7 (L7) processing features that Istio provides.
Istio ambient mode separates the functionality of Istio into two layers:
- A secure Layer 4 (L4) overlay managed by ztunnel proxies
- An L7 layer managed by optional waypoint proxies
A waypoint proxy is an Envoy-based proxy that performs L7 processing for workloads running in ambient mode. It functions as a gateway to a resource such as a namespace, service, or pod. You can install, upgrade, and scale waypoint proxies independently of applications. The configuration uses the Kubernetes Gateway API.
Unlike the sidecar model, where each workload runs its own Envoy proxy, waypoint proxies reduce resource use by serving multiple workloads within the same security boundary, such as all workloads in a namespace.
A destination waypoint enforces policies by acting as a gateway. All incoming traffic to a resource, such as a namespace, service, or pod, passes through the waypoint for policy enforcement.
The ztunnel proxy forwards traffic that requires L7 processing to the waypoint proxy through an HBONE tunnel on port 15008.
You can add a waypoint proxy if workloads require any of the following L7 capabilities:
- Traffic management
- Advanced HTTP routing, load balancing, circuit breaking, rate limiting, fault injection, retries, and timeouts
- Security
- Authorization policies based on L7 attributes such as request type or HTTP headers
- Observability
- HTTP metrics, access logging, and tracing for application traffic
4.6. Deploying waypoint proxies by using the Gateway API
You can deploy waypoint proxies by using the Kubernetes Gateway API Gateway resource.
Prerequisites
- You have logged in to OpenShift Container Platform 4.19 or later, which provides the supported Kubernetes Gateway API CRDs required for ambient mode functionality.
- You have the Red Hat OpenShift Service Mesh Operator 3.2.0 or later installed on the OpenShift cluster.
- You have Istio deployed in ambient mode.
- You have applied the required labels to workloads or namespaces to enable ztunnel traffic redirection.
Istio ambient mode is not compatible with clusters that use Red Hat OpenShift Service Mesh 2.6 or earlier. You must not deploy both versions in the same cluster.
Procedure
On OpenShift Container Platform 4.18 and earlier, install the community-maintained Kubernetes Gateway API CRDs by running the following command:
$ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { oc apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml; }
From OpenShift Container Platform 4.19 onwards, the Gateway API CRDs are installed by default.
The CRDs are community maintained and not supported by Red Hat. Upgrading to OpenShift Container Platform 4.19 or later, which includes supported Gateway API CRDs, may disrupt applications.
4.7. Deploying a waypoint proxy
You can deploy a waypoint proxy in the bookinfo application namespace.
Prerequisites
- You have logged in to OpenShift Container Platform 4.19 or later, which provides the supported Kubernetes Gateway API custom resource definitions (CRDs) required for ambient mode functionality.
- You have the Red Hat OpenShift Service Mesh Operator 3.2.0 or later installed on the OpenShift cluster.
- You have Istio deployed in ambient mode.
- You have deployed the bookinfo sample application for the following example.
- You have added the istio.io/dataplane-mode=ambient label to the target namespace.
Procedure
Deploy a waypoint proxy in the bookinfo application namespace similar to the following example:
Example configuration
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  labels:
    istio.io/waypoint-for: service
  name: waypoint
  namespace: bookinfo
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
Apply the waypoint custom resource (CR) by running the following command:
$ oc apply -f waypoint.yaml
The istio.io/waypoint-for: service label indicates that the waypoint handles traffic for services. The label determines the type of traffic processed. For more information, see "Waypoint traffic types".
Enroll the bookinfo namespace to use the waypoint by running the following command:
$ oc label namespace bookinfo istio.io/use-waypoint=waypoint
After enrolling the namespace, requests from any pods using the ambient data plane to services in the bookinfo namespace are routed through the waypoint proxy.
Verification
Confirm that the waypoint proxy is used by all the services in the bookinfo namespace by running the following command:
$ istioctl ztunnel-config svc --namespace ztunnel
Example output
NAMESPACE   SERVICE NAME     SERVICE VIP      WAYPOINT   ENDPOINTS
bookinfo    details          172.30.15.248    waypoint   1/1
bookinfo    details-v1       172.30.114.128   waypoint   1/1
bookinfo    productpage      172.30.155.45    waypoint   1/1
bookinfo    productpage-v1   172.30.76.27     waypoint   1/1
bookinfo    ratings          172.30.24.145    waypoint   1/1
bookinfo    ratings-v1       172.30.139.144   waypoint   1/1
bookinfo    reviews          172.30.196.50    waypoint   3/3
bookinfo    reviews-v1       172.30.172.192   waypoint   1/1
bookinfo    reviews-v2       172.30.12.41     waypoint   1/1
bookinfo    reviews-v3       172.30.232.12    waypoint   1/1
bookinfo    waypoint         172.30.92.147    None       1/1
You can also configure only specific services or pods to use a waypoint by labeling the respective service or pod. When enrolling a pod explicitly, also add the istio.io/waypoint-for: workload label to the waypoint's Gateway resource.
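As an alternative to enrolling the whole namespace, a single service can carry the enrollment label. The fragment below is a sketch that assumes the waypoint named waypoint and the productpage service from the procedure above; only the label matters here, the rest is illustrative.

```yaml
# Sketch: enroll only this service with the waypoint via a label.
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: bookinfo
  labels:
    istio.io/use-waypoint: waypoint   # routes this service's traffic through the waypoint
```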
4.8. Enabling cross-namespace waypoint usage
You can use a cross-namespace waypoint to allow resources in one namespace to route traffic through a waypoint deployed in a different namespace.
Procedure
Create a Gateway resource that allows workloads in the bookinfo namespace to use the waypoint-default waypoint from the default namespace, similar to the following example:
Example configuration
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint-default
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            kubernetes.io/metadata.name: bookinfo
Apply the cross-namespace waypoint by running the following command:
$ oc apply -f waypoint-default.yaml
Add the labels required to use a cross-namespace waypoint:
Add the istio.io/use-waypoint-namespace label to specify the namespace where the waypoint resides by running the following command:
$ oc label namespace bookinfo istio.io/use-waypoint-namespace=default
Add the istio.io/use-waypoint label to specify the waypoint to use by running the following command:
$ oc label namespace bookinfo istio.io/use-waypoint=waypoint-default
4.9. About Layer 7 features in ambient mode
Ambient mode includes stable Layer 7 (L7) capabilities implemented through the Gateway API HTTPRoute resource and the Istio AuthorizationPolicy resource.
The AuthorizationPolicy resource is enforced at Layer 4 by the ztunnel proxy, and at L7 by a waypoint proxy when the policy is attached to it with the targetRef field.
You can attach Layer 4 (L4) or L7 policies to the waypoint proxy to ensure correct identity-based enforcement, because the destination ztunnel sees traffic with the identity of the waypoint rather than the identity of the original source.
Istio peer authentication policies, which configure mutual TLS (mTLS) modes, are supported by ztunnel. In ambient mode, policies that set the mode to DISABLE are not supported.
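A peer authentication policy of the kind described above might look like the following sketch; the bookinfo namespace and the STRICT mode are illustrative assumptions, not part of this procedure.

```yaml
# Sketch: mesh peer authentication requiring mTLS for ambient workloads
# in the bookinfo namespace (enforced by ztunnel).
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: bookinfo
spec:
  mtls:
    mode: STRICT
```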
4.10. Routing traffic using waypoint proxies
You can use a deployed waypoint proxy to split traffic between different versions of the Bookinfo reviews service.
Procedure
Create the traffic routing configuration similar to the following example:
Example configuration
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
  namespace: bookinfo
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10
Apply the traffic routing configuration by running the following command:
$ oc apply -f traffic-route.yaml
Verification
Access the productpage service from within the ratings pod by running the following command:
$ oc exec "$(oc get pod -l app=ratings -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo \
  -- curl -sS productpage:9080/productpage | grep -om1 'reviews-v[12]'
Most responses (90%) will contain reviews-v1 output, while a smaller portion (10%) will contain reviews-v2 output.
4.11. Adding authorization policy
Use a Layer 7 (L7) authorization policy to explicitly allow GET requests from the curl client to the productpage service, while denying all other requests.
Procedure
Create the authorization policy similar to the following example:
Example configuration
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-waypoint
  namespace: bookinfo
spec:
  targetRefs:
  - kind: Service
    group: ""
    name: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/curl/sa/curl
    to:
    - operation:
        methods: ["GET"]
Apply the authorization policy by running the following command:
$ oc apply -f authorization-policy.yaml
The targetRefs field attaches the policy to the productpage service so that the waypoint proxy serving that service enforces it.
Verification
Create a namespace for a curl client by running the following command:
$ oc create namespace curl
Deploy a curl client by running the following command:
$ oc apply -n curl -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/curl/curl.yaml
Apply the label for ambient mode to the curl namespace by running the following command:
$ oc label namespace curl istio.io/dataplane-mode=ambient
Verify that a GET request to the productpage service succeeds with an HTTP 200 response when made from the curl pod, by running the following command:
$ oc -n curl exec deploy/curl -- sh -c \
  'curl -s -o /dev/null -w "HTTP %{http_code}\n" http://productpage.bookinfo.svc.cluster.local:9080/productpage'
Verify that a POST request to the same service is denied with an HTTP 403 response due to the applied authorization policy, by running the following command:
$ oc -n curl exec deploy/curl -- sh -c \
  'curl -s -o /dev/null -w "HTTP %{http_code}\n" -X POST http://productpage.bookinfo.svc.cluster.local:9080/productpage'
Verify that a GET request from another service, such as the ratings pod in the bookinfo namespace, is also denied with RBAC: access denied, by running the following command:
$ oc exec "$(oc get pod -l app=ratings -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings -n bookinfo \
  -- curl -sS productpage:9080/productpage
Clean up the resources by running the following commands:
Delete the curl application by running the following command:
$ oc delete -n curl -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/curl/curl.yaml
Delete the curl namespace by running the following command:
$ oc delete namespace curl
Chapter 5. OpenShift Service Mesh and cert-manager Copy linkLink copied to clipboard!
The cert-manager tool provides a unified API to manage X.509 certificates for applications in a Kubernetes environment. You can use cert-manager to integrate with public or private key infrastructures (PKI) and automate certificate renewal.
5.1. About the cert-manager Operator istio-csr agent Copy linkLink copied to clipboard!
The cert-manager Operator for Red Hat OpenShift enhances certificate management for securing workloads and control plane components in Red Hat OpenShift Service Mesh and Istio. It supports issuing, delivering, and renewing certificates used for mutual Transport Layer Security (mTLS) through cert-manager issuers.
By integrating Istio with the istio-csr agent, certificate signing requests from the mesh are forwarded to cert-manager, which issues and renews the certificates in a managed way.
The cert-manager Operator for Red Hat OpenShift must be installed before you create and install your Istio resource.
5.1.1. Integrating Service Mesh with the cert-manager Operator by using the istio-csr agent Copy linkLink copied to clipboard!
You can integrate the cert-manager Operator with OpenShift Service Mesh by deploying the istio-csr agent before you create the Istio resource. The istio-csr agent requires a cert-manager issuer that provides the signing certificate authority.
Prerequisites
- You have installed the cert-manager Operator for Red Hat OpenShift version 1.15.1.
- You are logged in to OpenShift Container Platform 4.14 or later.
- You have installed the OpenShift Service Mesh Operator.
- You have an IstioCNI instance running in the cluster.
- You have installed the istioctl command.
Procedure
Create the istio-system namespace by running the following command:
$ oc create namespace istio-system
Patch the cert-manager Operator to install the istio-csr agent by running the following command:
$ oc -n cert-manager-operator patch subscription openshift-cert-manager-operator \
  --type='merge' -p \
  '{"spec":{"config":{"env":[{"name":"UNSUPPORTED_ADDON_FEATURES","value":"IstioCSR=true"}]}}}'
Create the root certificate authority (CA) issuer by creating an Issuer object for the istio-csr agent:
Create a new project for installing the istio-csr agent by running the following command:
$ oc new-project istio-csr
Create an Issuer object similar to the following example:
Note
The selfSigned issuer is intended for demonstration, testing, or proof-of-concept environments. For production deployments, use a secure and trusted CA.
Example issuer.yaml file
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  isCA: true
  duration: 87600h
  secretName: istio-ca
  commonName: istio-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  subject:
    organizations:
    - cluster.local
    - cert-manager
  issuerRef:
    name: selfsigned
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  ca:
    secretName: istio-ca
Create the objects by running the following command:
$ oc apply -f issuer.yaml
Wait for the istio-ca certificate to contain the "Ready" status condition by running the following command:
$ oc wait --for=condition=Ready certificates/istio-ca -n istio-system
Create the IstioCSR custom resource:
Create an IstioCSR custom resource similar to the following example:
Example istioCSR.yaml file
apiVersion: operator.openshift.io/v1alpha1
kind: IstioCSR
metadata:
  name: default
  namespace: istio-csr
spec:
  istioCSRConfig:
    certManager:
      issuerRef:
        name: istio-ca
        kind: Issuer
        group: cert-manager.io
    istiodTLSConfig:
      trustDomain: cluster.local
    istio:
      namespace: istio-system
Create the istio-csr agent by running the following command:
$ oc create -f istioCSR.yaml
Verify that the istio-csr deployment is ready by running the following command:
$ oc get deployment -n istio-csr
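If you prefer a command that blocks until the agent is available, you can use oc wait instead of repeatedly checking oc get. The deployment name cert-manager-istio-csr is an assumption inferred from the agent's service address used later in this procedure; confirm it against the oc get deployment output first:
$ oc wait --for=condition=Available deployment/cert-manager-istio-csr -n istio-csr --timeout=2m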
Install the Istio resource:
Note
The configuration disables the built-in CA server for Istio and forwards certificate signing requests from istiod to the istio-csr agent. The istio-csr agent obtains certificates for both istiod and mesh workloads from the cert-manager Operator. The istiod TLS certificate that is generated by the istio-csr agent is mounted into the pod at a known location for use.
Create the Istio object similar to the following example:
Example istio.yaml file
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.24-latest
  namespace: istio-system
  values:
    global:
      caAddress: cert-manager-istio-csr.istio-csr.svc:443
    pilot:
      env:
        ENABLE_CA_SERVER: "false"
Create the Istio resource by running the following command:
$ oc apply -f istio.yaml
Verify that the Istio resource displays the "Ready" status condition by running the following command:
$ oc wait --for=condition=Ready istios/default -n istio-system
5.1.2. Verifying Service Mesh with the cert-manager Operator using the istio-csr agent Copy linkLink copied to clipboard!
You can use the sample httpbin and sleep applications to verify that Service Mesh works with the cert-manager Operator by using the istio-csr agent.
Create the namespaces:
Create the apps-1 namespace by running the following command:
$ oc new-project apps-1
Create the apps-2 namespace by running the following command:
$ oc new-project apps-2
Add the istio-injection=enabled label on the namespaces:
Add the istio-injection=enabled label on the apps-1 namespace by running the following command:
$ oc label namespaces apps-1 istio-injection=enabled
Add the istio-injection=enabled label on the apps-2 namespace by running the following command:
$ oc label namespaces apps-2 istio-injection=enabled
Deploy the httpbin app in the namespaces:
Deploy the httpbin app in the apps-1 namespace by running the following command:
$ oc apply -n apps-1 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Deploy the httpbin app in the apps-2 namespace by running the following command:
$ oc apply -n apps-2 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Deploy the sleep app in the namespaces:
Deploy the sleep app in the apps-1 namespace by running the following command:
$ oc apply -n apps-1 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml
Deploy the sleep app in the apps-2 namespace by running the following command:
$ oc apply -n apps-2 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml
Verify that the created apps have sidecars injected:
Verify that the created apps have sidecars injected in the apps-1 namespace by running the following command:
$ oc get pods -n apps-1
Verify that the created apps have sidecars injected in the apps-2 namespace by running the following command:
$ oc get pods -n apps-2
Create a mesh-wide strict mutual Transport Layer Security (mTLS) policy similar to the following example:
Note
Enabling PeerAuthentication in strict mTLS mode verifies that certificates are distributed correctly and that mTLS communication functions between workloads.
Example peer_auth.yaml file
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
Apply the mTLS policy by running the following command:
$ oc apply -f peer_auth.yaml
Verify that the apps-1/sleep app can access the apps-2/httpbin service by running the following command:
$ oc -n apps-1 exec "$(oc -n apps-1 get pod \
  -l app=sleep -o jsonpath={.items..metadata.name})" \
  -c sleep -- curl -sIL http://httpbin.apps-2.svc.cluster.local:8000
Example output
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 18 Jun 2025 09:20:55 GMT
x-envoy-upstream-service-time: 14
server: envoy
transfer-encoding: chunked
Verify that the apps-2/sleep app can access the apps-1/httpbin service by running the following command:
$ oc -n apps-2 exec "$(oc -n apps-2 get pod \
  -l app=sleep -o jsonpath={.items..metadata.name})" \
  -c sleep -- curl -sIL http://httpbin.apps-1.svc.cluster.local:8000
Example output
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 18 Jun 2025 09:21:23 GMT
x-envoy-upstream-service-time: 16
server: envoy
transfer-encoding: chunked
Verify that the httpbin workload certificate matches as expected by running the following command:
$ istioctl proxy-config secret -n apps-1 \
  $(oc get pods -n apps-1 -o jsonpath='{.items..metadata.name}' --selector app=httpbin) \
  -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 --decode | openssl x509 -text -noout
Example output
...
Issuer: O = cert-manager + O = cluster.local, CN = istio-ca
...
X509v3 Subject Alternative Name:
    URI:spiffe://cluster.local/ns/apps-1/sa/httpbin
5.1.3. Uninstalling Service Mesh with the cert-manager Operator by using the istio-csr agent Copy linkLink copied to clipboard!
You can uninstall the istio-csr agent integration between the cert-manager Operator and OpenShift Service Mesh by completing the following procedure. Before you remove the following resources, verify that no Red Hat OpenShift Service Mesh or Istio components reference the IstioCSR resource.
Procedure
Remove the IstioCSR custom resource by running the following command:
$ oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default
Remove the related resources:
List the cluster-scoped resources by running the following command:
$ oc get clusterrolebindings,clusterroles -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr"
Save the names of the listed resources for later reference.
List the resources in the namespace where the istio-csr agent is deployed by running the following command:
$ oc get certificate,deployments,services,serviceaccounts -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name>
Save the names of the listed resources for later reference.
List the resources in the namespaces where Red Hat OpenShift Service Mesh or Istio is deployed by running the following command:
$ oc get roles,rolebindings \ -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" \ -n <istio_csr_project_name>Save the names of the listed resources for later reference.
For each resource listed in the previous steps, delete it by running the following command:
$ oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>
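Because the listing commands above select resources by label, you can also delete them in bulk with the same selectors. This is a sketch rather than an officially documented shortcut; review what the selectors match before deleting:
$ oc delete clusterrolebindings,clusterroles -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr"
$ oc delete certificate,deployments,services,serviceaccounts,roles,rolebindings -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name>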
Chapter 6. Multi-cluster topologies Copy linkLink copied to clipboard!
Multi-cluster topologies are useful for organizations with distributed systems or environments seeking enhanced scalability, fault tolerance, and regional redundancy.
6.1. About multi-cluster mesh topologies Copy linkLink copied to clipboard!
In a multi-cluster mesh topology, you install and manage a single Istio mesh across multiple OpenShift Container Platform clusters, enabling communication and service discovery between the services. Two factors determine the multi-cluster mesh topology: the control plane topology and the network topology. Each factor has two options, so there are four possible multi-cluster mesh configurations.
- Multi-Primary Single Network: Combines the multi-primary control plane model with the single network model.
- Multi-Primary Multi-Network: Combines the multi-primary control plane model with the multi-network model.
- Primary-Remote Single Network: Combines the primary-remote control plane model with the single network model.
- Primary-Remote Multi-Network: Combines the primary-remote control plane model with the multi-network model.
6.1.1. Control plane topology models Copy linkLink copied to clipboard!
A multi-cluster mesh must use one of the following control plane topologies:
- Multi-Primary: In this configuration, a control plane resides on every cluster. Each control plane observes the API servers in all of the other clusters for services and endpoints.
- Primary-Remote: In this configuration, the control plane resides only on one cluster, called the primary cluster. No control plane runs on any of the other clusters, called remote clusters. The control plane on the primary cluster discovers services and endpoints and configures the sidecar proxies for the workloads in all clusters.
6.1.2. Network topology models Copy linkLink copied to clipboard!
A multi-cluster mesh must use one of the following network topologies:
- Single Network: All clusters reside on the same network and there is direct connectivity between the services in all the clusters. There is no need to use gateways for communication between the services across cluster boundaries.
- Multi-Network: Clusters reside on different networks and there is no direct connectivity between services. Gateways must be used to enable communication across network boundaries.
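The topology models map directly to fields in the Istio resource. The following fragment is an illustration only, using field names from the installation procedures later in this chapter: meshID is shared by every cluster in the mesh, clusterName is unique per cluster, network reflects the network topology, and externalIstiod: true on the primary cluster selects the primary-remote control plane topology.
Example Istio values fragment
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      externalIstiod: true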
6.2. Multi-Cluster configuration overview Copy linkLink copied to clipboard!
To configure a multi-cluster topology, you must perform the following actions:
- Install the OpenShift Service Mesh Operator for each cluster.
- Create or have access to root and intermediate certificates for each cluster.
- Apply the security certificates for each cluster.
- Install Istio for each cluster.
6.2.1. Creating certificates for a multi-cluster topology Copy linkLink copied to clipboard!
Create the root and intermediate certificate authority (CA) certificates for two clusters.
Prerequisites
- You have OpenSSL installed locally.
Procedure
Create the root CA certificate:
Create a key for the root certificate by running the following command:
$ openssl genrsa -out root-key.pem 4096
Create an OpenSSL configuration file named root-ca.conf for the root CA certificate:
Example root certificate configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
[ req_dn ]
O = Istio
CN = Root CA
Create the certificate signing request by running the following command:
$ openssl req -sha256 -new -key root-key.pem \
  -config root-ca.conf \
  -out root-cert.csr
Create a shared root certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -signkey root-key.pem \
  -extensions req_ext -extfile root-ca.conf \
  -in root-cert.csr \
  -out root-cert.pem
Create the intermediate CA certificate for the East cluster:
Create a directory named east by running the following command:
$ mkdir east
Create a key for the intermediate certificate for the East cluster by running the following command:
$ openssl genrsa -out east/ca-key.pem 4096
Create an OpenSSL configuration file named intermediate.conf in the east/ directory for the intermediate certificate of the East cluster. Copy the following example file and save it locally:
Example configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san
[ san ]
DNS.1 = istiod.istio-system.svc
[ req_dn ]
O = Istio
CN = Intermediate CA
L = east
Create a certificate signing request by running the following command:
$ openssl req -new -config east/intermediate.conf \
  -key east/ca-key.pem \
  -out east/cluster-ca.csr
Create the intermediate CA certificate for the East cluster by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile east/intermediate.conf \
  -in east/cluster-ca.csr \
  -out east/ca-cert.pem
Create a certificate chain from the intermediate and root CA certificates for the East cluster by running the following command:
$ cat east/ca-cert.pem root-cert.pem > east/cert-chain.pem && cp root-cert.pem east
Create the intermediate CA certificate for the West cluster:
Create a directory named west by running the following command:
$ mkdir west
Create a key for the intermediate certificate for the West cluster by running the following command:
$ openssl genrsa -out west/ca-key.pem 4096
Create an OpenSSL configuration file named intermediate.conf in the west/ directory for the intermediate certificate of the West cluster. Copy the following example file and save it locally:
Example configuration file
[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san
[ san ]
DNS.1 = istiod.istio-system.svc
[ req_dn ]
O = Istio
CN = Intermediate CA
L = west
Create a certificate signing request by running the following command:
$ openssl req -new -config west/intermediate.conf \
  -key west/ca-key.pem \
  -out west/cluster-ca.csr
Create the certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem \
  -CAkey root-key.pem -CAcreateserial \
  -extensions req_ext -extfile west/intermediate.conf \
  -in west/cluster-ca.csr \
  -out west/ca-cert.pem
Create the certificate chain by running the following command:
$ cat west/ca-cert.pem root-cert.pem > west/cert-chain.pem && cp root-cert.pem west
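The end-to-end flow above can be exercised locally before involving any cluster. The following condensed sketch uses throwaway file names, 2048-bit keys for speed, and the -subj option instead of the configuration files, so it carries no CA extensions and is not a substitute for the procedure above; it only demonstrates that an intermediate signed this way validates against its root:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Root CA: private key plus a self-signed certificate
openssl genrsa -out root-key.pem 2048
openssl req -x509 -new -key root-key.pem -sha256 -days 3650 \
  -subj "/O=Istio/CN=Root CA" -out root-cert.pem

# Intermediate CA: private key and a CSR signed by the root
openssl genrsa -out ca-key.pem 2048
openssl req -new -key ca-key.pem \
  -subj "/O=Istio/CN=Intermediate CA/L=east" -out cluster-ca.csr
openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem -CAkey root-key.pem -CAcreateserial \
  -in cluster-ca.csr -out ca-cert.pem

# The intermediate certificate must validate against the root
openssl verify -CAfile root-cert.pem ca-cert.pem
```

On success, the last command prints ca-cert.pem: OK.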
6.2.2. Applying certificates to a multi-cluster topology Copy linkLink copied to clipboard!
Apply root and intermediate certificate authority (CA) certificates to the clusters in a multi-cluster topology.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have access to two OpenShift Container Platform clusters with external load balancer support.
- You have created the root CA certificate and intermediate CA certificates for each cluster, or they have been made available to you.
Procedure
Apply the certificates to the East cluster of the multi-cluster topology:
Log in to the East cluster by running the following command:
$ oc login -u <username> https://<east_cluster_api_server_url>
Set up the environment variable that contains the oc command context for the East cluster by running the following command:
$ export CTX_CLUSTER1=$(oc config current-context)
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CLUSTER1}" || oc new-project istio-system --context "${CTX_CLUSTER1}"
Configure Istio to use network1 as the default network for the pods on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Create the CA certificates, certificate chain, and the private key for Istio on the East cluster by running the following command:
$ oc get secret -n istio-system --context "${CTX_CLUSTER1}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER1}" \
  --from-file=east/ca-cert.pem \
  --from-file=east/ca-key.pem \
  --from-file=east/root-cert.pem \
  --from-file=east/cert-chain.pem
Note
If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates will reside in the east/ directory. If your certificates reside in a different directory, modify the syntax accordingly.
Apply the certificates to the West cluster of the multi-cluster topology:
Log in to the West cluster by running the following command:
$ oc login -u <username> https://<west_cluster_api_server_url>
Set up the environment variable that contains the oc command context for the West cluster by running the following command:
$ export CTX_CLUSTER2=$(oc config current-context)
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CLUSTER2}" || oc new-project istio-system --context "${CTX_CLUSTER2}"
Configure Istio to use network2 as the default network for the pods on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Create the CA certificate secret for Istio on the West cluster by running the following command:
$ oc get secret -n istio-system --context "${CTX_CLUSTER2}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER2}" \
  --from-file=west/ca-cert.pem \
  --from-file=west/ca-key.pem \
  --from-file=west/root-cert.pem \
  --from-file=west/cert-chain.pem
Note
If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates will reside in the west/ directory. If the certificates reside in a different directory, modify the syntax accordingly.
Next steps
Install Istio on all the clusters comprising the mesh topology.
6.3. Installing a multi-primary multi-network mesh Copy linkLink copied to clipboard!
Install Istio in the multi-primary multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have created certificates for the multi-cluster mesh.
- You have applied certificates to the multi-cluster topology.
- You have created an Istio Container Network Interface (CNI) resource.
- You have installed istioctl.
In on-premise environments, such as those running on bare metal, OpenShift Container Platform clusters often do not include a native load-balancer capability. A service of type LoadBalancer is required to expose the istio-eastwestgateway deployment to other clusters, so you can install the MetalLB Operator to provide this capability on the following platforms:
- VMware vSphere
- IBM Z® and IBM® LinuxONE
- IBM Z® and IBM® LinuxONE for Red Hat Enterprise Linux (RHEL) KVM
- IBM Power®
For more information, see MetalLB Operator.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:
$ export ISTIO_VERSION=1.24.3
Install Istio on the East cluster:
Create an Istio resource on the East cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF
Wait for the control plane to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Create an Istio resource on the West cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF
Wait for the control plane to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Create the istio-reader-service-account service account for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" create serviceaccount istio-reader-service-account -n istio-system
Create the istio-reader-service-account service account for the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" create serviceaccount istio-reader-service-account -n istio-system
Add the cluster-reader role to the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system
Add the cluster-reader role to the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system
Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 \
  --create-service-account=false | \
  oc --context="${CTX_CLUSTER1}" apply -f -
Install a remote secret on the West cluster that provides access to the API server on the East cluster by running the following command:
$ istioctl create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 \
  --create-service-account=false | \
  oc --context="${CTX_CLUSTER2}" apply -f -
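To confirm that each control plane can reach the peer cluster through the applied remote secret, you can use the istioctl remote-clusters subcommand, which is available in recent istioctl releases:
$ istioctl remote-clusters --context="${CTX_CLUSTER1}"
Each remote cluster should be listed with a synced status. Repeat the command with --context="${CTX_CLUSTER2}" to check the West cluster.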
6.3.1. Verifying a multi-cluster topology Copy linkLink copied to clipboard!
Deploy sample applications and verify traffic on a multi-cluster topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have installed istioctl on the laptop you will use to run these instructions.
- You have installed a multi-cluster topology.
Procedure
Deploy sample applications on the East cluster:
Create a sample application namespace on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" get project sample || oc --context="${CTX_CLUSTER1}" new-project sample
Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample
Create the helloworld-v1 deployment by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l version=v1 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample
Wait for the helloworld application on the East cluster to return the "Ready" status condition by running the following command:
$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1
Wait for the sleep application on the East cluster to return the "Ready" status condition by running the following command:
$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
Deploy the sample applications on the West cluster:
Create a sample application namespace on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" get project sample || oc --context="${CTX_CLUSTER2}" new-project sample
Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l service=helloworld -n sample
Create the helloworld-v2 deployment by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
  -l version=v2 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
  -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample
Wait for the helloworld application on the West cluster to return the "Ready" status condition by running the following command:
$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2
Wait for the sleep application on the West cluster to return the "Ready" status condition by running the following command:
$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep
Verify that traffic flows between the clusters:
For the East cluster, send 10 requests to the helloworld service by running the following command:
$ for i in {0..9}; do \
  oc --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
done
Verify that you see responses from both clusters, meaning both version 1 and version 2 of the service appear in the responses.
For the West cluster, send 10 requests to the helloworld service by running the following command:
$ for i in {0..9}; do \
  oc --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
done
Verify that you see responses from both clusters, meaning both version 1 and version 2 of the service appear in the responses.
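Output similar to the following indicates that traffic is crossing the cluster boundary. The pod suffixes shown here are illustrative placeholders:
Example output
Hello version: v1, instance: helloworld-v1-<pod_suffix>
Hello version: v2, instance: helloworld-v2-<pod_suffix>
Hello version: v1, instance: helloworld-v1-<pod_suffix>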
6.3.2. Removing a multi-cluster topology from a development environment Copy linkLink copied to clipboard!
After experimenting with the multi-cluster functionality in a development environment, remove the multi-cluster topology from all the clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed a multi-cluster topology.
Procedure
Remove Istio and the sample applications from the East cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER1}" delete istio/default ns/istio-system ns/sample ns/istio-cni
Remove Istio and the sample applications from the West cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER2}" delete istio/default ns/istio-system ns/sample ns/istio-cni
6.4. Installing a primary-remote multi-network mesh Copy linkLink copied to clipboard!
Install Istio in a primary-remote multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have installed istioctl on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:
$ export ISTIO_VERSION=1.24.3
Install Istio on the East cluster:
Set the default network for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Create an Istio resource on the East cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      externalIstiod: true 1
EOF
1
- This enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.
Wait for the control plane to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
Expose the control plane through the gateway so that services in the West cluster can access the control plane by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-istiod.yaml
Expose the application services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Save the IP address of the East-West gateway running in the East cluster by running the following command:
$ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Create an Istio resource on the West cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
Annotate the istio-system namespace in the West cluster so that it is managed by the control plane in the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
Set the default network for the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    oc --context="${CTX_CLUSTER1}" apply -f -
Wait for the Istio resource to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
Note: Because the West cluster is installed with a remote profile, exposing the application services on the East cluster exposes them on the East-West gateways of both clusters.
6.5. Installing Kiali in a multi-cluster mesh
Install Kiali in a multi-cluster mesh configuration on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the latest Kiali Operator on each cluster.
- You have installed Istio in a multi-cluster configuration on each cluster.
- You have installed istioctl on the laptop you will use to run these instructions.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have configured a metrics store so that Kiali can query metrics from all the clusters. Kiali queries metrics and traces from their respective endpoints.
Procedure
Install Kiali on the East cluster:
Create a YAML file named kiali.yaml that defines the Kiali deployment in the istio-system namespace.
Example configuration
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  external_services:
    prometheus:
      auth:
        type: bearer
        use_kiali_token: true
      thanos_proxy:
        enabled: true
        url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
Note: The endpoint for this example uses OpenShift Monitoring to configure metrics. For more information, see "Configuring OpenShift Monitoring with Kiali".
Apply the YAML file on the East cluster by running the following command:
$ oc --context cluster1 apply -f kiali.yaml
Ensure that the Kiali custom resource (CR) is ready by running the following command:
$ oc wait --context cluster1 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m
Example output
kiali.kiali.io/kiali condition met
Display your Kiali Route hostname by running the following command:
$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'
Example output
kiali-istio-system.apps.example.com
Create a YAML file named kiali-remote.yaml that defines a Kiali CR on the West cluster.
Example configuration
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  auth:
    openshift:
      redirect_uris:
      # Replace kiali-route-hostname with the hostname from the previous step.
      - "https://{kiali-route-hostname}/api/auth/callback/cluster2"
  deployment:
    remote_cluster_resources_only: true
The Kiali Operator creates the resources necessary for the Kiali server on the East cluster to connect to the West cluster. The Kiali server is not installed on the West cluster.
Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-remote.yaml
Ensure that the Kiali CR is ready by running the following command:
$ oc wait --context cluster2 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m
Create a remote cluster secret so that the Kiali installation in the East cluster can access the West cluster.
Create a YAML file named kiali-svc-account-token.yaml that defines a long-lived API token bound to the kiali-service-account in the West cluster. Kiali uses this token to authenticate to the West cluster.
Example configuration
apiVersion: v1
kind: Secret
metadata:
  name: "kiali-service-account"
  namespace: "istio-system"
  annotations:
    kubernetes.io/service-account.name: "kiali-service-account"
type: kubernetes.io/service-account-token
Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-svc-account-token.yaml
file and save it as a secret in the namespace on the East cluster where the Kiali deployment resides.kubeconfigTo simplify this process, use the
script to generate thekiali-prepare-remote-cluster.shfile by running the followingkubeconfigcommand:curl$ curl -L -o kiali-prepare-remote-cluster.sh https://raw.githubusercontent.com/kiali/kiali/master/hack/istio/multicluster/kiali-prepare-remote-cluster.shModify the script to make it executeable by running the following command:
chmod +x kiali-prepare-remote-cluster.shExecute the script so that it passes the East and West cluster contexts to the
file by running the following command:kubeconfig$ ./kiali-prepare-remote-cluster.sh --kiali-cluster-context cluster1 --remote-cluster-context cluster2 --view-only false --kiali-resource-name kiali-service-account --remote-cluster-namespace istio-system --process-kiali-secret true --process-remote-resources false --remote-cluster-name cluster2NoteUse the
option to display additional details about how to use the script.--help
Trigger the reconciliation loop so that the Kiali Operator registers the remote secret that the CR contains by running the following command:
$ oc --context cluster1 annotate kiali kiali -n istio-system --overwrite kiali.io/reconcile="$(date)"
Wait for the Kiali resource to become ready by running the following command:
$ oc --context cluster1 wait --for=condition=Successful --timeout=2m kialis/kiali -n istio-system
Wait for the Kiali server to become ready by running the following command:
$ oc --context cluster1 rollout status deployments/kiali -n istio-system
Log in to Kiali.
- When you first access Kiali, log in to the cluster that contains the Kiali deployment. In this example, access the East cluster.
- Display the hostname of the Kiali route by running the following command:
$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'
- Navigate to the Kiali URL in your browser: https://<your-kiali-route-hostname>.
Log in to the West cluster through Kiali.
To see other clusters in the Kiali UI, you must first log in to those clusters through Kiali.
- Click the user profile dropdown in the upper-right menu.
- Select Login to West. You are redirected to an OpenShift login page and prompted for credentials for the West cluster.
Verify that Kiali shows information from both clusters.
- Click Overview and verify that you can see namespaces from both clusters.
- Click Navigate and verify that you see both clusters on the mesh graph.
Chapter 7. Deploying multiple service meshes on a single cluster
You can use Red Hat OpenShift Service Mesh to operate multiple service meshes in a single cluster, with each mesh managed by a separate control plane. Using discovery selectors and revisions prevents conflicts between control planes.
7.1. About deploying multiple control planes
To configure a cluster to host two control planes, set up separate Istio resources with unique names in independent Istio system namespaces. Assign a unique revision name to each Istio resource to identify the control plane and the workloads or namespaces it manages. Apply these revision names to workloads or namespaces by using the istio.io/rev injection label.
Each Istio control plane manages its own root certificate through the istio-ca-root-cert config map in the namespaces it discovers.
When adding an additional Istio control plane to a cluster with an existing control plane, ensure that the existing Istio resource uses discovery selectors so that the control planes do not manage the same namespaces.
Only one IstioCNI resource is required for each cluster, and all of the control planes on the cluster share it.
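The pairing between a control plane and its namespaces can be sketched as follows. This is a minimal illustration only, not a complete installation; the names my-mesh, istio-system-a, and my-app-ns, and the istio-discovery label key, are hypothetical placeholders:

```yaml
# Hypothetical sketch: one control plane scoped to its own namespaces.
kind: Istio
apiVersion: sailoperator.io/v1
metadata:
  name: my-mesh                      # unique revision name for this control plane
spec:
  namespace: istio-system-a          # independent Istio system namespace
  values:
    meshConfig:
      discoverySelectors:            # only namespaces with this label are discovered
        - matchLabels:
            istio-discovery: my-mesh
---
# A workload namespace opts in to this control plane through labels.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-ns
  labels:
    istio-discovery: my-mesh         # matches the discovery selector above
    istio.io/rev: my-mesh            # maps sidecar injection to this revision
```

The following procedures show the complete, working configuration for two control planes.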
7.2. Using multiple control planes on a single cluster
You can use discovery selectors to limit the visibility of an Istio control plane to specific namespaces in a cluster. By combining discovery selectors with control plane revisions, you can deploy multiple control planes in a single cluster, ensuring that each control plane manages only its assigned namespaces. This approach avoids conflicts between control planes and enables soft multi-tenancy for service meshes.
7.2.1. Deploying the first control plane
You deploy the first control plane by creating its assigned namespace.
Prerequisites
- You have installed the OpenShift Service Mesh Operator.
You have created an Istio Container Network Interface (CNI) resource.
Note: You can run the following command to check for existing Istio instances:
$ oc get istios
- You have installed the istioctl binary on your localhost.
You can deploy more than two control planes. The maximum number of service meshes in a single cluster depends on the available cluster resources.
Procedure
Create the namespace for the first Istio control plane called istio-system-1 by running the following command:
$ oc new-project istio-system-1
Add the following label to the first namespace, which is used with the Istio discoverySelectors field, by running the following command:
$ oc label namespace istio-system-1 istio-discovery=mesh-1
Create a YAML file named istio-1.yaml with the name mesh-1 and mesh-1 as the discoverySelector:
Example configuration
kind: Istio
apiVersion: sailoperator.io/v1
metadata:
  name: mesh-1
spec:
  namespace: istio-system-1
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: mesh-1
# ...
Create the first Istio resource by running the following command:
$ oc apply -f istio-1.yaml
To restrict workloads in mesh-1 from communicating freely with decrypted traffic between meshes, deploy a PeerAuthentication resource to enforce mutual TLS (mTLS) traffic within the mesh-1 data plane. Apply the PeerAuthentication resource in the istio-system-1 namespace by using a configuration file, such as peer-auth-1.yaml:
$ oc apply -f peer-auth-1.yaml
Example configuration
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "mesh-1-peerauth"
  namespace: "istio-system-1"
spec:
  mtls:
    mode: STRICT
7.2.2. Deploying the second control plane
After deploying the first control plane, you can deploy the second control plane by creating its assigned namespace.
Procedure
Create a namespace for the second Istio control plane called istio-system-2 by running the following command:
$ oc new-project istio-system-2
Add the following label to the second namespace, which is used with the Istio discoverySelectors field, by running the following command:
$ oc label namespace istio-system-2 istio-discovery=mesh-2
Create a YAML file named istio-2.yaml:
Example configuration
kind: Istio
apiVersion: sailoperator.io/v1
metadata:
  name: mesh-2
spec:
  namespace: istio-system-2
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: mesh-2
# ...
Create the second Istio resource by running the following command:
$ oc apply -f istio-2.yaml
Deploy a policy for workloads in the istio-system-2 namespace to accept only mutual TLS traffic by applying peer-auth-2.yaml. Run the following command:
$ oc apply -f peer-auth-2.yaml
Example configuration
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "mesh-2-peerauth"
  namespace: "istio-system-2"
spec:
  mtls:
    mode: STRICT
7.2.3. Verifying multiple control planes
Verify that both of the Istio control planes are deployed and running properly. You can validate that the istiod pod for each control plane is in the Running state.
Verify that the control plane in istio-system-1 is running by running the following command:
$ oc get pods -n istio-system-1
Example output
NAME                            READY   STATUS    RESTARTS   AGE
istiod-mesh-1-b69646b6f-kxrwk   1/1     Running   0          4m14s
Verify that the control plane in istio-system-2 is running by running the following command:
$ oc get pods -n istio-system-2
Example output
NAME                            READY   STATUS    RESTARTS   AGE
istiod-mesh-2-8666fdfc6-mqp45   1/1     Running   0          118s
7.3. Deploying application workloads in each mesh
To deploy application workloads, assign each workload to a separate namespace.
Procedure
Create an application namespace called app-ns-1 by running the following command:
$ oc create namespace app-ns-1
To ensure that the namespace is discovered by the first control plane, add the istio-discovery=mesh-1 label by running the following command:
$ oc label namespace app-ns-1 istio-discovery=mesh-1
To enable sidecar injection into all the pods by default while ensuring that pods in this namespace are mapped to the first control plane, add the istio.io/rev=mesh-1 label to the namespace by running the following command:
$ oc label namespace app-ns-1 istio.io/rev=mesh-1
Optional: You can verify the mesh-1 revision name by running the following command:
$ oc get istiorevisions
Deploy the sleep and httpbin applications by running the following command:
$ oc apply -n app-ns-1 \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Wait for the httpbin and sleep pods to run with sidecars injected by running the following command:
$ oc get pods -n app-ns-1
Example output
NAME                       READY   STATUS    RESTARTS   AGE
httpbin-7f56dc944b-kpw2x   2/2     Running   0          2m26s
sleep-5577c64d7c-b5wd2     2/2     Running   0          91m
Create a second application namespace called app-ns-2 by running the following command:
$ oc create namespace app-ns-2
Create a third application namespace called app-ns-3 by running the following command:
$ oc create namespace app-ns-3
Add the istio-discovery=mesh-2 label and the istio.io/rev=mesh-2 revision label to both namespaces to match the discovery selector of the second control plane by running the following command:
$ oc label namespace app-ns-2 app-ns-3 istio-discovery=mesh-2 istio.io/rev=mesh-2
Deploy the sleep and httpbin applications to the app-ns-2 namespace by running the following command:
$ oc apply -n app-ns-2 \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Deploy the sleep and httpbin applications to the app-ns-3 namespace by running the following command:
$ oc apply -n app-ns-3 \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Optional: Use the following command to wait for a deployment to be available:
$ oc wait deployments -n app-ns-2 --all --for condition=Available
Verification
Verify that each application workload is managed by its assigned control plane by using the istioctl ps command after deploying the applications:
Verify that the workloads are assigned to the control plane in istio-system-1 by running the following command:
$ istioctl ps -i istio-system-1
Example output
NAME                                CLUSTER      CDS            LDS            EDS            RDS            ECDS      ISTIOD                          VERSION
httpbin-7f56dc944b-vwfm5.app-ns-1   Kubernetes   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   IGNORED   istiod-mesh-1-b69646b6f-kxrwk   1.23.0
sleep-5577c64d7c-d675f.app-ns-1     Kubernetes   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   IGNORED   istiod-mesh-1-b69646b6f-kxrwk   1.23.0
Verify that the workloads are assigned to the control plane in istio-system-2 by running the following command:
$ istioctl ps -i istio-system-2
Example output
NAME                                CLUSTER      CDS              LDS              EDS              RDS              ECDS      ISTIOD                          VERSION
httpbin-7f56dc944b-54gjs.app-ns-3   Kubernetes   SYNCED (3m59s)   SYNCED (3m59s)   SYNCED (3m59s)   SYNCED (3m59s)   IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
httpbin-7f56dc944b-gnh72.app-ns-2   Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
sleep-5577c64d7c-k9mxz.app-ns-2     Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
sleep-5577c64d7c-m9hvm.app-ns-3     Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
Verify that the application connectivity is restricted to workloads within their respective mesh:
Send a request from the sleep pod in app-ns-1 to the httpbin service in app-ns-2 to check that the communication fails by running the following command:
$ oc -n app-ns-1 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
The PeerAuthentication resources created earlier enforce mutual TLS (mTLS) traffic in STRICT mode within each mesh. Each mesh uses its own root certificate, managed by the istio-ca-root-cert config map, which prevents communication between meshes. The output indicates a communication failure, similar to the following example:
Example output
HTTP/1.1 503 Service Unavailable
content-length: 95
content-type: text/plain
date: Wed, 16 Oct 2024 12:05:37 GMT
server: envoy
Confirm that the communication works by sending a request from the sleep pod in the app-ns-2 namespace to the httpbin service in the app-ns-3 namespace, both of which are managed by mesh-2. Run the following command:
$ oc -n app-ns-2 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
Example output
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
content-type: text/html; charset=utf-8
date: Wed, 16 Oct 2024 12:06:30 GMT
x-envoy-upstream-service-time: 8
server: envoy
transfer-encoding: chunked
Chapter 8. External control plane topology
You can use the external control plane topology to isolate the control plane from the data plane on separate clusters.
8.1. About external control plane topology
The external control plane topology improves security and allows the Service Mesh to be hosted as a service. In this installation configuration, one cluster hosts and manages the Istio control plane, and applications are hosted on other clusters.
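At a high level, this topology pairs a remote-profile Istio resource on the data plane cluster with an istiod that runs on the control plane cluster. The following is a minimal sketch only; the resource names and the address placeholder are illustrative, and the installation procedure below shows the complete, working configuration:

```yaml
# Sketch: the data plane cluster points its proxies at an external istiod.
# (Illustrative values only; see the installation procedure for details.)
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  namespace: external-istiod
  profile: remote                                   # no local istiod runs on this cluster
  values:
    global:
      remotePilotAddress: <control_plane_ingress_ip>  # istiod exposed by the control plane cluster
```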
8.1.1. Installing the control plane and data plane on separate clusters
Install Istio on a control plane cluster and a separate data plane cluster. This installation approach provides increased security.
You can adapt these instructions for a mesh spanning more than one data plane cluster. You can also adapt these instructions for multiple meshes with multiple control planes on the same control plane cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on the control plane cluster and the data plane cluster.
- You have installed istioctl on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install on all the clusters by running the following command:
$ export ISTIO_VERSION=1.24.3
Create a REMOTE_CLUSTER_NAME environment variable that defines the name of the cluster by running the following command:
$ export REMOTE_CLUSTER_NAME=cluster1
Set up the environment variable that contains the oc command context for the control plane cluster by running the following command:
$ export CTX_CONTROL_PLANE_CLUSTER=<context_name_of_the_control_plane_cluster>
Set up the environment variable that contains the oc command context for the data plane cluster by running the following command:
$ export CTX_DATA_PLANE_CLUSTER=<context_name_of_the_data_plane_cluster>
Set up the ingress gateway for the control plane:
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}"
Create an Istio resource on the control plane cluster to manage the ingress gateway by running the following command:
$ cat <<EOF | oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      network: network1
EOF
Create the ingress gateway for the control plane by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/controlplane-gateway.yaml
Get the assigned IP address for the ingress gateway by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Store the IP address of the ingress gateway in an environment variable by running the following command:
$ export EXTERNAL_ISTIOD_ADDR=$(oc -n istio-system --context="${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Install Istio on the data plane cluster:
Create a project called external-istiod on the data plane cluster by running the following command:
$ oc get project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}"
Create an Istio resource on the data plane cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_DATA_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  version: v${ISTIO_VERSION}
  namespace: external-istiod
  profile: remote
  values:
    defaultRevision: external-istiod
    global:
      remotePilotAddress: ${EXTERNAL_ISTIOD_ADDR}
      configCluster: true 1
    pilot:
      configMap: true
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network1
EOF
1 This setting identifies the data plane cluster as the source of the mesh configuration.
Create a project called istio-cni on the data plane cluster by running the following command:
$ oc get project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}"
Create an IstioCNI resource on the data plane cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_DATA_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
EOF
Set up the external Istio control plane on the control plane cluster:
Create a project called external-istiod on the control plane cluster by running the following command:
$ oc get project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}"
Create a ServiceAccount resource on the control plane cluster by running the following command:
$ oc --context="${CTX_CONTROL_PLANE_CLUSTER}" create serviceaccount istiod-service-account -n external-istiod
Store the API server address for the data plane cluster in an environment variable by running the following command:
$ DATA_PLANE_API_SERVER=https://<hostname_or_IP_address_of_the_API_server_for_the_data_plane_cluster>:6443
Install a remote secret on the control plane cluster that provides access to the API server on the data plane cluster by running the following command:
$ istioctl create-remote-secret \
    --context="${CTX_DATA_PLANE_CLUSTER}" \
    --type=config \
    --namespace=external-istiod \
    --service-account=istiod-external-istiod \
    --create-service-account=false \
    --server="${DATA_PLANE_API_SERVER}" | \
    oc --context="${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
Create an Istio resource on the control plane cluster by running the following command:
$ cat <<EOF | oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  version: v${ISTIO_VERSION}
  namespace: external-istiod
  profile: empty
  values:
    meshConfig:
      rootNamespace: external-istiod
      defaultConfig:
        discoveryAddress: $EXTERNAL_ISTIOD_ADDR:15012
    pilot:
      enabled: true
      volumes:
        - name: config-volume
          configMap:
            name: istio-external-istiod
        - name: inject-volume
          configMap:
            name: istio-sidecar-injector-external-istiod
      volumeMounts:
        - name: config-volume
          mountPath: /etc/istio/config
        - name: inject-volume
          mountPath: /var/lib/istio/inject
      env:
        INJECTION_WEBHOOK_CONFIG_NAME: "istio-sidecar-injector-external-istiod-external-istiod"
        VALIDATION_WEBHOOK_CONFIG_NAME: "istio-validator-external-istiod-external-istiod"
        EXTERNAL_ISTIOD: "true"
        LOCAL_CLUSTER_SECRET_WATCHER: "true"
        CLUSTER_ID: cluster2
        SHARED_MESH_CONFIG: istio
    global:
      caAddress: $EXTERNAL_ISTIOD_ADDR:15012
      configValidation: false
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
EOF
Create Gateway and VirtualService resources so that the sidecar proxies on the data plane cluster can access the control plane by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: external-istiod-gw
  namespace: external-istiod
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 15012
        protocol: tls
        name: tls-xds
      tls:
        mode: PASSTHROUGH
      hosts:
        - "*"
    - port:
        number: 15017
        protocol: tls
        name: tls-webhook
      tls:
        mode: PASSTHROUGH
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: external-istiod-vs
  namespace: external-istiod
spec:
  hosts:
    - "*"
  gateways:
    - external-istiod-gw
  tls:
    - match:
        - port: 15012
          sniHosts:
            - "*"
      route:
        - destination:
            host: istiod-external-istiod.external-istiod.svc.cluster.local
            port:
              number: 15012
    - match:
        - port: 15017
          sniHosts:
            - "*"
      route:
        - destination:
            host: istiod-external-istiod.external-istiod.svc.cluster.local
            port:
              number: 443
EOF
Wait for the external-istiod Istio resource on the control plane cluster to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m
Wait for the Istio resource on the data plane cluster to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m
Wait for the IstioCNI resource on the data plane cluster to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istiocni/default --timeout=3m
Verification
Deploy sample applications on the data plane cluster:
Create a namespace for sample applications on the data plane cluster by running the following command:
$ oc --context "${CTX_DATA_PLANE_CLUSTER}" get project sample || oc --context="${CTX_DATA_PLANE_CLUSTER}" new-project sample
Label the namespace for the sample applications to support sidecar injection by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" label namespace sample istio.io/rev=external-istiod
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
    -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
Create the helloworld-v1 deployment by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
    -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
    -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/sleep/sleep.yaml -n sample
Verify that the pods in the sample namespace have a sidecar injected by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" get pods -n sample
The terminal should return 2/2 for each pod in the sample namespace.
Example output
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-6d65866976-jb6qc   2/2     Running   0          1m
sleep-5fcd8fd6c8-mg8n2           2/2     Running   0          1m
Verify that internal traffic can reach the applications on the cluster:
Verify that a request can be sent to the helloworld application through the sleep application by running the following command:
$ oc exec --context="${CTX_DATA_PLANE_CLUSTER}" -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello
The terminal should return a response from the helloworld application:
Example output
Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc
Install an ingress gateway to expose the sample application to external clients:
Create the ingress gateway by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/refs/heads/main/chart/samples/ingress-gateway.yaml -n sampleConfirm that the ingress gateway is running by running the following command:
$ oc get pod -l app=istio-ingressgateway -n sample --context="${CTX_DATA_PLANE_CLUSTER}"
The terminal should return output confirming that the gateway is running:
Example output
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s
Expose the helloworld application through the ingress gateway by running the following command:
$ oc apply -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld-gateway.yaml -n sample --context="${CTX_DATA_PLANE_CLUSTER}"
Set the gateway URL environment variable by running the following command:
$ export INGRESS_HOST=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'); \
  export INGRESS_PORT=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'); \
  export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
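On some platforms the LoadBalancer service publishes a hostname rather than an IP address (common on AWS-based clusters such as ROSA), so the .ip jsonpath query can return an empty string. A minimal shell sketch of the fallback logic, using hypothetical example values:

```shell
#!/bin/sh
# Compose GATEWAY_URL from whichever of IP or hostname the
# LoadBalancer ingress actually provides.
compose_gateway_url() {
  ip="$1"; host="$2"; port="$3"
  printf '%s:%s\n' "${ip:-$host}" "$port"   # prefer the IP, else the hostname
}

# Hypothetical values for illustration:
compose_gateway_url "203.0.113.7" "" 80                  # IP present
compose_gateway_url "" "abc123.elb.amazonaws.com" 80     # hostname only
```

On a live cluster you would feed in the results of the two oc jsonpath queries shown above, adding a third query for '{.status.loadBalancer.ingress[0].hostname}' as the fallback value.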
Verify that external traffic can reach the applications on the mesh:
Confirm that the helloworld application is accessible through the gateway by running the following command:
$ curl -s "http://${GATEWAY_URL}/hello"
The helloworld application should return a response.
Example output
Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc
Chapter 9. Istioctl tool
Use the istioctl tool to inspect, debug, and validate your Istio service mesh configuration from the command line.
9.1. Support for Istioctl
OpenShift Service Mesh 3 supports a selection of Istioctl commands.
| Command | Description |
|---|---|
| admin | Manage the control plane (istiod) |
| analyze | Analyze the Istio configuration and print validation messages |
| completion | Generate the autocompletion script for the specified shell |
| create-remote-secret | Create a secret with credentials to allow Istio to access remote Kubernetes API servers |
| help | Display help about any command |
| proxy-config | Retrieve information about the proxy configuration from Envoy (Kubernetes only) |
| proxy-status | Retrieve the synchronization status of each Envoy in the mesh |
| remote-clusters | List the remote clusters each istiod instance is connected to |
| validate | Validate the Istio policy and rules files |
| version | Print out the build version information |
| waypoint | Manage the waypoint configuration |
| ztunnel-config | Update or retrieve the current ztunnel configuration |
Any other command displays the WARNING: Not supported in OpenShift Service Mesh message. Do not use unsupported commands in production environments.
9.2. Installing the Istioctl tool
Install the istioctl tool to use the supported istioctl commands with Red Hat OpenShift Service Mesh.
Prerequisites
- You have access to the OpenShift Container Platform web console.
- The OpenShift Service Mesh 3 Operator is installed and running.
- You have created at least one Istio resource.
Procedure
Confirm which version of the Istio resource runs on the installation by running the following command:
$ oc get istio -ojsonpath="{range .items[*]}{.spec.version}{'\n'}{end}" | sed s/^v// | sort
If there are multiple Istio resources with different versions, choose the latest version. The latest version is displayed last.
- In the OpenShift Container Platform web console, click the Help icon and select Command Line Tools.
Click Download istioctl. Choose the version and architecture that matches your system.
Extract the istioctl binary file.
If you are using a Linux operating system, run the following command:
$ tar xzf istioctl-<VERSION>-<OS>-<ARCH>.tar.gz
- If you are using an Apple Mac operating system, unpack and extract the archive.
- If you are using a Microsoft Windows operating system, use the zip software to extract the archive.
Move to the uncompressed directory by running the following command:
$ cd istioctl-<VERSION>-<OS>-<ARCH>
Add the istioctl client to the path by running the following command:
$ export PATH=$PWD:$PATH
Confirm that the istioctl client version and the Istio control plane version match or are within one version by running the following command:
$ istioctl version
Sample output:
client version: 1.20.0
control plane version: 1.24.3_ossm
data plane version: none
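The "within one version" rule can be checked mechanically. A minimal sketch, assuming both versions share the same major number and that vendor suffixes such as _ossm are separated by an underscore:

```shell
#!/bin/sh
# Extract the minor number from a version string such as "1.24.3_ossm".
minor() { printf '%s\n' "$1" | sed 's/_.*//' | cut -d. -f2; }

# Succeed when the client and control plane minor versions differ
# by at most one.
within_one_minor() {
  d=$(( $(minor "$1") - $(minor "$2") ))
  [ "$d" -ge -1 ] && [ "$d" -le 1 ]
}

within_one_minor "1.23.0" "1.24.3_ossm" && echo "compatible"
within_one_minor "1.20.0" "1.24.3_ossm" || echo "version skew too large"
```

In practice you would feed in the client version and control plane version lines printed by istioctl version.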
Chapter 10. Enabling mutual Transport Layer Security
You can use Red Hat OpenShift Service Mesh to customize the communication security between the microservices in your application. Mutual Transport Layer Security (mTLS) is a protocol that enables two parties to authenticate each other.
10.1. About mutual Transport Layer Security (mTLS)
In OpenShift Service Mesh 3, you use the Istio resource rather than the ServiceMeshControlPlane resource to manage mesh-wide settings.
In OpenShift Service Mesh 3, you configure STRICT mTLS by using the PeerAuthentication and DestinationRule resources.
Review the following Istio resources and concepts:
PeerAuthentication
- Defines the type of mTLS traffic a sidecar accepts. In PERMISSIVE mode, both plaintext and mTLS traffic are accepted. In STRICT mode, only mTLS traffic is allowed.
DestinationRule
- Configures the type of TLS traffic a sidecar sends. In DISABLE mode, the sidecar sends plaintext. In SIMPLE, MUTUAL, and ISTIO_MUTUAL modes, the sidecar establishes a TLS connection.
Auto mTLS
- Ensures that all inter-mesh traffic is encrypted with mTLS by default, regardless of the PeerAuthentication mode configuration. Auto mTLS is controlled by the global mesh configuration field enableAutoMtls, which is enabled by default in OpenShift Service Mesh 2 and 3. The mTLS setting operates entirely between sidecar proxies, requiring no changes to application or service code.
By default, the PeerAuthentication policy is set to PERMISSIVE mode, so sidecars accept both plaintext and mTLS traffic.
10.2. Enabling strict mTLS mode by using the namespace
You can restrict workloads to accept only encrypted mTLS traffic by enabling STRICT mode in a PeerAuthentication policy scoped to the namespace.
Example PeerAuthentication policy for a namespace
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
name: default
namespace: <namespace>
spec:
mtls:
mode: STRICT
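A PeerAuthentication policy can also be scoped more narrowly than a whole namespace by adding a workload selector. The following is a sketch only; the app: httpbin label is a hypothetical example and not part of this procedure:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: httpbin-strict
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app: httpbin   # hypothetical workload label
  mtls:
    mode: STRICT
```

Workload-level policies take precedence over namespace-level policies for the matched pods.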
You can enable mTLS for all destination hosts in the <namespace> namespace by creating a DestinationRule resource with the MUTUAL or ISTIO_MUTUAL TLS mode. When auto mTLS is enabled and the PeerAuthentication policy is set to STRICT mode, you do not need to create a DestinationRule resource.
Example DestinationRule policy for a namespace
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
name: enable-mtls
namespace: <namespace>
spec:
host: "*.<namespace>.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
10.3. Enabling strict mTLS across the whole service mesh
You can configure mTLS across the entire mesh by applying the PeerAuthentication policy to the namespace where istiod is running, for example istio-system. The namespace where istiod is running must match the spec.namespace field of the Istio resource.
Example PeerAuthentication policy for the whole mesh
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
name: default
namespace: istio-system
spec:
mtls:
mode: STRICT
Additionally, create a DestinationRule resource to disable mTLS for traffic to the Kubernetes API server, because the API server does not have a sidecar. Apply the DestinationRule resource to the control plane namespace.
Example DestinationRule policy for the whole mesh
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
name: api-server
namespace: istio-system
spec:
host: kubernetes.default.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
10.4. Validating encryption with Kiali
The Kiali console offers several ways to validate whether your applications, services, and workloads have mutual Transport Layer Security (mTLS) encryption enabled.
The Services Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. Also note that Kiali displays a lock icon in the Network section next to ports that are configured for mTLS.
Chapter 11. Post-quantum cryptography
Post-quantum cryptography (PQC) provides cryptographic algorithms resistant to quantum computing threats, replacing traditional methods such as RSA and ECDSA that are vulnerable to quantum-based attacks.
11.1. About post-quantum cryptography (PQC) in service mesh
Post-quantum cryptography (PQC), also known as quantum-resistant cryptography, uses encryption algorithms designed to resist attacks from quantum computers.
Quantum computers use principles of quantum mechanics to perform certain calculations significantly faster than classical computers, compromising widely used cryptographic algorithms.
Most current encryption methods rely on mathematical problems that classical computers cannot solve in a practical time. Large-scale quantum computers could solve some of these problems more efficiently, which would weaken the security of existing cryptographic systems.
In Red Hat OpenShift Service Mesh, cryptographic algorithms protect control plane and data plane communications, including mutual TLS (mTLS) between workloads. Enabling PQC strengthens these communications by introducing quantum-resistant key exchange mechanisms while maintaining compatibility with existing infrastructure.
Post-quantum cryptography (PQC) algorithms are not available on OpenShift clusters running in FIPS mode.
11.2. Configuring service mesh with post-quantum cryptography (PQC) for gateways
Configure a quantum-secure gateway by using hybrid key exchange to protect service mesh ingress traffic against quantum computing threats.
Prerequisites
- You are logged in to the OpenShift Container Platform 4.19+ web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator 3.2.1+.
- You have deployed the Istio and IstioCNI resources.
- You have installed the following CLI tools locally:
  - oc
  - podman
  - curl
Procedure
Update the Istio control plane to enable PQC by running the following command:
$ oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.27.8
  namespace: istio-system
  updateStrategy:
    type: InPlace
  values:
    meshConfig:
      accessLogFile: /dev/stdout
      tlsDefaults:
        ecdhCurves:
        - X25519MLKEM768
EOF
- spec.values.meshConfig.tlsDefaults.ecdhCurves defines the setting that applies to all non-mesh Transport Layer Security (TLS) connections in your Istio deployment, including:
spec.values.meshConfig.tlsDefaults.ecdhCurves- Ingress gateways: TLS connections from external clients.
- Egress gateways: TLS connections to external services.
- External service connections: Any TLS connections to services outside the mesh.
Note: This setting does not apply to mesh-internal mutual Transport Layer Security (mTLS). Communication between services within the mesh uses the default Istio mTLS configuration.
- spec.values.meshConfig.tlsDefaults defines a mesh-wide setting that applies to all gateways and mesh-internal traffic. You cannot enable PQC algorithms for individual workloads. To use different TLS configurations for specific gateways, you must deploy separate control planes with unique meshConfig.tlsDefaults settings.
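Because per-gateway TLS profiles require a separate control plane, a second Istio resource carries its own tlsDefaults. The following is a hedged sketch; the name and namespace are hypothetical:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: pqc-gateways            # hypothetical second control plane
spec:
  version: v1.27.8
  namespace: istio-system-pqc   # hypothetical namespace
  updateStrategy:
    type: InPlace
  values:
    meshConfig:
      tlsDefaults:
        ecdhCurves:
        - X25519MLKEM768
```

Gateways attached to this control plane would negotiate the hybrid X25519MLKEM768 key exchange, while gateways on the default control plane keep its TLS defaults.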
11.3. Configuring service mesh with mesh-wide post-quantum cryptography (PQC)
Configure the Istio control plane to enforce a post-quantum cryptography (PQC) compliance policy, enabling quantum-resistant security for service mesh communications.
Prerequisites
- You are logged in to the OpenShift Container Platform 4.19+ web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator 3.2.1+.
- You have deployed the Istio and IstioCNI resources.
- You have installed the following CLI tools locally:
  - oc
  - podman
  - curl
Procedure
Update the Istio control plane to enable PQC by running the following command:
$ oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.27.8
  namespace: istio-system
  updateStrategy:
    type: InPlace
  values:
    pilot:
      env:
        COMPLIANCE_POLICY: "pqc"
EOF
- spec.values.pilot.env.COMPLIANCE_POLICY specifies the compliance policy that the Istio control plane enforces. Set the field to pqc to enable PQC.
11.4. Configuring service mesh in ambient mode with post-quantum cryptography (PQC)
Configure the Istio control plane and ztunnel to enforce a post-quantum cryptography (PQC) compliance policy, enabling quantum-resistant security for ambient mode service mesh communications.
Prerequisites
- You are logged in to the OpenShift Container Platform 4.19+ web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator 3.2.1+.
- You have deployed the Istio and IstioCNI resources with ambient mode enabled.
- You have installed the following CLI tools locally:
  - oc
  - podman
  - curl
Procedure
Update the Istio control plane and ztunnel to enable PQC by running the following command:
$ oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.27.8
  namespace: istio-system
  updateStrategy:
    type: InPlace
  values:
    pilot:
      env:
        COMPLIANCE_POLICY: "pqc"
    ztunnel:
      env:
        COMPLIANCE_POLICY: "pqc"
EOF
- spec.values.pilot.env.COMPLIANCE_POLICY specifies the compliance policy for the Istio control plane. Set the field to pqc to enable PQC.
- spec.values.ztunnel.env.COMPLIANCE_POLICY specifies the compliance policy for ztunnel in ambient mode. Set the field to pqc to enable PQC.