Installing OpenShift Service Mesh
Chapter 1. Supported platforms and configurations
Before you can install Red Hat OpenShift Service Mesh 3.0.4, you must subscribe to OpenShift Container Platform and install OpenShift Container Platform in a supported configuration. If you do not have a subscription on your Red Hat account, contact your sales representative for more information.
1.1. Supported platforms
Service Mesh control planes for this release are supported on the following platform versions:
- Red Hat OpenShift Container Platform version 4.14 or later
- Red Hat OpenShift Dedicated version 4
- Azure Red Hat OpenShift (ARO) version 4
- Red Hat OpenShift Service on AWS (ROSA)
The Red Hat OpenShift Service Mesh Operator supports multiple versions of Istio.
If you are installing Red Hat OpenShift Service Mesh on a restricted network, follow the instructions for your chosen OpenShift Container Platform infrastructure.
For additional information about Red Hat OpenShift Service Mesh lifecycle and supported platforms, refer to the Support Policy.
1.2. Supported configurations for Service Mesh
Red Hat OpenShift Service Mesh supports the following configurations:
- This release of Red Hat OpenShift Service Mesh is supported on OpenShift Container Platform x86_64, IBM Z®, IBM Power®, and Advanced RISC Machine (ARM).
- Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster.
- Configurations that do not integrate external services such as virtual machines.
Red Hat OpenShift Service Mesh does not support the EnvoyFilter configuration except where explicitly documented.
1.3. Supported network configurations
You can use the following OpenShift networking plugins with Red Hat OpenShift Service Mesh:
- OpenShift-SDN.
- OVN-Kubernetes. See About the OVN-Kubernetes network plugin for more information.
- Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information.
1.4. Supported configurations for Kiali
- The Kiali console is supported on Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers.
- The openshift authentication strategy is the only supported authentication configuration when Kiali is deployed with Red Hat OpenShift Service Mesh (OSSM). The openshift strategy controls access based on the user’s role-based access control (RBAC) roles of the OpenShift Container Platform.
Chapter 2. Installing OpenShift Service Mesh
Installing OpenShift Service Mesh consists of three main tasks: installing the Red Hat OpenShift Service Mesh Operator, deploying Istio, and customizing the Istio configuration. Then, you can also choose to install the sample bookinfo application to push data through the mesh and explore mesh functionality.
Before installing OpenShift Service Mesh 3, make sure that OpenShift Service Mesh 2 is not running in the same cluster: running both versions in one cluster causes conflicts unless they are configured correctly. To migrate from OpenShift Service Mesh 2, see Migrating from OpenShift Service Mesh 2.6.
2.1. About deploying Istio using the Red Hat OpenShift Service Mesh Operator
To deploy Istio using the Red Hat OpenShift Service Mesh Operator, you must create an Istio resource. Then, the Operator creates an IstioRevision resource, which represents one revision of the Istio control plane. Based on the IstioRevision resource, the Operator deploys the Istio control plane, which includes the istiod Deployment resource and other resources.
The Red Hat OpenShift Service Mesh Operator may create additional instances of the IstioRevision resource, depending on the update strategy defined in the Istio resource.
2.1.1. About Istio control plane update strategies
The update strategy affects how the update process is performed. The spec.updateStrategy field in the Istio resource configuration determines how the OpenShift Service Mesh Operator updates the Istio control plane. When the Operator detects a change in the spec.version field or identifies a new minor release with a configured vX.Y-latest alias, it initiates an upgrade procedure. For each mesh, you select one of two strategies:
- InPlace
- RevisionBased
InPlace is the default strategy for updating OpenShift Service Mesh.
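For illustration, a minimal sketch of an Istio resource that sets the update strategy explicitly; the name, namespace, and version values are placeholders, and the sailoperator.io/v1 API version is assumed:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24-latest   # alias form; a pinned version such as v1.24.6 also works
  updateStrategy:
    type: InPlace         # or RevisionBased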
2.2. Installing the Service Mesh Operator
For clusters without OpenShift Service Mesh instances, install the Service Mesh Operator. OpenShift Service Mesh operates cluster-wide and needs a scope configuration to prevent conflicts between Istio control planes. For clusters with OpenShift Service Mesh 3 or later, see "Deploying multiple service meshes on a single cluster".
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.14 or later.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
Procedure
- In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
- Search for the Red Hat OpenShift Service Mesh 3 Operator.
- Locate the Service Mesh Operator, and click to select it.
- When the prompt that discusses the community operator opens, click Continue.
- Click Install.
On the Install Operator page, perform the following steps:
- Select All namespaces on the cluster (default) as the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
- Select Automatic as the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) handles future upgrades to the Operator automatically. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
- Select an Update Channel.
  - Choose the stable channel to install the latest stable version of the Red Hat OpenShift Service Mesh 3 Operator. It is the default channel for installing the Operator.
  - To install a specific version of the Red Hat OpenShift Service Mesh 3 Operator, choose the corresponding stable-<version> channel. For example, to install the Red Hat OpenShift Service Mesh Operator version 3.0.x, use the stable-3.0 channel.
- Click Install to install the Operator.
Verification
- Click Operators → Installed Operators to verify that the Service Mesh Operator is installed. Succeeded should appear in the Status column.
2.2.1. About Service Mesh custom resource definitions
Installing the Red Hat OpenShift Service Mesh Operator also installs custom resource definitions (CRDs) that administrators can use to configure Istio for Service Mesh installations. The Operator Lifecycle Manager (OLM) installs two categories of CRDs: Sail Operator CRDs and Istio CRDs.
Sail Operator CRDs define custom resources for installing and maintaining the Istio components required to operate a service mesh. These custom resources belong to the sailoperator.io API group and include the Istio, IstioRevision, IstioCNI, and ZTunnel resource kinds. For more information on how to configure these resources, see the sailoperator.io API reference documentation.
Istio CRDs are associated with mesh configuration and service management. These CRDs define custom resources in several istio.io API groups, such as networking.istio.io and security.istio.io. The CRDs also include various resource kinds, such as AuthorizationPolicy, DestinationRule, and VirtualService, that administrators use to configure a service mesh.
2.3. About Istio deployment
To deploy Istio, you must create two resources: Istio and IstioCNI. The Istio resource deploys and configures the Istio control plane. The IstioCNI resource deploys and configures the Istio Container Network Interface (CNI) plugin. You should create these resources in separate projects; therefore, you must create two projects as part of the Istio deployment process.
You can use the OpenShift web console or the OpenShift CLI (oc) to create a project or a resource in your cluster.
In the OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the terms are essentially synonymous.
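For example, the CLI equivalent of creating a project in the web console is a single command; istio-system here is simply the project name used in the procedures that follow:

$ oc new-project istio-system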
2.3.1. Creating the Istio project using the web console
The Service Mesh Operator deploys the Istio control plane to a project that you create. In this example, istio-system is the name of the project.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, enter a name for the project in the Name field. For example, istio-system. The other fields provide supplementary information to the Istio resource definition and are optional.
- Click Create. The Service Mesh Operator deploys Istio to the project you specified.
2.3.2. Creating the Istio resource using the web console
Create the Istio resource that will contain the YAML configuration file for your Istio deployment. The Red Hat OpenShift Service Mesh Operator uses information in the YAML file to create an instance of the Istio control plane.
Prerequisites
- The Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-system in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click Istio.
- Click Create Istio.
- Select the istio-system project from the Namespace drop-down menu.
- Click Create. This action deploys the Istio control plane.
When State: Healthy appears in the Status column, Istio is successfully deployed.
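For reference, the YAML definition that the console pre-populates resembles the following minimal sketch; the exact defaults depend on the Operator version, so treat the version value as illustrative:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.6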
2.3.3. Creating the IstioCNI project using the web console
The Service Mesh Operator deploys the Istio CNI plugin to a project that you create. In this example, istio-cni is the name of the project.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Home → Projects.
- Click Create Project.
- At the prompt, you must enter a name for the project in the Name field. For example, istio-cni. The other fields provide supplementary information and are optional.
- Click Create.
2.3.4. Creating the IstioCNI resource using the web console
Create an Istio Container Network Interface (CNI) resource, which contains the configuration file for the Istio CNI plugin. The Service Mesh Operator uses the configuration specified by this resource to deploy the CNI pod.
Prerequisites
- The Red Hat OpenShift Service Mesh Operator must be installed.
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
- In the OpenShift Container Platform web console, click Operators → Installed Operators.
- Select istio-cni in the Project drop-down menu.
- Click the Service Mesh Operator.
- Click IstioCNI.
- Click Create IstioCNI.
- Ensure that the name is default.
- Click Create. This action deploys the Istio CNI plugin.
When State: Healthy appears in the Status column, the Istio CNI plugin is successfully deployed.
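For reference, a minimal IstioCNI definition resembles the following sketch; the version value is illustrative:

apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  namespace: istio-cni
  version: v1.24.6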
2.4. Scoping the Service Mesh with discovery selectors
Service Mesh includes workloads that meet the following criteria:
- The control plane has discovered the workload.
- The workload has an Envoy proxy sidecar injected.
By default, the control plane discovers workloads in all namespaces across the cluster, with the following results:
- Each proxy instance receives configuration for all namespaces, including workloads not enrolled in the mesh.
- Any workload with the appropriate pod or namespace injection label receives a proxy sidecar.
In shared clusters, you might want to limit the scope of Service Mesh to only certain namespaces. This approach is especially useful if multiple service meshes run in the same cluster.
2.4.1. About discovery selectors
With discovery selectors, the mesh administrator can control which namespaces the control plane can access. By using a Kubernetes label selector, the administrator sets the criteria for the namespaces visible to the control plane, excluding any namespaces that do not match the specified criteria.
Istiod always opens a watch to OpenShift for all namespaces. However, discovery selectors cause istiod to ignore objects from unselected namespaces very early in its processing, which minimizes cost.
The discoverySelectors field accepts an array of Kubernetes selectors, which apply to labels on namespaces. You can configure each selector for different use cases:
- Custom label names and values. For example, configure all namespaces with the label istio-discovery=enabled.
- A list of namespace labels by using set-based selectors with OR logic. For instance, configure namespaces with istio-discovery=enabled OR region=us-east1.
- Inclusion and exclusion of namespaces. For example, configure namespaces with istio-discovery=enabled AND the label app=helloworld.
Discovery selectors are not a security boundary. Istiod continues to have access to all namespaces even when you have configured the discoverySelectors field.
2.4.2. Scoping a Service Mesh by using discovery selectors
If you know which namespaces to include in the Service Mesh, configure discoverySelectors during or after installation by adding the required selectors to the meshConfig.discoverySelectors section of the Istio resource. For example, configure Istio to discover only namespaces labeled istio-discovery=enabled.
Prerequisites
- The OpenShift Service Mesh Operator is installed.
- An Istio CNI resource is created.
Procedure
Add a label to the namespace containing the Istio control plane, for example, the istio-system system namespace:

$ oc label namespace istio-system istio-discovery=enabled

Modify the Istio control plane resource to include a discoverySelectors section with the same label, as shown in the sketch that follows.
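A minimal sketch of the modified Istio resource, assuming the resource is named default; only the discoverySelectors entry is significant here:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
      - matchLabels:
          istio-discovery: enabled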
Apply the Istio CR:

$ oc apply -f istio.yaml

- Ensure that all namespaces that will contain workloads that are to be part of the Service Mesh have both the discovery selector label and, if needed, the appropriate Istio injection label.
Discovery selectors help restrict the scope of a single Service Mesh and are essential for limiting the control plane scope when you deploy multiple Istio control planes in a single cluster.
2.5. About the Bookinfo application
Installing the bookinfo example application consists of two main tasks: deploying the application and creating a gateway so the application is accessible outside the cluster.
You can use the bookinfo application to explore service mesh features. Using the bookinfo application, you can easily confirm that requests from a web browser pass through the mesh and reach the application.
The bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, lists book details (ISBN, number of pages, and other information), and book reviews.
The bookinfo application is exposed through the mesh, and the mesh configuration determines how the microservices comprising the application are used to serve requests. The review information comes from one of three services: reviews-v1, reviews-v2, or reviews-v3. If you deploy the bookinfo application without defining the reviews virtual service, then the mesh uses a round robin rule to route requests to a service.
By deploying the reviews virtual service, you can specify a different behavior. For example, you can specify that if a user logs into the bookinfo application, then the mesh routes requests to the reviews-v2 service, and the application displays reviews with black stars. If a user does not log into the bookinfo application, then the mesh routes requests to the reviews-v3 service, and the application displays reviews with red stars.
For more information, see Bookinfo Application in the upstream Istio documentation.
2.5.1. Deploying the Bookinfo application
Prerequisites
- You have deployed a cluster on OpenShift Container Platform 4.15 or later.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
Procedure
- In the OpenShift Container Platform web console, navigate to the Home → Projects page.
- Click Create Project.
Enter bookinfo in the Project name field. The Display name and Description fields provide supplementary information and are not required.
- Click Create.
Apply the Istio discovery selector and injection label to the bookinfo namespace by entering the following command:

$ oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled

Note: In this example, the name of the Istio resource is default. If the Istio resource name is different, you must set the istio.io/rev label to the name of the Istio resource instead of adding the istio-injection=enabled label.

Apply the bookinfo YAML file to deploy the bookinfo application by entering the following command:

$ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
Verification
Verify that the bookinfo service is available by running the following command:

$ oc get services -n bookinfo

Example output

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   172.30.137.21   <none>        9080/TCP   44s
productpage   ClusterIP   172.30.2.246    <none>        9080/TCP   43s
ratings       ClusterIP   172.30.33.85    <none>        9080/TCP   44s
reviews       ClusterIP   172.30.175.88   <none>        9080/TCP   44s
Verify that the bookinfo pods are available by running the following command:

$ oc get pods -n bookinfo

Example output
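The listing is illustrative; pod name suffixes and ages vary:

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-xxxxxxxxxx-xxxxx       2/2     Running   0          36s
productpage-v1-xxxxxxxxxx-xxxxx   2/2     Running   0          35s
ratings-v1-xxxxxxxxxx-xxxxx       2/2     Running   0          36s
reviews-v1-xxxxxxxxxx-xxxxx       2/2     Running   0          36s
reviews-v2-xxxxxxxxxx-xxxxx       2/2     Running   0          36s
reviews-v3-xxxxxxxxxx-xxxxx       2/2     Running   0          36s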
When the Ready column displays 2/2, the proxy sidecar was successfully injected. Confirm that Running appears in the Status column for each pod.

Verify that the bookinfo application is running by sending a request to the bookinfo page. Run the following command:

$ oc exec "$(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
2.5.2. About accessing the Bookinfo application using a gateway
The Red Hat OpenShift Service Mesh Operator does not deploy gateways. Gateways are not part of the control plane. As a security best practice, deploy Ingress and Egress gateways in a different namespace than the namespace that contains the control plane.
You can deploy gateways using either the Gateway API or the gateway injection method.
2.5.3. Accessing the Bookinfo application by using Istio gateway injection
Gateway injection uses the same mechanisms as Istio sidecar injection to create a gateway from a Deployment resource that is paired with a Service resource. The Service resource can be made accessible from outside an OpenShift Container Platform cluster.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
- The Red Hat OpenShift Service Mesh Operator must be installed.
- The Istio resource must be deployed.
Procedure
Create the istio-ingressgateway deployment and service by running the following command:

$ oc apply -n bookinfo -f ingress-gateway.yaml

Note: This example uses a sample ingress-gateway.yaml file that is available in the Istio community repository.

Configure the bookinfo application to use the new gateway. Apply the gateway configuration by running the following command:

$ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo

Note: To configure gateway injection with the bookinfo application, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.

Use a route to expose the gateway outside the cluster by running the following command:

$ oc expose service istio-ingressgateway -n bookinfo
Modify the YAML file to automatically scale the pod when ingress traffic increases. The following example configuration sets the maximum replicas to 5 and the minimum replicas to 2. It also creates another replica when utilization reaches 80%.

Example configuration
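A minimal sketch, assuming a standard HorizontalPodAutoscaler that targets the istio-ingressgateway deployment:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU utilization exceeds 80%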
Specify the minimum number of pods that must be running on the node. The following example configuration ensures one replica is running if a pod gets restarted on a new node.

Example configuration
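A minimal sketch, assuming a PodDisruptionBudget that selects the gateway pods by the istio: ingressgateway label used in the sample deployment:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  minAvailable: 1          # keep at least one gateway replica running
  selector:
    matchLabels:
      istio: ingressgateway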
Obtain the gateway host name and the URL for the product page by running the following command:

$ HOST=$(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}')

Verify that the productpage is accessible from a web browser by running the following command:

$ echo productpage URL: http://$HOST/productpage
2.5.4. Accessing the Bookinfo application by using Gateway API
The Kubernetes Gateway API deploys a gateway by creating a Gateway resource. In OpenShift Container Platform 4.15 and later, Red Hat OpenShift Service Mesh implements the Gateway API custom resource definitions (CRDs). However, in OpenShift Container Platform 4.18 and earlier, the CRDs are not installed by default, so in OpenShift Container Platform 4.15 through 4.18 you must install the CRDs manually. Starting with OpenShift Container Platform 4.19, these CRDs are automatically installed and managed, and you can no longer create, update, or delete them.
For details about enabling Gateway API for Ingress in OpenShift Container Platform 4.19 and later, see "Configuring ingress cluster traffic" in the OpenShift Container Platform documentation.
Red Hat provides support for using the Kubernetes Gateway API with Red Hat OpenShift Service Mesh. Red Hat does not provide support for the Kubernetes Gateway API custom resource definitions (CRDs). In this procedure, the use of community Gateway API CRDs is shown for demonstration purposes only.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as cluster-admin.
- The Red Hat OpenShift Service Mesh Operator must be installed.
- The Istio resource must be deployed.
Procedure
Enable the Gateway API CRDs for OpenShift Container Platform 4.18 and earlier by running the following command:

$ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create and configure a gateway by using the
Gateway
andHTTPRoute
resources by running the following command:oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo
$ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteTo configure a gateway with the
bookinfo
application by using the Gateway API, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.Ensure that the Gateway API service is ready, and has an address allocated by running the following command:
oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo
$ oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Retrieve the host by running the following command:
export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}')
$ export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}')
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Retrieve the port by running the following command:
export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
$ export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Retrieve the gateway URL by running the following command:
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Obtain the gateway host name and the URL of the product page by running the following command:
echo "http://${GATEWAY_URL}/productpage"
$ echo "http://${GATEWAY_URL}/productpage"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
- Verify that the productpage is accessible from a web browser.
2.6. Customizing Istio configuration
The values field of the Istio custom resource, which was created when the control plane was deployed, can be used to customize the Istio configuration using Istio’s Helm configuration values. When you create this resource using the OpenShift Container Platform web console, it is pre-populated with configuration settings to enable Istio to run on OpenShift.
Procedure
- Click Operators → Installed Operators.
- Click Istio in the Provided APIs column.
- Click the Istio instance, named default, in the Name column.
- Click YAML to view the Istio configuration and make modifications.
For a list of available configurations for the values field, refer to Istio’s artifacthub chart documentation.
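For illustration, a sketch that sets one of the documented Helm values through the values field; global.proxy.resources is a standard Istio chart value, and the numbers are placeholders:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    global:
      proxy:
        resources:
          requests:
            cpu: 100m
            memory: 128Mi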
2.7. About Istio High Availability
Running the Istio control plane in High Availability (HA) mode prevents single points of failure, and ensures continuous mesh operation even if an istiod pod fails. By using HA, if one istiod pod becomes unavailable, another one continues to manage and configure the Istio data plane, preventing service outages or disruptions. HA provides scalability by distributing the control plane workload, enables graceful upgrades, supports disaster recovery operations, and protects against zone-wide mesh outages.
There are two ways for a system administrator to configure HA for the Istio deployment:
- Defining a static replica count: This approach involves setting a fixed number of istiod pods, providing a consistent level of redundancy.
- Using autoscaling: This approach dynamically adjusts the number of istiod pods based on resource utilization or custom metrics, providing more efficient resource consumption for fluctuating workloads.
2.7.1. Configuring Istio HA by using autoscaling
Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure, and ensure continuous mesh operation even if one of the istiod pods fails. Autoscaling defines the minimum and maximum number of Istio control plane pods that can operate. OpenShift Container Platform uses these values to scale the number of control planes in operation based on resource utilization, such as CPU or memory, to efficiently respond to the varying number of workloads and overall traffic patterns within the mesh.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have deployed the Istio resource.
Procedure
- In the OpenShift Container Platform web console, click Installed Operators.
- Click Red Hat OpenShift Service Mesh 3 Operator.
- Click Istio.
- Click the name of the Istio installation. For example, default.
- Click YAML.
Modify the Istio custom resource (CR) similar to the following example:

Example configuration
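A minimal sketch, assuming the istiod autoscaling keys from Istio's Helm values (pilot.autoscaleMin, pilot.autoscaleMax, and the CPU and memory utilization targets); the numbered comments correspond to the callout descriptions that follow:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    pilot:
      autoscaleEnabled: true
      autoscaleMin: 2                  # 1
      autoscaleMax: 5                  # 2
      cpu:
        targetAverageUtilization: 80   # 3
      memory:
        targetAverageUtilization: 80   # 4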
1. Specifies the minimum number of Istio control plane replicas that always run.
2. Specifies the maximum number of Istio control plane replicas, allowing for scaling based on load. To support HA, there must be at least two replicas.
3. Specifies the target CPU utilization for autoscaling to 80%. If the average CPU usage exceeds this threshold, the Horizontal Pod Autoscaler (HPA) automatically increases the number of replicas.
4. Specifies the target memory utilization for autoscaling to 80%. If the average memory usage exceeds this threshold, the HPA automatically increases the number of replicas.
Verification
Verify the status of the Istio control plane pods by running the following command:

$ oc get pods -n istio-system -l app=istiod

Example output

NAME                      READY   STATUS    RESTARTS   AGE
istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s

Two istiod pods are running. Two pods, the minimum requirement for an HA Istio control plane, indicates that a basic HA setup is in place.
2.7.1.1. API settings for Service Mesh HA autoscaling mode
Use the following Istio custom resource (CR) parameters when you configure a service mesh for High Availability (HA) by using autoscaling.
Parameter | Description
---|---
values.pilot.autoscaleMin | Defines the minimum number of istiod replicas. OpenShift only uses this parameter when the Horizontal Pod Autoscaler (HPA) is enabled for the Istio deployment. This is the default behavior.
values.pilot.autoscaleMax | Defines the maximum number of istiod replicas. For OpenShift to automatically scale the number of istiod replicas, you must also configure metrics for autoscaling to work properly. If no metrics are configured, the autoscaler does not scale up or down. OpenShift only uses this parameter when the Horizontal Pod Autoscaler (HPA) is enabled for the Istio deployment. This is the default behavior.
values.pilot.cpu.targetAverageUtilization | Defines the target CPU utilization for the istiod HPA.
values.pilot.memory.targetAverageUtilization | Defines the target memory utilization for the istiod HPA.
values.pilot.autoscaleBehavior | You can use the autoscaleBehavior parameter to configure separate scale-up and scale-down behaviors for the HPA. For more information, see Configurable Scaling Behavior.
2.7.2. Configuring Istio HA by using replica count
Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure, and ensure continuous mesh operation even if one of the istiod pods fails. The replica count defines a fixed number of Istio control plane pods that can operate. Use replica count for mesh environments where the control plane workload is relatively stable or predictable, or when you prefer to manually scale the istiod pod.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have installed the Red Hat OpenShift Service Mesh Operator.
- You have deployed the Istio resource.
Procedure
Obtain the name of the Istio resource by running the following command:

$ oc get istio -n istio-system

Example output

NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
default   1           1       0        default           Healthy   v1.24.6   24m

The name of the Istio resource is default.

Update the Istio custom resource (CR) by adding the autoscaleEnabled and replicaCount parameters by running the following command:
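A sketch of such a command, assuming the pilot.autoscaleEnabled and pilot.replicaCount Helm values and an Istio resource named default:

$ oc patch istio default -n istio-system --type merge -p '{"spec":{"values":{"pilot":{"autoscaleEnabled":false,"replicaCount":2}}}}'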
Verification
Verify the status of the Istio control plane pods by running the following command:

$ oc get pods -n istio-system -l app=istiod

Example output

NAME                      READY   STATUS    RESTARTS   AGE
istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s

Two istiod pods are running, which is the minimum requirement for an HA Istio control plane and indicates that a basic HA setup is in place.
Chapter 3. Sidecar injection
To use Istio’s capabilities within a service mesh, each pod needs a sidecar proxy, configured and managed by the Istio control plane.
3.1. About sidecar injection
Sidecar injection is enabled using labels at the namespace or pod level. These labels also indicate the specific control plane managing the proxy. When you apply a valid injection label to the pod template defined in a deployment, any new pods created by that deployment automatically receive a sidecar. Similarly, applying a pod injection label at the namespace level ensures any new pods in that namespace include a sidecar.
Injection happens at pod creation through an admission controller, so changes appear on individual pods rather than the deployment resources. To confirm sidecar injection, check the pod details directly using oc describe, where you can see the injected Istio proxy container.
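For example, with a pod name from your own namespace (the name here is a placeholder):

$ oc describe pod ratings-v1-xxxxxxxxxx-xxxxx -n bookinfo

When injection succeeded, the Containers section of the output lists an istio-proxy container next to the application container.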
3.2. Identifying the revision name
The label required to enable sidecar injection is determined by the specific control plane instance, known as a revision. Each revision is managed by an IstioRevision resource, which is automatically created and managed by the Istio resource, so manual creation or modification of IstioRevision resources is generally unnecessary.
The naming of an IstioRevision depends on the spec.updateStrategy.type setting in the Istio resource. If set to InPlace, the revision shares the Istio resource name. If set to RevisionBased, the revision name follows the format <Istio resource name>-v<version>. Typically, each Istio resource corresponds to a single IstioRevision. However, during a revision-based upgrade, multiple IstioRevision resources may exist, each representing a distinct control plane instance.
To see available revision names, use the following command:

$ oc get istiorevisions

You should see output similar to the following example:

Example output

NAME              READY   STATUS    IN USE   VERSION   AGE
my-mesh-v1-23-0   True    Healthy   False    v1.23.0   114s
3.2.1. Enabling sidecar injection with default revision
When the service mesh’s IstioRevision name is default, it’s possible to use the following labels on a namespace or a pod to enable sidecar injection:
Resource | Label | Enabled value | Disabled value
---|---|---|---
Namespace | istio-injection | enabled | disabled
Pod | sidecar.istio.io/inject | "true" | "false"
You can also enable injection by setting the istio.io/rev: default label on the namespace or pod.
3.2.2. Enabling sidecar injection with other revisions
When the IstioRevision name is not default, use the specific IstioRevision name with the istio.io/rev label to map the pod to the desired control plane and enable sidecar injection. To enable injection, set the istio.io/rev: <revision-name> label on either the namespace or the pod; adding it to both is not required.
For example, with the revision shown above, the following labels would enable sidecar injection:
Resource | Enabled label | Disabled label
---|---|---
Namespace | istio.io/rev=my-mesh-v1-23-0 | istio-injection=disabled
Pod | istio.io/rev=my-mesh-v1-23-0 | sidecar.istio.io/inject="false"
When both the istio-injection and istio.io/rev labels are applied, the istio-injection label takes precedence and treats the namespace as part of the default revision.
3.3. Enabling sidecar injection
To demonstrate different approaches for configuring sidecar injection, the following procedures use the Bookinfo application.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection (step 5) has not been configured. For more information, see "Deploying the Bookinfo application".
3.3.1. Enabling sidecar injection with namespace labels
In this example, all workloads within a namespace receive a sidecar proxy injection, making it the best approach when the majority of workloads in the namespace should be included in the mesh.
Procedure
Verify the revision name of the Istio control plane using the following command:

$ oc get istiorevisions

You should see output similar to the following example:

Example output

NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
default   Local   True    Healthy   False    v1.23.0   4m57s

Since the revision name is default, you can use the default injection labels without referencing the exact revision name.
Verify that workloads already running in the desired namespace show 1/1 containers as READY by using the following command. This confirms that the pods are running without sidecars.

$ oc get pods -n bookinfo

You should see output similar to the following example:

Example output
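The listing is illustrative; pod name suffixes and ages vary:

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-xxxxxxxxxx-xxxxx       1/1     Running   0          45s
productpage-v1-xxxxxxxxxx-xxxxx   1/1     Running   0          45s
ratings-v1-xxxxxxxxxx-xxxxx       1/1     Running   0          45s
reviews-v1-xxxxxxxxxx-xxxxx       1/1     Running   0          45s
reviews-v2-xxxxxxxxxx-xxxxx       1/1     Running   0          45s
reviews-v3-xxxxxxxxxx-xxxxx       1/1     Running   0          45s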
To apply the injection label to the bookinfo namespace, run the following command at the CLI:

$ oc label namespace bookinfo istio-injection=enabled
namespace/bookinfo labeled

To ensure sidecar injection is applied, redeploy the existing workloads in the bookinfo namespace. Use the following command to perform a rolling update of all workloads:

$ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by checking that the new pods display 2/2 containers as READY, confirming successful sidecar injection, by running the following command:

$ oc get pods -n bookinfo

You should see output similar to the following example:

Example output
3.3.2. Excluding a workload from the mesh
You can exclude specific workloads from sidecar injection within a namespace where injection is enabled for all workloads.
This example is for demonstration purposes only. The bookinfo application requires all workloads to be part of the mesh for proper functionality.
Procedure
- Open the application’s Deployment resource in an editor. In this case, exclude the ratings-v1 service.
- Modify the spec.template.metadata.labels section of your Deployment resource to include the label sidecar.istio.io/inject: false to disable sidecar injection, as shown in the sketch that follows.
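A minimal sketch of the relevant part of the ratings-v1 Deployment; only the added sidecar.istio.io/inject label is significant, and the other labels match the sample application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
spec:
  template:
    metadata:
      labels:
        app: ratings
        version: v1
        sidecar.istio.io/inject: "false"   # exclude these pods from injection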
Note: Adding the label to the top-level labels section of the Deployment does not affect sidecar injection.

Updating the deployment triggers a rollout, creating a new ReplicaSet with updated pod(s).
Verification
Verify that the updated pod(s) do not contain a sidecar container and show 1/1 containers as Running by running the following command:

$ oc get pods -n bookinfo

You should see output similar to the following example:

Example output
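The listing is illustrative; note that ratings-v1 now runs without a sidecar (1/1) while the other workloads keep theirs:

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-xxxxxxxxxx-xxxxx       2/2     Running   0          10m
productpage-v1-xxxxxxxxxx-xxxxx   2/2     Running   0          10m
ratings-v1-xxxxxxxxxx-xxxxx       1/1     Running   0          31s
reviews-v1-xxxxxxxxxx-xxxxx       2/2     Running   0          10m
reviews-v2-xxxxxxxxxx-xxxxx       2/2     Running   0          10m
reviews-v3-xxxxxxxxxx-xxxxx       2/2     Running   0          10m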
3.3.3. Enabling sidecar injection with pod labels
This approach allows you to include individual workloads for sidecar injection instead of applying it to all workloads within a namespace, making it ideal for scenarios where only a few workloads need to be part of a service mesh. This example also demonstrates the use of a revision label for sidecar injection, where the Istio resource is created with the name my-mesh. A unique Istio resource name is required when multiple Istio control planes are present in the same cluster or during a revision-based control plane upgrade.
Procedure
Verify the revision name of the Istio control plane by running the following command:

$ oc get istiorevisions

You should see output similar to the following example:

Example output

NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
my-mesh   Local   True    Healthy   False    v1.23.0   47s

Since the revision name is my-mesh, use the revision label istio.io/rev=my-mesh to enable sidecar injection.

Verify that workloads already running show 1/1 containers as READY, indicating that the pods are running without sidecars, by running the following command:

$ oc get pods -n bookinfo

You should see output similar to the following example:

Example output
- Open the application’s Deployment resource in an editor. In this case, update the ratings-v1 service.
- Update the spec.template.metadata.labels section of your Deployment to include the appropriate pod injection or revision label. In this case, istio.io/rev: my-mesh, as shown in the sketch that follows.
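A minimal sketch of the relevant part of the ratings-v1 Deployment; only the added istio.io/rev label is significant:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
spec:
  template:
    metadata:
      labels:
        app: ratings
        version: v1
        istio.io/rev: my-mesh   # map these pods to the my-mesh control plane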
Note: Adding the label to the top-level labels section of the Deployment resource does not impact sidecar injection.

Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pod(s).
Verification
Verify that only the ratings-v1 pod now shows 2/2 containers READY, indicating that the sidecar has been successfully injected, by running the following command:

$ oc get pods -n bookinfo

You should see output similar to the following example:

Example output
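The listing is illustrative; only ratings-v1 shows 2/2:

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-xxxxxxxxxx-xxxxx       1/1     Running   0          10m
productpage-v1-xxxxxxxxxx-xxxxx   1/1     Running   0          10m
ratings-v1-xxxxxxxxxx-xxxxx       2/2     Running   0          30s
reviews-v1-xxxxxxxxxx-xxxxx       1/1     Running   0          10m
reviews-v2-xxxxxxxxxx-xxxxx       1/1     Running   0          10m
reviews-v3-xxxxxxxxxx-xxxxx       1/1     Running   0          10m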
- Repeat for other workloads that you wish to include in the mesh.
3.4. Enabling sidecar injection with namespace labels and an IstioRevisionTag resource
To use the istio-injection=enabled label when your revision name is not default, you must create an IstioRevisionTag resource with the name default that references your Istio resource.
Prerequisites
- You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
- You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
- You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
- Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection (step 5 in the "Deploying the Bookinfo application" procedure) has not been configured. For more information, see "Deploying the Bookinfo application".
Procedure
Find the name of your Istio resource by running the following command:

$ oc get istio

Example output

NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
default   1           1       1        default-v1-24-3   Healthy   v1.24.3   11s

In this example, the Istio resource has the name default, but the underlying revision is called default-v1-24-3.

Create the IstioRevisionTag resource in a YAML file:

Example IstioRevisionTag resource YAML file
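A minimal sketch, assuming the sailoperator.io/v1 API version; the tag is named default and targets the Istio resource named default:

apiVersion: sailoperator.io/v1
kind: IstioRevisionTag
metadata:
  name: default
spec:
  targetRef:
    kind: Istio
    name: default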
Apply the IstioRevisionTag resource by running the following command:

$ oc apply -f istioRevisionTag.yaml

Verify that the IstioRevisionTag resource has been created successfully by running the following command:

$ oc get istiorevisiontags.sailoperator.io

Example output

NAME      STATUS    IN USE   REVISION          AGE
default   Healthy   True     default-v1-24-3   4m23s
In this example, the new tag is referencing your active revision, default-v1-24-3. Now you can use the istio-injection=enabled label as if your revision was called default.

Confirm that the pods are running without sidecars by running the following command. Any workloads that are already running in the desired namespace should show 1/1 containers in the READY column.

$ oc get pods -n bookinfo

Example output
Apply the injection label to the bookinfo namespace by running the following command:

$ oc label namespace bookinfo istio-injection=enabled
namespace/bookinfo labeled

To ensure sidecar injection is applied, redeploy the workloads in the bookinfo namespace by running the following command:

$ oc -n bookinfo rollout restart deployments
Verification
Verify the rollout by running the following command and confirming that the new pods display 2/2 containers in the READY column:

$ oc get pods -n bookinfo

Example output
Chapter 4. OpenShift Service Mesh and cert-manager
The cert-manager tool is a solution for X.509 certificate management on Kubernetes. It delivers a unified API to integrate applications with private or public key infrastructure (PKI), such as Vault, Google Cloud Certificate Authority Service, Let’s Encrypt, and other providers.
The cert-manager tool must be installed before you create and install your Istio resource.
The cert-manager tool ensures the certificates are valid and up-to-date by attempting to renew certificates at a configured time before they expire.
4.1. About integrating Service Mesh with cert-manager and istio-csr
The cert-manager tool provides integration with Istio through an external agent called istio-csr. The istio-csr agent handles certificate signing requests (CSRs) from Istio proxies and the control plane in the following ways:
- Verifying the identity of the workload.
- Creating a CSR through cert-manager for the workload.
The cert-manager tool then sends the CSR to the configured CA Issuer, which signs the certificate.
Red Hat provides support for integrating with istio-csr and cert-manager. Red Hat does not provide direct support for the istio-csr or the community cert-manager components. The use of community cert-manager shown here is for demonstration purposes only.
Prerequisites
- One of these versions of cert-manager:
  - Red Hat cert-manager Operator 1.10 or later
  - community cert-manager Operator 1.11 or later
  - cert-manager 1.11 or later
- Red Hat OpenShift Service Mesh 3.0 or later
- An IstioCNI instance is running in the cluster
- Istio CLI (istioctl) tool is installed
- jq is installed
- Helm is installed
4.2. Installing cert-manager
You can integrate cert-manager with OpenShift Service Mesh by deploying istio-csr and then creating an Istio resource that uses the istio-csr agent to process workload and control plane certificate signing requests. This example creates a self-signed Issuer, but any other Issuer can be used instead.
You must install cert-manager before installing your Istio resource.
Procedure
Create the istio-system namespace by running the following command:

$ oc create namespace istio-system

Create the root issuer by creating an Issuer object in a YAML file. Create an Issuer object similar to the following example:

Example issuer.yaml file
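A sketch of a self-signed root issuer chain, following the pattern from the upstream istio-csr documentation; the duration and subject fields are illustrative:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  isCA: true
  duration: 87600h          # 10 years
  secretName: istio-ca
  commonName: istio-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  subject:
    organizations:
    - cluster.local
    - cert-manager
  issuerRef:
    name: selfsigned
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  ca:
    secretName: istio-ca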
Create the objects by running the following command:

$ oc apply -f issuer.yaml

Wait for the istio-ca certificate to contain the "Ready" status condition by running the following command:

$ oc wait --for=condition=Ready certificates/istio-ca -n istio-system
Copy the istio-ca certificate to the cert-manager namespace so it can be used by istio-csr:

Copy the secret to a local file by running the following command:

$ oc get -n istio-system secret istio-ca -o jsonpath='{.data.tls\.crt}' | base64 -d > ca.pem

Create a secret from the local certificate file in the cert-manager namespace by running the following command:

$ oc create secret generic -n cert-manager istio-root-ca --from-file=ca.pem=ca.pem
Next steps
To install istio-csr, you must follow the istio-csr installation instructions for the type of update strategy you want. By default, spec.updateStrategy is set to InPlace when you create and install your Istio resource. You create and install your Istio resource after you install istio-csr.
4.2.1. Installing the istio-csr agent by using the in place update strategy
Istio resources use the in place update strategy by default. Follow this procedure if you plan to leave spec.updateStrategy as InPlace when you create and install your Istio resource.
Procedure
Add the Jetstack charts repository to your local Helm repository by running the following command:

$ helm repo add jetstack https://charts.jetstack.io --force-update

Install the istio-csr chart by running the following command:
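A sketch of the installation command, assuming the istio-ca Issuer created earlier and the istio-root-ca secret copied to the cert-manager namespace; adjust names to your environment:

$ helm install cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    -n cert-manager \
    --set "app.certmanager.issuer.name=istio-ca" \
    --set "app.certmanager.issuer.kind=Issuer" \
    --set "app.certmanager.issuer.group=cert-manager.io" \
    --set "app.istio.namespace=istio-system" \
    --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
    --set "volumeMounts[0].name=root-ca" \
    --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
    --set "volumes[0].name=root-ca" \
    --set "volumes[0].secret.secretName=istio-root-ca"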
Next steps
4.2.2. Installing the istio-csr agent by using the revision based update strategy
Istio resources use the in place update strategy by default. Follow this procedure if you plan to change spec.updateStrategy to RevisionBased when you create and install your Istio resource.
Procedure
- Specify all the Istio revisions to your istio-csr deployment. See "istio-csr deployment".

Add the Jetstack charts to your local Helm repository by running the following command:

$ helm repo add jetstack https://charts.jetstack.io --force-update

Install the istio-csr chart with your revision name by running the following command:
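A sketch, identical to the in place installation except for the added app.istio.revisions list; default-v1-24-3 is a placeholder for your revision name:

$ helm install cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    -n cert-manager \
    --set "app.certmanager.issuer.name=istio-ca" \
    --set "app.certmanager.issuer.kind=Issuer" \
    --set "app.certmanager.issuer.group=cert-manager.io" \
    --set "app.istio.namespace=istio-system" \
    --set "app.istio.revisions={default-v1-24-3}" \
    --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
    --set "volumeMounts[0].name=root-ca" \
    --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
    --set "volumes[0].name=root-ca" \
    --set "volumes[0].secret.secretName=istio-root-ca"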
Note: Revision names use the following format: <istio-name>-v<major_version>-<minor_version>-<patch_version>. For example: default-v1-24-3.
Additional resources
Next steps
4.2.3. Installing your Istio resource
After you have installed istio-csr by following the procedure for either an in place or revision based update strategy, you can install the Istio resource.
You need to disable Istio’s built-in CA server and tell istiod to use the istio-csr CA server. The istio-csr CA server issues certificates for both istiod and user workloads.
Procedure
Create the Istio object as shown in the following example:

Example istio.yaml object
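A minimal sketch, assuming istio-csr is installed in the cert-manager namespace; caAddress must point at the istio-csr service, and ENABLE_CA_SERVER: "false" turns off istiod's built-in CA:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.3
  values:
    global:
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
    pilot:
      env:
        ENABLE_CA_SERVER: "false"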
Note: If you installed your CSR agent with a revision based update strategy, then you need to add the following to your Istio object YAML:
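A sketch of that addition, setting the update strategy type:

spec:
  updateStrategy:
    type: RevisionBased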
Create the Istio resource by running the following command:

$ oc apply -f istio.yaml
Wait for the Istio object to become ready by running the following command:
$ oc wait --for=condition=Ready istios/default -n istio-system
4.2.4. Verifying cert-manager installation
You can use the sample httpbin service and sleep application to check communication between the workloads. You can also check the workload certificate of the proxy to verify that the cert-manager tool is installed correctly.
Procedure
Create the sample namespace by running the following command:
$ oc new-project sample
Find your active Istio revision by running the following command:
$ oc get istios default -o jsonpath='{.status.activeRevisionName}'
Add the injection label for your active revision to the sample namespace by running the following command:
$ oc label namespace sample istio.io/rev=<your-active-revision-name> --overwrite=true
Deploy the sample httpbin service by running the following command:
$ oc apply -n sample -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/httpbin/httpbin.yaml
Deploy the sample sleep application by running the following command:
$ oc apply -n sample -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/sleep/sleep.yaml
Wait for both applications to become ready by running the following command:
$ oc rollout status -n sample deployment httpbin sleep
Verify that the sleep application can access the httpbin service by running the following command:
$ oc exec "$(oc get pod -l app=sleep -n sample \
    -o jsonpath={.items..metadata.name})" -c sleep -n sample -- \
    curl http://httpbin.sample:8000/ip -s -o /dev/null \
    -w "%{http_code}\n"
Example of a successful output
200
Run the following command to print the workload certificate for the httpbin service and verify the output:
$ istioctl proxy-config secret -n sample $(oc get pods -n sample -o jsonpath='{.items..metadata.name}' --selector app=httpbin) -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode | openssl x509 -text -noout
Example output
...
Issuer: O = cert-manager + O = cluster.local, CN = istio-ca
...
X509v3 Subject Alternative Name:
    URI:spiffe://cluster.local/ns/sample/sa/httpbin
4.3. Updating istio-csr agents with revision-based update strategies
If you deployed your Istio resource using the revision-based update strategy, you must pass all revisions to the istio-csr agent each time you update your control plane. Perform the update in the following order:
1. Update the istio-csr deployment with the new revision.
2. Update the value of the Istio.spec.version field.
Example update for a RevisionBased control plane
In this example, the control plane is being updated from v1.24.0 to v1.24.1.
Update the istio-csr deployment with the new revision by running the following command:
$ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --wait \
    --reuse-values \
    --set "app.istio.revisions={<old_revision>,<new_revision>}"
where:
old_revision
    Specifies the old revision in the <istio-name>-v<major_version>-<minor_version>-<patch_version> format. For example: default-v1-24-0.
new_revision
    Specifies the new revision in the <istio-name>-v<major_version>-<minor_version>-<patch_version> format. For example: default-v1-24-1.
Update the istio.spec.version field in the Istio object, similar to the following example istio.yaml file. Update the version to the new revision prefixed with the letter v, such as v1.24.1.
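The example file did not survive extraction; a minimal sketch of the updated field:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.24.1  # The new version, prefixed with the letter v.
  updateStrategy:
    type: RevisionBased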
Remove the old revision from your istio-csr deployment by running the following command:
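The command for this step did not survive extraction; a representative sketch that passes only the new revision so that the old one is dropped:

$ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --wait \
    --reuse-values \
    --set "app.istio.revisions={<new_revision>}"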
Chapter 5. Multi-cluster topologies
Multi-cluster topologies are useful for organizations with distributed systems, or environments that require enhanced scalability, fault tolerance, and regional redundancy.
5.1. About multi-cluster mesh topologies
In a multi-cluster mesh topology, you install and manage a single Istio mesh across multiple OpenShift Container Platform clusters, enabling communication and service discovery between the services. Two factors determine the multi-cluster mesh topology: control plane topology and network topology. There are two options for each topology. Therefore, there are four possible multi-cluster mesh topology configurations.
- Multi-Primary Single Network: Combines the multi-primary control plane topology with the single network topology model.
- Multi-Primary Multi-Network: Combines the multi-primary control plane topology with the multi-network topology model.
- Primary-Remote Single Network: Combines the primary-remote control plane topology with the single network topology model.
- Primary-Remote Multi-Network: Combines the primary-remote control plane topology with the multi-network topology model.
5.1.1. Control plane topology models
A multi-cluster mesh must use one of the following control plane topologies:
- Multi-Primary: In this configuration, a control plane resides on every cluster. Each control plane observes the API servers in all of the other clusters for services and endpoints.
- Primary-Remote: In this configuration, the control plane resides only on one cluster, called the primary cluster. No control plane runs on any of the other clusters, called remote clusters. The control plane on the primary cluster discovers services and endpoints and configures the sidecar proxies for the workloads in all clusters.
5.1.2. Network topology models
A multi-cluster mesh must use one of the following network topologies:
- Single Network: All clusters reside on the same network and there is direct connectivity between the services in all the clusters. There is no need to use gateways for communication between the services across cluster boundaries.
- Multi-Network: Clusters reside on different networks and there is no direct connectivity between services. Gateways must be used to enable communication across network boundaries.
5.2. Multi-cluster configuration overview
To configure a multi-cluster topology, you must perform the following actions:
- Install the OpenShift Service Mesh Operator for each cluster.
- Create or have access to root and intermediate certificates for each cluster.
- Apply the security certificates for each cluster.
- Install Istio for each cluster.
5.2.1. Creating certificates for a multi-cluster topology
Create the root and intermediate certificate authority (CA) certificates for two clusters.
Prerequisites
- You have OpenSSL installed locally.
Procedure
Create the root CA certificate:
Create a key for the root certificate by running the following command:
$ openssl genrsa -out root-key.pem 4096
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create an OpenSSL configuration certificate file named
root-ca.conf
for the root CA certificates:Example root certificate configuration file
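The example configuration did not survive extraction. The following sketch mirrors the root CA configuration that upstream Istio uses for generating mesh certificates; adjust the names to your organization:

[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
[ req_dn ]
O = Istio
CN = Root CA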
Create the certificate signing request by running the following command:
$ openssl req -sha256 -new -key root-key.pem \
    -config root-ca.conf \
    -out root-cert.csr
Create a shared root certificate by running the following command:
$ openssl x509 -req -sha256 -days 3650 \
    -signkey root-key.pem \
    -extensions req_ext -extfile root-ca.conf \
    -in root-cert.csr \
    -out root-cert.pem
Create the intermediate CA certificate for the East cluster:
Create a directory named east by running the following command:
$ mkdir east
Create a key for the intermediate certificate for the East cluster by running the following command:
$ openssl genrsa -out east/ca-key.pem 4096
Create an OpenSSL configuration file named intermediate.conf in the east/ directory for the intermediate certificate of the East cluster. Copy the following example file and save it locally. Example configuration file:
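The example file did not survive extraction. The following sketch mirrors the intermediate CA configuration that upstream Istio uses, with the cluster location set to east; adjust the names and SAN entry to your environment:

[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san
[ san ]
DNS.1 = istiod.istio-system.svc
[ req_dn ]
O = Istio
CN = Intermediate CA
L = east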
Create a certificate signing request by running the following command:
$ openssl req -new -config east/intermediate.conf \
    -key east/ca-key.pem \
    -out east/cluster-ca.csr
Create the intermediate CA certificate for the East cluster by running the following command:
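The command did not survive extraction; a representative sketch that signs the East cluster CSR with the root CA created earlier:

$ openssl x509 -req -sha256 -days 3650 \
    -CA root-cert.pem \
    -CAkey root-key.pem \
    -CAcreateserial \
    -extensions req_ext -extfile east/intermediate.conf \
    -in east/cluster-ca.csr \
    -out east/ca-cert.pem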
Create a certificate chain from the intermediate and root CA certificates for the East cluster by running the following command:
$ cat east/ca-cert.pem root-cert.pem > east/cert-chain.pem && cp root-cert.pem east
Create the intermediate CA certificate for the West cluster:
Create a directory named west by running the following command:
$ mkdir west
Create a key for the intermediate certificate for the West cluster by running the following command:
$ openssl genrsa -out west/ca-key.pem 4096
Create an OpenSSL configuration file named intermediate.conf in the west/ directory for the intermediate certificate of the West cluster. Copy the following example file and save it locally. Example configuration file:
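The example file did not survive extraction; a sketch identical to the East cluster configuration except for the location field:

[ req ]
encrypt_key = no
prompt = no
utf8 = yes
default_md = sha256
default_bits = 4096
req_extensions = req_ext
x509_extensions = req_ext
distinguished_name = req_dn
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
subjectAltName=@san
[ san ]
DNS.1 = istiod.istio-system.svc
[ req_dn ]
O = Istio
CN = Intermediate CA
L = west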
Create a certificate signing request by running the following command:
$ openssl req -new -config west/intermediate.conf \
    -key west/ca-key.pem \
    -out west/cluster-ca.csr
Create the certificate by running the following command:
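The command did not survive extraction; a representative sketch mirroring the East cluster step:

$ openssl x509 -req -sha256 -days 3650 \
    -CA root-cert.pem \
    -CAkey root-key.pem \
    -CAcreateserial \
    -extensions req_ext -extfile west/intermediate.conf \
    -in west/cluster-ca.csr \
    -out west/ca-cert.pem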
Create the certificate chain by running the following command:
$ cat west/ca-cert.pem root-cert.pem > west/cert-chain.pem && cp root-cert.pem west
5.2.2. Applying certificates to a multi-cluster topology
Apply root and intermediate certificate authority (CA) certificates to the clusters in a multi-cluster topology.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have access to two OpenShift Container Platform clusters with external load balancer support.
- You have created the root CA certificate and intermediate CA certificates for each cluster, or they have been made available to you.
Procedure
Apply the certificates to the East cluster of the multi-cluster topology:
Log in to the East cluster by running the following command:
$ oc login https://<east_cluster_api_server_url>
Set up the environment variable that contains the oc command context for the East cluster by running the following command:
$ export CTX_CLUSTER1=$(oc config current-context)
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CLUSTER1}" || oc new-project istio-system --context "${CTX_CLUSTER1}"
Configure Istio to use network1 as the default network for the pods on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Create the CA certificates, certificate chain, and the private key for Istio on the East cluster by running the following command:
$ oc get secret -n istio-system --context "${CTX_CLUSTER1}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER1}" \
    --from-file=east/ca-cert.pem \
    --from-file=east/ca-key.pem \
    --from-file=east/root-cert.pem \
    --from-file=east/cert-chain.pem
Note: If you followed the instructions in "Creating certificates for a multi-cluster topology", your certificates reside in the east/ directory. If your certificates reside in a different directory, modify the syntax accordingly.
Apply the certificates to the West cluster of the multi-cluster topology:
Log in to the West cluster by running the following command:
$ oc login https://<west_cluster_api_server_url>
Set up the environment variable that contains the oc command context for the West cluster by running the following command:
$ export CTX_CLUSTER2=$(oc config current-context)
Create a project called istio-system by running the following command:
$ oc get project istio-system --context "${CTX_CLUSTER2}" || oc new-project istio-system --context "${CTX_CLUSTER2}"
Configure Istio to use network2 as the default network for the pods on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Create the CA certificate secret for Istio on the West cluster by running the following command:
$ oc get secret -n istio-system --context "${CTX_CLUSTER2}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER2}" \
    --from-file=west/ca-cert.pem \
    --from-file=west/ca-key.pem \
    --from-file=west/root-cert.pem \
    --from-file=west/cert-chain.pem
Note: If you followed the instructions in "Creating certificates for a multi-cluster topology", your certificates reside in the west/ directory. If the certificates reside in a different directory, modify the syntax accordingly.
Next steps
Install Istio on all the clusters comprising the mesh topology.
5.3. Installing a multi-primary multi-network mesh
Install Istio in the multi-primary multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:
$ export ISTIO_VERSION=1.24.3
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install Istio on the East cluster:
Create an Istio resource on the East cluster by running the following command:
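The resource definition did not survive extraction. The following sketch, based on the Sail Operator multi-primary example, shows the shape of the command; the meshID, clusterName, and network values are assumptions that must match your environment:

$ oc --context "${CTX_CLUSTER1}" apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF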
Wait for the control plane to return the Ready status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Create an Istio resource on the West cluster by running the following command:
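Again the resource definition did not survive extraction; a sketch mirroring the East cluster with the cluster and network names changed:

$ oc --context "${CTX_CLUSTER2}" apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF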
Wait for the control plane to return the Ready status condition by running the following command:
$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
Expose the services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Create the istio-reader-service-account service account for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" create serviceaccount istio-reader-service-account -n istio-system
Create the istio-reader-service-account service account for the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" create serviceaccount istio-reader-service-account -n istio-system
Add the cluster-reader role to the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system
Add the cluster-reader role to the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system
Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 \
    --create-service-account=false | \
    oc --context="${CTX_CLUSTER1}" apply -f -
Install a remote secret on the West cluster that provides access to the API server on the East cluster by running the following command:
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER1}" \
    --name=cluster1 \
    --create-service-account=false | \
    oc --context="${CTX_CLUSTER2}" apply -f -
5.3.1. Verifying a multi-cluster topology
Deploy sample applications and verify traffic on a multi-cluster topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the laptop you will use to run these instructions.
- You have installed a multi-cluster topology.
Procedure
Deploy sample applications on the East cluster:
Create a sample application namespace on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" get project sample || oc --context="${CTX_CLUSTER1}" new-project sample
Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
Create the helloworld-v1 deployment by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_CLUSTER1}" apply \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample
Wait for the helloworld application on the East cluster to return the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1
Wait for the sleep application on the East cluster to return the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
Deploy the sample applications on the West cluster:
Create a sample application namespace on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" get project sample || oc --context="${CTX_CLUSTER2}" new-project sample
Label the application namespace to support sidecar injection by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
Create the helloworld-v2 deployment by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
    -l version=v2 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_CLUSTER2}" apply \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample
Wait for the helloworld application on the West cluster to return the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2
Wait for the sleep application on the West cluster to return the Available status condition by running the following command:
$ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep
Verifying traffic flows between clusters
For the East cluster, send 10 requests to the helloworld service by running the following command:
$ for i in {0..9}; do \
    oc --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
  done
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that you see responses from both clusters. This means version 1 and version 2 of the service can be seen in the responses.
For the West cluster, send 10 requests to the helloworld service:
$ for i in {0..9}; do \
    oc --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
  done
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that you see responses from both clusters. This means version 1 and version 2 of the service can be seen in the responses.
5.3.2. Removing a multi-cluster topology from a development environment
After experimenting with the multi-cluster functionality in a development environment, remove the multi-cluster topology from all the clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
Prerequisites
- You have installed a multi-cluster topology.
Procedure
Remove Istio and the sample applications from the East cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER1}" delete istio/default ns/istio-system ns/sample ns/istio-cni
Remove Istio and the sample applications from the West cluster of the development environment by running the following command:
$ oc --context="${CTX_CLUSTER2}" delete istio/default ns/istio-system ns/sample ns/istio-cni
5.4. Installing a primary-remote multi-network mesh
Install Istio in a primary-remote multi-network topology on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster. The East cluster is the primary cluster and the West cluster is the remote cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
- You have completed "Creating certificates for a multi-cluster mesh".
- You have completed "Applying certificates to a multi-cluster topology".
- You have created an Istio Container Network Interface (CNI) resource.
- You have istioctl installed on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:
$ export ISTIO_VERSION=1.24.3
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install Istio on the East cluster:
Set the default network for the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
Create an Istio resource on the East cluster by running the following command. Setting EXTERNAL_ISTIOD enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.
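The resource definition did not survive extraction. A sketch based on the Sail Operator primary-remote example; the meshID, clusterName, and network values are assumptions:

$ oc --context "${CTX_CLUSTER1}" apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  values:
    pilot:
      env:
        # Serve as an external control plane for remote clusters.
        EXTERNAL_ISTIOD: "true"
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF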
Wait for the control plane to return the Ready status condition by running the following command:
$ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the East cluster by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
Expose the control plane through the gateway so that services in the West cluster can access the control plane by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-istiod.yaml
Expose the application services through the gateway by running the following command:
$ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
Install Istio on the West cluster:
Save the IP address of the East-West gateway running in the East cluster by running the following command:
$ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Create an Istio resource on the West cluster by running the following command:
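The resource definition did not survive extraction. A sketch based on the Sail Operator remote profile example; the injection path and cluster names are assumptions that must match your environment:

$ oc --context "${CTX_CLUSTER2}" apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF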
Annotate the istio-system namespace in the West cluster so that it is managed by the control plane in the East cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
Set the default network for the West cluster by running the following command:
$ oc --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    oc --context="${CTX_CLUSTER1}" apply -f -
Wait for the Istio resource to return the Ready status condition by running the following command:
$ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
Create an East-West gateway on the West cluster by running the following command:
$ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
Note: Since the West cluster is installed with a remote profile, exposing the application services on the East cluster exposes them on the East-West gateways of both clusters.
5.5. Installing Kiali in a multi-cluster mesh
Install Kiali in a multi-cluster mesh configuration on two OpenShift Container Platform clusters.
In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.
You can adapt these instructions for a mesh spanning more than two clusters.
Prerequisites
- You have installed the latest Kiali Operator on each cluster.
- You have installed Istio in a multi-cluster configuration on each cluster.
- You have istioctl installed on the laptop you will use to run these instructions.
- You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- You have configured a metrics store so that Kiali can query metrics from all the clusters. Kiali queries metrics and traces from their respective endpoints.
Procedure
Install Kiali on the East cluster:
Create a YAML file named kiali.yaml that creates a namespace for the Kiali deployment. Example configuration:
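The example configuration did not survive extraction. A minimal sketch of a Kiali CR that queries metrics from OpenShift Monitoring; the Thanos querier URL is an assumption based on a default OpenShift Monitoring installation:

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  external_services:
    prometheus:
      auth:
        type: bearer
        use_kiali_token: true
      thanos_proxy:
        enabled: true
      url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091

Note: The endpoint for this example uses OpenShift Monitoring to configure metrics. For more information, see "Configuring OpenShift Monitoring with Kiali".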
Apply the YAML file on the East cluster by running the following command:
$ oc --context cluster1 apply -f kiali.yaml
Ensure that the Kiali custom resource (CR) is ready by running the following command:
$ oc wait --context cluster1 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m
Example output
kiali.kiali.io/kiali condition met
Display your Kiali route hostname by running the following command:
$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'
Example output
kiali-istio-system.apps.example.com
Create a Kiali CR on the West cluster.
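The example configuration did not survive extraction. A minimal sketch of a remote Kiali CR, saved as kiali-remote.yaml, that only creates the remote-cluster resources; verify the field against your Kiali Operator version:

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  deployment:
    # Create only the resources a remote cluster needs; do not deploy the Kiali server.
    remote_cluster_resources_only: true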
The Kiali Operator creates the resources necessary for the Kiali server on the East cluster to connect to the West cluster. The Kiali server is not installed on the West cluster.
Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-remote.yaml
Ensure that the Kiali CR is ready by running the following command:
$ oc wait --context cluster2 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m
Create a remote cluster secret so that the Kiali installation in the East cluster can access the West cluster.
Create a long-lived API token bound to the kiali-service-account in the West cluster. Kiali uses this token to authenticate to the West cluster. Example configuration:
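The example configuration did not survive extraction. A sketch of a standard Kubernetes service account token secret bound to the kiali-service-account; the secret name is an assumption:

apiVersion: v1
kind: Secret
metadata:
  name: kiali-service-account-token
  namespace: istio-system
  annotations:
    kubernetes.io/service-account.name: kiali-service-account
type: kubernetes.io/service-account-token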
Apply the YAML file on the West cluster by running the following command:
$ oc --context cluster2 apply -f kiali-svc-account-token.yaml
Create a kubeconfig file and save it as a secret in the namespace on the East cluster where the Kiali deployment resides.
To simplify this process, use the kiali-prepare-remote-cluster.sh script to generate the kubeconfig file. Download the script by running the following curl command:
$ curl -L -o kiali-prepare-remote-cluster.sh https://raw.githubusercontent.com/kiali/kiali/master/hack/istio/multicluster/kiali-prepare-remote-cluster.sh
Make the script executable by running the following command:
$ chmod +x kiali-prepare-remote-cluster.sh
Execute the script so that it passes the East and West cluster contexts to the kubeconfig file by running the following command:
$ ./kiali-prepare-remote-cluster.sh --kiali-cluster-context cluster1 --remote-cluster-context cluster2 --view-only false --kiali-resource-name kiali-service-account --remote-cluster-namespace istio-system --process-kiali-secret true --process-remote-resources false --remote-cluster-name cluster2
Note: Use the --help option to display additional details about how to use the script.
Trigger the reconciliation loop so that the Kiali Operator registers the remote secret that the CR contains by running the following command:
$ oc --context cluster1 annotate kiali kiali -n istio-system --overwrite kiali.io/reconcile="$(date)"
Wait for the Kiali resource to become ready by running the following command:
$ oc --context cluster1 wait --for=condition=Successful --timeout=2m kialis/kiali -n istio-system
Wait for the Kiali server to become ready by running the following command:
$ oc --context cluster1 rollout status deployments/kiali -n istio-system
Log in to Kiali.
- When you first access Kiali, log in to the cluster that contains the Kiali deployment. In this example, access the East cluster. Display the hostname of the Kiali route by running the following command:
$ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'
- Navigate to the Kiali URL in your browser: https://<your-kiali-route-hostname>.
Log in to the West cluster through Kiali.
To see other clusters in the Kiali UI, you must first log in to those clusters through Kiali.
- Click the user profile dropdown in the upper-right menu.
- Select Login to West. You are redirected to an OpenShift login page and prompted for credentials for the West cluster.
Verify that Kiali shows information from both clusters.
- Click Overview and verify that you can see namespaces from both clusters.
- Click Navigate and verify that you see both clusters on the mesh graph.
Chapter 6. Deploying multiple service meshes on a single cluster
You can use Red Hat OpenShift Service Mesh to operate multiple service meshes in a single cluster, with each mesh managed by a separate control plane. Using discovery selectors and revisions prevents conflicts between control planes.
6.1. Prerequisites
- You have installed the OpenShift Service Mesh Operator.
You have created an Istio Container Network Interface (CNI) resource.
Note: You can run the following command to check for existing Istio instances:
$ oc get istios
- You have installed the istioctl binary on your local machine.
6.2. About deploying multiple control planes
To configure a cluster to host two control planes, set up separate Istio resources with unique names in independent Istio system namespaces. Assign a unique revision name to each Istio resource to identify the control planes, workloads, or namespaces it manages. Apply these revision names using injection or istio.io/rev labels to specify which control plane injects the sidecar proxy into application pods.
Each Istio resource must also configure discovery selectors to specify which namespaces the Istio control plane observes. Only namespaces with labels that match the configured discovery selectors can join the mesh. Additionally, discovery selectors determine which control plane creates the istio-ca-root-cert config map in each namespace, which is used to encrypt traffic between services with mutual TLS within each mesh.
When adding an additional Istio control plane to a cluster with an existing control plane, ensure that the existing Istio instance has discovery selectors configured to avoid overlapping with the new control plane.
Only one IstioCNI resource is shared by all control planes in a cluster, and you must update this resource independently of other cluster resources.
6.3. Using multiple control planes on a single cluster
You can use discovery selectors to limit the visibility of an Istio control plane to specific namespaces in a cluster. By combining discovery selectors with control plane revisions, you can deploy multiple control planes in a single cluster, ensuring that each control plane manages only its assigned namespaces. This approach avoids conflicts between control planes and enables soft multi-tenancy for service meshes.
6.4. Deploying multiple control planes
You can extend this approach to support more than two control planes. The maximum number of service meshes in a single cluster depends on the available cluster resources.
6.4.1. Deploying the first control plane
You deploy the first control plane by creating its assigned namespace.
Procedure
Create the namespace for the first Istio control plane, called istio-system-1, by running the following command:
$ oc new-project istio-system-1
Add the following label, which is matched by the Istio discoverySelectors field, to the first namespace by running the following command:
$ oc label namespace istio-system-1 istio-discovery=mesh-1
Create a YAML file named istio-1.yaml with the name mesh-1 and the discovery selector set to mesh-1. Example configuration:
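The example configuration did not survive extraction. A sketch, assuming the Sail Operator API and the istio-discovery=mesh-1 label used above:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: mesh-1
spec:
  namespace: istio-system-1
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: mesh-1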
Create the first Istio resource by running the following command:
$ oc apply -f istio-1.yaml
To restrict workloads in mesh-1 from communicating freely with decrypted traffic between meshes, deploy a PeerAuthentication resource to enforce mutual TLS (mTLS) traffic within the mesh-1 data plane. Apply the PeerAuthentication resource in the istio-system-1 namespace by using a configuration file, such as peer-auth-1.yaml:
$ oc apply -f peer-auth-1.yaml
Example configuration:
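A sketch of the mTLS policy, assuming the standard Istio PeerAuthentication API:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system-1
spec:
  mtls:
    mode: STRICT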
6.4.2. Deploying the second control plane
After deploying the first control plane, you can deploy the second control plane by creating its assigned namespace.
Procedure
Create a namespace for the second Istio control plane, called istio-system-2, by running the following command:
$ oc new-project istio-system-2
Add the following label, which is matched by the Istio discoverySelectors field, to the second namespace by running the following command:
$ oc label namespace istio-system-2 istio-discovery=mesh-2
Create a YAML file named istio-2.yaml. Example configuration:
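A sketch mirroring the first control plane with the names changed:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: mesh-2
spec:
  namespace: istio-system-2
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: mesh-2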
Create the second Istio resource by running the following command:
$ oc apply -f istio-2.yaml
Deploy a policy so that workloads in the istio-system-2 namespace accept only mutual TLS traffic by applying a configuration file such as peer-auth-2.yaml. Run the following command:
$ oc apply -f peer-auth-2.yaml
Example configuration:
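A sketch mirroring the first mesh's policy:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system-2
spec:
  mtls:
    mode: STRICT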
6.4.3. Verifying multiple control planes
Verify that both of the Istio control planes are deployed and running properly. You can validate that the istiod pod is successfully running in each Istio system namespace.
Verify that the control plane pod is running in istio-system-1 by running the following command:
$ oc get pods -n istio-system-1
Example output
NAME                            READY   STATUS    RESTARTS   AGE
istiod-mesh-1-b69646b6f-kxrwk   1/1     Running   0          4m14s
Verify that the control plane pod is running in istio-system-2 by running the following command:
$ oc get pods -n istio-system-2
Example output
NAME                            READY   STATUS    RESTARTS   AGE
istiod-mesh-2-8666fdfc6-mqp45   1/1     Running   0          118s
6.5. Deploying application workloads in each mesh
To deploy application workloads, assign each workload to a separate namespace.
Procedure
Create an application namespace called app-ns-1 by running the following command:
$ oc create namespace app-ns-1
To ensure that the namespace is discovered by the first control plane, add the istio-discovery=mesh-1 label by running the following command:
$ oc label namespace app-ns-1 istio-discovery=mesh-1
To enable sidecar injection into all the pods by default while ensuring that pods in this namespace are mapped to the first control plane, add the istio.io/rev=mesh-1 label to the namespace by running the following command:
$ oc label namespace app-ns-1 istio.io/rev=mesh-1
Optional: You can verify the mesh-1 revision name by running the following command:
$ oc get istiorevisions
Deploy the sleep and httpbin applications by running the following command:
$ oc apply -n app-ns-1 \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Wait for the httpbin and sleep pods to run with sidecars injected by running the following command:
$ oc get pods -n app-ns-1
Example output
NAME                       READY   STATUS    RESTARTS   AGE
httpbin-7f56dc944b-kpw2x   2/2     Running   0          2m26s
sleep-5577c64d7c-b5wd2     2/2     Running   0          91m
Create a second application namespace called app-ns-2 by running the following command:
$ oc create namespace app-ns-2
Create a third application namespace called app-ns-3 by running the following command:
$ oc create namespace app-ns-3
Add the istio-discovery=mesh-2 label and the istio.io/rev=mesh-2 revision label to both namespaces so that they match the discovery selector of the second control plane, by running the following command:
$ oc label namespace app-ns-2 app-ns-3 istio-discovery=mesh-2 istio.io/rev=mesh-2
Deploy the sleep and httpbin applications to the app-ns-2 namespace by running the following command:
$ oc apply -n app-ns-2 \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Deploy the sleep and httpbin applications to the app-ns-3 namespace by running the following command:
$ oc apply -n app-ns-3 \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
    -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
Optional: Wait for the deployments in the app-ns-2 namespace to become available by running the following command:
$ oc wait deployments -n app-ns-2 --all --for condition=Available
Verification
Verify that each application workload is managed by its assigned control plane by using the istioctl ps command after deploying the applications:
Verify that the workloads are assigned to the control plane in istio-system-1 by running the following command:
$ istioctl ps -i istio-system-1
Example output
NAME                                CLUSTER      CDS            LDS            EDS            RDS            ECDS      ISTIOD                          VERSION
httpbin-7f56dc944b-vwfm5.app-ns-1   Kubernetes   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   IGNORED   istiod-mesh-1-b69646b6f-kxrwk   1.23.0
sleep-5577c64d7c-d675f.app-ns-1     Kubernetes   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   SYNCED (11m)   IGNORED   istiod-mesh-1-b69646b6f-kxrwk   1.23.0
Verify that the workloads are assigned to the control plane in istio-system-2 by running the following command:
$ istioctl ps -i istio-system-2
Example output
NAME                                CLUSTER      CDS              LDS              EDS              RDS              ECDS      ISTIOD                          VERSION
httpbin-7f56dc944b-54gjs.app-ns-3   Kubernetes   SYNCED (3m59s)   SYNCED (3m59s)   SYNCED (3m59s)   SYNCED (3m59s)   IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
httpbin-7f56dc944b-gnh72.app-ns-2   Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
sleep-5577c64d7c-k9mxz.app-ns-2     Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
sleep-5577c64d7c-m9hvm.app-ns-3     Kubernetes   SYNCED (4m1s)    SYNCED (4m1s)    SYNCED (3m59s)   SYNCED (4m1s)    IGNORED   istiod-mesh-2-8666fdfc6-mqp45   1.23.0
Verify that application connectivity is restricted to workloads within their respective meshes:
Send a request from the sleep pod in app-ns-1 to the httpbin service in app-ns-2 to check that the communication fails, by running the following command:
$ oc -n app-ns-1 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000
The PeerAuthentication resources created earlier enforce mutual TLS (mTLS) traffic in STRICT mode within each mesh. Each mesh uses its own root certificate, managed by the istio-ca-root-cert config map, which prevents communication between meshes. The output indicates a communication failure, similar to the following example:
Example output
HTTP/1.1 503 Service Unavailable
content-length: 95
content-type: text/plain
date: Wed, 16 Oct 2024 12:05:37 GMT
server: envoy
Confirm that communication within the same mesh works by sending a request from the sleep pod in the app-ns-2 namespace to the httpbin service in the app-ns-3 namespace; both namespaces are managed by mesh-2. Run the following command:
$ oc -n app-ns-2 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000
The output shows a successful response, confirming that workloads within the same mesh can communicate.
Chapter 7. External control plane topology
Use the external control plane topology to isolate the control plane from the data plane on separate clusters.
7.1. About external control plane topology
The external control plane topology improves security and allows the Service Mesh to be hosted as a service. In this installation configuration, one cluster hosts and manages the Istio control plane, and applications are hosted on other clusters.
7.1.1. Installing the control plane and data plane on separate clusters
Install Istio on a control plane cluster and a separate data plane cluster. This installation approach provides increased security.
You can adapt these instructions for a mesh spanning more than one data plane cluster. You can also adapt these instructions for multiple meshes with multiple control planes on the same control plane cluster.
Prerequisites
- You have installed the OpenShift Service Mesh Operator on the control plane cluster and the data plane cluster.
- You have istioctl installed on the laptop you will use to run these instructions.
Procedure
Create an ISTIO_VERSION environment variable that defines the Istio version to install on all the clusters by running the following command:
$ export ISTIO_VERSION=1.24.3
Create a REMOTE_CLUSTER_NAME environment variable that defines the name of the data plane cluster by running the following command:
$ export REMOTE_CLUSTER_NAME=cluster1
Set up the environment variable that contains the oc command context for the control plane cluster by running the following command:
$ export CTX_CONTROL_PLANE_CLUSTER=<context_name_of_the_control_plane_cluster>
Set up the environment variable that contains the oc command context for the data plane cluster by running the following command:
$ export CTX_DATA_PLANE_CLUSTER=<context_name_of_the_data_plane_cluster>
Set up the ingress gateway for the control plane:
Create a project called istio-system if it does not already exist by running the following command:
$ oc get project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}"
Create an Istio resource on the control plane cluster to manage the ingress gateway by running a command similar to the following:
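The resource body is environment specific; the following is a minimal sketch, assuming the sailoperator.io/v1 API and the resource name default (both assumptions based on the upstream Sail Operator external control plane example):
$ cat <<EOF | oc apply --context "${CTX_CONTROL_PLANE_CLUSTER}" -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default            # assumed name; this instance only manages the ingress gateway
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-system
EOF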
Create the ingress gateway for the control plane by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/controlplane-gateway.yaml
Get the assigned IP address for the ingress gateway by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Store the IP address of the ingress gateway in an environment variable by running the following command:
$ export EXTERNAL_ISTIOD_ADDR=$(oc -n istio-system --context="${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Install Istio on the data plane cluster:
Create a project called external-istiod on the data plane cluster by running the following command:
$ oc get project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}"
Create an Istio resource on the data plane cluster by running a command similar to the following. One setting in this resource identifies the data plane cluster as the source of the mesh configuration.
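A minimal sketch of such a resource, modeled on the upstream Sail Operator external control plane example; the revision name, profile, and value fields shown are assumptions that illustrate the shape of the configuration, and additional istiodRemote settings may be required in practice:
$ cat <<EOF | oc apply --context "${CTX_DATA_PLANE_CLUSTER}" -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  version: v${ISTIO_VERSION}
  namespace: external-istiod
  profile: remote                                  # data plane only; istiod runs on the control plane cluster
  values:
    defaultRevision: external-istiod
    global:
      istioNamespace: external-istiod
      remotePilotAddress: ${EXTERNAL_ISTIOD_ADDR}  # ingress gateway of the control plane cluster
      configCluster: true                          # identifies this cluster as the source of the mesh configuration
EOF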
Create a project called istio-cni on the data plane cluster by running the following command:
$ oc get project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}"
Create an IstioCNI resource on the data plane cluster by running a command similar to the following:
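A minimal sketch, following the IstioCNI resource format used by the OpenShift Service Mesh Operator; the resource name default matches the istiocni/default wait command later in this procedure:
$ cat <<EOF | oc apply --context "${CTX_DATA_PLANE_CLUSTER}" -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
EOF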
Set up the external Istio control plane on the control plane cluster:
Create a project called external-istiod on the control plane cluster by running the following command:
$ oc get project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}"
Create a ServiceAccount resource on the control plane cluster by running the following command:
$ oc --context="${CTX_CONTROL_PLANE_CLUSTER}" create serviceaccount istiod-service-account -n external-istiod
Store the API server address for the data plane cluster in an environment variable by running the following command:
$ DATA_PLANE_API_SERVER=https://<hostname_or_IP_address_of_the_API_server_for_the_data_plane_cluster>:6443
Install a remote secret on the control plane cluster that provides access to the API server on the data plane cluster by running a command similar to the following:
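A sketch of the command, based on the istioctl create-remote-secret flags used in the upstream external control plane procedure; the exact flag set for your environment may differ:
$ istioctl create-remote-secret \
    --context="${CTX_DATA_PLANE_CLUSTER}" \
    --name="${REMOTE_CLUSTER_NAME}" \
    --type=config \
    --namespace=external-istiod \
    --service-account=istiod-service-account \
    --create-service-account=false \
    --server="${DATA_PLANE_API_SERVER}" | \
    oc apply --context="${CTX_CONTROL_PLANE_CLUSTER}" -f -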
Create an Istio resource on the control plane cluster by running a command similar to the following:
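An abbreviated sketch, modeled on the upstream external control plane example; the profile, discovery address, and pilot settings shown here are assumptions, and a production configuration typically carries additional pilot environment variables and webhook settings:
$ cat <<EOF | oc apply --context "${CTX_CONTROL_PLANE_CLUSTER}" -f -
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: external-istiod
spec:
  version: v${ISTIO_VERSION}
  namespace: external-istiod
  profile: empty                 # only istiod is installed; no gateways or CNI
  values:
    meshConfig:
      rootNamespace: external-istiod
      defaultConfig:
        discoveryAddress: ${EXTERNAL_ISTIOD_ADDR}:15012
    pilot:
      enabled: true
      env:
        EXTERNAL_ISTIOD: "true"  # run istiod as an external control plane
    global:
      caAddress: ${EXTERNAL_ISTIOD_ADDR}:15012
      istioNamespace: external-istiod
      operatorManageWebhooks: true
      configValidation: false
EOF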
Create Gateway and VirtualService resources so that the sidecar proxies on the data plane cluster can access the control plane by running a command similar to the following:
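A sketch of the two resources; the resource names and the istiod-external-istiod service host are assumptions based on the revisioned control plane created above, and ports 15012 (XDS) and 15017 (webhooks) follow the upstream external control plane example:
$ cat <<EOF | oc apply --context "${CTX_CONTROL_PLANE_CLUSTER}" -f -
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: external-istiod-gw
  namespace: external-istiod
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15012
      protocol: tls
      name: tls-xds
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
  - port:
      number: 15017
      protocol: tls
      name: tls-webhook
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: external-istiod-vs
  namespace: external-istiod
spec:
  hosts:
  - "*"
  gateways:
  - external-istiod-gw
  tls:
  - match:
    - port: 15012
      sniHosts: ["*"]
    route:
    - destination:
        host: istiod-external-istiod.external-istiod.svc.cluster.local
        port:
          number: 15012
  - match:
    - port: 15017
      sniHosts: ["*"]
    route:
    - destination:
        host: istiod-external-istiod.external-istiod.svc.cluster.local
        port:
          number: 443
EOF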
Wait for the external-istiod Istio resource on the control plane cluster to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m
Wait for the Istio resource on the data plane cluster to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m
Wait for the IstioCNI resource on the data plane cluster to return the "Ready" status condition by running the following command:
$ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istiocni/default --timeout=3m
Verification
Deploy sample applications on the data plane cluster:
Create a namespace for sample applications on the data plane cluster by running the following command:
$ oc --context "${CTX_DATA_PLANE_CLUSTER}" get project sample || oc --context="${CTX_DATA_PLANE_CLUSTER}" new-project sample
Label the namespace for the sample applications to support sidecar injection by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" label namespace sample istio.io/rev=external-istiod
Deploy the helloworld application:
Create the helloworld service by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
    -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
Create the helloworld-v1 deployment by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
    -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample
Deploy the sleep application by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
    -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/sleep/sleep.yaml -n sample
Verify that the pods in the sample namespace have a sidecar injected by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" get pods -n sample
The terminal should return 2/2 in the READY column for each pod in the sample namespace:
Example output
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-6d65866976-jb6qc   2/2     Running   0          1m
sleep-5fcd8fd6c8-mg8n2           2/2     Running   0          1m
Verify that internal traffic can reach the applications on the cluster:
Verify that a request can be sent to the helloworld application through the sleep application by running the following command:
$ oc exec --context="${CTX_DATA_PLANE_CLUSTER}" -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello
The terminal should return a response from the helloworld application:
Example output
Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc
Install an ingress gateway to expose the sample application to external clients:
Create the ingress gateway by running the following command:
$ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/refs/heads/main/chart/samples/ingress-gateway.yaml -n sample
Confirm that the ingress gateway is running by running the following command:
$ oc get pod -l app=istio-ingressgateway -n sample --context="${CTX_DATA_PLANE_CLUSTER}"
The terminal should return output confirming that the gateway is running:
Example output
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s
Expose the helloworld application through the ingress gateway by running the following command:
$ oc apply -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld-gateway.yaml -n sample --context="${CTX_DATA_PLANE_CLUSTER}"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Set the gateway URL environment variable by running the following command:
export INGRESS_HOST=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'); \ export INGRESS_PORT=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'); \ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
$ export INGRESS_HOST=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'); \ export INGRESS_PORT=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'); \ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verify that external traffic can reach the applications on the mesh:
Confirm that the helloworld application is accessible through the gateway by running the following command:
$ curl -s "http://${GATEWAY_URL}/hello"
The helloworld application should return a response:
Example output
Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc
Chapter 8. Istioctl tool
OpenShift Service Mesh 3 supports istioctl, the command-line utility for the Istio project that includes many diagnostic and debugging utilities.
8.1. Support for Istioctl
OpenShift Service Mesh 3 supports a selection of istioctl commands.
Command | Description
---|---
admin | Manage the control plane (istiod) configuration
analyze | Analyze the Istio configuration and print validation messages
completion | Generate the autocompletion script for the specified shell
create-remote-secret | Create a secret with credentials to allow Istio to access remote Kubernetes API servers
help | Display help about any command
proxy-config | Retrieve information about the proxy configuration from Envoy (Kubernetes only)
proxy-status | Retrieve the synchronization status of each Envoy in the mesh
remote-clusters | List the remote clusters each istiod instance is connected to
validate | Validate the Istio policy and rules files
version | Print out the build version information
waypoint | Manage the waypoint configuration
ztunnel-config | Update or retrieve the current Ztunnel configuration
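For example, the proxy-status command (abbreviated as ps, and used earlier in this document) reports the Envoy synchronization state of every workload that a given control plane manages; istio-system here is a placeholder for your control plane namespace:
$ istioctl proxy-status -i istio-system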
8.2. Installing the Istioctl tool
Install the istioctl
command-line utility to debug and diagnose Istio service mesh deployments.
Prerequisites
- You have access to the OpenShift Container Platform web console.
- The OpenShift Service Mesh 3 Operator is installed and running.
- You have created at least one Istio resource.
Procedure
Confirm which version of the Istio resource runs on the installation by running the following command:
$ oc get istio -ojsonpath="{range .items[*]}{.spec.version}{'\n'}{end}" | sed s/^v// | sort
If there are multiple Istio resources with different versions, choose the latest version. The latest version is displayed last.
- In the OpenShift Container Platform web console, click the Help icon and select Command Line Tools.
Click Download istioctl. Choose the version and architecture that match your system.
Extract the istioctl binary file.
If you are using a Linux operating system, run the following command:
$ tar xzf istioctl-<VERSION>-<OS>-<ARCH>.tar.gz
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If you are using an Apple Mac operating system, unpack and extract the archive.
- If you are using a Microsoft Windows operating system, use a zip utility to extract the archive.
Move to the uncompressed directory by running the following command:
$ cd istioctl-<VERSION>-<OS>-<ARCH>
Add the istioctl client to the path by running the following command:
$ export PATH=$PWD:$PATH
Confirm that the istioctl client version and the Istio control plane version match or are within one version of each other by running the following command:
$ istioctl version
Sample output:
client version: 1.20.0
control plane version: 1.24.3_ossm
data plane version: none
Chapter 9. Enabling mutual Transport Layer Security
You can use Red Hat OpenShift Service Mesh to customize communication security across your application's microservices. Mutual Transport Layer Security (mTLS) is a protocol that enables two parties to authenticate each other.
9.1. About mutual Transport Layer Security (mTLS)
In OpenShift Service Mesh 3, you use the Istio resource instead of the ServiceMeshControlPlane resource to configure mTLS settings. You configure STRICT mTLS mode by using the PeerAuthentication and DestinationRule resources, and you set TLS protocol versions through the Istio workload minimum TLS version configuration.
Review the following Istio resources and concepts to configure mTLS settings properly:
PeerAuthentication
- Defines the type of mTLS traffic a sidecar accepts. In PERMISSIVE mode, both plaintext and mTLS traffic are accepted. In STRICT mode, only mTLS traffic is allowed.
DestinationRule
- Configures the type of TLS traffic a sidecar sends. In DISABLE mode, the sidecar sends plaintext. In SIMPLE, MUTUAL, and ISTIO_MUTUAL modes, the sidecar establishes a TLS connection.
Auto mTLS
- Ensures that all inter-mesh traffic is encrypted with mTLS by default, regardless of the PeerAuthentication mode configuration. Auto mTLS is controlled by the global mesh configuration field enableAutoMtls, which is enabled by default in OpenShift Service Mesh 2 and 3. The mTLS setting operates entirely between sidecar proxies, requiring no changes to application or service code.
By default, PeerAuthentication is set to PERMISSIVE mode, allowing sidecars in the Service Mesh to accept both plaintext and mTLS-encrypted traffic.
9.2. Enabling strict mTLS mode by namespace
You can restrict workloads to accept only encrypted mTLS traffic by enabling STRICT mode in the PeerAuthentication resource.
Example PeerAuthentication policy for a namespace
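A minimal sketch that follows the standard Istio PeerAuthentication schema; <namespace> is a placeholder for the namespace you want to restrict:
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: <namespace>
spec:
  mtls:
    mode: STRICT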
When auto mTLS is disabled and PeerAuthentication is set to STRICT mode, you can enable mTLS for all destination hosts in the <namespace> by creating a DestinationRule resource with MUTUAL or ISTIO_MUTUAL mode.
Example DestinationRule policy for a namespace
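A minimal sketch that follows the standard Istio DestinationRule schema; the resource name and the wildcard host pattern are illustrative assumptions:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: enable-mtls
  namespace: <namespace>
spec:
  host: "*.<namespace>.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL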
9.3. Enabling strict mTLS across the whole service mesh
You can configure mTLS across the entire mesh by applying the PeerAuthentication policy to the istiod namespace, such as istio-system. The istiod namespace name must match the spec.namespace field of your Istio resource.
Example PeerAuthentication policy for the whole mesh
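A minimal sketch, assuming the control plane namespace is istio-system; create the resource in the namespace named by the spec.namespace field of your Istio resource:
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT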
Additionally, create a DestinationRule resource to disable mTLS for communication with the API server, because the API server does not have a sidecar. Apply similar DestinationRule configurations to other services that do not have sidecars.
Example DestinationRule policy for the whole mesh
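A minimal sketch that follows the pattern from the upstream Istio mTLS documentation; the resource name is an illustrative assumption:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: api-server
  namespace: istio-system
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE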
9.4. Validating encryption with Kiali
The Kiali console offers several ways to validate whether your applications, services, and workloads have mTLS encryption enabled.
The Services Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. Kiali also displays a lock icon in the Network section next to ports that are configured for mTLS.