Installing


Red Hat OpenShift Service Mesh 3.3

Installing OpenShift Service Mesh

Red Hat OpenShift Documentation Team

Abstract

This documentation provides information about installing OpenShift Service Mesh.

Chapter 1. Supported platforms and configurations

Before you can install Red Hat OpenShift Service Mesh 3.3.0, you must subscribe to OpenShift Container Platform and install OpenShift Container Platform in a supported configuration. If you do not have a subscription on your Red Hat account, contact your sales representative for more information.

1.1. Supported platforms for Service Mesh

The following platform versions support Service Mesh control plane version 3.3.0:

  • Red Hat OpenShift Container Platform version 4.18 or later
  • Red Hat OpenShift Dedicated version 4
  • Azure Red Hat OpenShift (ARO) version 4
  • Red Hat OpenShift Service on AWS (ROSA)

The Red Hat OpenShift Service Mesh Operator supports multiple versions of Istio.

If you are installing Red Hat OpenShift Service Mesh on a restricted network, follow the instructions for your chosen OpenShift Container Platform infrastructure.

For additional information about Red Hat OpenShift Service Mesh lifecycle and supported platforms, refer to the Support Policy.

1.2. Supported configurations for Service Mesh

Red Hat OpenShift Service Mesh supports the following configurations:

  • This release of Red Hat OpenShift Service Mesh is supported on OpenShift Container Platform x86_64, IBM Z®, IBM Power®, and Advanced RISC Machine (ARM).
  • Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster.
  • Configurations that do not integrate external services such as virtual machines.
Note

Red Hat OpenShift Service Mesh does not support the EnvoyFilter configuration except where explicitly documented.

You can use the following OpenShift networking plugins for the Red Hat OpenShift Service Mesh:

1.3.1. Supported configurations for Kiali

  • The Kiali console is supported on Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers.
  • The openshift authentication strategy is the only supported authentication configuration when Kiali is deployed with Red Hat OpenShift Service Mesh (OSSM). The openshift strategy controls access based on the user’s role-based access control (RBAC) roles in OpenShift Container Platform.

Chapter 2. Installing OpenShift Service Mesh

Installing OpenShift Service Mesh consists of three main tasks: installing the Red Hat OpenShift Service Mesh Operator, deploying Istio, and customizing the Istio configuration. You can then optionally install the sample bookinfo application to push data through the mesh and explore mesh functionality.

Warning

Before installing OpenShift Service Mesh 3, make sure that OpenShift Service Mesh 2 is not running in the same cluster: running both versions in the same cluster causes conflicts unless they are configured correctly. To migrate from OpenShift Service Mesh 2, see Migrating from OpenShift Service Mesh 2.6.

To deploy Istio using the Red Hat OpenShift Service Mesh Operator, you must create an Istio resource. The Operator then creates an IstioRevision resource, which represents one revision of the Istio control plane. Based on the IstioRevision resource, the Operator deploys the Istio control plane, which includes the istiod Deployment resource and other resources.

The Red Hat OpenShift Service Mesh Operator may create additional instances of the IstioRevision resource, depending on the update strategy defined in the Istio resource.

2.1.1. About Istio control plane update strategies

The update strategy affects how the update process is performed. The spec.updateStrategy field in the Istio resource configuration determines how the OpenShift Service Mesh Operator updates the Istio control plane. When the Operator detects a change in the spec.version field or identifies a new minor release with a configured vX.Y-latest alias, it initiates an upgrade procedure. For each mesh, you select one of two strategies:

  • InPlace
  • RevisionBased

InPlace is the default strategy for updating OpenShift Service Mesh. Both update strategies apply to sidecar and ambient modes.
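For example, a minimal Istio resource that selects the RevisionBased strategy might look like the following sketch; the version value is illustrative:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.6
  updateStrategy:
    type: RevisionBased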

If you use ambient mode, you must update the Istio Container Network Interface (CNI) and ZTunnel components in addition to the standard control plane update procedures.

Important

The InPlace update strategy is recommended for ambient mode. Using RevisionBased updates with ambient mode has limitations and requires manual intervention.

2.2. Installing the Service Mesh Operator

Warning

For clusters that do not have an existing OpenShift Service Mesh instance, install the Service Mesh Operator as described in this procedure. OpenShift Service Mesh operates cluster-wide and needs a scope configuration to prevent conflicts between Istio control planes. For clusters that already run OpenShift Service Mesh 3 or later, see "Deploying multiple service meshes on a single cluster".

Prerequisites

  • You have deployed a cluster on OpenShift Container Platform 4.14 or later.
  • You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.

Procedure

  1. In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
  2. Search for the Red Hat OpenShift Service Mesh 3 Operator.
  3. Locate the Service Mesh Operator, and click to select it.
  4. When the prompt that discusses the community operator opens, click Continue.
  5. Click Install.
  6. On the Install Operator page, perform the following steps:

    1. Select All namespaces on the cluster (default) as the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
    2. Select Automatic as the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) handles the future upgrades to the Operator automatically. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
    3. Select an Update Channel.

      • Choose the stable channel to install the latest stable version of the Red Hat OpenShift Service Mesh 3 Operator. It is the default channel for installing the Operator.
      • To install a specific version of the Red Hat OpenShift Service Mesh 3 Operator, choose the corresponding
        stable-<version>
        channel. For example, to install the Red Hat OpenShift Service Mesh Operator version 3.0.x, use the stable-3.0 channel.
  7. Click Install to install the Operator.

Verification

  1. Click Operators → Installed Operators to verify that the Service Mesh Operator is installed. Succeeded should appear in the Status column.

Installing the Red Hat OpenShift Service Mesh Operator also installs custom resource definitions (CRDs) that administrators can use to configure Istio for Service Mesh installations. The Operator Lifecycle Manager (OLM) installs two categories of CRDs: Sail Operator CRDs and Istio CRDs.

Sail Operator CRDs define custom resources for installing and maintaining the Istio components required to operate a service mesh. These custom resources belong to the sailoperator.io API group and include the Istio, IstioRevision, IstioCNI, and ZTunnel resource kinds. For more information on how to configure these resources, see the sailoperator.io API reference documentation.

Istio CRDs are associated with mesh configuration and service management. These CRDs define custom resources in several istio.io API groups, such as networking.istio.io and security.istio.io. The CRDs also include various resource kinds, such as AuthorizationPolicy, DestinationRule, and VirtualService, that administrators use to configure a service mesh.
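To see which of these CRDs are present on a cluster, you can list them by API group. This is a quick, optional check; the exact output varies by Operator version:

$ oc get crd -o name | grep -E 'sailoperator.io|istio.io'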

2.3. About Istio deployment

To deploy Istio, you must create two resources: Istio and IstioCNI. The Istio resource deploys and configures the Istio control plane. The IstioCNI resource deploys and configures the Istio Container Network Interface (CNI) plugin. You should create these resources in separate projects; therefore, you must create two projects as part of the Istio deployment process.

You can use the OpenShift web console or the OpenShift CLI (oc) to create a project or a resource in your cluster.

Note

In the OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the terms are essentially synonymous.

The Service Mesh Operator deploys the Istio control plane to a project that you create. In this example, istio-system is the name of the project.

Prerequisites

  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • You are logged in to the OpenShift Container Platform web console as cluster-admin.

Procedure

  1. In the OpenShift Container Platform web console, click Home → Projects.
  2. Click Create Project.
  3. At the prompt, enter a name for the project in the Name field. For example, istio-system. The other fields provide supplementary information to the Istio resource definition and are optional.
  4. Click Create. The Service Mesh Operator deploys Istio to the project you specified.
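If you prefer the CLI, an equivalent step is to create the project directly. This sketch assumes you are logged in with the oc client:

$ oc new-project istio-system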

Create the Istio resource that will contain the YAML configuration file for your Istio deployment. The Red Hat OpenShift Service Mesh Operator uses information in the YAML file to create an instance of the Istio control plane.

Prerequisites

  • The Service Mesh Operator must be installed.
  • You are logged in to the OpenShift Container Platform web console as cluster-admin.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → Installed Operators.
  2. Select istio-system in the Project drop-down menu.
  3. Click the Service Mesh Operator.
  4. Click Istio.
  5. Click Create Istio.
  6. Select the istio-system project from the Namespace drop-down menu.
  7. Click Create. This action deploys the Istio control plane.

    When State: Healthy appears in the Status column, Istio is successfully deployed.
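You can also confirm the deployment from the CLI by waiting for the resource to report readiness; the timeout value is illustrative:

$ oc wait --for=condition=Ready istios/default --timeout=3m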

The Service Mesh Operator deploys the Istio CNI plugin to a project that you create. In this example, istio-cni is the name of the project.

Prerequisites

  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • You are logged in to the OpenShift Container Platform web console as cluster-admin.

Procedure

  1. In the OpenShift Container Platform web console, click Home → Projects.
  2. Click Create Project.
  3. At the prompt, enter a name for the project in the Name field. For example, istio-cni. The other fields provide supplementary information and are optional.
  4. Click Create.

Create an Istio Container Network Interface (CNI) resource, which contains the configuration file for the Istio CNI plugin. The Service Mesh Operator uses the configuration specified by this resource to deploy the CNI pod.

Prerequisites

  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • You are logged in to the OpenShift Container Platform web console as cluster-admin.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → Installed Operators.
  2. Select istio-cni in the Project drop-down menu.
  3. Click the Service Mesh Operator.
  4. Click IstioCNI.
  5. Click Create IstioCNI.
  6. Ensure that the name is default.
  7. Click Create. This action deploys the Istio CNI plugin.

    When State: Healthy appears in the Status column, the Istio CNI plugin is successfully deployed.
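If you create the resource from the CLI instead, a minimal IstioCNI resource might look like the following sketch, matching the name and project used in this procedure:

apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  namespace: istio-cni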

2.4. Scoping the Service Mesh with discovery selectors

Service Mesh includes workloads that meet the following criteria:

  • The control plane has discovered the workload.
  • The workload has an Envoy proxy sidecar injected.

By default, the control plane discovers workloads in all namespaces across the cluster, with the following results:

  • Each proxy instance receives configuration for all namespaces, including workloads not enrolled in the mesh.
  • Any workload with the appropriate pod or namespace injection label receives a proxy sidecar.

In shared clusters, you might want to limit the scope of Service Mesh to only certain namespaces. This approach is especially useful if multiple service meshes run in the same cluster.

2.4.1. About discovery selectors

With discovery selectors, the mesh administrator can control which namespaces the control plane can access. By using a Kubernetes label selector, the administrator sets the criteria for the namespaces visible to the control plane, excluding any namespaces that do not match the specified criteria.

Note

Istiod always opens a watch to OpenShift for all namespaces. However, discovery selectors cause istiod to ignore objects in namespaces that are not selected very early in its processing, which minimizes costs.

The discoverySelectors field accepts an array of Kubernetes selectors, which apply to labels on namespaces. You can configure each selector for different use cases (see the sketch after this list):

  • Custom label names and values. For example, configure all namespaces with the label istio-discovery=enabled.
  • A list of namespace labels by using set-based selectors with OR logic. For instance, configure namespaces with istio-discovery=enabled OR region=us-east1.
  • Inclusion and exclusion of namespaces. For example, configure namespaces with istio-discovery=enabled AND the label app=helloworld.
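The following sketch illustrates the OR case; each entry in the discoverySelectors array is evaluated independently, so separate entries combine with OR logic:

meshConfig:
  discoverySelectors:
    # A namespace matches if it satisfies either entry.
    - matchLabels:
        istio-discovery: enabled
    - matchLabels:
        region: us-east1

To express AND instead, place both labels inside a single matchLabels entry, for example istio-discovery: enabled together with app: helloworld.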
Note

Discovery selectors are not a security boundary. Istiod continues to have access to all namespaces even when you have configured the discoverySelectors field.

If you know which namespaces to include in the Service Mesh, configure discoverySelectors during or after installation by adding the required selectors to the meshConfig.discoverySelectors section of the Istio resource. For example, configure Istio to discover only namespaces labeled istio-discovery=enabled.

Prerequisites

  • The OpenShift Service Mesh operator is installed.
  • An Istio CNI resource is created.

Procedure

  1. Add a label to the namespace containing the Istio control plane, for example, the istio-system namespace:

    $ oc label namespace istio-system istio-discovery=enabled
  2. Modify the Istio control plane resource to include a discoverySelectors section with the same label:

    kind: Istio
    apiVersion: sailoperator.io/v1
    metadata:
      name: default
    spec:
      namespace: istio-system
      values:
        meshConfig:
          discoverySelectors:
            - matchLabels:
                istio-discovery: enabled
  3. Apply the Istio CR:

    $ oc apply -f istio.yaml
  4. Ensure that all namespaces that will contain workloads that are to be part of the Service Mesh have both the discovery selector label and, if needed, the appropriate Istio injection label.
Note

Discovery selectors help restrict the scope of a single Service Mesh and are essential for limiting the control plane scope when you deploy multiple Istio control planes in a single cluster.

2.5. About the Bookinfo application

Installing the bookinfo example application consists of two main tasks: deploying the application and creating a gateway so the application is accessible outside the cluster.

You can use the bookinfo application to explore service mesh features. Using the bookinfo application, you can easily confirm that requests from a web browser pass through the mesh and reach the application.

The bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, lists book details (ISBN, number of pages, and other information), and shows book reviews.

The bookinfo application is exposed through the mesh, and the mesh configuration determines how the microservices comprising the application are used to serve requests. The review information comes from one of three services: reviews-v1, reviews-v2, or reviews-v3. If you deploy the bookinfo application without defining the reviews virtual service, then the mesh uses a round-robin rule to route requests to a service.

By deploying the reviews virtual service, you can specify a different behavior. For example, you can specify that if a user logs in to the bookinfo application, then the mesh routes requests to the reviews-v2 service, and the application displays reviews with black stars. If a user does not log in to the bookinfo application, then the mesh routes requests to the reviews-v3 service, and the application displays reviews with red stars.
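As a sketch of what such routing can look like, the following virtual service is modeled on the upstream bookinfo samples: it routes the test user jason to reviews-v2 and all other users to reviews-v3. It assumes a DestinationRule that defines the v2 and v3 subsets:

apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
    - reviews
  http:
    # Requests from the logged-in test user "jason" go to reviews-v2 (black stars).
    - match:
        - headers:
            end-user:
              exact: jason
      route:
        - destination:
            host: reviews
            subset: v2
    # All other requests go to reviews-v3 (red stars).
    - route:
        - destination:
            host: reviews
            subset: v3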

For more information, see Bookinfo Application in the upstream Istio documentation.

2.5.1. Deploying the Bookinfo application

Prerequisites

  • You have deployed a cluster on OpenShift Container Platform 4.15 or later.
  • You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have access to the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
  • You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.

Procedure

  1. In the OpenShift Container Platform web console, navigate to the Home → Projects page.
  2. Click Create Project.
  3. Enter bookinfo in the Project name field.

    The Display name and Description fields provide supplementary information and are not required.

  4. Click Create.
  5. Apply the Istio discovery selector and injection label to the bookinfo namespace by entering the following command:

    $ oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled
    Note

    In this example, the name of the Istio resource is default. If the Istio resource name is different, you must set the istio.io/rev label to the name of the Istio resource instead of adding the istio-injection=enabled label.

  6. Apply the bookinfo YAML file to deploy the bookinfo application by entering the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo

Verification

  1. Verify that the bookinfo services are available by running the following command:

    $ oc get services -n bookinfo

    Example output

    NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    details       ClusterIP   172.30.137.21   <none>        9080/TCP   44s
    productpage   ClusterIP   172.30.2.246    <none>        9080/TCP   43s
    ratings       ClusterIP   172.30.33.85    <none>        9080/TCP   44s
    reviews       ClusterIP   172.30.175.88   <none>        9080/TCP   44s

  2. Verify that the bookinfo pods are available by running the following command:

    $ oc get pods -n bookinfo

    Example output

    NAME                             READY   STATUS    RESTARTS   AGE
    details-v1-698d88b-km2jg         2/2     Running   0          66s
    productpage-v1-675fc69cf-cvxv9   2/2     Running   0          65s
    ratings-v1-6484c4d9bb-tpx7d      2/2     Running   0          65s
    reviews-v1-5b5d6494f4-wsrwp      2/2     Running   0          65s
    reviews-v2-5b667bcbf8-4lsfd      2/2     Running   0          65s
    reviews-v3-5b9bd44f4-44hr6       2/2     Running   0          65s

    When the READY column displays 2/2, the proxy sidecar was successfully injected. Confirm that Running appears in the STATUS column for each pod.

  3. Verify that the bookinfo application is running by sending a request to the product page with the following command:

    $ oc exec "$(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"

The Red Hat OpenShift Service Mesh Operator does not deploy gateways because gateways are not part of the control plane. As a security best practice, deploy ingress and egress gateways in a different namespace than the namespace that contains the control plane.

You can deploy gateways using either the Gateway API or the gateway injection method.

Gateway injection uses the same mechanisms as Istio sidecar injection to create a gateway from a Deployment resource that is paired with a Service resource. The Service resource can be made accessible from outside an OpenShift Container Platform cluster.
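A condensed sketch of what such a file typically contains follows; it mirrors the structure of the community gateway injection sample, and the port numbers are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        # Tells the injector to use the gateway template instead of the sidecar template.
        inject.istio.io/templates: gateway
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: istio-proxy
          image: auto   # The injector replaces this placeholder with the proxy image.
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  type: ClusterIP
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      port: 80
      targetPort: 8080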

Prerequisites

  • You are logged in to the OpenShift Container Platform web console as
    cluster-admin
    .
  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • The Istio resource must be deployed.

Procedure

  1. Create the istio-ingressgateway deployment and service by running the following command:

    $ oc apply -n bookinfo -f ingress-gateway.yaml
    Note

    This example uses a sample ingress-gateway.yaml file that is available in the Istio community repository.

  2. Configure the bookinfo application to use the new gateway. Apply the gateway configuration by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo
    Note

    To configure gateway injection with the bookinfo application, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.

  3. Use a route to expose the gateway outside the cluster by running the following command:

    $ oc expose service istio-ingressgateway -n bookinfo
  4. Modify the YAML file to automatically scale the pod when ingress traffic increases.

    Example configuration

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      labels:
        istio: ingressgateway
        release: istio
      name: ingressgatewayhpa
      namespace: bookinfo
    spec:
      maxReplicas: 5 1
      metrics:
      - resource:
          name: cpu
          target:
            averageUtilization: 80
            type: Utilization
        type: Resource
      minReplicas: 2
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway

    1
    This example sets the maximum replicas to 5 and the minimum replicas to 2, and creates another replica when average CPU utilization exceeds 80%.
  5. Specify the minimum number of pods that must be running on the node.

    Example configuration

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      labels:
        istio: ingressgateway
        release: istio
      name: ingressgatewaypdb
      namespace: bookinfo
    spec:
      minAvailable: 1 1
      selector:
        matchLabels:
          istio: ingressgateway

    1
    This example ensures that at least one replica remains running if a pod is restarted on a new node.
  6. Obtain the gateway host name by running the following command:

    $ HOST=$(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}')
  7. Verify that the productpage is accessible from a web browser by running the following command:

    $ echo "productpage URL: http://$HOST/productpage"

The Kubernetes Gateway API deploys a gateway by creating a Gateway resource. In OpenShift Container Platform 4.15 and later, Red Hat OpenShift Service Mesh implements the Gateway API custom resource definitions (CRDs). However, in OpenShift Container Platform versions 4.15 through 4.18, the CRDs are not installed by default, so you must install them manually. Starting with OpenShift Container Platform 4.19, these CRDs are automatically installed and managed, and you can no longer create, update, or delete them.

For details about enabling Gateway API for Ingress in OpenShift Container Platform 4.19 and later, see "Configuring ingress cluster traffic" in the OpenShift Container Platform documentation.

Note

Red Hat provides support for using the Kubernetes Gateway API with Red Hat OpenShift Service Mesh. Red Hat does not provide support for the Kubernetes Gateway API custom resource definitions (CRDs). In this procedure, the use of community Gateway API CRDs is shown for demonstration purposes only.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console as
    cluster-admin
    .
  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • The Istio resource must be deployed.

Procedure

  1. Enable the Gateway API CRDs for OpenShift Container Platform 4.18 and earlier by running the following command:

    $ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; }
  2. Create and configure a gateway by using the Gateway and HTTPRoute resources by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo
    Note

    To configure a gateway with the bookinfo application by using the Gateway API, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed.

  3. Ensure that the Gateway API service is ready and has an address allocated by running the following command:

    $ oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo
  4. Retrieve the host by running the following command:

    $ export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}')
  5. Retrieve the port by running the following command:

    $ export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
  6. Retrieve the gateway URL by running the following command:

    $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
  7. Print the URL of the product page by running the following command:

    $ echo "http://${GATEWAY_URL}/productpage"

Verification

  • Verify that the productpage is accessible from a web browser.

2.6. Customizing Istio configuration

You can use the values field of the Istio custom resource, which was created when the control plane was deployed, to customize the Istio configuration by using Istio’s Helm configuration values. When you create this resource by using the OpenShift Container Platform web console, it is pre-populated with configuration settings that enable Istio to run on OpenShift.

Procedure

  1. Click Operators → Installed Operators.
  2. Click Istio in the Provided APIs column.
  3. Click the Istio instance, named default, in the Name column.
  4. Click YAML to view the Istio configuration and make modifications.

For a list of the available configuration options for the values field, refer to Istio’s artifacthub chart documentation.
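For example, a small sketch that customizes a mesh-wide setting through the values field; the accessLogFile value shown here enables Envoy access logging to standard output and is illustrative:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      # Write Envoy access logs to the container's stdout.
      accessLogFile: /dev/stdout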

2.7. About Istio High Availability

Running the Istio control plane in High Availability (HA) mode prevents single points of failure and ensures continuous mesh operation even if an istiod pod fails. With HA, if one istiod pod becomes unavailable, another continues to manage and configure the Istio data plane, preventing service outages or disruptions. HA provides scalability by distributing the control plane workload, enables graceful upgrades, supports disaster recovery operations, and protects against zone-wide mesh outages.

There are two ways for a system administrator to configure HA for the Istio deployment:

  • Defining a static replica count: This approach involves setting a fixed number of istiod pods, providing a consistent level of redundancy.
  • Using autoscaling: This approach dynamically adjusts the number of istiod pods based on resource utilization or custom metrics, providing more efficient resource consumption for fluctuating workloads.

2.7.1. Configuring Istio HA by using autoscaling

Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure and ensure continuous mesh operation even if one of the istiod pods fails. Autoscaling defines the minimum and maximum number of Istio control plane pods that can operate. OpenShift Container Platform uses these values to scale the number of control plane pods based on resource utilization, such as CPU or memory, to efficiently respond to the varying number of workloads and overall traffic patterns within the mesh.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console as a user with the
    cluster-admin
    role.
  • You have installed the Red Hat OpenShift Service Mesh Operator.
  • You have deployed the Istio resource.

Procedure

  1. In the OpenShift Container Platform web console, click Installed Operators.
  2. Click Red Hat OpenShift Service Mesh 3 Operator.
  3. Click Istio.
  4. Click the name of the Istio installation. For example, default.
  5. Click YAML.
  6. Modify the Istio custom resource (CR) similar to the following example:

    Example configuration

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      namespace: istio-system
      values:
        pilot:
          autoscaleMin: 2 1
          autoscaleMax: 5 2
          cpu:
            targetAverageUtilization: 80 3
          memory:
            targetAverageUtilization: 80 4

    1
    Specifies the minimum number of Istio control plane replicas that always run.
    2
    Specifies the maximum number of Istio control plane replicas, allowing for scaling based on load. To support HA, there must be at least two replicas.
    3
    Specifies the target CPU utilization for autoscaling to 80%. If the average CPU usage exceeds this threshold, the Horizontal Pod Autoscaler (HPA) automatically increases the number of replicas.
    4
    Specifies the target memory utilization for autoscaling to 80%. If the average memory usage exceeds this threshold, the HPA automatically increases the number of replicas.

Verification

  • Verify the status of the Istio control plane pods by running the following command:

    $ oc get pods -n istio-system -l app=istiod

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE
    istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
    istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s

    Two istiod pods are running, which is the minimum requirement for an HA Istio control plane and indicates that a basic HA setup is in place.

Use the following Istio custom resource definition (CRD) parameters when you configure a service mesh for High Availability (HA) by using autoscaling.

Table 2.1. HA API parameters

autoscaleMin
Defines the minimum number of istiod pods for an Istio deployment. Each pod contains one instance of the Istio control plane. OpenShift only uses this parameter when the Horizontal Pod Autoscaler (HPA) is enabled for the Istio deployment. This is the default behavior.

autoscaleMax
Defines the maximum number of istiod pods for an Istio deployment. Each pod contains one instance of the Istio control plane. For OpenShift to automatically scale the number of istiod pods based on load, you must set this parameter to a value that is greater than the value that you defined for the autoscaleMin parameter. You must also configure metrics for autoscaling to work properly. If no metrics are configured, the autoscaler does not scale up or down. OpenShift only uses this parameter when the HPA is enabled for the Istio deployment. This is the default behavior.

cpu.targetAverageUtilization
Defines the target CPU utilization for the istiod pod. If the average CPU usage exceeds the threshold that this parameter defines, the HPA automatically increases the number of replica pods.

memory.targetAverageUtilization
Defines the target memory utilization for the istiod pod. If the average memory usage exceeds the threshold that this parameter defines, the HPA automatically increases the number of replica pods.

behavior
You can use the behavior field to define additional policies that OpenShift uses to scale Istio resources up or down. For more information, see Configurable Scaling Behavior.

2.7.2. Configuring Istio HA by using replica count

Configure the Istio control plane in High Availability (HA) mode to prevent a single point of failure and ensure continuous mesh operation even if one of the istiod pods fails. The replica count defines a fixed number of Istio control plane pods that can operate. Use a replica count for mesh environments where the control plane workload is relatively stable or predictable, or when you prefer to manually scale the istiod pods.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console as a user with the
    cluster-admin
    role.
  • You have installed the Red Hat OpenShift Service Mesh Operator.
  • You have deployed the Istio resource.

Procedure

  1. Obtain the name of the Istio resource by running the following command:

    $ oc get istio -n istio-system

    Example output

    NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    default   1           1       0        default           Healthy   v1.24.6   24m

    The name of the Istio resource is default.

  2. Update the Istio custom resource (CR) by adding the autoscaleEnabled and replicaCount parameters by running the following command:

    $ oc patch istio default -n istio-system --type merge -p '
    spec:
      values:
        pilot:
          autoscaleEnabled: false 1
          replicaCount: 2 2
    '
    1
    Specifies a setting that disables autoscaling and ensures that the number of replicas remains fixed.
    2
    Specifies the number of Istio control plane replicas. To support HA, there must be at least two replicas.

Verification

  1. Verify the status of the Istio control plane pods by running the following command:

    $ oc get pods -n istio-system -l app=istiod

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE
    istiod-7c7b6564c9-nwhsg   1/1     Running   0          70s
    istiod-7c7b6564c9-xkmsl   1/1     Running   0          85s

    Two istiod pods are running, which is the minimum requirement for an HA Istio control plane and indicates that a basic HA setup is in place.

Chapter 3. Sidecar injection

Sidecar proxies are deployed into each application pod to intercept network traffic and enable service mesh features like security, observability, and traffic management.

3.1. About sidecar injection

Sidecar injection is enabled using labels at the namespace or pod level. These labels also indicate the specific control plane managing the proxy. When you apply a valid injection label to the pod template defined in a deployment, any new pods created by that deployment automatically receive a sidecar. Similarly, applying a pod injection label at the namespace level ensures any new pods in that namespace include a sidecar.

Note

Injection happens at pod creation through an admission controller, so changes appear on individual pods rather than on the deployment resources. To confirm sidecar injection, check the pod details directly by using oc describe, where you can see the injected Istio proxy container.
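For example, the following check lists the proxy container in a pod's description; the pod name is illustrative, and an injected pod shows an istio-proxy container alongside the application container:

$ oc describe pod ratings-v1-5d645c985f-xsw7p -n bookinfo | grep istio-proxy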

3.2. Identifying the revision name

The label required to enable sidecar injection is determined by the specific control plane instance, known as a revision. Each revision is managed by an IstioRevision resource, which is automatically created and managed by the Istio resource, so manual creation or modification of IstioRevision resources is generally unnecessary.

The naming of an IstioRevision depends on the spec.updateStrategy.type setting in the Istio resource. If set to InPlace, the revision shares the Istio resource name. If set to RevisionBased, the revision name follows the format <Istio resource name>-v<version>. Typically, each Istio resource corresponds to a single IstioRevision. However, during a revision-based upgrade, multiple IstioRevision resources may exist, each representing a distinct control plane instance.

To see available revision names, use the following command:

$ oc get istiorevisions

You should see output similar to the following example:

Example output

NAME              READY   STATUS    IN USE   VERSION   AGE
my-mesh-v1-23-0   True    Healthy   False    v1.23.0   114s

When the service mesh’s IstioRevision name is default, you can use the following labels on a namespace or a pod to enable sidecar injection:

Resource    Label                     Enabled value   Disabled value
Namespace   istio-injection           enabled         disabled
Pod         sidecar.istio.io/inject   true            false

Note

You can also enable injection by setting the istio.io/rev: default label on the namespace or pod.

When the IstioRevision name is not default, use the specific IstioRevision name with the istio.io/rev label to map the pod to the desired control plane and enable sidecar injection. To enable injection, set the istio.io/rev label in either the namespace or the pod; adding it to both is not required.

For example, with the revision shown above, the following labels would enable or disable sidecar injection:

Resource    Enabled label                    Disabled label
Namespace   istio.io/rev=my-mesh-v1-23-0     istio-injection=disabled
Pod         istio.io/rev=my-mesh-v1-23-0     sidecar.istio.io/inject="false"

Note

When both the istio-injection and istio.io/rev labels are applied, the istio-injection label takes precedence and treats the namespace as part of the default revision.

3.3. Enabling sidecar injection

To demonstrate different approaches for configuring sidecar injection, the following procedures use the Bookinfo application.

Prerequisites

  • You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
  • You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
  • You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
  • Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection (step 5) has not been configured. For more information, see "Deploying the Bookinfo application".

3.3.1. Enabling sidecar injection with namespace labels

In this example, all workloads within a namespace receive a sidecar proxy injection, making it the best approach when the majority of workloads in the namespace should be included in the mesh.

Procedure

  1. Verify the revision name of the Istio control plane using the following command:

    $ oc get istiorevisions

    You should see output similar to the following example:

    Example output

    NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
    default   Local   True    Healthy   False    v1.23.0   4m57s

    Since the revision name is default, you can use the default injection labels without referencing the exact revision name.

  2. Verify that workloads already running in the desired namespace show 1/1 containers as READY by using the following command. This confirms that the pods are running without sidecars.

    $ oc get pods -n bookinfo

    You should see output similar to the following example:

    Example output

    NAME                             READY   STATUS    RESTARTS   AGE
    details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
    productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
    ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
    reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
    reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
    reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s

  3. To apply the injection label to the bookinfo namespace, run the following command at the CLI:

    $ oc label namespace bookinfo istio-injection=enabled

    Example output

    namespace/bookinfo labeled
  4. To ensure sidecar injection is applied, redeploy the existing workloads in the bookinfo namespace. Use the following command to perform a rolling update of all workloads:

    $ oc -n bookinfo rollout restart deployments

Verification

  1. Verify the rollout by checking that the new pods display 2/2 containers as READY, confirming successful sidecar injection, by running the following command:

    $ oc get pods -n bookinfo

    You should see output similar to the following example:

    Example output

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
    productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
    ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
    reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
    reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
    reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s

3.3.2. Excluding a workload from the mesh

You can exclude specific workloads from sidecar injection within a namespace where injection is enabled for all workloads.

Note

This example is for demonstration purposes only. The bookinfo application requires all workloads to be part of the mesh for proper functionality.

Procedure

  1. Open the application’s
    Deployment
    resource in an editor. In this case, exclude the
    ratings-v1
    service.
  2. Modify the spec.template.metadata.labels section of your Deployment resource to include the label sidecar.istio.io/inject: 'false' to disable sidecar injection.

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: ratings-v1
      namespace: bookinfo
      labels:
        app: ratings
        version: v1
    spec:
      template:
        metadata:
          labels:
            sidecar.istio.io/inject: 'false'
    Note

    Adding the label to the top-level labels section of the Deployment resource does not affect sidecar injection.

    Updating the deployment triggers a rollout, creating a new ReplicaSet with updated pods.

Verification

  1. Verify that the updated pods do not contain a sidecar container and show 1/1 containers as Running by running the following command:

    $ oc get pods -n bookinfo

    You should see output similar to the following example:

    Example output

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-6bc7b69776-7f6wz       2/2     Running   0          29m
    productpage-v1-54f48db985-gd5q9   2/2     Running   0          29m
    ratings-v1-5d645c985f-xsw7p       1/1     Running   0          7s
    reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          29m
    reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          29m
    reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          29m

3.3.3. Enabling sidecar injection with pod labels

This approach allows you to include individual workloads for sidecar injection instead of applying it to all workloads within a namespace, making it ideal for scenarios where only a few workloads need to be part of a service mesh. This example also demonstrates the use of a revision label for sidecar injection, where the Istio resource is created with the name my-mesh. A unique Istio resource name is required when multiple Istio control planes are present in the same cluster or during a revision-based control plane upgrade.

Procedure

  1. Verify the revision name of the Istio control plane by running the following command:

    $ oc get istiorevisions

    You should see output similar to the following example:

    Example output

    NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
    my-mesh   Local   True    Healthy   False    v1.23.0   47s

    Since the revision name is my-mesh, use the revision label istio.io/rev=my-mesh to enable sidecar injection.

  2. Verify that workloads already running show 1/1 containers as READY, indicating that the pods are running without sidecars, by running the following command:

    $ oc get pods -n bookinfo

    You should see output similar to the following example:

    Example output

    NAME                             READY   STATUS    RESTARTS   AGE
    details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
    productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
    ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
    reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
    reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
    reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s

  3. Open the application’s
    Deployment
    resource in an editor. In this case, update the
    ratings-v1
    service.
  4. Update the spec.template.metadata.labels section of your Deployment to include the appropriate pod injection or revision label. In this case, istio.io/rev: my-mesh:

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: ratings-v1
      namespace: bookinfo
      labels:
        app: ratings
        version: v1
    spec:
      template:
        metadata:
          labels:
            istio.io/rev: my-mesh
    Note

    Adding the label to the top-level labels section of the Deployment resource does not impact sidecar injection.

    Updating the deployment triggers a rollout, creating a new ReplicaSet with the updated pods.

Verification

  1. Verify that only the ratings-v1 pod now shows 2/2 containers as READY, indicating that the sidecar has been successfully injected, by running the following command:

    $ oc get pods -n bookinfo

    You should see output similar to the following example:

    Example output

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-559cd49f6c-b89hw       1/1     Running   0          42m
    productpage-v1-5f48cdcb85-8ppz5   1/1     Running   0          42m
    ratings-v1-848bf79888-krdch       2/2     Running   0          9s
    reviews-v1-6b7444ffbd-7m5wp       1/1     Running   0          42m
    reviews-v2-67876d7b7-9nmw5        1/1     Running   0          42m
    reviews-v3-84b55b667c-x5t8s       1/1     Running   0          42m

  2. Repeat for other workloads that you wish to include in the mesh.

3.3.4. Enabling sidecar injection by using the IstioRevisionTag resource

To use the istio-injection=enabled label when your revision name is not default, you must create an IstioRevisionTag resource with the name default that references your Istio resource.

Prerequisites

  • You have installed the Red Hat OpenShift Service Mesh Operator, created an Istio resource, and the Operator has deployed Istio.
  • You have created the IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods.
  • You have created the namespaces that are to be part of the mesh, and they are discoverable by the Istio control plane.
  • Optional: You have deployed the workloads to be included in the mesh. In the following examples, the Bookinfo application has been deployed to the bookinfo namespace, but sidecar injection (step 5 in the "Deploying the Bookinfo application" procedure) has not been configured. For more information, see "Deploying the Bookinfo application".

Procedure

  1. Find the name of your Istio resource by running the following command:

    $ oc get istio

    Example output

    NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    default   1           1       1        default-v1-24-3   Healthy   v1.24.3   11s

    In this example, the Istio resource has the name default, but the underlying revision is called default-v1-24-3.

  2. Create the IstioRevisionTag resource in a YAML file:

    Example IstioRevisionTag resource YAML file

    apiVersion: sailoperator.io/v1
    kind: IstioRevisionTag
    metadata:
      name: default
    spec:
      targetRef:
        kind: Istio
        name: default

  3. Apply the IstioRevisionTag resource by running the following command:

    $ oc apply -f istioRevisionTag.yaml
  4. Verify that the IstioRevisionTag resource has been created successfully by running the following command:

    $ oc get istiorevisiontags.sailoperator.io

    Example output

    NAME      STATUS    IN USE   REVISION          AGE
    default   Healthy   True     default-v1-24-3   4m23s

    In this example, the new tag references your active revision, default-v1-24-3. Now you can use the istio-injection=enabled label as if your revision was called default.

  5. Confirm that the pods are running without sidecars by running the following command. Any workloads that are already running in the desired namespace should show 1/1 containers in the READY column.

    $ oc get pods -n bookinfo

    Example output

    NAME                             READY   STATUS    RESTARTS   AGE
    details-v1-65cfcf56f9-gm6v7      1/1     Running   0          4m55s
    productpage-v1-d5789fdfb-8x6bk   1/1     Running   0          4m53s
    ratings-v1-7c9bd4b87f-6v7hg      1/1     Running   0          4m55s
    reviews-v1-6584ddcf65-6wqtw      1/1     Running   0          4m54s
    reviews-v2-6f85cb9b7c-w9l8s      1/1     Running   0          4m54s
    reviews-v3-6f5b775685-mg5n6      1/1     Running   0          4m54s

  6. Apply the injection label to the bookinfo namespace by running the following command:

    $ oc label namespace bookinfo istio-injection=enabled

    Example output

    namespace/bookinfo labeled
  7. To ensure sidecar injection is applied, redeploy the workloads in the bookinfo namespace by running the following command:

    $ oc -n bookinfo rollout restart deployments

Verification

  1. Verify the rollout by running the following command and confirming that the new pods display 2/2 containers in the READY column:

    $ oc get pods -n bookinfo

    Example output

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-7745f84ff-bpf8f        2/2     Running   0          55s
    productpage-v1-54f48db985-gd5q9   2/2     Running   0          55s
    ratings-v1-5d645c985f-xsw7p       2/2     Running   0          55s
    reviews-v1-bd5f54b8c-zns4v        2/2     Running   0          55s
    reviews-v2-5d7b9dbf97-wbpjr       2/2     Running   0          55s
    reviews-v3-5fccc48c8c-bjktn       2/2     Running   0          55s

Chapter 4. Istio ambient mode

Istio ambient mode provides a sidecar-less architecture for Red Hat OpenShift Service Mesh that reduces operational complexity and resource overhead by using node-level Layer 4 (L4) proxies and optional Layer 7 proxies.

4.1. About Istio ambient mode

To understand the Istio ambient mode architecture, see the following definitions:

ZTunnel proxy
A per-node proxy that manages secure, transparent Transmission Control Protocol (TCP) connections for all workloads on the node. It operates at Layer 4 (L4), offloading mutual Transport Layer Security (mTLS) and L4 policy enforcement from application pods.
Waypoint proxy
An optional proxy that runs per service account or namespace to provide advanced Layer 7 (L7) features such as traffic management, policy enforcement, and observability. You can apply L7 features selectively to avoid the overhead of sidecars for every service.
Istio CNI plugin
Redirects traffic to the Ztunnel proxy on each node, enabling transparent interception without requiring modifications to application pods.

Istio ambient mode offers the following benefits:

  • Simplified operations that remove the need to manage sidecar injection, reducing the complexity of mesh adoption and operations.
  • Reduced resource consumption with a per-node Ztunnel proxy that provides L4 service mesh features and an optional waypoint proxy that reduces resource overhead per pod.
  • Incremental adoption that enables workloads to join the mesh with L4 features like mutual Transport Layer Security (mTLS) and basic policies, with optional waypoint proxies added later to use L7 service mesh features, such as HTTP (L7) traffic management.

    Note

    The L7 features require deploying waypoint proxies, which introduces minimal additional overhead for the selected services.

  • Enhanced security that provides a secure, zero-trust network foundation with mTLS by default for all meshed workloads.
Note

Ambient mode is a newer architecture and may involve different operational considerations than traditional sidecar models.

While well-defined discovery selectors allow a service mesh deployed in ambient mode to run alongside a mesh in sidecar mode, this scenario has not been thoroughly validated. To avoid potential conflicts, install Istio ambient mode only on clusters that do not have an existing Red Hat OpenShift Service Mesh installation. Ambient mode remains a Technology Preview feature.

Important

Istio ambient mode is not compatible with clusters that use Red Hat OpenShift Service Mesh 2.6 or earlier. You must not install or use them together.

4.2. Installing Istio ambient mode

You can install Istio ambient mode on OpenShift Container Platform 4.19 or later and Red Hat OpenShift Service Mesh 3.1.0 or later with the required Gateway API custom resource definitions (CRDs).

Prerequisites

  • You have deployed a cluster on OpenShift Container Platform 4.19 or later.
  • You have installed the OpenShift Service Mesh Operator 3.1.0 or later in the OpenShift Container Platform cluster.
  • You are logged in to the OpenShift Container Platform cluster either through the web console as a user with the
    cluster-admin
    role, or with the
    oc login
    command, depending on the installation method.
  • You have configured the OVN-Kubernetes Container Network Interface (CNI) to use local gateway mode by setting the
    routingViaHost
    field as
    true
    in the
    gatewayConfig
    specification for the Cluster Network Operator. For more information, see "Configuring gateway mode".

Procedure

  1. Install the Istio control plane:

    1. Create the istio-system namespace by running the following command:

      $ oc create namespace istio-system
    2. Create an Istio resource in a file named istio.yaml similar to the following example:

      Example configuration

      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        namespace: istio-system
        profile: ambient
        values:
          pilot:
            trustedZtunnelNamespace: ztunnel

      Important

      You must set the profile field to ambient, and configure the .spec.values.pilot.trustedZtunnelNamespace value to match the namespace where the ZTunnel resource will be installed.

    3. Apply the Istio custom resource (CR) by running the following command:

      $ oc apply -f istio.yaml
    4. Wait for the Istio control plane to contain the Ready status condition by running the following command:

      $ oc wait --for=condition=Ready istios/default --timeout=3m
  2. Install the Istio Container Network Interface (CNI):

    1. Create the istio-cni namespace by running the following command:

      $ oc create namespace istio-cni
    2. Create the IstioCNI resource in a file named istio-cni.yaml similar to the following example:

      Example configuration

      apiVersion: sailoperator.io/v1
      kind: IstioCNI
      metadata:
        name: default
      spec:
        namespace: istio-cni
        profile: ambient

      Set the profile field to ambient.

    3. Apply the IstioCNI CR by running the following command:

      $ oc apply -f istio-cni.yaml
    4. Wait for the IstioCNI resource to contain the Ready status condition by running the following command:

      $ oc wait --for=condition=Ready istiocni/default --timeout=3m
  3. Install the Ztunnel proxy:

    1. Create the

      ztunnel
      namespace for Ztunnel proxy by running the following command:

      $ oc create namespace ztunnel

      The namespace name must match the value of the trustedZtunnelNamespace parameter in the Istio resource configuration.

    2. Create the

      Ztunnel
      resource named
      ztunnel.yaml
      similar to the following example:

      Example configuration

      apiVersion: sailoperator.io/v1alpha1
      kind: ZTunnel
      metadata:
        name: default
      spec:
        namespace: ztunnel
        profile: ambient

    3. Apply the

      Ztunnel
      CR by running the following command:

      $ oc apply -f ztunnel.yaml
    4. Wait for the

      Ztunnel
      pods to contain the
      Ready
      status condition by running the following command:

      $ oc wait --for=condition=Ready ztunnel/default --timeout=3m

4.3. Scoping the Service Mesh with discovery selectors

Istio ambient mode includes a workload in the mesh when the control plane discovers the workload and the appropriate label enables traffic redirection through the Ztunnel proxy. By default, the control plane discovers workloads in all namespaces across the cluster. As a result, each proxy receives configuration for every namespace, including workloads that are not enrolled in the mesh. In shared or multi-tenant clusters, limiting mesh participation to specific namespaces helps reduce configuration overhead and supports multiple service meshes within the same cluster.

For more information on discovery selectors, see "Scoping the Service Mesh with discovery selectors".

To limit the scope of the OpenShift Service Mesh in Istio ambient mode, you can configure the discoverySelectors parameter in the meshConfig section of the Istio resource. The configuration controls which namespaces the control plane discovers based on label selectors.

Prerequisites

  • You have deployed a cluster on OpenShift Container Platform 4.19 or later.
  • You have created an
    Istio
    control plane resource.
  • You have created an
    IstioCNI
    resource.
  • You have created a
    Ztunnel
    resource.

Procedure

  1. Add a label to the namespace containing the

    Istio
    control plane resource, for example, the
    istio-system
    namespace, by running the following command:

    $ oc label namespace istio-system istio-discovery=enabled
  2. Add a label to the namespace containing the

    IstioCNI
    resource, for example, the
    istio-cni
    namespace, by running the following command:

    $ oc label namespace istio-cni istio-discovery=enabled
  3. Add a label to the namespace containing the

    Ztunnel
    resource, for example, the
    ztunnel
    namespace, by running the following command:

    $ oc label namespace ztunnel istio-discovery=enabled
  4. Modify the

    Istio
    control plane resource to include a
    discoverySelectors
    section with the same label:

    1. Create a YAML file with the name

      istio-discovery-selectors.yaml
      similar to the following example:

      Example configuration

      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        namespace: istio-system
        profile: ambient
        values:
          pilot:
            trustedZtunnelNamespace: ztunnel
          meshConfig:
            discoverySelectors:
            - matchLabels:
                istio-discovery: enabled

    2. Apply the YAML file to

      Istio
      control plane resource by running the following command:

      $ oc apply -f istio-discovery-selectors.yaml

4.4. Deploying the Bookinfo application in ambient mode

You can deploy the bookinfo sample application in Istio ambient mode without sidecar injection by using the ZTunnel proxy. For more information on the bookinfo application, see "About the Bookinfo application".

Prerequisites

  • You have deployed a cluster on OpenShift Container Platform 4.19 or later, which includes the supported Kubernetes Gateway API custom resource definitions (CRDs) required for Istio ambient mode.
  • You are logged in to the OpenShift Container Platform cluster either through the web console as a user with the
    cluster-admin
    role, or with the
    oc login
    command, depending on the installation method.
  • You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio.
  • You have created an
    IstioCNI
    resource, and the Operator has deployed the necessary
    IstioCNI
    pods.
  • You have created a
    Ztunnel
    resource, and the Operator has deployed the necessary
    Ztunnel
    pods.

Procedure

  1. Create the

    bookinfo
    namespace by running the following command:

    $ oc create namespace bookinfo
  2. Add the

    istio-discovery=enabled
    label to the
    bookinfo
    namespace by running the following command:

    $ oc label namespace bookinfo istio-discovery=enabled
  3. Apply the

    bookinfo
    YAML file to deploy the
    bookinfo
    application by running the following command:

    $ oc apply -n bookinfo -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo.yaml
  4. Apply the bookinfo-versions YAML file to create the version-specific services for the bookinfo application by running the following command:

    $ oc apply -n bookinfo -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo-versions.yaml
  5. Verify that the

    bookinfo
    pods are running by entering the following command:

    $ oc -n bookinfo get pods

    Example output

    NAME                             READY   STATUS    RESTARTS   AGE
    details-v1-54ffdd5947-8gk5h      1/1     Running   0          5m9s
    productpage-v1-d49bb79b4-cb9sl   1/1     Running   0          5m3s
    ratings-v1-856f65bcff-h6kkf      1/1     Running   0          5m7s
    reviews-v1-848b8749df-wl5br      1/1     Running   0          5m6s
    reviews-v2-5fdf9886c7-8xprg      1/1     Running   0          5m5s
    reviews-v3-bb6b8ddc7-bvcm5       1/1     Running   0          5m5s

  6. Verify that the

    bookinfo
    application is running by entering the following command:

    $ oc exec "$(oc get pod -l app=ratings -n bookinfo \
      -o jsonpath='{.items[0].metadata.name}')" \
      -c ratings -n bookinfo \
      -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
  7. Add the bookinfo application to the Istio ambient mesh by labeling either the entire namespace or the individual pods:

    1. To include all workloads in the bookinfo namespace, apply the

      istio.io/dataplane-mode=ambient
      label to the
      bookinfo
      namespace, by running the following command:

      $ oc label namespace bookinfo istio.io/dataplane-mode=ambient
    2. To include only specific workloads, apply the istio.io/dataplane-mode=ambient label directly to individual pods, as shown in the example command after the following note. See the "Additional resources" section for more details on the labels used to add or exclude workloads in a mesh.
    Note

    Adding workloads to the ambient mesh does not require restarting or redeploying application pods. Unlike sidecar mode, the number of containers in each pod remains unchanged.
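    For example, to enroll a single pod, run a command similar to the following, where <pod_name> is a placeholder for the name of the pod:

    $ oc label pod <pod_name> -n bookinfo istio.io/dataplane-mode=ambient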

  8. Confirm that Ztunnel proxy has successfully opened listening sockets in the pod network namespace by running the following command:

    $ istioctl ztunnel-config workloads --namespace ztunnel

    Example output

    NAMESPACE    POD NAME                       ADDRESS      NODE                        WAYPOINT PROTOCOL
    bookinfo     details-v1-54ffdd5947-cflng    10.131.0.69  ip-10-0-47-239.ec2.internal None     HBONE
    bookinfo     productpage-v1-d49bb79b4-8sgwx 10.128.2.80  ip-10-0-24-198.ec2.internal None     HBONE
    bookinfo     ratings-v1-856f65bcff-c6ldn    10.131.0.70  ip-10-0-47-239.ec2.internal None     HBONE
    bookinfo     reviews-v1-848b8749df-45hfd    10.131.0.72  ip-10-0-47-239.ec2.internal None     HBONE
    bookinfo     reviews-v2-5fdf9886c7-mvwft    10.128.2.78  ip-10-0-24-198.ec2.internal None     HBONE
    bookinfo     reviews-v3-bb6b8ddc7-fl8q2     10.128.2.79  ip-10-0-24-198.ec2.internal None     HBONE
    istio-cni    istio-cni-node-7hwd2           10.0.61.108  ip-10-0-61-108.ec2.internal None     TCP
    istio-cni    istio-cni-node-bfqmb           10.0.30.129  ip-10-0-30-129.ec2.internal None     TCP
    istio-cni    istio-cni-node-cv8cw           10.0.75.71   ip-10-0-75-71.ec2.internal  None     TCP
    istio-cni    istio-cni-node-hj9cz           10.0.47.239  ip-10-0-47-239.ec2.internal None     TCP
    istio-cni    istio-cni-node-p8wrg           10.0.24.198  ip-10-0-24-198.ec2.internal None     TCP
    istio-system istiod-6bd6b8664b-r74js        10.131.0.80  ip-10-0-47-239.ec2.internal None     TCP
    ztunnel      ztunnel-2w5mj                  10.128.2.61  ip-10-0-24-198.ec2.internal None     TCP
    ztunnel      ztunnel-6njq8                  10.129.0.131 ip-10-0-75-71.ec2.internal  None     TCP
    ztunnel      ztunnel-96j7k                  10.130.0.146 ip-10-0-61-108.ec2.internal None     TCP
    ztunnel      ztunnel-98mrk                  10.131.0.50  ip-10-0-47-239.ec2.internal None     TCP
    ztunnel      ztunnel-jqcxn                  10.128.0.98  ip-10-0-30-129.ec2.internal None     TCP

4.5. About waypoint proxies in Istio ambient mode

After setting up Istio ambient mode with ztunnel proxies, you can add waypoint proxies to enable advanced Layer 7 (L7) processing features that Istio provides.

Istio ambient mode separates the functionality of Istio into two layers:

  • A secure Layer 4 (L4) overlay managed by ztunnel proxies
  • An L7 layer managed by optional waypoint proxies

A waypoint proxy is an Envoy-based proxy that performs L7 processing for workloads running in ambient mode. It functions as a gateway to a resource such as a namespace, service, or pod. You can install, upgrade, and scale waypoint proxies independently of applications. The configuration uses the Kubernetes Gateway API.

Unlike the sidecar model, where each workload runs its own Envoy proxy, waypoint proxies reduce resource use by serving multiple workloads within the same security boundary, such as all workloads in a namespace.

A destination waypoint enforces policies by acting as a gateway. All incoming traffic to a resource, such as a namespace, service, or pod, passes through the waypoint for policy enforcement.

The ztunnel node proxy manages L4 functions in ambient mode, including mutual Transport Layer Security (mTLS) encryption, L4 traffic processing, and telemetry. Ztunnel and waypoint proxies communicate using HBONE (HTTP-Based Overlay Network), a protocol that tunnels traffic over HTTP/2 CONNECT with mutual TLS (mTLS) on port 15008.

You can add a waypoint proxy if workloads require any of the following L7 capabilities:

Traffic management
Advanced HTTP routing, load balancing, circuit breaking, rate limiting, fault injection, retries, and timeouts
Security
Authorization policies based on L7 attributes such as request type or HTTP headers
Observability
HTTP metrics, access logging, and tracing for application traffic

4.6. Deploying waypoint proxies using the Gateway API

You can deploy waypoint proxies by using the Kubernetes Gateway API Gateway resource.

Prerequisites

  • You are logged in to an OpenShift Container Platform 4.19 or later cluster, which provides the supported Kubernetes Gateway API CRDs required for ambient mode functionality.
  • You have the Red Hat OpenShift Service Mesh Operator 3.2.0 or later installed on the OpenShift cluster.
  • You have Istio deployed in ambient mode.
  • You have applied the required labels to workloads or namespaces to enable
    ztunnel
    traffic redirection.
Important

Istio ambient mode is not compatible with clusters that use Red Hat OpenShift Service Mesh 2.6 or earlier. You must not deploy both versions in the same cluster.

Procedure

  • On OpenShift Container Platform 4.18 and earlier, install the community-maintained Kubernetes Gateway API CRDs by running the following command:

    $ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || \
      { oc apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml; }

    From OpenShift Container Platform 4.19 onwards, the Gateway API CRDs are installed by default.

Note

The CRDs are community maintained and not supported by Red Hat. Upgrading to OpenShift Container Platform 4.19 or later, which includes supported Gateway API CRDs, might disrupt applications that rely on the community-maintained versions.

4.7. Deploying a waypoint proxy

You can deploy a waypoint proxy in the bookinfo application namespace to route traffic through the Istio ambient data plane and enforce L7 policies.

Prerequisites

  • You are logged in to an OpenShift Container Platform 4.19 or later cluster, which provides the supported Kubernetes Gateway API custom resource definitions (CRDs) required for ambient mode functionality.
  • You have the Red Hat OpenShift Service Mesh Operator 3.2.0 or later installed on the OpenShift cluster.
  • You have Istio deployed in ambient mode.
  • You have deployed the
    bookinfo
    sample application for the following example.
  • You have added the istio.io/dataplane-mode=ambient label to the target namespace.

Procedure

  1. Create a Gateway resource file named waypoint.yaml for the bookinfo application namespace, similar to the following example:

    Example configuration

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      labels:
        istio.io/waypoint-for: service
      name: waypoint
      namespace: bookinfo
    spec:
      gatewayClassName: istio-waypoint
      listeners:
      - name: mesh
        port: 15008
        protocol: HBONE

  2. Apply the

    waypoint
    custom resource (CR) by running the following command:

    $ oc apply -f waypoint.yaml

    The istio.io/waypoint-for: service label indicates that the waypoint handles traffic for services. The label determines the type of traffic processed. For more information, see "Waypoint traffic types".

  3. Enroll the

    bookinfo
    namespace to use the waypoint by running the following command:

    $ oc label namespace bookinfo istio.io/use-waypoint=waypoint

After enrolling the namespace, requests from any pods using the ambient data plane to services in bookinfo route through the waypoint for L7 processing and policy enforcement.

Verification

  1. Confirm that the waypoint proxy is used by all the services in the

    bookinfo
    namespace by running the following command:

    $ istioctl ztunnel-config svc --namespace ztunnel

    Example output

    NAMESPACE    SERVICE NAME     SERVICE VIP     WAYPOINT   ENDPOINTS
    bookinfo     details          172.30.15.248   waypoint   1/1
    bookinfo     details-v1       172.30.114.128  waypoint   1/1
    bookinfo     productpage      172.30.155.45   waypoint   1/1
    bookinfo     productpage-v1   172.30.76.27    waypoint   1/1
    bookinfo     ratings          172.30.24.145   waypoint   1/1
    bookinfo     ratings-v1       172.30.139.144  waypoint   1/1
    bookinfo     reviews          172.30.196.50   waypoint   3/3
    bookinfo     reviews-v1       172.30.172.192  waypoint   1/1
    bookinfo     reviews-v2       172.30.12.41    waypoint   1/1
    bookinfo     reviews-v3       172.30.232.12   waypoint   1/1
    bookinfo     waypoint         172.30.92.147   None       1/1

Note

You can also configure only specific services or pods to use a waypoint by labeling the respective service or pod. When enrolling a pod explicitly, also add the istio.io/waypoint-for: workload label to the corresponding Gateway resource.
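For example, the following commands enroll a single service or pod with the waypoint; the service name comes from the bookinfo sample, and <pod_name> is a placeholder:

$ oc label service reviews -n bookinfo istio.io/use-waypoint=waypoint
$ oc label pod <pod_name> -n bookinfo istio.io/use-waypoint=waypoint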

4.8. Enabling cross-namespace waypoint usage

You can use a cross-namespace waypoint to allow resources in one namespace to route traffic through a waypoint deployed in a different namespace.

Procedure

  1. Create a Gateway resource file named waypoint-default.yaml that allows workloads in the bookinfo namespace to use the waypoint-default gateway from the default namespace, similar to the following example:

    Example configuration

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: waypoint-default
      namespace: default
    spec:
      gatewayClassName: istio-waypoint
      listeners:
      - name: mesh
        port: 15008
        protocol: HBONE
        allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                kubernetes.io/metadata.name: bookinfo

  2. Apply the cross-namespace waypoint by running the following command:

    $ oc apply -f waypoint-default.yaml
  3. Add the labels required to use a cross-namespace waypoint:

    1. Add the

      istio.io/use-waypoint-namespace
      label to specify the namespace where the waypoint resides by running the following command:

      $ oc label namespace bookinfo istio.io/use-waypoint-namespace=default
    2. Add the

      istio.io/use-waypoint
      label to specify the waypoint to use by running the following command:

      $ oc label namespace bookinfo istio.io/use-waypoint=waypoint-default
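To confirm that both labels are present on the namespace, you can inspect its labels; a minimal check:

$ oc get namespace bookinfo -o jsonpath='{.metadata.labels}'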

4.9. About Layer 7 features in ambient mode

Ambient mode includes stable Layer 7 (L7) capabilities implemented through the Gateway API HTTPRoute resource and the Istio AuthorizationPolicy resource.

The AuthorizationPolicy resource works in both sidecar and ambient modes. In ambient mode, authorization policies can be targeted for ztunnel enforcement or attached for waypoint enforcement. To attach a policy to a waypoint, include a targetRef that references either the waypoint itself or a Service configured to use that waypoint.
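The following sketch shows a policy attached directly to a waypoint by referencing its Gateway resource; the policy name and rule are illustrative and assume the waypoint deployed earlier in this chapter:

apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: waypoint-allow-bookinfo # illustrative name
  namespace: bookinfo
spec:
  targetRefs:
  - kind: Gateway # attaches the policy to the waypoint itself
    group: gateway.networking.k8s.io
    name: waypoint
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["bookinfo"] # illustrative rule: allow only same-namespace traffic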

You can attach Layer 4 (L4) or L7 policies to the waypoint proxy to ensure correct identity-based enforcement: once the waypoint is part of the traffic path, the destination ztunnel recognizes traffic by the identity of the waypoint.

Istio peer authentication policies, which configure mutual TLS (mTLS) modes, are supported by ztunnel. In ambient mode, policies that set the mode to DISABLE are ignored because ztunnel and HBONE always enforce mTLS. For more information, see "Peer authentication".
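For example, a mesh workload namespace can enforce STRICT mode with a policy similar to the following; the policy name and namespace are illustrative:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: strict-mtls # illustrative name
  namespace: bookinfo
spec:
  mtls:
    mode: STRICT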

4.10. Routing traffic using waypoint proxies

You can use a deployed waypoint proxy to split traffic between different versions of the Bookinfo reviews service for feature testing or A/B testing.

Procedure

  1. Create the traffic routing configuration similar to the following example:

    Example configuration

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: reviews
      namespace: bookinfo
    spec:
      parentRefs:
      - group: ""
        kind: Service
        name: reviews
        port: 9080
      rules:
      - backendRefs:
        - name: reviews-v1
          port: 9080
          weight: 90
        - name: reviews-v2
          port: 9080
          weight: 10

  2. Apply the traffic routing configuration by running the following command:

    $ oc apply -f traffic-route.yaml

Verification

  • Access the

    productpage
    service from within the ratings pod by running the following command:

    $ oc exec "$(oc get pod -l app=ratings -n bookinfo \
    -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo \
    -- curl -sS productpage:9080/productpage | grep -om1 'reviews-v[12]'

    Run the command several times. Most responses (about 90%) contain reviews-v1 output, while a smaller portion (about 10%) contain reviews-v2 output.

4.11. Adding authorization policy

Use a Layer 7 (L7) authorization policy to explicitly allow the curl service to send GET requests to the productpage service while blocking all other operations.

Procedure

  1. Create the authorization policy similar to the following example:

    Example configuration

    apiVersion: security.istio.io/v1
    kind: AuthorizationPolicy
    metadata:
      name: productpage-waypoint
      namespace: bookinfo
    spec:
      targetRefs:
      - kind: Service
        group: ""
        name: productpage
      action: ALLOW
      rules:
      - from:
        - source:
            principals:
            - cluster.local/ns/curl/sa/curl
        to:
        - operation:
            methods: ["GET"]

  2. Apply the authorization policy by running the following command:

    $ oc apply -f authorization-policy.yaml
Note

The targetRefs field specifies the service targeted by the authorization policy of the waypoint proxy.

Verification

  1. Create a namespace for a

    curl
    client by running the following command:

    $ oc create namespace curl
  2. Deploy a

    curl
    client by running the following command:

    $ oc apply -n curl -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/curl/curl.yaml
  3. Apply the label for ambient mode to the

    curl
    namespace by running the following command:

    $ oc label namespace curl istio.io/dataplane-mode=ambient
  4. Verify that a GET request to the productpage service succeeds with an HTTP 200 response when made from the curl pod in the curl namespace, by running the following command:

    $ oc -n curl exec deploy/curl -- sh -c \
      'curl -s -o /dev/null -w "HTTP %{http_code}\n" http://productpage.bookinfo.svc.cluster.local:9080/productpage'
  5. Verify that a

    POST
    request to the same service is denied with an HTTP 403 response due to the applied authorization policy, by running the following command:

    $ oc -n curl exec deploy/curl -- sh -c \
      'curl -s -o /dev/null -w "HTTP %{http_code}\n" -X POST http://productpage.bookinfo.svc.cluster.local:9080/productpage'
  6. Verify that a

    GET
    request from another service, such as the
    ratings
    pod in the
    bookinfo
    namespace, is also denied with
    RBAC: access denied
    , by running the following command:

    $ oc exec "$(oc get pod -l app=ratings -n bookinfo \
    -o jsonpath='{.items[0].metadata.name}')" \
    -c ratings -n bookinfo \
    -- curl -sS productpage:9080/productpage
  7. Clean up the resources by running the following commands:

    1. Delete the

      curl
      application by running the following command:

      $ oc delete -n curl -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/curl/curl.yaml
    2. Delete the

      curl
      namespace by running the following command:

      $ oc delete namespace curl

Chapter 5. Integrating cert-manager with OpenShift Service Mesh

The cert-manager tool provides a unified API to manage X.509 certificates for applications in a Kubernetes environment. You can use cert-manager to integrate with public or private key infrastructures (PKI) and automate certificate renewal.

The cert-manager Operator for Red Hat OpenShift enhances certificate management for securing workloads and control plane components in Red Hat OpenShift Service Mesh and Istio. It supports issuing, delivering, and renewing certificates used for mutual Transport Layer Security (mTLS) through cert-manager issuers.

By integrating Istio with the istio-csr agent that is managed by the cert-manager Operator, you enable Istio to request and manage certificates directly. The integration simplifies security configuration and centralizes certificate management within the cluster.

Note

The cert-manager Operator for Red Hat OpenShift must be installed before you create and install your Istio resource.

5.1. Integrating the cert-manager Operator with OpenShift Service Mesh

You can integrate the cert-manager Operator with OpenShift Service Mesh by deploying the istio-csr agent and configuring an Istio resource that uses the istio-csr agent to process workload and control plane certificate signing requests. The following procedure creates a self-signed Issuer object.

Prerequisites

  • You have installed the cert-manager Operator for Red Hat OpenShift version 1.15.1.
  • You are logged in to OpenShift Container Platform 4.14 or later.
  • You have installed the OpenShift Service Mesh Operator.
  • You have an IstioCNI instance running in the cluster.
  • You have installed the
    istioctl
    command.

Procedure

  1. Create the

    istio-system
    namespace by running the following command:

    $ oc create namespace istio-system
  2. Patch the cert-manager Operator to install the

    istio-csr
    agent by running the following command:

    $ oc -n cert-manager-operator patch subscription openshift-cert-manager-operator \
      --type='merge' -p \
      '{"spec":{"config":{"env":[{"name":"UNSUPPORTED_ADDON_FEATURES","value":"IstioCSR=true"}]}}}'
  3. Create the root certificate authority (CA) issuer by creating an

    Issuer
    object for the
    istio-csr
    agent:

    1. Create a new project for installing the

      istio-csr
      agent by running the following command:

      $ oc new-project istio-csr
    2. Create an

      Issuer
      object similar to the following example:

      Note

      The selfSigned issuer is intended for demonstration, testing, or proof-of-concept environments. For production deployments, use a secure and trusted CA.

      Example issuer.yaml file

      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: selfsigned
        namespace: istio-system
      spec:
        selfSigned: {}
      ---
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: istio-ca
        namespace: istio-system
      spec:
        isCA: true
        duration: 87600h
        secretName: istio-ca
        commonName: istio-ca
        privateKey:
          algorithm: ECDSA
          size: 256
        subject:
          organizations:
            - cluster.local
            - cert-manager
        issuerRef:
          name: selfsigned
          kind: Issuer
          group: cert-manager.io
      ---
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: istio-ca
        namespace: istio-system
      spec:
        ca:
          secretName: istio-ca

    3. Create the objects by running the following command:

      $ oc apply -f issuer.yaml
    4. Wait for the

      istio-ca
      certificate to contain the "Ready" status condition by running the following command:

      $ oc wait --for=condition=Ready certificates/istio-ca -n istio-system
  4. Create the

    IstioCSR
    custom resource:

    1. Create the

      IstioCSR
      custom resource similar to the following example:

      Example istioCSR.yaml file

      apiVersion: operator.openshift.io/v1alpha1
      kind: IstioCSR
      metadata:
        name: default
        namespace: istio-csr
      spec:
        istioCSRConfig:
          certManager:
            issuerRef:
              name: istio-ca
              kind: Issuer
              group: cert-manager.io
          istiodTLSConfig:
            trustDomain: cluster.local
          istio:
            namespace: istio-system

    2. Create the istio-csr agent by running the following command:

      $ oc create -f istioCSR.yaml
    3. Verify that the

      istio-csr
      deployment is ready by running the following command:

      $ oc get deployment -n istio-csr
  5. Install the Istio resource:

    Note

    The configuration disables the built-in CA server for Istio and forwards certificate signing requests from istiod to the istio-csr agent. The istio-csr agent obtains certificates for both istiod and mesh workloads from the cert-manager Operator. The istiod TLS certificate that is generated by the istio-csr agent is mounted into the pod at a known location for use.

    1. Create the

      Istio
      object similar to the following example:

      Example istio.yaml file

      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        version: v1.24-latest
        namespace: istio-system
        values:
          global:
            caAddress: cert-manager-istio-csr.istio-csr.svc:443
          pilot:
            env:
              ENABLE_CA_SERVER: "false"

    2. Create the

      Istio
      resource by running the following command:

      $ oc apply -f istio.yaml
    3. Verify that the Istio resource displays the "Ready" status condition by running the following command:

      $ oc wait --for=condition=Ready istios/default -n istio-system

5.2. Verifying the cert-manager integration

You can use the sample httpbin service and sleep application to verify traffic between workloads. Check the workload proxy certificate to verify that the cert-manager Operator is installed correctly.

Procedure

  1. Create the namespaces:

    1. Create the

      apps-1
      namespace by running the following command:

      $ oc new-project apps-1
    2. Create the

      apps-2
      namespace by running the following command:

      $ oc new-project apps-2
  2. Add the

    istio-injection=enabled
    label on the namespaces:

    1. Add the

      istio-injection=enabled
      label on the
      apps-1
      namespace by running the following command:

      $ oc label namespaces apps-1 istio-injection=enabled
    2. Add the

      istio-injection=enabled
      label on the
      apps-2
      namespace by running the following command:

      $ oc label namespaces apps-2 istio-injection=enabled
  3. Deploy the

    httpbin
    app in the namespaces:

    1. Deploy the

      httpbin
      app in the
      apps-1
      namespace by running the following command:

      $ oc apply -n apps-1 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
    2. Deploy the

      httpbin
      app in the
      apps-2
      namespace by running the following command:

      $ oc apply -n apps-2 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
  4. Deploy the

    sleep
    app in the namespaces:

    1. Deploy the

      sleep
      app in the
      apps-1
      namespace by running the following command:

      $ oc apply -n apps-1 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml
    2. Deploy the

      sleep
      app in the
      apps-2
      namespace by running the following command:

      $ oc apply -n apps-2 -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml
  5. Verify that the created apps have sidecars injected:

    1. Verify that the created apps have sidecars injected for

      apps-1
      namespace by running the following command:

      $ oc get pods -n apps-1
    2. Verify that the created apps have sidecars injected for

      apps-2
      namespace by running the following command:

      $ oc get pods -n apps-2
  6. Create a mesh-wide strict mutual Transport Layer Security (mTLS) policy similar to the following example:

    Note

    Enabling PeerAuthentication in strict mTLS mode verifies that certificates are distributed correctly and that mTLS communication functions between workloads.

    Example peer_auth.yaml file

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system
    spec:
      mtls:
        mode: STRICT

  7. Apply the mTLS policy by running the following command:

    $ oc apply -f peer_auth.yaml
  8. Verify that the

    apps-1/sleep
    app can access the
    apps-2/httpbin
    service by running the following command:

    $ oc -n apps-1 exec "$(oc -n apps-1 get pod \
      -l app=sleep -o jsonpath={.items..metadata.name})" \
      -c sleep -- curl -sIL http://httpbin.apps-2.svc.cluster.local:8000

    Example output

    HTTP/1.1 200 OK
    access-control-allow-credentials: true
    access-control-allow-origin: *
    content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
    content-type: text/html; charset=utf-8
    date: Wed, 18 Jun 2025 09:20:55 GMT
    x-envoy-upstream-service-time: 14
    server: envoy
    transfer-encoding: chunked

  9. Verify that the apps-2/sleep app can access the apps-1/httpbin service by running the following command:

    $ oc -n apps-2 exec "$(oc -n apps-2 get pod \
      -l app=sleep -o jsonpath={.items..metadata.name})" \
      -c sleep -- curl -sIL http://httpbin.apps-1.svc.cluster.local:8000

    Example output

    HTTP/1.1 200 OK
    access-control-allow-credentials: true
    access-control-allow-origin: *
    content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
    content-type: text/html; charset=utf-8
    date: Wed, 18 Jun 2025 09:21:23 GMT
    x-envoy-upstream-service-time: 16
    server: envoy
    transfer-encoding: chunked

  10. Verify that the

    httpbin
    workload certificate matches as expected by running the following command:

    $ istioctl proxy-config secret -n apps-1 \
      $(oc get pods -n apps-1 -o jsonpath='{.items..metadata.name}' --selector app=httpbin) \
      -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
      | base64 --decode | openssl x509 -text -noout

    Example output

    ...
    Issuer: O = cert-manager + O = cluster.local, CN = istio-ca
    ...
    X509v3 Subject Alternative Name:
    URI:spiffe://cluster.local/ns/apps-1/sa/httpbin

5.3. Uninstalling the cert-manager integration

You can uninstall the cert-manager Operator integration with OpenShift Service Mesh by completing the following procedure. Before you remove the following resources, verify that no Red Hat OpenShift Service Mesh or Istio components reference the istio-csr agent or the certificates it issued. Removing these resources while they are still in use might disrupt mesh functionality.

Procedure

  1. Remove the IstioCSR custom resource by running the following command:

    $ oc -n <istio_csr_project_name> delete istiocsrs.operator.openshift.io default
  2. Remove the related resources:

    1. List the cluster scoped-resources by running the following command:

      $ oc get clusterrolebindings,clusterroles -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr"

      Save the names of the listed resources for later reference.

    2. List the resources in the namespace where the istio-csr agent is deployed by running the following command:

      $ oc get certificate,deployments,services,serviceaccounts -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name>

      Save the names of the listed resources for later reference.

    3. List the resources in Red Hat OpenShift Service Mesh or Istio deployed namespaces by running the following command:

      $ oc get roles,rolebindings \
        -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" \
        -n <istio_csr_project_name>

      Save the names of the listed resources for later reference.

    4. For each resource listed in previous steps, delete the resources by running the following command:

      $ oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>
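      For example, assuming the istio-csr agent was deployed in the istio-csr project and the listings returned a deployment and a service named cert-manager-istio-csr (names illustrative), you would run:

      $ oc -n istio-csr delete deployment/cert-manager-istio-csr service/cert-manager-istio-csr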

Chapter 6. Multi-cluster topologies

Multi-cluster topologies are useful for organizations with distributed systems or environments seeking enhanced scalability, fault tolerance, and regional redundancy.

6.1. About multi-cluster mesh topologies

In a multi-cluster mesh topology, you install and manage a single Istio mesh across multiple OpenShift Container Platform clusters, enabling communication and service discovery between the services. Two factors determine the multi-cluster mesh topology: control plane topology and network topology. There are two options for each topology. Therefore, there are four possible multi-cluster mesh topology configurations.

  • Multi-Primary Single Network: Combines the multi-primary control plane topology and the single network network topology models.
  • Multi-Primary Multi-Network: Combines the multi-primary control plane topology and the multi-network network topology models.
  • Primary-Remote Single Network: Combines the primary-remote control plane topology and the single network network topology models.
  • Primary-Remote Multi-Network: Combines the primary-remote control plane topology and the multi-network network topology models.

6.1.1. Control plane topology models

A multi-cluster mesh must use one of the following control plane topologies:

  • Multi-Primary: In this configuration, a control plane resides on every cluster. Each control plane observes the API servers in all of the other clusters for services and endpoints.
  • Primary-Remote: In this configuration, the control plane resides only on one cluster, called the primary cluster. No control plane runs on any of the other clusters, called remote clusters. The control plane on the primary cluster discovers services and endpoints and configures the sidecar proxies for the workloads in all clusters.

6.1.2. Network topology models

A multi-cluster mesh must use one of the following network topologies:

  • Single Network: All clusters reside on the same network and there is direct connectivity between the services in all the clusters. There is no need to use gateways for communication between the services across cluster boundaries.
  • Multi-Network: Clusters reside on different networks and there is no direct connectivity between services. Gateways must be used to enable communication across network boundaries.

6.2. Multi-cluster configuration overview

To configure a multi-cluster topology you must perform the following actions:

  • Install the OpenShift Service Mesh Operator for each cluster.
  • Create or have access to root and intermediate certificates for each cluster.
  • Apply the security certificates for each cluster.
  • Install Istio for each cluster.

6.2.1. Creating certificates for a multi-cluster mesh

Create the root and intermediate certificate authority (CA) certificates for two clusters.

Prerequisites

  • You have OpenSSL installed locally.

Procedure

  1. Create the root CA certificate:

    1. Create a key for the root certificate by running the following command:

      $ openssl genrsa -out root-key.pem 4096
    2. Create an OpenSSL configuration file named root-ca.conf for the root CA certificate:

      Example root certificate configuration file

      [ req ]
      encrypt_key = no
      prompt = no
      utf8 = yes
      default_md = sha256
      default_bits = 4096
      req_extensions = req_ext
      x509_extensions = req_ext
      distinguished_name = req_dn
      [ req_ext ]
      subjectKeyIdentifier = hash
      basicConstraints = critical, CA:true
      keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
      [ req_dn ]
      O = Istio
      CN = Root CA

    3. Create the certificate signing request by running the following command:

      $ openssl req -sha256 -new -key root-key.pem \
        -config root-ca.conf \
        -out root-cert.csr
    4. Create a shared root certificate by running the following command:

      $ openssl x509 -req -sha256 -days 3650 \
        -signkey root-key.pem \
        -extensions req_ext -extfile root-ca.conf \
        -in root-cert.csr \
        -out root-cert.pem
  2. Create the intermediate CA certificate for the East cluster:

    1. Create a directory named

      east
      by running the following command:

      $ mkdir east
    2. Create a key for the intermediate certificate for the East cluster by running the following command:

      $ openssl genrsa -out east/ca-key.pem 4096
    3. Create an OpenSSL configuration file named

      intermediate.conf
      in the
      east/
      directory for the intermediate certificate of the East cluster. Copy the following example file and save it locally:

      Example configuration file

      [ req ]
      encrypt_key = no
      prompt = no
      utf8 = yes
      default_md = sha256
      default_bits = 4096
      req_extensions = req_ext
      x509_extensions = req_ext
      distinguished_name = req_dn
      [ req_ext ]
      subjectKeyIdentifier = hash
      basicConstraints = critical, CA:true, pathlen:0
      keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
      subjectAltName=@san
      [ san ]
      DNS.1 = istiod.istio-system.svc
      [ req_dn ]
      O = Istio
      CN = Intermediate CA
      L = east

    4. Create a certificate signing request by running the following command:

      $ openssl req -new -config east/intermediate.conf \
         -key east/ca-key.pem \
         -out east/cluster-ca.csr
    5. Create the intermediate CA certificate for the East cluster by running the following command:

      $ openssl x509 -req -sha256 -days 3650 \
         -CA root-cert.pem \
         -CAkey root-key.pem -CAcreateserial \
         -extensions req_ext -extfile east/intermediate.conf \
         -in east/cluster-ca.csr \
         -out east/ca-cert.pem
    6. Create a certificate chain from the intermediate and root CA certificate for the east cluster by running the following command:

      $ cat east/ca-cert.pem root-cert.pem > east/cert-chain.pem && cp root-cert.pem east
  3. Create the intermediate CA certificate for the West cluster:

    1. Create a directory named

      west
      by running the following command:

      $ mkdir west
    2. Create a key for the intermediate certificate for the West cluster by running the following command:

      $ openssl genrsa -out west/ca-key.pem 4096
    3. Create an OpenSSL configuration file named intermediate.conf in the west/ directory for the intermediate certificate of the West cluster. Copy the following example file and save it locally:

      Example configuration file

      [ req ]
      encrypt_key = no
      prompt = no
      utf8 = yes
      default_md = sha256
      default_bits = 4096
      req_extensions = req_ext
      x509_extensions = req_ext
      distinguished_name = req_dn
      [ req_ext ]
      subjectKeyIdentifier = hash
      basicConstraints = critical, CA:true, pathlen:0
      keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
      subjectAltName=@san
      [ san ]
      DNS.1 = istiod.istio-system.svc
      [ req_dn ]
      O = Istio
      CN = Intermediate CA
      L = west

    4. Create a certificate signing request by running the following command:

      $ openssl req -new -config west/intermediate.conf \
         -key west/ca-key.pem \
         -out west/cluster-ca.csr
    5. Create the certificate by running the following command:

      $ openssl x509 -req -sha256 -days 3650 \
         -CA root-cert.pem \
         -CAkey root-key.pem -CAcreateserial \
         -extensions req_ext -extfile west/intermediate.conf \
         -in west/cluster-ca.csr \
         -out west/ca-cert.pem
    6. Create the certificate chain by running the following command:

      $ cat west/ca-cert.pem root-cert.pem > west/cert-chain.pem && cp root-cert.pem west

6.2.2. Applying certificates to a multi-cluster topology

Apply the root and intermediate certificate authority (CA) certificates to the clusters in a multi-cluster topology.

Note

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.

Prerequisites

  • You have access to two OpenShift Container Platform clusters with external load balancer support.
  • You have created the root CA certificate and intermediate CA certificates for each cluster or someone has made them available for you.

Procedure

  1. Apply the certificates to the East cluster of the multi-cluster topology:

    1. Log in to the East cluster by running the following command:

      $ oc login https://<east_cluster_api_server_url>
    2. Set up the environment variable that contains the

      oc
      command context for the East cluster by running the following command:

      $ export CTX_CLUSTER1=$(oc config current-context)
    3. Create a project called

      istio-system
      by running the following command:

      $ oc get project istio-system --context "${CTX_CLUSTER1}" || oc new-project istio-system --context "${CTX_CLUSTER1}"
    4. Configure Istio to use

      network1
      as the default network for the pods on the East cluster by running the following command:

      $ oc --context "${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
    5. Create the CA certificates, certificate chain, and the private key for Istio on the East cluster by running the following command:

      $ oc get secret -n istio-system --context "${CTX_CLUSTER1}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER1}" \
        --from-file=east/ca-cert.pem \
        --from-file=east/ca-key.pem \
        --from-file=east/root-cert.pem \
        --from-file=east/cert-chain.pem
      Note

      If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates will reside in the east/ directory. If your certificates reside in a different directory, modify the syntax accordingly.

  2. Apply the certificates to the West cluster of the multi-cluster topology:

    1. Log in to the West cluster by running the following command:

      $ oc login https://<west_cluster_api_server_url>
    2. Set up the environment variable that contains the

      oc
      command context for the West cluster by running the following command:

      $ export CTX_CLUSTER2=$(oc config current-context)
    3. Create a project called

      istio-system
      by running the following command:

      $ oc get project istio-system --context "${CTX_CLUSTER2}" || oc new-project istio-system --context "${CTX_CLUSTER2}"
    4. Configure Istio to use

      network2
      as the default network for the pods on the West cluster by running the following command:

      $ oc --context "${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
    5. Create the CA certificate secret for Istio on the West cluster by running the following command:

      $ oc get secret -n istio-system --context "${CTX_CLUSTER2}" cacerts || oc create secret generic cacerts -n istio-system --context "${CTX_CLUSTER2}" \
        --from-file=west/ca-cert.pem \
        --from-file=west/ca-key.pem \
        --from-file=west/root-cert.pem \
        --from-file=west/cert-chain.pem
      Note

      If you followed the instructions in "Creating certificates for a multi-cluster mesh", your certificates will reside in the west/ directory. If the certificates reside in a different directory, modify the syntax accordingly.

Next steps

Install Istio on all the clusters comprising the mesh topology.

6.3. Installing a multi-primary multi-network mesh

Install Istio in the multi-primary multi-network topology on two OpenShift Container Platform clusters.

Note

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.

You can adapt these instructions for a mesh spanning more than two clusters.

Prerequisites

  • You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
  • You have created certificates for the multi-cluster mesh.
  • You have applied certificates to the multi-cluster topology.
  • You have created an Istio Container Network Interface (CNI) resource.
  • You have istioctl installed.
Important

In on-premise environments, such as those running on bare metal, OpenShift Container Platform clusters often do not include a native load-balancer capability. A service of type LoadBalancer, such as the istio-eastwestgateway, will not automatically be assigned an external IP address. To ensure the required external IP assignment for cross-cluster communication, cluster administrators must install and configure the MetalLB Operator. MetalLB is valuable in bare metal or bare metal-like infrastructures when fault-tolerant access to an application through an external IP address is necessary. After deployment, MetalLB provides a platform-native load balancer. In addition to bare metal, the MetalLB Operator can offer load balancing for installations on other infrastructures that might lack native load-balancer capability, including:

  • VMware vSphere
  • IBM Z® and IBM® LinuxONE
  • IBM Z® and IBM® LinuxONE for Red Hat Enterprise Linux (RHEL) KVM
  • IBM Power®

For more information, see MetalLB Operator.
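If you use MetalLB, the following resources are a minimal sketch of an address pool and its Layer 2 advertisement; the names, namespace, and address range are illustrative and must match your environment:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: eastwest-pool # illustrative name
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.10-192.0.2.20 # illustrative range; use addresses routable in your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: eastwest-l2 # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
  - eastwest-pool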

Procedure

  1. Create an

    ISTIO_VERSION
    environment variable that defines the Istio version to install by running the following command:

    $ export ISTIO_VERSION=1.24.3
  2. Install Istio on the East cluster:

    1. Create an

      Istio
      resource on the East cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        version: v${ISTIO_VERSION}
        namespace: istio-system
        values:
          global:
            meshID: mesh1
            multiCluster:
              clusterName: cluster1
            network: network1
      EOF
    2. Wait for the control plane to return the

      Ready
      status condition by running the following command:

      $ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
    3. Create an East-West gateway on the East cluster by running the following command:

      $ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
    4. Expose the services through the gateway by running the following command:

      $ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
  3. Install Istio on the West cluster:

    1. Create an

      Istio
      resource on the West cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        version: v${ISTIO_VERSION}
        namespace: istio-system
        values:
          global:
            meshID: mesh1
            multiCluster:
              clusterName: cluster2
            network: network2
      EOF
    2. Wait for the control plane to return the

      Ready
      status condition by running the following command:

      $ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
    3. Create an East-West gateway on the West cluster by running the following command:

      $ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
    4. Expose the services through the gateway by running the following command:

      $ oc --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
  4. Create the

    istio-reader-service-account
    service account for the East cluster by running the following command:

    $ oc --context="${CTX_CLUSTER1}" create serviceaccount istio-reader-service-account -n istio-system
  5. Create the

    istio-reader-service-account
    service account for the West cluster by running the following command:

    $ oc --context="${CTX_CLUSTER2}" create serviceaccount istio-reader-service-account -n istio-system
  6. Add the

    cluster-reader
    role to the East cluster by running the following command:

    $ oc --context="${CTX_CLUSTER1}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system
  7. Add the

    cluster-reader
    role to the West cluster by running the following command:

    $ oc --context="${CTX_CLUSTER2}" adm policy add-cluster-role-to-user cluster-reader -z istio-reader-service-account -n istio-system
  8. Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:

    $ istioctl create-remote-secret \
      --context="${CTX_CLUSTER2}" \
      --name=cluster2 \
      --create-service-account=false | \
      oc --context="${CTX_CLUSTER1}" apply -f -
  9. Install a remote secret on the West cluster that provides access to the API server on the East cluster by running the following command:

    $ istioctl create-remote-secret \
      --context="${CTX_CLUSTER1}" \
      --name=cluster1 \
      --create-service-account=false | \
      oc --context="${CTX_CLUSTER2}" apply -f -
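You can optionally confirm that each control plane is connected to the other cluster by running the istioctl remote-clusters command against each context; this assumes a recent istioctl release that includes the command:

$ istioctl remote-clusters --context="${CTX_CLUSTER1}"
$ istioctl remote-clusters --context="${CTX_CLUSTER2}"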

6.3.1. Verifying a multi-cluster topology

Deploy sample applications and verify traffic on a multi-cluster topology on two OpenShift Container Platform clusters.

Note

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.

Prerequisites

  • You have installed the OpenShift Service Mesh Operator on all of the clusters that comprise the mesh.
  • You have completed "Creating certificates for a multi-cluster mesh".
  • You have completed "Applying certificates to a multi-cluster topology".
  • You have created an Istio Container Network Interface (CNI) resource.
  • You have istioctl installed on the laptop you will use to run these instructions.
  • You have installed a multi-cluster topology.

Procedure

  1. Deploy sample applications on the East cluster:

    1. Create a sample application namespace on the East cluster by running the following command:

      $ oc --context "${CTX_CLUSTER1}" get project sample || oc --context="${CTX_CLUSTER1}" new-project sample
    2. Label the application namespace to support sidecar injection by running the following command:

      $ oc --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled
    3. Deploy the

      helloworld
      application:

      1. Create the

        helloworld
        service by running the following command:

        $ oc --context="${CTX_CLUSTER1}" apply \
          -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
          -l service=helloworld -n sample
      2. Create the

        helloworld-v1
        deployment by running the following command:

        $ oc --context="${CTX_CLUSTER1}" apply \
          -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
          -l version=v1 -n sample
    4. Deploy the

      sleep
      application by running the following command:

      $ oc --context="${CTX_CLUSTER1}" apply \
        -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample
    5. Wait for the

      helloworld
      application on the East cluster to return the
      Ready
      status condition by running the following command:

      $ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1
    6. Wait for the

      sleep
      application on the East cluster to return the
      Ready
      status condition by running the following command:

      $ oc --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
  2. Deploy the sample applications on the West cluster:

    1. Create a sample application namespace on the West cluster by running the following command:

      $ oc --context "${CTX_CLUSTER2}" get project sample || oc --context="${CTX_CLUSTER2}" new-project sample
    2. Label the application namespace to support sidecar injection by running the following command:

      $ oc --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled
    3. Deploy the

      helloworld
      application:

      1. Create the

        helloworld
        service by running the following command:

        $ oc --context="${CTX_CLUSTER2}" apply \
          -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
          -l service=helloworld -n sample
      2. Create the

        helloworld-v2
        deployment by running the following command:

        $ oc --context="${CTX_CLUSTER2}" apply \
          -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/helloworld/helloworld.yaml \
          -l version=v2 -n sample
    4. Deploy the

      sleep
      application by running the following command:

      $ oc --context="${CTX_CLUSTER2}" apply \
        -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml -n sample
    5. Wait for the helloworld application on the West cluster to return the Ready status condition by running the following command:

      $ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2
    6. Wait for the sleep application on the West cluster to return the Ready status condition by running the following command:

      $ oc --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep

Verifying traffic flows between clusters

  1. For the East cluster, send 10 requests to the helloworld service by running the following command:

    $ for i in {0..9}; do \
      oc --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
    done

    Verify that you see responses from both clusters: both version 1 and version 2 of the helloworld service appear in the responses.
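
    The output should look similar to the following example; the instance names will differ in your environment:

    Example output

    Hello version: v2, instance: helloworld-v2-7f46498c69-bdtzm
    Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc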

  2. For the West cluster, send 10 requests to the helloworld service by running the following command:

    $ for i in {0..9}; do \
      oc --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
    done

    Verify that you see responses from both clusters: both version 1 and version 2 of the helloworld service appear in the responses.

After experimenting with the multi-cluster functionality in a development environment, remove the multi-cluster topology from all the clusters.

Note

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.

Prerequisites

  • You have installed a multi-cluster topology.

Procedure

  1. Remove Istio and the sample applications from the East cluster of the development environment by running the following command:

    $ oc --context="${CTX_CLUSTER1}" delete istio/default ns/istio-system ns/sample ns/istio-cni
  2. Remove Istio and the sample applications from the West cluster of the development environment by running the following command:

    $ oc --context="${CTX_CLUSTER2}" delete istio/default ns/istio-system ns/sample ns/istio-cni

Install Istio in a primary-remote multi-network topology on two OpenShift Container Platform clusters.

Note

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster. The East cluster is the primary cluster and the West cluster is the remote cluster.

You can adapt these instructions for a mesh spanning more than two clusters.

Prerequisites

  • You have installed the OpenShift Service Mesh 3 Operator on all of the clusters that comprise the mesh.
  • You have completed "Creating certificates for a multi-cluster mesh".
  • You have completed "Applying certificates to a multi-cluster topology".
  • You have created an Istio Container Network Interface (CNI) resource.
  • You have istioctl installed on the laptop you will use to run these instructions.

Procedure

  1. Create an ISTIO_VERSION environment variable that defines the Istio version to install by running the following command:

    $ export ISTIO_VERSION=1.24.3
  2. Install Istio on the East cluster:

    1. Set the default network for the East cluster by running the following command:

      $ oc --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
    2. Create an Istio resource on the East cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_CLUSTER1}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        version: v${ISTIO_VERSION}
        namespace: istio-system
        values:
          global:
            meshID: mesh1
            multiCluster:
              clusterName: cluster1
            network: network1
            externalIstiod: true 1
      EOF
      1 This enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.
    3. Wait for the control plane to return the "Ready" status condition by running the following command:

      $ oc --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
    4. Create an East-West gateway on the East cluster by running the following command:

      $ oc --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
    5. Expose the control plane through the gateway so that services in the West cluster can access the control plane by running the following command:

      $ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-istiod.yaml
    6. Expose the application services through the gateway by running the following command:

      $ oc --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
  3. Install Istio on the West cluster:

    1. Save the IP address of the East-West gateway running in the East cluster by running the following command:

      $ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
          -n istio-system get svc istio-eastwestgateway \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
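
      Note

      If your platform assigns a hostname rather than an IP address to the load balancer, the ip field in the service status is empty. In that case, query the hostname field instead:

      $ export DISCOVERY_ADDRESS=$(oc --context="${CTX_CLUSTER1}" \
          -n istio-system get svc istio-eastwestgateway \
          -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')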
    2. Create an Istio resource on the West cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_CLUSTER2}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        version: v${ISTIO_VERSION}
        namespace: istio-system
        profile: remote
        values:
          istiodRemote:
            injectionPath: /inject/cluster/cluster2/net/network2
          global:
            remotePilotAddress: ${DISCOVERY_ADDRESS}
      EOF
    3. Annotate the istio-system namespace in the West cluster so that it is managed by the control plane in the East cluster by running the following command:

      $ oc --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
    4. Set the default network for the West cluster by running the following command:

      $ oc --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
    5. Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command:

      $ istioctl create-remote-secret \
        --context="${CTX_CLUSTER2}" \
        --name=cluster2 | \
        oc --context="${CTX_CLUSTER1}" apply -f -
    6. Wait for the Istio resource to return the "Ready" status condition by running the following command:

      $ oc --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
    7. Create an East-West gateway on the West cluster by running the following command:

      $ oc --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
      Note

      Since the West cluster is installed with a remote profile, exposing the application services on the East cluster exposes them on the East-West gateways of both clusters.
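
      As an optional check, you can confirm from the primary cluster that the remote cluster is registered by running the supported istioctl remote-clusters command. The output should list cluster2 with a synced status:

      $ istioctl remote-clusters --context="${CTX_CLUSTER1}"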

6.5. Installing Kiali in a multi-cluster mesh

Install Kiali in a multi-cluster mesh configuration on two OpenShift Container Platform clusters.

Note

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.

You can adapt these instructions for a mesh spanning more than two clusters.

Prerequisites

  • You have installed the latest Kiali Operator on each cluster.
  • You have installed Istio in a multi-cluster configuration on each cluster.
  • You have istioctl installed on the laptop you will use to run these instructions.
  • You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have configured a metrics store so that Kiali can query metrics from all the clusters. Kiali queries metrics and traces from their respective endpoints.

Procedure

  1. Install Kiali on the East cluster:

    1. Create a YAML file named kiali.yaml that defines the Kiali resource for the East cluster.

      Example configuration

      apiVersion: kiali.io/v1alpha1
      kind: Kiali
      metadata:
        name: kiali
        namespace: istio-system
      spec:
        version: default
        external_services:
          prometheus:
            auth:
              type: bearer
              use_kiali_token: true
            thanos_proxy:
              enabled: true
            url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091

      Note

      The endpoint for this example uses OpenShift Monitoring to configure metrics. For more information, see "Configuring OpenShift Monitoring with Kiali".

    2. Apply the YAML file on the East cluster by running the following command:

      $ oc --context cluster1 apply -f kiali.yaml

  2. Ensure that the Kiali custom resource (CR) is ready by running the following command:

    $ oc wait --context cluster1 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m

    Example output

    kiali.kiali.io/kiali condition met

  3. Display your Kiali route hostname by running the following command:

    $ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'

    Example output

    kiali-istio-system.apps.example.com
  4. Create a YAML file named kiali-remote.yaml that defines a Kiali CR for the West cluster.

    Example configuration

    apiVersion: kiali.io/v1alpha1
    kind: Kiali
    metadata:
      name: kiali
      namespace: istio-system
    spec:
      version: default
      auth:
        openshift:
          redirect_uris:
            # Replace kiali-route-hostname with the hostname from the previous step.
            - "https://{kiali-route-hostname}/api/auth/callback/cluster2"
      deployment:
        remote_cluster_resources_only: true

    The Kiali Operator creates the resources necessary for the Kiali server on the East cluster to connect to the West cluster. The Kiali server is not installed on the West cluster.

  5. Apply the YAML file on the West cluster by running the following command:

    $ oc --context cluster2 apply -f kiali-remote.yaml
  6. Ensure that the Kiali CR is ready by running the following command:

    $ oc wait --context cluster2 --for=condition=Successful kialis/kiali -n istio-system --timeout=3m
  7. Create a remote cluster secret so that the Kiali installation in the East cluster can access the West cluster.

    1. Create a YAML file named kiali-svc-account-token.yaml that defines a long-lived API token bound to the kiali-service-account service account in the West cluster. Kiali uses this token to authenticate to the West cluster.

      Example configuration

      apiVersion: v1
      kind: Secret
      metadata:
        name: "kiali-service-account"
        namespace: "istio-system"
        annotations:
          kubernetes.io/service-account.name: "kiali-service-account"
      type: kubernetes.io/service-account-token

    2. Apply the YAML file on the West cluster by running the following command:

      $ oc --context cluster2 apply -f kiali-svc-account-token.yaml
    3. Create a kubeconfig file and save it as a secret in the namespace on the East cluster where the Kiali deployment resides.

      To simplify this process, use the kiali-prepare-remote-cluster.sh script to generate the kubeconfig file. Download the script by running the following curl command:

      $ curl -L -o kiali-prepare-remote-cluster.sh https://raw.githubusercontent.com/kiali/kiali/master/hack/istio/multicluster/kiali-prepare-remote-cluster.sh
    4. Make the script executable by running the following command:

      $ chmod +x kiali-prepare-remote-cluster.sh
    5. Execute the script, passing the East and West cluster contexts so that it can generate the kubeconfig secret, by running the following command:

      $ ./kiali-prepare-remote-cluster.sh --kiali-cluster-context cluster1 --remote-cluster-context cluster2 --view-only false --kiali-resource-name kiali-service-account --remote-cluster-namespace istio-system --process-kiali-secret true --process-remote-resources false --remote-cluster-name cluster2
      Note

      Use the --help option to display additional details about how to use the script.

  8. Trigger the reconciliation loop so that the Kiali Operator registers the remote cluster secret by running the following command:

    $ oc --context cluster1 annotate kiali kiali -n istio-system --overwrite kiali.io/reconcile="$(date)"
  9. Wait for the Kiali resource to become ready by running the following command:

    $ oc --context cluster1 wait --for=condition=Successful --timeout=2m kialis/kiali -n istio-system
  10. Wait for the Kiali server to become ready by running the following command:

    $ oc --context cluster1 rollout status deployments/kiali -n istio-system
  11. Log in to Kiali.

    1. When you first access Kiali, log in to the cluster that contains the Kiali deployment. In this example, access the East cluster.
    2. Display the hostname of the Kiali route by running the following command:

      $ oc --context cluster1 get route kiali -n istio-system -o jsonpath='{.spec.host}'
    3. Navigate to the Kiali URL in your browser: https://<your-kiali-route-hostname>.
  12. Log in to the West cluster through Kiali.

    To see other clusters in the Kiali UI, you must first log in to those clusters through Kiali.

    1. Click the user profile dropdown in the upper-right menu.
    2. Select Login to West. You are redirected to an OpenShift login page and prompted for credentials for the West cluster.
  13. Verify that Kiali shows information from both clusters.

    1. Click Overview and verify that you can see namespaces from both clusters.
    2. Click Navigate and verify that you see both clusters on the mesh graph.

You can use Red Hat OpenShift Service Mesh to operate many service meshes in a single cluster, with each mesh managed by a separate control plane. Using discovery selectors and revisions prevents conflicts between control planes.

7.1. About deploying multiple control planes

To configure a cluster to host two control planes, set up separate Istio resources with unique names in independent Istio system namespaces. Assign a unique revision name to each Istio resource to identify the control planes, workloads, or namespaces it manages. Apply these revision names by using injection or istio.io/rev labels to specify which control plane injects the sidecar proxy into application pods.

Each Istio resource must also configure discovery selectors to specify which namespaces the Istio control plane observes. Only namespaces with labels that match the configured discovery selectors can join the mesh. Additionally, discovery selectors determine which control plane creates the istio-ca-root-cert config map in each namespace, which is used to encrypt traffic between services with mutual TLS within each mesh.

When adding an additional Istio control plane to a cluster with an existing control plane, ensure that the existing Istio instance has discovery selectors configured to avoid overlapping with the new control plane.

Note

Only one IstioCNI resource is shared by all control planes in a cluster, and you must update this resource independently of other cluster resources.

You can use discovery selectors to limit the visibility of an Istio control plane to specific namespaces in a cluster. By combining discovery selectors with control plane revisions, you can deploy multiple control planes in a single cluster, ensuring that each control plane manages only its assigned namespaces. This approach avoids conflicts between control planes and enables soft multi-tenancy for service meshes.

7.2.1. Deploying the first control plane

You deploy the first control plane by creating its assigned namespace.

Prerequisites

  • You have installed the OpenShift Service Mesh Operator.
  • You have created an Istio Container Network Interface (CNI) resource.

    Note

    You can run the following command to check for existing Istio instances:

    $ oc get istios
  • You have installed the istioctl binary on your local host.
Note

You can extend this procedure to more than two control planes. The maximum number of service meshes in a single cluster depends on the available cluster resources.

Procedure

  1. Create the namespace for the first Istio control plane, called istio-system-1, by running the following command:

    $ oc new-project istio-system-1
  2. Add the following label, which is used with the Istio discoverySelectors field, to the first namespace by running the following command:

    $ oc label namespace istio-system-1 istio-discovery=mesh-1
  3. Create a YAML file named istio-1.yaml that defines an Istio resource named mesh-1 with a discoverySelectors entry that matches the istio-discovery: mesh-1 label:

    Example configuration

    kind: Istio
    apiVersion: sailoperator.io/v1
    metadata:
      name: mesh-1
    spec:
      namespace: istio-system-1
      values:
        meshConfig:
          discoverySelectors:
            - matchLabels:
                istio-discovery: mesh-1
    # ...

  4. Create the first Istio resource by running the following command:

    $ oc apply -f istio-1.yaml
  5. To prevent workloads in mesh-1 from exchanging unencrypted traffic with other meshes, deploy a PeerAuthentication resource that enforces mutual TLS (mTLS) traffic within the mesh-1 data plane. Apply the PeerAuthentication resource in the istio-system-1 namespace by using a configuration file, such as peer-auth-1.yaml:

    $ oc apply -f peer-auth-1.yaml

    Example configuration

    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: "mesh-1-peerauth"
      namespace: "istio-system-1"
    spec:
      mtls:
        mode: STRICT

7.2.2. Deploying the second control plane

After deploying the first control plane, you can deploy the second control plane by creating its assigned namespace.

Procedure

  1. Create a namespace for the second Istio control plane, called istio-system-2, by running the following command:

    $ oc new-project istio-system-2
  2. Add the following label, which is used with the Istio discoverySelectors field, to the second namespace by running the following command:

    $ oc label namespace istio-system-2 istio-discovery=mesh-2
  3. Create a YAML file named istio-2.yaml:

    Example configuration

    kind: Istio
    apiVersion: sailoperator.io/v1
    metadata:
      name: mesh-2
    spec:
      namespace: istio-system-2
      values:
        meshConfig:
          discoverySelectors:
            - matchLabels:
                istio-discovery: mesh-2
    # ...

  4. Create the second Istio resource by running the following command:

    $ oc apply -f istio-2.yaml
  5. Deploy a policy so that workloads in the second mesh accept only mutual TLS traffic by applying a configuration file, such as peer-auth-2.yaml, with the following command:

    $ oc apply -f peer-auth-2.yaml

    Example configuration

    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: "mesh-2-peerauth"
      namespace: "istio-system-2"
    spec:
      mtls:
        mode: STRICT

7.2.3. Verifying multiple control planes

Verify that both of the Istio control planes are deployed and running properly. You can validate that the istiod pod is running successfully in each Istio system namespace.

  1. Verify that the istiod pod is running in istio-system-1 by running the following command:

    $ oc get pods -n istio-system-1

    Example output

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-mesh-1-b69646b6f-kxrwk   1/1     Running   0          4m14s

  2. Verify that the istiod pod is running in istio-system-2 by running the following command:

    $ oc get pods -n istio-system-2

    Example output

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-mesh-2-8666fdfc6-mqp45   1/1     Running   0          118s

7.3. Deploying application workloads in each mesh

To deploy application workloads, assign each workload to a separate namespace.

Procedure

  1. Create an application namespace called app-ns-1 by running the following command:

    $ oc create namespace app-ns-1
  2. To ensure that the namespace is discovered by the first control plane, add the istio-discovery=mesh-1 label by running the following command:

    $ oc label namespace app-ns-1 istio-discovery=mesh-1
  3. To enable sidecar injection into all the pods by default while ensuring that pods in this namespace are mapped to the first control plane, add the istio.io/rev=mesh-1 label to the namespace by running the following command:

    $ oc label namespace app-ns-1 istio.io/rev=mesh-1
  4. Optional: You can verify the mesh-1 revision name by running the following command:

    $ oc get istiorevisions
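
    The output should include a revision named mesh-1; the exact columns vary with the Operator version, but look similar to the following illustration:

    Example output

    NAME     READY   STATUS    IN USE   VERSION   AGE
    mesh-1   True    Healthy   True     v1.24.3   5m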
  5. Deploy the sleep and httpbin applications by running the following command:

    $ oc apply -n app-ns-1 \
       -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
       -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
  6. Wait for the httpbin and sleep pods to run with sidecars injected by running the following command:

    $ oc get pods -n app-ns-1

    Example output

    NAME                       READY   STATUS    RESTARTS   AGE
    httpbin-7f56dc944b-kpw2x   2/2     Running   0          2m26s
    sleep-5577c64d7c-b5wd2     2/2     Running   0          91m

  7. Create a second application namespace called app-ns-2 by running the following command:

    $ oc create namespace app-ns-2
  8. Create a third application namespace called app-ns-3 by running the following command:

    $ oc create namespace app-ns-3
  9. Add the istio-discovery=mesh-2 label and the istio.io/rev=mesh-2 revision label to both namespaces to match the discovery selector of the second control plane by running the following command:

    $ oc label namespace app-ns-2 app-ns-3 istio-discovery=mesh-2 istio.io/rev=mesh-2
  10. Deploy the sleep and httpbin applications to the app-ns-2 namespace by running the following command:

    $ oc apply -n app-ns-2 \
       -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
       -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
  11. Deploy the sleep and httpbin applications to the app-ns-3 namespace by running the following command:

    $ oc apply -n app-ns-3 \
       -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/sleep/sleep.yaml \
       -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/httpbin/httpbin.yaml
  12. Optional: Use the following command to wait for a deployment to be available:

    $ oc wait deployments -n app-ns-2 --all --for condition=Available

Verification

  1. After deploying the applications, verify that each application workload is managed by its assigned control plane by using the istioctl ps command:

    1. Verify that the workloads are assigned to the control plane in istio-system-1 by running the following command:

      $ istioctl ps -i istio-system-1

      Example output

      NAME                                  CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                            VERSION
      httpbin-7f56dc944b-vwfm5.app-ns-1     Kubernetes     SYNCED (11m)     SYNCED (11m)     SYNCED (11m)     SYNCED (11m)     IGNORED     istiod-mesh-1-b69646b6f-kxrwk     1.23.0
      sleep-5577c64d7c-d675f.app-ns-1       Kubernetes     SYNCED (11m)     SYNCED (11m)     SYNCED (11m)     SYNCED (11m)     IGNORED     istiod-mesh-1-b69646b6f-kxrwk     1.23.0

    2. Verify that the workloads are assigned to the control plane in istio-system-2 by running the following command:

      $ istioctl ps -i istio-system-2

      Example output

      NAME                                  CLUSTER        CDS                LDS                EDS                RDS                ECDS        ISTIOD                            VERSION
      httpbin-7f56dc944b-54gjs.app-ns-3     Kubernetes     SYNCED (3m59s)     SYNCED (3m59s)     SYNCED (3m59s)     SYNCED (3m59s)     IGNORED     istiod-mesh-2-8666fdfc6-mqp45     1.23.0
      httpbin-7f56dc944b-gnh72.app-ns-2     Kubernetes     SYNCED (4m1s)      SYNCED (4m1s)      SYNCED (3m59s)     SYNCED (4m1s)      IGNORED     istiod-mesh-2-8666fdfc6-mqp45     1.23.0
      sleep-5577c64d7c-k9mxz.app-ns-2       Kubernetes     SYNCED (4m1s)      SYNCED (4m1s)      SYNCED (3m59s)     SYNCED (4m1s)      IGNORED     istiod-mesh-2-8666fdfc6-mqp45     1.23.0
      sleep-5577c64d7c-m9hvm.app-ns-3       Kubernetes     SYNCED (4m1s)      SYNCED (4m1s)      SYNCED (3m59s)     SYNCED (4m1s)      IGNORED     istiod-mesh-2-8666fdfc6-mqp45     1.23.0

  2. Verify that application connectivity is restricted to workloads within the same mesh:

    1. Send a request from the sleep pod in app-ns-1 to the httpbin service in app-ns-2 to check that the communication fails by running the following command:

      $ oc -n app-ns-1 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000

      The PeerAuthentication resources created earlier enforce mutual TLS (mTLS) traffic in STRICT mode within each mesh. Each mesh uses its own root certificate, managed by the istio-ca-root-cert config map, which prevents communication between meshes. The output indicates a communication failure, similar to the following example:

      Example output

      HTTP/1.1 503 Service Unavailable
      content-length: 95
      content-type: text/plain
      date: Wed, 16 Oct 2024 12:05:37 GMT
      server: envoy

    2. Confirm that communication within a mesh works by sending a request from the sleep pod in the app-ns-2 namespace to the httpbin service in the app-ns-3 namespace; both namespaces are managed by mesh-2. Run the following command:

      $ oc -n app-ns-2 exec deploy/sleep -c sleep -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000

      Example output

      HTTP/1.1 200 OK
      access-control-allow-credentials: true
      access-control-allow-origin: *
      content-security-policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' camo.githubusercontent.com
      content-type: text/html; charset=utf-8
      date: Wed, 16 Oct 2024 12:06:30 GMT
      x-envoy-upstream-service-time: 8
      server: envoy
      transfer-encoding: chunked

Chapter 8. External control plane topology

You can use the external control plane topology to isolate the control plane from the data plane on separate clusters.

8.1. About external control plane topology

The external control plane topology improves security and allows the Service Mesh to be hosted as a service. In this installation configuration, one cluster hosts and manages the Istio control plane, and applications are hosted on other clusters.

Install Istio on a control plane cluster and a separate data plane cluster. This installation approach provides increased security.

Note

You can adapt these instructions for a mesh spanning more than one data plane cluster. You can also adapt these instructions for multiple meshes with multiple control planes on the same control plane cluster.

Prerequisites

  • You have installed the OpenShift Service Mesh Operator on the control plane cluster and the data plane cluster.
  • You have istioctl installed on the laptop you will use to run these instructions.

Procedure

  1. Create an ISTIO_VERSION environment variable that defines the Istio version to install on all the clusters by running the following command:

    $ export ISTIO_VERSION=1.24.3
  2. Create a REMOTE_CLUSTER_NAME environment variable that defines the name of the cluster by running the following command:

    $ export REMOTE_CLUSTER_NAME=cluster1
  3. Set up the environment variable that contains the oc command context for the control plane cluster by running the following command:

    $ export CTX_CONTROL_PLANE_CLUSTER=<context_name_of_the_control_plane_cluster>
  4. Set up the environment variable that contains the oc command context for the data plane cluster by running the following command:

    $ export CTX_DATA_PLANE_CLUSTER=<context_name_of_the_data_plane_cluster>
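
    If you are unsure of the context names for your clusters, you can list the contexts in your local configuration by running the following command:

    $ oc config get-contexts -o name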
  5. Set up the ingress gateway for the control plane:

    1. Create a project called istio-system by running the following command:

      $ oc get project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project istio-system --context "${CTX_CONTROL_PLANE_CLUSTER}"
    2. Create an Istio resource on the control plane cluster to manage the ingress gateway by running the following command:

      $ cat <<EOF | oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: default
      spec:
        version: v${ISTIO_VERSION}
        namespace: istio-system
        values:
          global:
            network: network1
      EOF
    3. Create the ingress gateway for the control plane by running the following command:

      $ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/main/docs/deployment-models/resources/controlplane-gateway.yaml
    4. Get the assigned IP address for the ingress gateway by running the following command:

      $ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    5. Store the IP address of the ingress gateway in an environment variable by running the following command:

      $ export EXTERNAL_ISTIOD_ADDR=$(oc -n istio-system --context="${CTX_CONTROL_PLANE_CLUSTER}" get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
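
      Optionally, confirm that the variable contains a non-empty address before continuing by running the following command:

      $ echo "${EXTERNAL_ISTIOD_ADDR}"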
  6. Install Istio on the data plane cluster:

    1. Create a project called external-istiod on the data plane cluster by running the following command:

      $ oc get project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_DATA_PLANE_CLUSTER}"
    2. Create an Istio resource on the data plane cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_DATA_PLANE_CLUSTER}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: external-istiod
      spec:
        version: v${ISTIO_VERSION}
        namespace: external-istiod
        profile: remote
        values:
          defaultRevision: external-istiod
          global:
            remotePilotAddress: ${EXTERNAL_ISTIOD_ADDR}
            configCluster: true 1
          pilot:
            configMap: true
          istiodRemote:
            injectionPath: /inject/cluster/cluster2/net/network1
      EOF
      1 This setting identifies the data plane cluster as the source of the mesh configuration.
  7. Create a project called istio-cni on the data plane cluster by running the following command:

    $ oc get project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}" || oc new-project istio-cni --context "${CTX_DATA_PLANE_CLUSTER}"
    1. Create an IstioCNI resource on the data plane cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_DATA_PLANE_CLUSTER}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: IstioCNI
      metadata:
        name: default
      spec:
        version: v${ISTIO_VERSION}
        namespace: istio-cni
      EOF
  8. Set up the external Istio control plane on the control plane cluster:

    1. Create a project called external-istiod on the control plane cluster by running the following command:

      $ oc get project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}" || oc new-project external-istiod --context "${CTX_CONTROL_PLANE_CLUSTER}"
    2. Create a ServiceAccount resource on the control plane cluster by running the following command:

      $ oc --context="${CTX_CONTROL_PLANE_CLUSTER}" create serviceaccount istiod-service-account -n external-istiod
    3. Store the API server address for the data plane cluster in an environment variable by running the following command:

      $ DATA_PLANE_API_SERVER=https://<hostname_or_IP_address_of_the_API_server_for_the_data_plane_cluster>:6443
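
      If you do not know the API server address for the data plane cluster, one way to look it up, assuming you are logged in to that cluster, is by running the following command:

      $ oc --context="${CTX_DATA_PLANE_CLUSTER}" whoami --show-server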
    4. Install a remote secret on the control plane cluster that provides access to the API server on the data plane cluster by running the following command:

      $ istioctl create-remote-secret \
        --context="${CTX_DATA_PLANE_CLUSTER}" \
        --type=config \
        --namespace=external-istiod \
        --service-account=istiod-external-istiod \
        --create-service-account=false \
        --server="${DATA_PLANE_API_SERVER}" | \
        oc --context="${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
    5. Create an Istio resource on the control plane cluster by running the following command:

      $ cat <<EOF | oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f -
      apiVersion: sailoperator.io/v1
      kind: Istio
      metadata:
        name: external-istiod
      spec:
        version: v${ISTIO_VERSION}
        namespace: external-istiod
        profile: empty
        values:
          meshConfig:
            rootNamespace: external-istiod
            defaultConfig:
              discoveryAddress: $EXTERNAL_ISTIOD_ADDR:15012
          pilot:
            enabled: true
            volumes:
              - name: config-volume
                configMap:
                  name: istio-external-istiod
              - name: inject-volume
                configMap:
                  name: istio-sidecar-injector-external-istiod
            volumeMounts:
              - name: config-volume
                mountPath: /etc/istio/config
              - name: inject-volume
                mountPath: /var/lib/istio/inject
            env:
              INJECTION_WEBHOOK_CONFIG_NAME: "istio-sidecar-injector-external-istiod-external-istiod"
              VALIDATION_WEBHOOK_CONFIG_NAME: "istio-validator-external-istiod-external-istiod"
              EXTERNAL_ISTIOD: "true"
              LOCAL_CLUSTER_SECRET_WATCHER: "true"
              CLUSTER_ID: cluster2
              SHARED_MESH_CONFIG: istio
          global:
            caAddress: $EXTERNAL_ISTIOD_ADDR:15012
            configValidation: false
            meshID: mesh1
            multiCluster:
              clusterName: cluster2
            network: network1
      EOF
    6. Create Gateway and VirtualService resources so that the sidecar proxies on the data plane cluster can access the control plane by running the following command:

      $ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" apply -f - <<EOF
      apiVersion: networking.istio.io/v1
      kind: Gateway
      metadata:
        name: external-istiod-gw
        namespace: external-istiod
      spec:
        selector:
          istio: ingressgateway
        servers:
          - port:
              number: 15012
              protocol: tls
              name: tls-XDS
            tls:
              mode: PASSTHROUGH
            hosts:
            - "*"
          - port:
              number: 15017
              protocol: tls
              name: tls-WEBHOOK
            tls:
              mode: PASSTHROUGH
            hosts:
            - "*"
      ---
      apiVersion: networking.istio.io/v1
      kind: VirtualService
      metadata:
        name: external-istiod-vs
        namespace: external-istiod
      spec:
          hosts:
          - "*"
          gateways:
          - external-istiod-gw
          tls:
          - match:
            - port: 15012
              sniHosts:
              - "*"
            route:
            - destination:
                host: istiod-external-istiod.external-istiod.svc.cluster.local
                port:
                  number: 15012
          - match:
            - port: 15017
              sniHosts:
              - "*"
            route:
            - destination:
                host: istiod-external-istiod.external-istiod.svc.cluster.local
                port:
                  number: 443
      EOF
    7. Wait for the external-istiod Istio resource on the control plane cluster to return the "Ready" status condition by running the following command:

      $ oc --context "${CTX_CONTROL_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m
    8. Wait for the Istio resource on the data plane cluster to return the "Ready" status condition by running the following command:

      $ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istio/external-istiod --timeout=3m
    9. Wait for the IstioCNI resource on the data plane cluster to return the "Ready" status condition by running the following command:

      $ oc --context "${CTX_DATA_PLANE_CLUSTER}" wait --for condition=Ready istiocni/default --timeout=3m

Verification

  1. Deploy sample applications on the data plane cluster:

    1. Create a namespace for sample applications on the data plane cluster by running the following command:

      $ oc --context "${CTX_DATA_PLANE_CLUSTER}" get project sample || oc --context="${CTX_DATA_PLANE_CLUSTER}" new-project sample
    2. Label the namespace for the sample applications to support sidecar injection by running the following command:

      $ oc --context="${CTX_DATA_PLANE_CLUSTER}" label namespace sample istio.io/rev=external-istiod
    3. Deploy the helloworld application:

      1. Create the helloworld service by running the following command:

        $ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
          -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
          -l service=helloworld -n sample
      2. Create the helloworld-v1 deployment by running the following command:

        $ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
          -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/helloworld/helloworld.yaml \
          -l version=v1 -n sample
    4. Deploy the sleep application by running the following command:

      $ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
        -f https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/sleep/sleep.yaml -n sample
    5. Verify that the pods in the sample namespace have a sidecar injected by running the following command:

      $ oc --context="${CTX_DATA_PLANE_CLUSTER}" get pods -n sample

      The terminal should return 2/2 in the READY column for each pod in the sample namespace:

      Example output

      NAME                             READY   STATUS    RESTARTS   AGE
      helloworld-v1-6d65866976-jb6qc   2/2     Running   0          1m
      sleep-5fcd8fd6c8-mg8n2           2/2     Running   0          1m

  2. Verify that internal traffic can reach the applications on the cluster:

    1. Verify that a request can be sent to the helloworld application through the sleep application by running the following command:

      $ oc exec --context="${CTX_DATA_PLANE_CLUSTER}" -n sample -c sleep deploy/sleep -- curl -sS helloworld.sample:5000/hello

      The terminal should return a response from the helloworld application:

      Example output

      Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc

  3. Install an ingress gateway to expose the sample application to external clients:

    1. Create the ingress gateway by running the following command:

      $ oc --context="${CTX_DATA_PLANE_CLUSTER}" apply \
        -f https://raw.githubusercontent.com/istio-ecosystem/sail-operator/refs/heads/main/chart/samples/ingress-gateway.yaml -n sample
    2. Confirm that the ingress gateway is running by running the following command:

      $ oc get pod -l app=istio-ingressgateway -n sample --context="${CTX_DATA_PLANE_CLUSTER}"

      The terminal should return output confirming that the gateway is running:

      Example output

      NAME                                    READY   STATUS    RESTARTS   AGE
      istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s

    3. Expose the helloworld application through the ingress gateway by running the following command:

      $ oc apply -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld-gateway.yaml -n sample --context="${CTX_DATA_PLANE_CLUSTER}"
    4. Set the gateway URL environment variable by running the following command:

      $ export INGRESS_HOST=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'); \
        export INGRESS_PORT=$(oc -n sample --context="${CTX_DATA_PLANE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'); \
        export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
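
      Optionally, confirm the resulting URL by running the following command. The address shown is only an illustration:

      $ echo "${GATEWAY_URL}"

      Example output

      192.0.2.10:80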
  4. Verify that external traffic can reach the applications on the mesh:

    1. Confirm that the helloworld application is accessible through the gateway by running the following command:

      $ curl -s "http://${GATEWAY_URL}/hello"

      The helloworld application should return a response.

      Example output

      Hello version: v1, instance: helloworld-v1-6d65866976-jb6qc

Chapter 9. Istioctl tool

Use the istioctl command-line utility to perform diagnostic and debugging tasks for OpenShift Service Mesh 3 service mesh components.

9.1. Support for Istioctl

OpenShift Service Mesh 3 supports a selection of Istioctl commands.

Table 9.1. Supported istioctl commands

Command                  Description
admin                    Manage the control plane (istiod) configuration
analyze                  Analyze the Istio configuration and print validation messages
completion               Generate the autocompletion script for the specified shell
create-remote-secret     Create a secret with credentials to allow Istio to access remote Kubernetes API servers
help                     Display help about any command
proxy-config, pc         Retrieve information about the proxy configuration from Envoy (Kubernetes only)
proxy-status, ps         Retrieve the synchronization status of each Envoy in the mesh
remote-clusters          List the remote clusters each istiod instance is connected to
validate, v              Validate the Istio policy and rules files
version                  Print out the build version information
waypoint                 Manage the waypoint configuration
ztunnel-config           Update or retrieve the current Ztunnel configuration

Note

Any other command displays the WARNING: Not supported in OpenShift Service Mesh message. Do not use unsupported commands in production environments.

9.2. Installing the Istioctl tool

Install the istioctl command-line utility to debug and diagnose Istio service mesh deployments.

Prerequisites

  • You have access to the OpenShift Container Platform web console.
  • The OpenShift Service Mesh 3 Operator is installed and running.
  • You have created at least one Istio resource.

Procedure

  1. Confirm which version of the Istio resource runs on the installation by running the following command:

    $ oc get istio -ojsonpath="{range .items[*]}{.spec.version}{'\n'}{end}" | sed s/^v// | sort

    If there are multiple Istio resources with different versions, choose the latest version. The latest version is displayed last.
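
    For example, if the cluster contains two Istio resources, the output might look like the following:

    Example output

    1.23.0
    1.24.3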

  2. In the OpenShift Container Platform web console, click the Help icon and select Command Line Tools.
  3. Click Download istioctl. Choose the version and architecture that matches your system.

  4. Extract the istioctl binary file.

    1. If you are using a Linux operating system, run the following command:

      $ tar xzf istioctl-<VERSION>-<OS>-<ARCH>.tar.gz
    2. If you are using an Apple Mac operating system, unpack and extract the archive.
    3. If you are using a Microsoft Windows operating system, use the zip software to extract the archive.
  5. Move to the uncompressed directory by running the following command:

    $ cd istioctl-<VERSION>-<OS>-<ARCH>
  6. Add the istioctl client to the path by running the following command:

    $ export PATH=$PWD:$PATH
  7. Confirm that the istioctl client version and the Istio control plane version match or are within one version by running the following command:

    $ istioctl version

    Sample output:

    client version: 1.24.3
    control plane version: 1.24.3_ossm
    data plane version: none

You can use Red Hat OpenShift Service Mesh to customize and secure communication between the microservices in your application. Mutual Transport Layer Security (mTLS) is a protocol that enables two parties to authenticate each other.

10.1. About mutual Transport Layer Security (mTLS)

In OpenShift Service Mesh 3, you use the Istio resource instead of the ServiceMeshControlPlane resource to configure mTLS settings.

In OpenShift Service Mesh 3, you configure STRICT mTLS mode by using the PeerAuthentication and DestinationRule resources. You set TLS protocol versions through Istio Workload Minimum TLS Version Configuration.

Review the following Istio resources and concepts to configure mTLS settings properly:

PeerAuthentication defines the type of mTLS traffic a sidecar accepts. In PERMISSIVE mode, both plaintext and mTLS traffic are accepted. In STRICT mode, only mTLS traffic is allowed.

DestinationRule configures the type of TLS traffic a sidecar sends. In DISABLE mode, the sidecar sends plaintext. In SIMPLE, MUTUAL, and ISTIO_MUTUAL modes, the sidecar establishes a TLS connection.

Auto mTLS ensures that all inter-mesh traffic is encrypted with mTLS by default, regardless of the PeerAuthentication mode configuration. Auto mTLS is controlled by the global mesh configuration field enableAutoMtls, which is enabled by default in OpenShift Service Mesh 2 and 3. The mTLS setting operates entirely between sidecar proxies, requiring no changes to application or service code.

By default, PeerAuthentication is set to PERMISSIVE mode, allowing sidecars in the Service Mesh to accept both plain-text and mTLS-encrypted traffic.

You can restrict workloads to accept only encrypted mTLS traffic by enabling STRICT mode in PeerAuthentication.

Example PeerAuthentication policy for a namespace

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: <namespace>
spec:
  mtls:
    mode: STRICT
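
The PeerAuthentication API also supports a selector field for scoping a policy to specific workloads rather than a whole namespace. The following sketch assumes an illustrative workload label of app: myapp; substitute the labels of your own deployment.

Example PeerAuthentication policy for a single workload

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: workload-strict
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app: myapp # illustrative label; replace with your workload's labels
  mtls:
    mode: STRICT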

You can enable mTLS for all destination hosts in the <namespace> by creating a DestinationRule resource with MUTUAL or ISTIO_MUTUAL mode when auto mTLS is disabled and PeerAuthentication is set to STRICT mode.

Example DestinationRule policy for a namespace

apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: enable-mtls
  namespace: <namespace>
spec:
  host: "*.<namespace>.svc.cluster.local"
  trafficPolicy:
   tls:
    mode: ISTIO_MUTUAL

You can configure mTLS across the entire mesh by applying the PeerAuthentication policy to the istiod namespace, such as istio-system. The istiod namespace name must match the spec.namespace field of your Istio resource.

Example PeerAuthentication policy for the whole mesh

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT

Additionally, create a DestinationRule resource to disable mTLS for communication with the API server, because the API server does not have a sidecar. Apply similar DestinationRule configurations for other services without sidecars.

Example DestinationRule policy for the whole mesh

apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: api-server
  namespace: istio-system
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE

10.4. Validating encryption with Kiali

The Kiali console offers several ways to validate whether or not your applications, services, and workloads have Mutual Transport Layer Security (mTLS) encryption enabled.

The Services Detail Overview page displays a Security icon on graph edges where at least one request with mTLS enabled is present. Kiali also displays a lock icon in the Network section next to ports that are configured for mTLS.

Chapter 11. Post-quantum cryptography

Post-quantum cryptography (PQC) provides cryptographic algorithms resistant to quantum computing threats, replacing traditional methods such as RSA and ECDSA that are vulnerable to quantum-based attacks.

Post-quantum cryptography (PQC), also known as quantum-resistant cryptography, uses encryption algorithms designed to resist attacks from quantum computers.

Quantum computers use principles of quantum mechanics to perform certain calculations significantly faster than classical computers, compromising widely used cryptographic algorithms.

Most current encryption methods rely on mathematical problems that classical computers cannot solve in a practical time. Large-scale quantum computers could solve some of these problems more efficiently, which would weaken the security of existing cryptographic systems.

In Red Hat OpenShift Service Mesh, cryptographic algorithms protect control plane and data plane communications, including mutual TLS (mTLS) between workloads. Enabling PQC strengthens these communications by introducing quantum-resistant key exchange mechanisms while maintaining compatibility with existing infrastructure.

Note

Post-quantum cryptography (PQC) algorithms are not available on OpenShift clusters running in FIPS mode.

Configure a quantum-secure gateway by using hybrid key exchange to protect service mesh ingress traffic against quantum computing threats.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console 4.19+ as a user with the
    cluster-admin
    role.
  • You have installed the Red Hat OpenShift Service Mesh Operator 3.2.1+.
  • You have deployed the Istio and IstioCNI resources.
  • You have installed the following CLI tools locally:

    • oc
    • podman
    • curl

Procedure

  • Update the Istio control plane to enable PQC by running the following command:

    $ oc apply -f - <<EOF
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v1.27.8
      namespace: istio-system
      updateStrategy:
        type: InPlace
      values:
        meshConfig:
          accessLogFile: /dev/stdout
          tlsDefaults:
            ecdhCurves:
            - X25519MLKEM768
    EOF
    • spec.values.meshConfig.tlsDefaults.ecdhCurves defines the setting that applies to all non-mesh Transport Layer Security (TLS) connections in your Istio deployment, including:

      • Ingress gateways: TLS connections from external clients.
      • Egress gateways: TLS connections to external services.
      • External service connections: Any TLS connections to services outside the mesh.
    Note

    This setting does not apply to mesh-internal mutual Transport Layer Security (mTLS). Communication between services within the mesh uses the default Istio mTLS configuration.

    • spec.values.meshConfig.tlsDefaults is a mesh-wide setting that applies to all gateways. You cannot enable PQC algorithms for individual workloads. To use different TLS configurations for specific gateways, you must deploy separate control planes, each with its own meshConfig.tlsDefaults settings.
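
    As a quick sanity check after the update, you can read the setting back from the Istio resource; a non-empty result confirms that the resource carries the PQC curve configuration. This checks the resource only, not the live proxy configuration:

    $ oc get istio default -o jsonpath='{.spec.values.meshConfig.tlsDefaults.ecdhCurves}'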

Configure the Istio control plane to enforce a post-quantum cryptography (PQC) compliance policy, enabling quantum-resistant security for service mesh communications.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console 4.19+ as a user with the
    cluster-admin
    role.
  • You have installed the Red Hat OpenShift Service Mesh Operator 3.2.1+.
  • You have deployed the Istio and IstioCNI resources.
  • You have installed the following CLI tools locally:

    • oc
    • podman
    • curl

Procedure

  • Update the Istio control plane to enable PQC by running the following command:

    $ oc apply -f - <<EOF
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v1.27.8
      namespace: istio-system
      updateStrategy:
        type: InPlace
      values:
        pilot:
          env:
            COMPLIANCE_POLICY: "pqc"
    EOF
    • spec.values.pilot.env.COMPLIANCE_POLICY specifies the compliance policy that the Istio control plane enforces. Set the field to pqc to enable PQC.
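
    As an optional check, confirm that the policy reached the control plane deployment. This check assumes the default revision, where the istiod deployment is named istiod; the command should print pqc:

    $ oc -n istio-system get deployment istiod \
      -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="COMPLIANCE_POLICY")].value}'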

Configure the Istio control plane and ztunnel to enforce a post-quantum cryptography (PQC) compliance policy, enabling quantum-resistant security for ambient mode service mesh communications.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console 4.19+ as a user with the
    cluster-admin
    role.
  • You have installed the Red Hat OpenShift Service Mesh Operator 3.2.1+.
  • You have deployed the Istio and IstioCNI resources with ambient mode enabled.
  • You have installed the following CLI tools locally:

    • oc
    • podman
    • curl

Procedure

  • Update the Istio control plane and ztunnel to enable PQC by running the following command:

    $ oc apply -f - <<EOF
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v1.27.8
      namespace: istio-system
      updateStrategy:
        type: InPlace
      values:
        pilot:
          env:
            COMPLIANCE_POLICY: "pqc"
        ztunnel:
          env:
            COMPLIANCE_POLICY: "pqc"
    EOF
    • spec.values.pilot.env.COMPLIANCE_POLICY specifies the compliance policy for the Istio control plane. Set the field to pqc to enable PQC.
    • spec.values.ztunnel.env.COMPLIANCE_POLICY specifies the compliance policy for ztunnel in ambient mode. Set the field to pqc to enable PQC.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.