Chapter 2. Before migrating


If you are moving from Red Hat OpenShift Service Mesh 2.6 to Red Hat OpenShift Service Mesh 3, read this section first. It explains the important differences between the versions, which directly affect how you install and configure OpenShift Service Mesh 3.

If you are a current Red Hat OpenShift Service Mesh user, there are a number of important differences you need to understand between OpenShift Service Mesh 2 and OpenShift Service Mesh 3 before you migrate, including the following:

  • A new Operator
  • Integrations like Observability and Kiali are installed separately
  • New resources: Istio and IstioCNI
  • Scoping of a mesh with discoverySelectors and labels
  • New considerations for sidecar injection
  • Support for multiple control planes
  • Independently managed gateways
  • Explicit Istio OpenShift route creation
  • Canary upgrades
  • Support for Istio multi-cluster topologies
  • Support for Istioctl
  • Change to Kubernetes network policy management
  • Transport layer security (TLS) configuration change

You must be using OpenShift Service Mesh 2.6 to migrate to OpenShift Service Mesh 3.

Red Hat OpenShift Service Mesh 3 is a major update with a feature set closer to the Istio project. Whereas OpenShift Service Mesh 2 was based on the midstream Maistra project, OpenShift Service Mesh 3 is based directly on Istio. This means OpenShift Service Mesh 3 is managed using a different, simplified Operator and provides greater support for the latest stable features of Istio.

This alignment with the Istio project, along with lessons learned in the first two major releases of OpenShift Service Mesh, has resulted in the following changes:

2.1.2.1. From Maistra to Istio

OpenShift Service Mesh 1 and 2 were based on Istio, and included additional functionality that was maintained as part of the midstream Maistra project, but not part of the upstream Istio project. While this provided extra features to OpenShift Service Mesh users, the effort to maintain Maistra meant that OpenShift Service Mesh 2 was usually several releases behind Istio, and did not support major features like multi-cluster deployment. Since the release of OpenShift Service Mesh 1 and 2, Istio has matured to cover most of the use cases addressed by Maistra.

Basing OpenShift Service Mesh 3 directly on Istio ensures that OpenShift Service Mesh 3 supports users on the latest stable Istio features while Red Hat contributes directly to the Istio community on behalf of its customers.

2.1.2.2. OpenShift Service Mesh 3 Operator

OpenShift Service Mesh 3 uses an Operator that is maintained upstream as the Sail Operator in the istio-ecosystem organization on GitHub. The OpenShift Service Mesh 3 Operator is smaller in scope and includes significant changes from the Operator used in OpenShift Service Mesh 2:

  • The Istio resource replaces the ServiceMeshControlPlane resource.
  • The IstioCNI resource manages the Istio Container Network Interface (CNI).
  • Red Hat OpenShift Observability components are installed and configured separately.

2.1.2.3. About Operator and component versioning

All Red Hat OpenShift Service Mesh Operators are versioned and manage at least one underlying component (operand), which is often versioned independently through a custom resource definition (CRD).

In OpenShift Service Mesh 2, the ServiceMeshControlPlane resource managed multiple operands, including Istio, Kiali, and Jaeger. Each component maintained its own version, which resulted in the following three levels of versioning:

  • The Operator version
  • The ServiceMeshControlPlane version
  • The individual component versions

For example, the OpenShift Service Mesh 2.6 Operator managed the 2.6 version of the control plane, which included Istio 1.20, Kiali 1.73, and other component versions. Each Operator version also supported multiple control plane versions. The OpenShift Service Mesh 2.6 Operator supported versions 2.4, 2.5, and 2.6 of the control plane.

OpenShift Service Mesh 3 simplifies versioning by limiting Operator management to the Istio resource. The Istio resource is responsible only for the Istio component and does not manage Kiali or other components. As a result, the Istio resource specifies only the Istio component version.

Each OpenShift Service Mesh release supports the latest available Istio version for that Operator version. For example, OpenShift Service Mesh 3.0.0 supports Istio 1.24.0. While the Operator might contain other Istio versions to support upgrades, product support, including patches for Common Vulnerabilities and Exposures (CVEs), covers only the latest Istio version in a given Operator release. For each Operator release, update to the most recent Istio version available.

2.1.3. New resources in OpenShift Service Mesh 3

Red Hat OpenShift Service Mesh 3 uses the following two new resources:

  • Istio
  • IstioCNI

OpenShift Service Mesh 2 uses a resource called ServiceMeshControlPlane to configure Istio. In OpenShift Service Mesh 3, the ServiceMeshControlPlane resource is replaced with a resource called Istio.

The Istio resource contains a spec.values field that derives its schema from Istio’s Helm chart values. This means that configuration examples from the community Istio documentation can often be applied directly to the OpenShift Service Mesh 3 Istio resource.
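
For example, the following sketch of an Istio resource passes a Helm-style value through spec.values. The resource name, control plane namespace, version, and the pilot setting shown here are illustrative assumptions, and the apiVersion might differ depending on your Operator version:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system     # control plane namespace (assumed)
  version: v1.24.1            # an Istio version supported by the Operator (assumed)
  values:                     # schema follows Istio's Helm chart values
    pilot:
      autoscaleEnabled: false # example Helm value passed through to istiod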

You can view an additional validation schema of the Istio resource by running the following command:

$ oc explain istios.spec.values

2.1.3.2. About Istio control plane versioning

In OpenShift Service Mesh 2.6 and earlier, the version field in the ServiceMeshControlPlane resource specified the control plane version. The version field accepted only minor versions, such as v2.5 or v2.6. The Operator automatically applied new patch versions, such as 2.6.1, without requiring changes to the resource.

OpenShift Service Mesh 3.0 introduces the Istio resource to manage Istio control planes. This resource also includes a version field, but it uses Istio versioning instead of OpenShift Service Mesh versions. The field accepts specific patch versions, such as v1.24.1, which the Operator maintains without applying automatic updates.

To enable automatic patch updates, use a version in the format v1.24-latest. This instructs the Operator to keep the Istio control plane updated with the latest available patch release of Istio 1.24.
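
For example, the following spec fragment (a sketch; the resource name, namespace, and minor version are assumptions) keeps a control plane on the latest 1.24 patch release:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24-latest    # the Operator applies new 1.24.z patch releases automatically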

2.1.3.3. New resource: IstioCNI

The Istio Container Network Interface (CNI) node agent is used to configure traffic redirection for pods in the mesh. It runs as a daemon set on every node with elevated privileges.

In OpenShift Service Mesh 2, the Operator deployed an Istio CNI instance for each minor version of Istio present in the cluster, and pods were automatically annotated during sidecar injection so they picked up the correct Istio CNI. The Istio CNI agent has an independent lifecycle from the Istio control plane and, in some cases, you must upgrade the Istio CNI agent separately.

For these reasons, the OpenShift Service Mesh 3 Operator manages the Istio CNI node agent with a separate resource called IstioCNI. A single instance of this resource is shared by all Istio control planes, which are managed by Istio resources.
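
For example, a minimal IstioCNI resource might look like the following sketch; the namespace and version shown are assumptions, and the apiVersion might differ depending on your Operator version:

apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  namespace: istio-cni    # namespace for the CNI node agent daemon set (assumed)
  version: v1.24.1        # can be upgraded independently of the Istio control planes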

A significant change in Red Hat OpenShift Service Mesh 3 is that the Operator no longer installs and manages observability components such as Prometheus and Grafana with the Istio control plane. It also no longer installs and manages Red Hat OpenShift distributed tracing platform components such as distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection (previously Jaeger and Elasticsearch), or Kiali.

The OpenShift Service Mesh 3 Operator limits its scope to Istio-related resources, with observability components supported and managed by the independent Operators that make up Red Hat OpenShift Observability, such as the following:

  • Logging
  • User workload monitoring
  • Red Hat OpenShift distributed tracing platform

Kiali and the OpenShift Service Mesh Console (OSSMC) plugin are still supported with the Kiali Operator provided by Red Hat.

This simplification greatly reduces the footprint and complexity of OpenShift Service Mesh 3, while providing better, production-grade support for observability through Red Hat OpenShift Observability components.

In OpenShift Service Mesh 2.4, a cluster-wide mode was introduced to allow a mesh to be cluster-scoped, with the option to limit the mesh using an Istio feature called discoverySelectors. Using discoverySelectors limits the Istio control plane’s visibility to a set of namespaces defined with a label selector. This aligned with how community Istio worked, and allowed Istio to manage cluster-level resources. For more information, see "Labels and Selectors".

OpenShift Service Mesh 3 makes all meshes cluster-wide by default. This means that all Istio control planes are cluster-scoped, the ServiceMeshMemberRoll and ServiceMeshMember resources are no longer present, and control planes watch, or discover, the entire cluster by default. You can limit a control plane's discovery of namespaces by using the discoverySelectors feature.
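
For example, the following sketch limits discovery to namespaces that carry a service-mesh=enabled label. The label value and the spec.values.meshConfig path are assumptions based on the Helm-values schema of the Istio resource described earlier:

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.1
  values:
    meshConfig:
      discoverySelectors:        # istiod discovers only namespaces matching these selectors
        - matchLabels:
            service-mesh: enabled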

2.1.6. New considerations for sidecar injection

Red Hat OpenShift Service Mesh 2 supported using pod annotations and labels to configure sidecar injection and there was no need to indicate which control plane a workload belonged to.

With OpenShift Service Mesh 3, even though the Istio control plane discovers a namespace, the workloads in that namespace still require sidecar proxies to be included in the service mesh and to use Istio’s many features.

In OpenShift Service Mesh 3, sidecar injection works the same way as it does for Istio, with pod or namespace labels used to trigger sidecar injection. However, it might be necessary to include a label that indicates which control plane the workload belongs to.

Note

The Istio Project has deprecated pod annotations in favor of labels for sidecar injection.

When an Istio resource is named default and InPlace upgrades are used, there is a single IstioRevision named default, and the istio-injection=enabled label enables sidecar injection.

However, an IstioRevision resource is required to have a name other than default in the following cases:

  • Multiple control plane instances are present.
  • You chose the RevisionBased update strategy, for example to perform canary-style control plane upgrades.

In these cases, you must use a label that indicates which control plane revision the workloads belong to by specifying istio.io/rev=<istiorevision_name>.

These labels can be applied at the workload or namespace level.
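
For example, assuming an application namespace named my-app, either of the following commands (a sketch) enables sidecar injection for new pods in that namespace:

$ oc label namespace my-app istio-injection=enabled              # control plane revision named 'default'
$ oc label namespace my-app istio.io/rev=<istiorevision_name>    # a non-default revision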

You can inspect available revisions by running the following command:

$ oc get istiorevision

2.1.7. Support for multiple control planes

Red Hat OpenShift Service Mesh 3 supports multiple service meshes in the same cluster, but in a different manner than in OpenShift Service Mesh 2. A cluster administrator must create multiple Istio instances and then configure discoverySelectors appropriately to ensure that there is no overlap between mesh namespaces.

As Istio resources are cluster-scoped, they must have unique names to represent unique meshes within the same cluster. The OpenShift Service Mesh 3 Operator uses this unique name to create a resource called IstioRevision with a name in the format of {Istio name} or {Istio name}-{Istio version}.

Each instance of IstioRevision is responsible for managing a single control plane. Workloads are assigned to a specific control plane using Istio’s revision labels of the format istio.io/rev={IstioRevision name}. The name with the version identifier becomes important to support canary-style control plane upgrades.
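
For example, with two hypothetical meshes defined by Istio resources named tenant-a and tenant-b, and assuming the InPlace update strategy in which the revision name equals the Istio resource name, the application namespaces would be labeled with the matching revision names:

$ oc label namespace tenant-a-apps istio.io/rev=tenant-a    # joins the mesh managed by the 'tenant-a' Istio resource
$ oc label namespace tenant-b-apps istio.io/rev=tenant-b    # joins the mesh managed by the 'tenant-b' Istio resource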

2.1.8. Independently managed Istio gateways

In Istio, gateways are used to manage traffic entering (ingress) and exiting (egress) the mesh. Red Hat OpenShift Service Mesh 2 deployed and managed an ingress gateway and an egress gateway with the Service Mesh control plane. Both an ingress gateway and an egress gateway were configured using the ServiceMeshControlPlane resource.

The OpenShift Service Mesh 3 Operator does not create or manage gateways.

Instead, gateways in OpenShift Service Mesh 3 are created and managed independently of the Operator and control plane by using gateway injection or the Kubernetes Gateway API. This provides greater flexibility, ensures that gateways can be fully customized and managed as part of a Red Hat OpenShift GitOps pipeline, and allows gateways to be deployed and managed alongside their applications with the same lifecycle.

This change was made for two reasons:

  • You can start with a gateway configuration that expands over time to meet the more robust needs of a production environment.
  • Gateways are better managed together with their corresponding workloads.

Gateways can still be deployed onto nodes or namespaces independent of applications, such as a centralized gateway node. Istio gateways also remain eligible for deployment on OpenShift Container Platform infrastructure nodes.
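
As an example of the Kubernetes Gateway API approach, an ingress gateway can be declared in an application namespace with a Gateway resource. In this sketch, the gateway name, namespace, and hostname are assumptions:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: bookinfo
spec:
  gatewayClassName: istio              # Istio provisions and manages the gateway pods
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      hostname: "bookinfo.example.com"
      allowedRoutes:
        namespaces:
          from: Same                   # only HTTPRoutes in this namespace can attach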

Next steps

If you are using OpenShift Service Mesh 2.6 and have not migrated from ServiceMeshControlPlane-defined gateways to gateway injection, you must follow the OpenShift Service Mesh 2.x gateway migration procedure before you can move to OpenShift Service Mesh 3.

2.1.9. Explicitly create OpenShift Routes

An OpenShift Route resource allows an application to be exposed with a public URL by using the OpenShift Container Platform Ingress Operator, which manages HAProxy-based Ingress Controllers.

Red Hat OpenShift Service Mesh 2 used Istio OpenShift Routing (IOR), which automatically created and managed OpenShift routes for Istio gateways. While this was convenient because the Operator managed these routes for you, it also caused confusion around ownership because many Route resources are managed by administrators. Istio OpenShift Routing also lacked the ability to configure an independent Route resource, created unnecessary routes, and exhibited unpredictable behavior during updates.

Therefore, in OpenShift Service Mesh 3, if you want a Route to expose an Istio gateway, you must create and manage it yourself. If a Route is not desired, you can instead expose an Istio gateway through a Kubernetes Service of type LoadBalancer.
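
For example, a manually created Route that sends traffic to the Service of an injected ingress gateway might look like the following sketch; the Service name, namespace, hostname, and target port are assumptions:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  host: apps.mesh.example.com      # public hostname served by the gateway (assumed)
  to:
    kind: Service
    name: istio-ingressgateway     # Service in front of the injected gateway deployment (assumed)
  port:
    targetPort: http2              # named port on the gateway Service (assumed)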

2.1.10. Introducing canary updates

Red Hat OpenShift Service Mesh 2 supported only in-place style updates, which created risk for large meshes: after the control plane was updated, all workloads had to move to the new control plane version without a simple way to roll back if something went wrong.

OpenShift Service Mesh 3 retains support for simple in-place style updates, and adds support for canary-style updates of the Istio control plane using Istio’s revision feature.

The Istio resource manages Istio revision labels using the IstioRevision resource. When the Istio resource’s updateStrategy type is set to RevisionBased, it creates Istio revision labels using the Istio resource’s name combined with the Istio version, for example mymesh-v1-21-2.

During an update, a new IstioRevision deploys the new Istio control plane with an updated revision label, for example mymesh-v1-22-0. Workloads can then be migrated between control planes by using the revision label on namespaces or workloads, for example istio.io/rev=mymesh-v1-22-0.
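
For example, moving a namespace to the new control plane revision is a matter of updating its revision label and restarting its workloads. The namespace and revision names in this sketch are assumptions:

$ oc label namespace my-app istio.io/rev=mymesh-v1-22-0 --overwrite
$ oc rollout restart deployment -n my-app    # restart pods so the new control plane injects updated proxies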

Setting your updateStrategy to RevisionBased also has implications for integrations, such as the cert-manager tool, and gateways.

Next steps

Set updateStrategy to RevisionBased to use canary updates. Be aware that this setting also has implications for some integrations with OpenShift Service Mesh, such as the cert-manager tool integration.

2.1.11. Supported multi-cluster topologies

Red Hat OpenShift Service Mesh 2 supported one form of multi-cluster deployment, federation, which was introduced in OpenShift Service Mesh 2.1. In this topology, each cluster maintained its own independent control plane, and services were shared between meshes only on an as-needed basis.

Communication between federated meshes occurs through Istio gateways, so there is no need for Service Mesh control planes to watch remote Kubernetes control planes, as is the case with Istio’s multi-cluster service mesh topologies. Federation is ideal where service meshes are loosely coupled, such as those managed by different administrative teams.

OpenShift Service Mesh 3 introduces support for the following Istio multi-cluster topologies as well:

  • Multi-Primary
  • Primary-Remote
  • External control planes

These topologies effectively stretch a single, unified service mesh across multiple clusters, which is ideal when all clusters involved are managed by the same administrative team. Istio’s multi-cluster topologies are also ideal for implementing high-availability or failover use cases across a commonly managed set of applications.

2.1.12. Support for Istioctl

Red Hat OpenShift Service Mesh 1 and 2 did not include support for Istioctl, the command line utility for the Istio project that includes many diagnostic and debugging utilities. OpenShift Service Mesh 3 introduces support for Istioctl for select commands.

Table 2.1. Supported Istioctl commands

  Command                  Description
  admin                    Manage the control plane (istiod) configuration
  analyze                  Analyze the Istio configuration and print validation messages
  completion               Generate the autocompletion script for the specified shell
  create-remote-secret     Create a secret with credentials to allow Istio to access remote Kubernetes API servers
  help                     Display help about any command
  proxy-config, pc         Retrieve information about the proxy configuration from Envoy (Kubernetes only)
  proxy-status, ps         Retrieve the synchronization status of each Envoy in the mesh
  remote-clusters          List the remote clusters each istiod instance is connected to
  validate, v              Validate the Istio policy and rules files
  version                  Print out build version information
  waypoint                 Manage the waypoint configuration
  ztunnel-config           Update or retrieve the current Ztunnel configuration

Installation and management of Istio is only supported by the OpenShift Service Mesh 3 Operator.


2.1.13. Kubernetes network policy management

By default, Red Hat OpenShift Service Mesh 2 created Kubernetes NetworkPolicy resources with the following behavior:

  • Ensured that applications in the mesh and the control plane could communicate with each other.
  • Restricted ingress for mesh applications to only member projects.

OpenShift Service Mesh 3 does not create these policies. Instead, you must configure the level of isolation required for your environment. Istio provides fine-grained access control of service mesh workloads through Authorization Policies. For more information, see "Authorization Policies".
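
For example, the following AuthorizationPolicy sketch allows traffic to the workloads in a namespace only from sources in that same namespace; the namespace name is an assumption:

apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-same-namespace
  namespace: bookinfo
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["bookinfo"]    # only workloads in this namespace may connect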

In Red Hat OpenShift Service Mesh 2, you created the ServiceMeshControlPlane resource, and enabled mTLS strict mode by setting spec.security.dataPlane.mtls to true.

You were able to set the minimum and maximum TLS protocol versions by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource.

In OpenShift Service Mesh 3, the Istio resource replaces the ServiceMeshControlPlane resource and does not include these settings.

To enable mTLS strict mode in OpenShift Service Mesh 3, you must apply the corresponding PeerAuthentication and DestinationRule resources.
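
For example, mesh-wide strict mTLS can be enabled with a PeerAuthentication resource in the control plane namespace; this sketch assumes istio-system is the control plane namespace:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # placing it in the control plane (root) namespace applies it mesh-wide
spec:
  mtls:
    mode: STRICT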

In OpenShift Service Mesh 3, you can set the minimum TLS protocol version by setting spec.meshConfig.tlsDefaults.minProtocolVersion in your Istio resource. For more information, see "Istio Workload Minimum TLS Version Configuration".

In OpenShift Service Mesh 2 and OpenShift Service Mesh 3, auto mTLS remains enabled by default.

2.2. Premigration checklists

Before you begin

  • You have read "Migrating from Service Mesh 2 to Service Mesh 3".
  • You have read and understood the differences between OpenShift Service Mesh 2 and OpenShift Service Mesh 3.
  • You have reviewed the Migrating references material.
  • You want to migrate from OpenShift Service Mesh 2 to OpenShift Service Mesh 3.
  • You are running OpenShift Service Mesh 2.6.9.
  • You have upgraded your ServiceMeshControlPlane resource to the latest version.
  • If you are using the Kiali Operator provided by Red Hat, you are running the latest version.
  • You have installed the OpenShift Service Mesh 3 Operator. To install the OpenShift Service Mesh 3 Operator, see "Installing OpenShift Service Mesh".
Important

You must complete the following checklists before you can begin migrating your deployment and workloads.

2.2.2. Migrate to explicitly managed routes

Automatic route creation, also known as Istio OpenShift Routing (IOR), is a deprecated feature that is disabled by default for any ServiceMeshControlPlane resource created using OpenShift Service Mesh 2.5 and later. To move from OpenShift Service Mesh 2 to OpenShift Service Mesh 3, you need to migrate from IOR to explicitly-managed routes.

If you already moved to explicitly-managed routes in OpenShift Service Mesh 2, continue to "Migrate to gateway injection".

2.2.3. Migrate to gateway injection

Gateways were controlled by the ServiceMeshControlPlane (SMCP) resource in OpenShift Service Mesh 2. The OpenShift Service Mesh 3 control plane does not manage gateways, so you must migrate from SMCP-defined gateways to gateway injection.
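
For example, a gateway created through gateway injection is an ordinary Deployment whose pod template requests Istio's gateway injection template. The names and namespace in this sketch are assumptions, and a matching Service (and Route, if needed) must be created separately:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway    # use Istio's gateway injection template
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"       # request proxy injection for this pod
    spec:
      containers:
        - name: istio-proxy
          image: auto                         # the injection webhook replaces this image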

2.2.4. Disable network policy management

If you do not want your network policies in place during your migration:

  • ❏ Disable network policy management in the OpenShift Service Mesh 2 ServiceMeshControlPlane resource: spec.security.manageNetworkPolicy=false.
  • ❏ Complete the rest of the checklists.
  • ❏ Migrate your deployment and workloads.
  • ❏ Manually recreate your network policies after you have migrated your workloads.

If you want your network policies in place during your migration:

  • ❏ Manually set up network policies to use during migration.
  • ❏ Disable network policy management in the OpenShift Service Mesh 2 ServiceMeshControlPlane resource: spec.security.manageNetworkPolicy=false.
  • ❏ Complete the rest of the checklists.
  • ❏ Migrate your deployment and workloads.

2.2.5. Disable Grafana in OpenShift Service Mesh 2

Grafana is not supported in OpenShift Service Mesh 3, and must be disabled in your OpenShift Service Mesh 2 ServiceMeshControlPlane.

  • ❏ Disable Grafana in your OpenShift Service Mesh 2 ServiceMeshControlPlane: spec.addons.grafana.enabled=false.

2.2.6. Example resource files

After you have completed the premigration procedures, your OpenShift Service Mesh 2 resources might be similar to the following examples:

2.2.6.1. The ServiceMeshControlPlane resource file

Example ServiceMeshControlPlane resource

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.6  # 1
  security:  # 2
    manageNetworkPolicy: false
  addons:  # 3
    grafana:
      enabled: false
    kiali:
      enabled: false
    prometheus:
      enabled: false
  meshConfig:  # 4
    extensionProviders:
      - name: prometheus
        prometheus: {}
      - name: otel
        opentelemetry:
          port: 4317
          service: otel-collector.istio-system.svc.cluster.local
  gateways:  # 5
    enabled: false
    openshiftRoute:
      enabled: false
  mode: MultiTenant  # 6
  tracing:  # 7
    type: None

1. Update your ServiceMeshControlPlane resource to the latest OpenShift Service Mesh version.
2. Disable network policy management.
3. Disable all resources in the addons stanza.
4. Your ServiceMeshControlPlane resource is configured to use external metrics and tracing providers.
5. Disable managed gateways.
6. Set to either MultiTenant or ClusterWide.
7. Disable tracing.

2.2.6.2. Telemetry resource file

The Telemetry resource file is located in your root namespace. The following example uses istio-system as the root namespace.

Example Telemetry resource

apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  metrics:  # 1
    - providers:
        - name: prometheus
  tracing:  # 2
    - providers:
        - name: otel

1. Specify your metrics provider. The name field must match what is specified in your ServiceMeshControlPlane resource in the spec.meshConfig.extensionProviders field.
2. Specify your tracing provider. The name field must match what is specified in your ServiceMeshControlPlane resource in the spec.meshConfig.extensionProviders field.

2.2.6.3. Kiali resource file

Example Kiali resource

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default  # 1
  external_services:
    prometheus:  # 2
      auth:
        type: bearer
        use_kiali_token: true
      thanos_proxy:
        enabled: true
      url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
    tracing:  # 3
      enabled: true
      provider: tempo
      use_grpc: false
      internal_url: http://tempo-sample-query-frontend.tempo:3200
      external_url: https://tempo-sample-query-frontend-tempo.apps-crc.testing
    grafana:  # 4
      enabled: false

1. You can use the default value of the version parameter if you install OpenShift Service Mesh 3 before updating Kiali. The default version is compatible with both 2.6 and 3.0 control planes.
2. Configure Kiali to use external Prometheus.
3. Configure Kiali to use the external tracing store.
4. Disable the Grafana configuration. Grafana is not supported with OpenShift Service Mesh 3.0.

2.2.7. Find your deployment model

Run the following command to find your deployment model:

$ oc get smcp <smcp-name> -n <smcp-namespace> -o jsonpath='{.spec.mode}'
Note

If you did not set a value for the .spec.mode parameter in your ServiceMeshControlPlane resource, your deployment is multitenant.

2.2.8. Migrate based on your deployment model

You must check whether your deployment uses the cert-manager tool. If you are not using the cert-manager tool, you are ready to migrate your deployment. If you are using the cert-manager tool with OpenShift Service Mesh 2, you must perform additional configuration before you can start migrating your deployments and workloads.

Procedure

  1. Use one of the following ways to check if you are using the cert-manager tool with your deployment:

    1. Inspect your ServiceMeshControlPlane resource to verify that the spec.security.certificateAuthority.type parameter is set to cert-manager:

      apiVersion: maistra.io/v2
      kind: ServiceMeshControlPlane
      metadata:
        name: basic
        namespace: istio-system
      spec:
        ...
        security:
          certificateAuthority:
            cert-manager:
              address: cert-manager-istio-csr.istio-system.svc:443
            type: cert-manager
          dataPlane:
            mtls: true
          identity:
            type: ThirdParty
          manageNetworkPolicy: false
    2. Run the following command to verify that the spec.security.certificateAuthority.type parameter is set to cert-manager:

      $ oc get smcp <smcp-name> -n <smcp-namespace> -o jsonpath='{.spec.security.certificateAuthority.type}'

      Example output

      cert-manager

Next steps for migrating with the cert-manager tool

You must complete some additional configuration before you can start migrating your deployments.

In Red Hat OpenShift Service Mesh 2, network policies are created by default when the spec.security.manageNetworkPolicy field is set to true in the ServiceMeshControlPlane resource. During the migration to OpenShift Service Mesh 3, these policies are removed.

It is recommended to re-create your network policies after you have migrated your deployment and workloads. However, if your security policies require you to keep your network policies, you must re-create them first, and then set the spec.security.manageNetworkPolicy field to false as outlined in the migration checklists.

You can set up network policies to use during your migration.

Important
  • While you re-create network policies during the migration from OpenShift Service Mesh 2 to OpenShift Service Mesh 3, both control planes must have access to all workloads, and all workloads must have access to both control planes.
  • The maistra.io/member-of label is removed from the namespaces during migration.

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have the OpenShift Service Mesh 2.6.9 Operator installed.
  • You have the ServiceMeshControlPlane 2.6 resource installed.
  • In OpenShift Service Mesh 2, you have set spec.security.manageNetworkPolicy=true in your ServiceMeshControlPlane resource.
  • You have deployed the bookinfo and bookinfo2 applications.

Procedure

  1. Label namespaces by running the following command:

    $ oc label namespace <app_namespace> service-mesh=enabled
    Note

    Use a label scoped specifically to your mesh that you can reuse for discovery selectors.

  2. Create your network policies by using the following NetworkPolicy example configurations:

    Example of an Istiod network policy in a mesh namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istiod-basic
      namespace: istio-system
    spec:
      ingress:
        - {}
      podSelector:
        matchLabels:
          app: istiod
          istio.io/rev: basic
      policyTypes:
        - Ingress

    Example of an expose route policy in a mesh namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: expose-route-basic
      namespace: istio-system
    spec:
      podSelector:
        matchLabels:
          maistra.io/expose-route: "true"
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
      policyTypes:
        - Ingress

    Example of a default mesh network policy in a mesh namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-mesh
      namespace: istio-system
    spec:
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  service-mesh: enabled
      podSelector: {}
      policyTypes:
        - Ingress

    Example expose route network policy in the bookinfo namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-expose-route
      namespace: bookinfo
    spec:
      podSelector:
        matchLabels:
          maistra.io/expose-route: "true"
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
      policyTypes:
        - Ingress

    Example mesh network policy in the bookinfo namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-mesh
      namespace: bookinfo
    spec:
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  service-mesh: enabled
      podSelector: {}
      policyTypes:
        - Ingress

    Example expose route network policy in the bookinfo2 namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-expose-route
      namespace: bookinfo2
    spec:
      podSelector:
        matchLabels:
          maistra.io/expose-route: "true"
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
      policyTypes:
        - Ingress

    Example mesh network policy in the bookinfo2 namespace

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-mesh
      namespace: bookinfo2
    spec:
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  service-mesh: enabled
      podSelector: {}
      policyTypes:
        - Ingress

  3. Disable network policies in OpenShift Service Mesh 2 by setting the spec.security.manageNetworkPolicy field to false in your ServiceMeshControlPlane resource.

    Note

    Setting the spec.security.manageNetworkPolicy field to false in your ServiceMeshControlPlane resource removes the network policies created by default in OpenShift Service Mesh 2.

  4. Find your current active revision by running the following command:

    $ oc get istios <istio_name>

    Example output

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    istio-tenant-a   1           1       0        istio-tenant-a    Healthy   v1.24.3   30s

  5. Copy the active revision name from the output to use for your istio.io/rev label in your second Istiod network policy for OpenShift Service Mesh 3.
  6. Create a second Istiod network policy for OpenShift Service Mesh 3 by using the following NetworkPolicy example configuration:

    Sample policy

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-istiod-v3
      namespace: istio-system
    spec:
      ingress:
        - {}
      podSelector:
        matchLabels:
          app: istiod
          istio.io/rev: istio-tenant-a  # 1
      policyTypes:
        - Ingress

    1. Must match your current active revision name.

Next steps

  • In OpenShift Service Mesh 2, set the spec.security.manageNetworkPolicy field to false in your ServiceMeshControlPlane resource, and continue with the migration checklists.

The Kiali Operator provided by Red Hat introduces the following changes with Red Hat OpenShift Service Mesh 3:

  • New topology graphs
  • Deprecated configuration settings
  • Renamed configuration settings

2.4.1. New topology graphs

The Traffic Graph page has been reorganized and rebuilt by using PatternFly topology, with a new topology view showcasing the mesh infrastructure.

2.4.2. Deprecated configuration settings

To control which namespaces are accessible or visible to users in OpenShift Service Mesh 3, Kiali relies on discoverySelectors.

By default, deployment.cluster_wide_access=true is enabled, granting Kiali cluster-wide access to all namespaces in the local cluster. If you are migrating a cluster-wide deployment with Kiali, you must remove the following deprecated and unavailable configuration settings from your Kiali custom resource (CR):

  • spec.deployment.accessible_namespaces
  • api.namespaces.exclude
  • api.namespaces.include
  • api.namespaces.label_selector_exclude
  • api.namespaces.label_selector_include

If you are using discovery selectors in Istio to restrict the namespaces that Istiod watches, then they must match the discovery selectors in your Kiali CR.

2.4.3. Renamed configuration settings

The following configuration settings have been renamed:

  Old configuration                           New configuration
  external_service.grafana.in_cluster_url     external_service.grafana.internal_url
  external_service.grafana.url                external_service.grafana.external_url
  external_service.tracing.in_cluster_url     external_service.tracing.internal_url
  external_service.tracing.url                external_service.tracing.external_url

These changes reflect evolving capabilities and configuration standards of Kiali within OpenShift Service Mesh 3.
