Migrating from Service Mesh 2 to Service Mesh 3


Red Hat OpenShift Service Mesh 3.0

Migrating from Service Mesh 2 to Service Mesh 3 Hub

Red Hat OpenShift Documentation Team

Abstract

This content provides an overview for migrating from OpenShift Service Mesh 2 to OpenShift Service Mesh 3.

Transition your environment from OpenShift Service Mesh 2.6 to 3.0 by following a guided migration path that includes pre-migration readiness checks, workload relocation strategies, and best practices for maintaining service availability throughout the upgrade process.

The content in this migration section applies only in the following cases:

  • You are an existing Red Hat OpenShift Service Mesh user running OpenShift Service Mesh 2.6.14.
  • You want to move to OpenShift Service Mesh 3.0.
Important

If you are not running OpenShift Service Mesh 2.6.14, you must update before you can continue. For more information, see "Upgrading Service Mesh".

During the migration process, you might need to reference, or be directed to, OpenShift Service Mesh 2.x content. It can be helpful to open "OpenShift Service Mesh 2.x" in a new tab or window for easier reference, especially when you move between OpenShift Service Mesh 2 ServiceMeshControlPlane resource content and OpenShift Service Mesh 3 Istio resource content.

1.2. Recommendations for migrating

Consider the following recommendations to limit the risk of misconfigurations or possible conflicts:

  • If you have automated updates configured in the OpenShift Container Platform web console, change them to Manual. By switching to manual updates, you can prevent updates to the OpenShift Service Mesh 2 Operator and control planes during your migration.
  • Keep service mesh configuration changes, such as traffic management or security changes, or the addition of new workloads or gateways, to a minimum during the migration.
Note

If you need to add a new workload namespace during the migration, it must be managed by the OpenShift Service Mesh 3 control plane and labeled with maistra.io/ignore-namespace: "true" to avoid conflicts between the OpenShift Service Mesh 3 control plane and the OpenShift Service Mesh 2 ServiceMeshControlPlane resource.

  • Finish the migration without unnecessary delays.
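As a sketch of the labeling described in the preceding note, a new workload namespace added during the migration might look like the following. The namespace name is hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: new-workloads # hypothetical namespace added during the migration
  labels:
    # Enable sidecar injection by the OpenShift Service Mesh 3 control plane
    # (assumes the default revision; use istio.io/rev=<name> otherwise).
    istio-injection: enabled
    # Prevent the OpenShift Service Mesh 2 ServiceMeshControlPlane resource
    # from claiming the namespace.
    maistra.io/ignore-namespace: "true"
```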

1.3. Using the migration guides

The migration guides are designed to help you move from OpenShift Service Mesh 2.6.14 to OpenShift Service Mesh 3.0, based on your deployment model:

  • Multitenant
  • Multitenant with cert-manager
  • Cluster-wide
  • Cluster-wide with cert-manager

The migration guides use the bookinfo application as an example for demonstration purposes.

You can also migrate gateways.

1.4. Premigration checklists

Before you can begin your migration, complete the premigration checklists.

These checklists include steps for managing your network policies and configuration updates to set up add-ons such as the Kiali Operator provided by Red Hat and the distributed tracing platform (Tempo).

1.5. Migrating your deployment and workloads

After you complete the premigration checklists, you can migrate your workloads and gateways, based on your deployment model:

  • Multitenant
  • Multitenant with cert-manager
  • Cluster-wide
  • Cluster-wide with cert-manager

1.6. Completing your migration

Depending on your deployment model, you might need to take extra steps to complete the migration. For example, when you migrate with the cert-manager tool, you need to take additional steps before you can remove OpenShift Service Mesh 2 resources.

After you complete your migration, you can add new workloads, re-create network policies, and remove OpenShift Service Mesh 2 resources.

Chapter 2. Before migrating

Identify the architectural and functional differences between Red Hat OpenShift Service Mesh 2.6 and Red Hat OpenShift Service Mesh 3 to prepare for the specific installation and configuration changes required for a successful migration.

If you are a current Red Hat OpenShift Service Mesh user, there are several important differences you need to understand between OpenShift Service Mesh 2 and OpenShift Service Mesh 3 before you migrate, including the following:

  • A new Operator
  • Integrations such as Observability and Kiali are installed separately
  • New resources: Istio and IstioCNI
  • Scoping of a mesh with discoverySelectors and labels
  • New considerations for sidecar injection
  • Support for multiple control planes
  • Independently managed gateways
  • Explicit Istio OpenShift route creation
  • Canary upgrades
  • Support for Istio multi-cluster topologies
  • Support for Istioctl
  • Change to Kubernetes network policy management
  • Transport layer security (TLS) configuration change
  • DNS capture configuration for ServiceEntry resources

You must be using OpenShift Service Mesh 2.6 to migrate to OpenShift Service Mesh 3.

Red Hat OpenShift Service Mesh 3 is a major update with a feature set closer to the "Istio project". Whereas OpenShift Service Mesh 2 was based on the midstream Maistra project, OpenShift Service Mesh 3 is based directly on Istio. This means OpenShift Service Mesh 3 is managed using a different, simplified Operator and provides greater support for the latest stable features of Istio.

This alignment with the Istio project along with lessons learned in the first two major releases of OpenShift Service Mesh have resulted in the following changes:

2.1.2.1. From Maistra to Istio

OpenShift Service Mesh 1 and 2 were based on Istio, and included additional functionality that was maintained as part of the midstream Maistra project, but not part of the upstream Istio project. While this provided extra features to OpenShift Service Mesh users, the effort to support Maistra meant that OpenShift Service Mesh 2 was usually several releases behind Istio, and did not support major features such as multi-cluster deployment. Since the release of OpenShift Service Mesh 1 and 2, Istio has matured to cover most of the use cases addressed by Maistra.

Basing OpenShift Service Mesh 3 directly on Istio ensures that OpenShift Service Mesh 3 supports users on the latest stable Istio features while Red Hat contributes directly to the Istio community on behalf of its customers.

2.1.2.2. OpenShift Service Mesh 3 Operator

OpenShift Service Mesh 3 uses an Operator that is maintained upstream as the Sail Operator in the istio-ecosystem organization on GitHub. The OpenShift Service Mesh 3 Operator is smaller in scope and includes significant changes from the Operator used in OpenShift Service Mesh 2:

  • The Istio resource replaces the ServiceMeshControlPlane resource.
  • The IstioCNI resource manages the Istio Container Network Interface (CNI).
  • Red Hat OpenShift Observability components are installed and configured separately.
2.1.2.3. About Operator and component versioning

All Red Hat OpenShift Service Mesh Operators use versioning and manage at least one underlying component (operand), which is often versioned independently through a custom resource definition (CRD).

In OpenShift Service Mesh 2, the ServiceMeshControlPlane resource managed many operands, including Istio, Kiali, and Jaeger. Each component maintained its own version, which resulted in the following three levels of versioning:

  • The Operator version
  • The ServiceMeshControlPlane version
  • The individual component versions

For example, the OpenShift Service Mesh 2.6 Operator managed the 2.6 version of the control plane, which included Istio 1.20, Kiali 1.73, and other component versions. Each Operator version also supported multiple control plane versions. The OpenShift Service Mesh 2.6 Operator supported versions 2.4, 2.5, and 2.6 of the control plane.

OpenShift Service Mesh 3 simplifies versioning by limiting Operator management to the Istio resource. The Istio resource is responsible only for the Istio component and does not manage Kiali or other components. So, the Istio resource specifies only the Istio component version.

Each OpenShift Service Mesh release supports the latest available Istio version for that Operator version. For example, OpenShift Service Mesh 3.0.0 supports Istio 1.24.0. While the Operator might contain other Istio versions to support upgrades, product support, including patches for Common Vulnerabilities and Exposures (CVEs), covers only the latest Istio version in a given Operator release. For each Operator release, update to the most recent Istio version available.

2.1.3. New resources in OpenShift Service Mesh 3

Red Hat OpenShift Service Mesh 3 uses the following two new resources:

  • Istio
  • IstioCNI

OpenShift Service Mesh 2 uses a resource called ServiceMeshControlPlane to configure Istio. In OpenShift Service Mesh 3, the ServiceMeshControlPlane resource is replaced with a resource called Istio.

The Istio resource contains a spec.values field that derives its schema from Istio’s Helm chart values. This means that configuration examples from the community Istio documentation can often be applied directly to the OpenShift Service Mesh 3 Istio resource.

You can view an additional validation schema of the Istio resource by running the following command:

$ oc explain istios.spec.values
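Because spec.values follows Istio's Helm chart schema, a minimal Istio resource carrying community Helm-style values might look like the following sketch. The sailoperator.io/v1 API version, the resource name, and the istio-system namespace are assumptions:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.0
  values:
    # Helm-style values from community Istio documentation go here.
    meshConfig:
      accessLogFile: /dev/stdout # example: enable Envoy access logs
```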
2.1.3.2. About Istio control plane versioning

In OpenShift Service Mesh 2.6 and earlier, the version field in the ServiceMeshControlPlane resource specified the control plane version. The version field accepted only minor versions, such as v2.5 or v2.6. The Operator automatically applied new patch versions, such as 2.6.1, without requiring changes to the resource.

OpenShift Service Mesh 3.0 introduces the Istio resource to manage Istio control planes. This resource also includes a version field, but it uses Istio versioning instead of OpenShift Service Mesh versions. The field accepts specific patch versions, such as v1.24.1, which the Operator maintains without applying automatic updates.

To enable automatic patch updates, use a version in the format v1.24-latest. This instructs the Operator to keep the Istio control plane updated with the latest available patch release of Istio 1.24.
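The following sketch shows both version formats side by side in an Istio resource; the API version, resource name, and namespace are assumptions:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  # Pin an exact patch release (the Operator applies no automatic updates):
  # version: v1.24.1
  # Or track the latest available 1.24 patch release automatically:
  version: v1.24-latest
```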

2.1.3.3. New resource: IstioCNI

The Istio Container Network Interface (CNI) node agent is used to configure traffic redirection for pods in the mesh. It runs as a daemon set on every node with elevated privileges.

In OpenShift Service Mesh 2, the Operator deployed an Istio CNI instance for each minor version of Istio present in the cluster, and pods were automatically annotated during sidecar injection so they picked up the correct Istio CNI. The Istio CNI agent has an independent lifecycle from the Istio control plane and, in some cases, you must upgrade the Istio CNI agent separately.

For these reasons, the OpenShift Service Mesh 3 Operator manages the Istio CNI node agent with a separate resource called IstioCNI. A single instance of this resource is shared by all Istio control planes, which are managed by Istio resources.
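A minimal IstioCNI resource might look like the following sketch; the API version, namespace, and Istio version are assumptions:

```yaml
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default # a single, cluster-wide instance shared by all control planes
spec:
  namespace: istio-cni
  version: v1.24.0
```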

A significant change in Red Hat OpenShift Service Mesh 3 is that the Operator no longer installs and manages observability components such as Prometheus and Grafana with the Istio control plane. It also no longer installs and manages Red Hat OpenShift distributed tracing platform components such as distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection (previously Jaeger and Elasticsearch), or Kiali.

The OpenShift Service Mesh 3 Operator limits its scope to Istio-related resources, with observability components supported and managed by the independent Operators that make up Red Hat OpenShift Observability, such as the following:

  • Logging
  • User workload monitoring
  • Red Hat OpenShift distributed tracing platform

Kiali and the OpenShift Service Mesh Console (OSSMC) plugin are still supported with the Kiali Operator provided by Red Hat.

This simplification greatly reduces the footprint and complexity of OpenShift Service Mesh 3, while providing better, production-grade support for observability through Red Hat OpenShift Observability components.

In OpenShift Service Mesh 2.4, a cluster-wide mode was introduced to allow a mesh to be cluster-scoped, with the option to limit the mesh by using an Istio feature called discoverySelectors.

Using discoverySelectors limits the Istio control plane’s visibility to a set of namespaces defined with a label selector. This aligned with how community Istio worked, and allowed Istio to manage cluster-level resources. For more information, see "Labels and Selectors".

OpenShift Service Mesh 3 makes all meshes cluster-wide by default. Istio control planes are now cluster-scoped resources, the ServiceMeshMemberRoll and ServiceMeshMember resources are no longer present, and control planes watch, or discover, the entire cluster by default. You can limit the control plane’s discovery of namespaces by using the discoverySelectors feature.

2.1.6. New considerations for sidecar injection

Red Hat OpenShift Service Mesh 2 supported using pod annotations and labels to configure sidecar injection and there was no need to indicate which control plane a workload belonged to.

With OpenShift Service Mesh 3, even though the Istio control plane discovers a namespace, the workloads in that namespace still require sidecar proxies in order to be included in the service mesh and to use Istio’s many features.

In OpenShift Service Mesh 3, sidecar injection works the same way as it does for Istio, with pod or namespace labels used to trigger sidecar injection. However, it might be necessary to include a label that indicates which control plane the workload belongs to.

Note

The Istio Project has deprecated pod annotations in favor of labels for sidecar injection.

When an Istio resource has the name default and InPlace upgrades are used, there is a single IstioRevision resource named default, and the istio-injection=enabled label is used for sidecar injection.

However, an IstioRevision resource is required to have a different name in the following cases:

  • Multiple control plane instances are present.
  • A RevisionBased, canary-style control plane upgrade is in progress.

If multiple control plane instances are running, or you chose the RevisionBased update strategy during your OpenShift Service Mesh 3 installation, the IstioRevision resource must have a name other than default. In that case, you must use a label that indicates which control plane revision the workloads belong to by specifying istio.io/rev=<istiorevision_name>.

These labels can be applied at the workload or namespace level.

You can inspect available revisions by running the following command:

$ oc get istiorevision
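For example, a namespace pinned to a named revision might carry a label like the following; the namespace and revision names are hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app # hypothetical application namespace
  labels:
    # Must match an IstioRevision name reported by `oc get istiorevision`.
    istio.io/rev: default-v1-24-0
```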

2.1.7. Support for multiple control planes

Red Hat OpenShift Service Mesh 3 supports multiple service meshes in the same cluster, but in a different manner than in OpenShift Service Mesh 2. A cluster administrator must create multiple Istio instances and then configure discoverySelectors appropriately to ensure that there is no overlap between mesh namespaces.

As Istio resources are cluster-scoped, they must have unique names to represent unique meshes within the same cluster. The OpenShift Service Mesh 3 Operator uses this unique name to create a resource called IstioRevision with a name in the format of {Istio name} or {Istio name}-{Istio version}.

Each instance of IstioRevision is responsible for managing a single control plane. Workloads are assigned to a specific control plane by using Istio’s revision labels of the format istio.io/rev={IstioRevision name}. The name with the version identifier becomes important to support canary-style control plane upgrades.
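As a sketch, one of two non-overlapping meshes might be scoped with discoverySelectors as follows; the API version and all names are hypothetical:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: mesh-one # unique name; becomes part of the IstioRevision name
spec:
  namespace: istio-system-one
  version: v1.24.0
  values:
    meshConfig:
      discoverySelectors:
        # Discover only namespaces labeled for this mesh.
        - matchLabels:
            mesh: mesh-one
```

A second Istio resource, for example mesh-two, would use its own control plane namespace and select a different label value, keeping the two meshes disjoint.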

2.1.8. Independently managed Istio gateways

In Istio, gateways are used to manage traffic entering (ingress) and exiting (egress) the mesh. Red Hat OpenShift Service Mesh 2 deployed and managed an ingress gateway and an egress gateway with the Service Mesh control plane. Both an ingress gateway and an egress gateway were configured using the ServiceMeshControlPlane resource.

The OpenShift Service Mesh 3 Operator does not create or manage gateways.

Instead, gateways in OpenShift Service Mesh 3 are created and managed independently of the Operator and control plane by using gateway injection or the Kubernetes Gateway API. This provides greater flexibility, ensures that gateways can be fully customized and managed as part of a Red Hat OpenShift GitOps pipeline, and allows gateways to be deployed and managed alongside their applications with the same lifecycle.

This change was made for two reasons:

  • To start with a gateway configuration that can expand over time to meet the more robust needs of a production environment.
  • To manage gateways together with their corresponding workloads.

Gateways can still be deployed onto nodes or namespaces independent of applications, for example, onto a centralized gateway node. Istio gateways also remain eligible to be deployed on OpenShift Container Platform infrastructure nodes.
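A gateway created through gateway injection is an ordinary Deployment whose pod template the injector fills in. The following sketch follows the community Istio gateway-injection pattern; the names and namespace are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress # hypothetical gateway namespace
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        # Use the gateway injection template instead of the sidecar template.
        inject.istio.io/templates: gateway
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: istio-proxy
          image: auto # the injector replaces this placeholder with the proxy image
```

A matching Service, and typically a Route or Gateway API resource, completes the setup.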

Note

If you are using OpenShift Service Mesh 2.6, and have not migrated from ServiceMeshControlPlane defined gateways to gateway injection, then you must follow the OpenShift Service Mesh 2.x gateway migration procedure before you can move to OpenShift Service Mesh 3.

2.1.9. Explicitly create OpenShift Routes

An OpenShift Route resource allows an application to be exposed with a public URL by using the OpenShift Container Platform Ingress Operator, which manages HAProxy-based Ingress Controllers.

Red Hat OpenShift Service Mesh 2 used Istio OpenShift Routing (IOR), which automatically created and managed OpenShift routes for Istio gateways. While this was convenient because the Operator managed these routes for you, it caused confusion around ownership, because many Route resources are managed by administrators. Istio OpenShift Routing also lacked the ability to configure an independent Route resource, created unnecessary routes, and exhibited unpredictable behavior during updates.

Therefore, in OpenShift Service Mesh 3, when a Route is required to expose an Istio gateway, you must create and manage it manually. If a route is not required, you can instead expose an Istio gateway through a Kubernetes service of type LoadBalancer.
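For example, a manually managed Route for an injected ingress gateway might look like the following sketch; the hostname, namespace, service name, and port name are assumptions:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress # namespace containing the gateway Service
spec:
  host: bookinfo.apps.example.com # hypothetical public hostname
  port:
    targetPort: http2 # port name on the gateway Service
  to:
    kind: Service
    name: istio-ingressgateway
```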

2.1.10. Introducing canary updates

Red Hat OpenShift Service Mesh 3 addresses the risks of traditional in-place updates by providing revision-based update strategies that allow for incremental workload transitions and simplified rollbacks.

OpenShift Service Mesh 3 retains support for simple in-place style updates, and adds support for canary-style updates of the Istio control plane by using Istio’s revision feature.

The Istio resource manages Istio revision labels by using the IstioRevision resource. When the Istio resource’s updateStrategy type is set to RevisionBased, it creates Istio revision labels by using the Istio resource’s name combined with the Istio version, for example mymesh-v1-21-2.

During an update, a new IstioRevision deploys the new Istio control plane with an updated revision label, for example mymesh-v1-22-0. Workloads can then be migrated between control planes by using the revision label on namespaces or workloads, for example istio.io/rev=mymesh-v1-22-0.
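Migrating a namespace to the new control plane is then a label change; the namespace name below is hypothetical, and the revision name matches the example above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app # hypothetical namespace being migrated
  labels:
    # Point workloads at the new control plane revision.
    istio.io/rev: mymesh-v1-22-0
```

After updating the label, restart the workloads in the namespace, for example with oc rollout restart deployment -n my-app, so that re-injected sidecars connect to the new control plane.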

Setting your updateStrategy to RevisionBased also has implications for integrations, such as the cert-manager tool, and gateways.

2.1.11. Supported multi-cluster topologies

Red Hat OpenShift Service Mesh 2 supported one form of multi-cluster deployment, federation, which was introduced in OpenShift Service Mesh 2.1. In this topology, each cluster maintains its own independent control plane, with services shared between meshes only on an as-needed basis.

Communication between federated meshes occurs through Istio gateways, so there is no need for Service Mesh control planes to watch remote Kubernetes control planes, as is the case with Istio’s multi-cluster service mesh topologies. Federation is ideal where service meshes are loosely coupled, such as meshes managed by different administrative teams.

OpenShift Service Mesh 3 introduces support for the following Istio multi-cluster topologies as well:

  • Multi-Primary
  • Primary-Remote
  • External control planes

These topologies effectively stretch a single, unified service mesh across multiple clusters, which is ideal when all clusters involved are managed by the same administrative team. Istio’s multi-cluster topologies are also ideal for implementing high-availability or failover use cases across a commonly managed set of applications.

2.1.12. Support for Istioctl

Red Hat OpenShift Service Mesh 1 and 2 did not include support for Istioctl, the command line utility for the Istio project that includes many diagnostic and debugging utilities. OpenShift Service Mesh 3 introduces support for Istioctl for select commands.

The following commands are supported in OpenShift Service Mesh 3:

  • admin: Manage the control plane (istiod) configuration
  • analyze: Analyze the Istio configuration and print validation messages
  • completion: Generate the autocompletion script for the specified shell
  • create-remote-secret: Create a secret with credentials to allow Istio to access remote Kubernetes API servers
  • help: Display help about any command
  • proxy-config, pc: Retrieve information about the proxy configuration from Envoy (Kubernetes only)
  • proxy-status, ps: Retrieve the synchronization status of each Envoy in the mesh
  • remote-clusters: List the remote clusters each istiod instance is connected to
  • validate, v: Validate the Istio policy and rules files
  • version: Print build version information
  • waypoint: Manage the waypoint configuration
  • ztunnel-config: Update or retrieve the current Ztunnel configuration

Installation and management of Istio is only supported by the OpenShift Service Mesh 3 Operator.

2.1.13. Kubernetes network policy management

By default, Red Hat OpenShift Service Mesh 2 created Kubernetes NetworkPolicy resources with the following behavior:

  • Ensured network applications and the control plane could communicate with each other.
  • Restricted ingress for mesh applications to only member projects.

OpenShift Service Mesh 3 does not create these policies. Instead, you must configure the level of isolation required for your environment. Istio provides fine-grained access control of service mesh workloads through authorization policies. For more information, see "Authorization Policies".
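As a sketch, an AuthorizationPolicy that approximates the old restriction of ingress to the same project might look like the following; the namespace is hypothetical:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-same-namespace
  namespace: bookinfo # hypothetical workload namespace
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            # Allow traffic only from workloads in the same namespace.
            namespaces: ["bookinfo"]
```

Unlike a NetworkPolicy, this is enforced by the sidecar proxies, so it applies only to workloads that are part of the mesh.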

In Red Hat OpenShift Service Mesh 2, you created the ServiceMeshControlPlane resource, and enabled mTLS strict mode by setting spec.security.dataPlane.mtls to true.

You were able to set the minimum and maximum TLS protocol versions by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource.

In OpenShift Service Mesh 3, the Istio resource replaces the ServiceMeshControlPlane resource and does not include these settings.

To enable mTLS strict mode in OpenShift Service Mesh 3, you must apply the corresponding PeerAuthentication and DestinationRule resources.
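A minimal mesh-wide sketch of those two resources, assuming istio-system is the control plane namespace:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system # applying in the control plane namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: default-mtls
  namespace: istio-system
spec:
  host: "*.local" # all services in the mesh
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL # clients present Istio-issued certificates
```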

In OpenShift Service Mesh 3, you can set the minimum TLS protocol version by setting spec.values.meshConfig.tlsDefaults.minProtocolVersion in your Istio resource. For more information, see "Istio Workload Minimum TLS Version Configuration".
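In the Istio resource, meshConfig settings are supplied as Helm-style values under spec.values, so a sketch of the minimum TLS version setting might look like this; the API version and names are assumptions:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.0
  values:
    meshConfig:
      tlsDefaults:
        minProtocolVersion: TLSV1_3 # workloads accept TLS 1.3 and later only
```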

In OpenShift Service Mesh 2 and OpenShift Service Mesh 3, auto mTLS remains enabled by default.

To maintain access to external services when migrating to Red Hat OpenShift Service Mesh 3.0, you must explicitly enable DNS capture in the proxy metadata settings.

This is required for any ServiceEntry resources that rely on DNS resolution. Failure to enable this feature results in application errors such as Name or service not known.

OpenShift Service Mesh 2.6 enabled DNS capture by default to support federation, which did not align with the upstream Istio project. OpenShift Service Mesh 3.0 removes this default configuration and aligns with the upstream project’s multicluster topologies.

To configure DNS capture in OpenShift Service Mesh 3.0, set the ISTIO_META_DNS_AUTO_ALLOCATE and ISTIO_META_DNS_CAPTURE fields to true in the spec.values.meshConfig.defaultConfig.proxyMetadata path of your Istio resource.
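A sketch of that setting in an Istio resource; the API version, resource name, and namespace are assumptions:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.0
  values:
    meshConfig:
      defaultConfig:
        proxyMetadata:
          # Required for ServiceEntry resources that rely on DNS resolution.
          ISTIO_META_DNS_CAPTURE: "true"
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
```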

Note

The equivalent of spec.values.meshConfig.defaultConfig.proxyMetadata in OpenShift Service Mesh 2.6 was spec.proxy.runtime.container.env.

2.2. Premigration checklists

Complete these premigration checklists to prepare your OpenShift Service Mesh 2 environment for an upgrade by disabling traditional add-ons, migrating gateway configurations, and validating resource compatibility for OpenShift Service Mesh 3.

2.2.1. Before you begin

Verify that your environment meets the following requirements, version dependencies, and operator installations necessary to begin the migration from OpenShift Service Mesh 2 to OpenShift Service Mesh 3.

  • You have read "Migrating from Service Mesh 2 to Service Mesh 3".
  • You have read and understand the "Differences between OpenShift Service Mesh 2 and OpenShift Service Mesh 3" section.
  • You have reviewed the "Migrating references" section.
  • You want to migrate from OpenShift Service Mesh 2 to OpenShift Service Mesh 3.
  • You are running OpenShift Service Mesh 2.6.14.
  • You have upgraded your ServiceMeshControlPlane resource to the latest version.
  • If you are using the Kiali Operator provided by Red Hat, you are running the latest version.
  • You have installed the OpenShift Service Mesh Operator 3. To install the OpenShift Service Mesh 3 Operator, see "Installing OpenShift Service Mesh".
Important

You must complete the following checklists before you can begin migrating your deployment and workloads.

To prepare for your migration to OpenShift Service Mesh 3, you must disable legacy add-ons in your OpenShift Service Mesh 2 ServiceMeshControlPlane resource and reconfigure replacements for the features that those add-ons provided.

  • ❏ Disable Prometheus in your ServiceMeshControlPlane resource: spec.addons.prometheus.enabled=false

    • ❏ Configure the ServiceMeshControlPlane with OpenShift Monitoring as the replacement. These instructions also include installing a standalone Kiali resource. Both can be done at the same time. For more information, see "Integration with user-workload monitoring".
    • ❏ If you are not using OpenShift monitoring, see: "Integration with external Prometheus installation".
  • ❏ Disable tracing in your ServiceMeshControlPlane resource: spec.tracing.type=None

    • ❏ Configure the ServiceMeshControlPlane with OpenShift Distributed Tracing as the replacement. For more information, see "Configuring Red Hat OpenShift distributed tracing platform (Tempo) and the Red Hat build of OpenTelemetry".
  • ❏ Disable Kiali in your ServiceMeshControlPlane resource: spec.addons.kiali.enabled=false

    • ❏ If you did not create a standalone Kiali resource as part of Prometheus or tracing, see: "Using Kiali Operator provided by Red Hat".
Warning

Red Hat OpenShift Service Mesh 3 fails to install if outdated ServiceEntry custom resources are present in the cluster. Upstream Istio version 1.24 introduced schema changes that cause installation failures for ServiceEntry resources that are missing port numbers or that exceed 256 hostnames. You can check for affected resources by running the following commands:

  • For ServiceEntry with hostnames over 256, run the following command:

    $ oc get serviceentries -A -o json | jq -r '.items[] | select(.spec.hosts | length > 256) | "\(.metadata.namespace)/\(.metadata.name): \(.spec.hosts | length) hosts"'
  • For ServiceEntry with missing port numbers, run the following command:

    $ oc get serviceentries -A -o json | jq -r '.items[] | select(.spec.ports == null or (.spec.ports | length == 0)) | "\(.metadata.namespace)/\(.metadata.name)"'

To ensure a seamless migration to Red Hat OpenShift Service Mesh 3, perform the following corrective actions before installing the OpenShift Service Mesh 3 Operator:

  • Split ServiceEntry resources: You must split any ServiceEntry containing more than 256 hosts into multiple smaller resources.
  • Validate port configurations: You must ensure that all the ServiceEntry definitions include the required port specifications.

2.2.3. Migrate to explicitly managed routes

Automatic route creation, also known as Istio OpenShift Routing (IOR), is a deprecated feature that is disabled by default for any ServiceMeshControlPlane resource created using OpenShift Service Mesh 2.5 and later.

To move from OpenShift Service Mesh 2 to OpenShift Service Mesh 3, you need to migrate from IOR to explicitly-managed routes.

If you already moved to explicitly-managed routes in OpenShift Service Mesh 2, then continue to gateway injection.

  • ❏ Migrate from Istio OpenShift Routing (IOR) to explicitly-managed routes. For more information, see "Service Mesh route migration".

2.2.4. Migrate to gateway injection

Gateways were controlled by the ServiceMeshControlPlane (SMCP) resource in OpenShift Service Mesh 2. The OpenShift Service Mesh 3 control plane does not manage gateways, so you must migrate from SMCP-defined gateways to gateway injection.

  • ❏ Migrate to gateway injection. For more information, see "Service Mesh gateway migration".

2.2.5. Disable network policy management

If you do not want your network policies in place during your migration:

  • ❏ Disable network policy management in the OpenShift Service Mesh 2 ServiceMeshControlPlane resource: spec.security.manageNetworkPolicy=false.
  • ❏ Complete the rest of the checklists.
  • ❏ Migrate your deployment and workloads.
  • ❏ Manually recreate your network policies after you have migrated your workloads.

If you want your network policies in place during your migration:

  • ❏ Manually set up network policies to use during migration. For more information, see "Migrating network policies from Service Mesh 2 to Service Mesh 3".
  • ❏ Disable network policy management in the OpenShift Service Mesh 2 ServiceMeshControlPlane resource: spec.security.manageNetworkPolicy=false.
  • ❏ Complete the rest of the checklists.
  • ❏ Migrate your deployment and workloads.

2.2.6. Disable Grafana in OpenShift Service Mesh 2

Grafana is not supported in OpenShift Service Mesh 3, and must be disabled in your OpenShift Service Mesh 2 ServiceMeshControlPlane.

  • ❏ Disable Grafana in your OpenShift Service Mesh 2 ServiceMeshControlPlane: spec.addons.grafana.enabled=false.

2.2.7. Example resource files

After you have completed the premigration procedures, your OpenShift Service Mesh 2 resources might be similar to the following examples:

2.2.7.1. The ServiceMeshControlPlane resource file

The following example shows a configuration for the ServiceMeshControlPlane resource:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.6
  security:
    manageNetworkPolicy: false
  addons:
    grafana:
      enabled: false
    kiali:
      enabled: false
    prometheus:
      enabled: false
  meshConfig:
    extensionProviders:
      - name: prometheus
        prometheus: {}
      - name: otel
        opentelemetry:
          port: 4317
          service: otel-collector.istio-system.svc.cluster.local
  gateways:
    enabled: false
    openshiftRoute:
      enabled: false
  mode: MultiTenant
  tracing:
    type: None
  • spec.version specifies the version of OpenShift Service Mesh to use.
  • spec.security.manageNetworkPolicy specifies how to manage network policies.
  • spec.addons specifies the addons to enable or disable.
  • spec.meshConfig specifies the mesh configuration.
  • spec.gateways specifies the gateways configuration.
  • spec.mode specifies the deployment model, for example MultiTenant.
  • spec.tracing specifies the tracing configuration.
2.2.7.2. Telemetry resource file

The Telemetry resource file is located in your root namespace. The following example uses istio-system as the root namespace.

You can see the following example configuration for the Telemetry resource:

apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  metrics:
    - providers:
        - name: prometheus
  tracing:
    - providers:
        - name: otel
  • spec.metrics specifies your metrics provider. The name field must match what is specified in your ServiceMeshControlPlane resource in the spec.meshConfig.extensionProviders field.
  • spec.tracing specifies your tracing provider. The name field must match what is specified in your ServiceMeshControlPlane resource in the spec.meshConfig.extensionProviders field.
2.2.7.3. Kiali resource file

You can see the following example configuration for the Kiali resource:

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  version: default
  external_services:
    prometheus:
      auth:
        type: bearer
        use_kiali_token: true
      thanos_proxy:
        enabled: true
      url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
    tracing:
      enabled: true
      provider: tempo
      use_grpc: false
      internal_url: http://tempo-sample-query-frontend.tempo:3200
      external_url: https://tempo-sample-query-frontend-tempo.apps-crc.testing
    grafana:
      enabled: false
  • spec.version specifies the version of Kiali to use. You can use the default value of the version parameter if you install OpenShift Service Mesh 3 before updating Kiali. The default version is compatible with both 2.6 and 3.0 control planes.
  • spec.external_services.prometheus specifies the configuration for the external Prometheus service.
  • spec.external_services.tracing specifies the configuration for the external tracing store.
  • spec.external_services.grafana disables the Grafana configuration. Grafana is not supported with OpenShift Service Mesh 3.0.

2.2.8. Find your deployment model

Run the following command to find your deployment model:

$ oc get smcp <smcp-name> -n <smcp-namespace> -o jsonpath='{.spec.mode}'
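
The command prints the value of the spec.mode field. For example:

```console
MultiTenant
```
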
Note

If you did not set a value for the .spec.mode parameter in your ServiceMeshControlPlane resource, your deployment is multitenant.

2.2.9. Check whether you are using the cert-manager tool

If you are not using the cert-manager tool with your deployment, you are ready to migrate your deployment. Refer to the following migration guides based on your deployment model:

  • "Multitenant migration guide"
  • "Cluster-wide migration guide"

If you are unsure, check whether your deployment uses the cert-manager tool. If you are using the cert-manager tool with OpenShift Service Mesh 2, you must perform additional configuration before you can start migrating your deployments and workloads.

Procedure

  1. Check whether you are using the cert-manager tool with your deployment by using one of the following methods:

    1. Inspect your ServiceMeshControlPlane resource to verify that the spec.security.certificateAuthority.type parameter is set to cert-manager:

      apiVersion: maistra.io/v2
      kind: ServiceMeshControlPlane
      metadata:
        name: basic
        namespace: istio-system
      spec:
        ...
        security:
          certificateAuthority:
            cert-manager:
              address: cert-manager-istio-csr.istio-system.svc:443
            type: cert-manager
          dataPlane:
            mtls: true
          identity:
            type: ThirdParty
          manageNetworkPolicy: false
    2. Run the following command to verify that the spec.security.certificateAuthority.type parameter is set to cert-manager:

      $ oc get smcp <smcp-name> -n <smcp-namespace> -o jsonpath='{.spec.security.certificateAuthority.type}'

      Example output:

      cert-manager
Next steps for migrating with the cert-manager tool

There are some configurations you must complete before you can start migrating your deployments:

  • "Migrating a multitenant deployment with the cert-manager tool"
  • "Cluster-wide migration methods"

OpenShift Service Mesh 3 changes how network security is managed by removing the automatic generation of network policies previously controlled by the spec.security.manageNetworkPolicy field in Red Hat OpenShift Service Mesh 2.

You can set up network policies to use during your migration.

Note

Recreating your network policies after you have migrated your deployment and workloads is recommended. However, if your security policies require your network policies to stay in place, you must recreate them first, and then set the spec.security.manageNetworkPolicy field to false as outlined in the migration checklists.

Important
  • During the recreation of network policies from OpenShift Service Mesh 2 to OpenShift Service Mesh 3, both control planes must have access to all workloads, and all workloads must have access to both control planes.
  • The maistra.io/member-of label is removed from the namespaces during the migration.

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the ServiceMeshControlPlane 2.6 resource installed.
  • In OpenShift Service Mesh 2, you have set spec.security.manageNetworkPolicy=true in your ServiceMeshControlPlane resource.
  • You have deployed the bookinfo and bookinfo2 applications.

Procedure

  1. Label namespaces by running the following command:

    $ oc label namespace <app_namespace> service-mesh=enabled
    Note

    Use a label scoped specifically to your mesh that you can reuse for discovery selectors.
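
The same mesh-scoped label can later be reused in the discoverySelectors field of your OpenShift Service Mesh 3 Istio resource. The following fragment is a sketch that assumes the service-mesh: enabled label applied in the previous step:

```yaml
# Fragment of an Istio resource: the control plane discovers
# only namespaces labeled service-mesh=enabled.
spec:
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            service-mesh: enabled
```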

  2. Create your network policies by using the following NetworkPolicy example configurations:

    You can see the following example configuration for an Istiod network policy in a mesh namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istiod-basic
      namespace: istio-system
    spec:
      ingress:
        - {}
      podSelector:
        matchLabels:
          app: istiod
          istio.io/rev: basic
      policyTypes:
        - Ingress

    You can see the following example configuration for an expose route policy in a mesh namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: expose-route-basic
      namespace: istio-system
    spec:
      podSelector:
        matchLabels:
          maistra.io/expose-route: "true"
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
      policyTypes:
        - Ingress

    You can see the following example configuration for a default mesh network policy in a mesh namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-mesh
      namespace: istio-system
    spec:
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  service-mesh: enabled
      podSelector: {}
      policyTypes:
        - Ingress

    You can see the following example configuration for an expose route network policy in the bookinfo namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-expose-route
      namespace: bookinfo
    spec:
      podSelector:
        matchLabels:
          maistra.io/expose-route: "true"
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
      policyTypes:
        - Ingress

    You can see the following example configuration for a mesh network policy in the bookinfo namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-mesh
      namespace: bookinfo
    spec:
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  service-mesh: enabled
      podSelector: {}
      policyTypes:
        - Ingress

    You can see the following example configuration for an expose route network policy in the bookinfo2 namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-expose-route
      namespace: bookinfo2
    spec:
      podSelector:
        matchLabels:
          maistra.io/expose-route: "true"
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
      policyTypes:
        - Ingress

    You can see the following example configuration for a mesh network policy in the bookinfo2 namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-mesh
      namespace: bookinfo2
    spec:
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  service-mesh: enabled
      podSelector: {}
      policyTypes:
        - Ingress
  3. Disable network policies in OpenShift Service Mesh 2 by setting the spec.security.manageNetworkPolicy field to false in your ServiceMeshControlPlane resource.

    Note

    Setting the spec.security.manageNetworkPolicy field to false in your ServiceMeshControlPlane resource removes the network policies created by default in OpenShift Service Mesh 2.
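
You can set the field with a patch command, for example. The following sketch assumes the ServiceMeshControlPlane resource is named basic in the istio-system namespace:

```shell
# Disable network policy management on the 2.6 control plane.
# The resource name (basic) and namespace (istio-system) are assumptions.
oc patch smcp basic -n istio-system --type=merge \
  -p '{"spec":{"security":{"manageNetworkPolicy":false}}}'
```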

  4. Find your current active revision by running the following command:

    $ oc get istios <istio_name>

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    istio-tenant-a   1           1       0        istio-tenant-a    Healthy   v1.24.3   30s
  5. Copy the active revision name from the output to use for your istio.io/rev label in your second Istiod network policy for OpenShift Service Mesh 3.
  6. Create a second Istiod network policy for OpenShift Service Mesh 3 by using the following NetworkPolicy example configuration:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: istio-istiod-v3
      namespace: istio-system
    spec:
      ingress:
        - {}
      podSelector:
        matchLabels:
          app: istiod
          istio.io/rev: istio-tenant-a
      policyTypes:
        - Ingress
    Note

    You must match your current active revision name through the istio.io/rev label in your second Istiod network policy.

Next steps

  • In OpenShift Service Mesh 2, set the spec.security.manageNetworkPolicy field to false in your ServiceMeshControlPlane resource, and continue with the migration checklists.

Review the architectural and configuration changes in Kiali for Red Hat OpenShift Service Mesh 3, including updated topology graphs, deprecated namespace settings, and renamed service URLs.

2.4.1. About migrating Kiali differences

The Kiali Operator provided by Red Hat with Red Hat OpenShift Service Mesh 3 introduces the following changes:

New topology graphs
The Traffic Graph page has been reorganized and rebuilt by using PatternFly topology, with a new topology view showcasing the mesh infrastructure.
Deprecated configuration settings

To control which namespaces are accessible or visible to users in OpenShift Service Mesh 3, Kiali relies on discoverySelectors.

By default, deployment.cluster_wide_access=true is enabled, granting Kiali cluster-wide access to all namespaces in the local cluster. If you are migrating a cluster-wide deployment with Kiali, you must remove the following deprecated and unavailable configuration settings from your Kiali custom resource (CR):

  • spec.deployment.accessible_namespaces
  • api.namespaces.exclude
  • api.namespaces.include
  • api.namespaces.label_selector_exclude
  • api.namespaces.label_selector_include

    If you are using discovery selectors in Istio to restrict the namespaces that Istiod watches, those selectors must match the discovery selectors in your Kiali CR.

Renamed configuration settings

The following configuration settings have been renamed:

  • external_service.grafana.in_cluster_url is now external_service.grafana.internal_url.
  • external_service.grafana.url is now external_service.grafana.external_url.
  • external_service.tracing.in_cluster_url is now external_service.tracing.internal_url.
  • external_service.tracing.url is now external_service.tracing.external_url.

These changes reflect evolving capabilities and configuration standards of Kiali within OpenShift Service Mesh 3.

Chapter 3. Multitenant migration guide

3.1. Multitenant migration guide

This guide is for users who are currently running a multitenant deployment of Red Hat OpenShift Service Mesh 2.6.14, and are migrating to OpenShift Service Mesh 3.0.

3.1.1. Migrating a multitenant deployment

The bookinfo example application is used for demonstration purposes with a minimal example for the Istio resource. For more information about configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "ServiceMeshControlPlane resource to Istio resource fields mapping".

Important

If you have not completed the premigration checklists, you must complete them first before you can start migrating your deployment.

You can follow these same steps with your own workloads.

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You created an IstioCNI resource.
  • You have the istioctl tool installed.
  • You are running a MultiTenant ServiceMeshControlPlane.
  • You have installed the bookinfo application.

Procedure

  1. Create your Istio resource.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: istio-tenant-a
    spec:
      namespace: istio-system-tenant-a
      version: v1.24.3
      values:
        meshConfig:
          discoverySelectors:
            - matchLabels:
                tenant: tenant-a
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
    • spec.namespace specifies the control plane namespace, which must be the same namespace as your ServiceMeshControlPlane resource. If you set the spec.namespace field in your Istio resource to a different namespace than your ServiceMeshControlPlane resource, the migration does not complete successfully.
    • spec.values.meshConfig.discoverySelectors specifies the labels that the control plane uses to identify the namespaces it should manage. By default, control planes watch the entire cluster. When managing multiple control planes on a single cluster, you must narrow the scope of each control plane by setting discoverySelectors fields. In this example, the label tenant-a is used, but you can use any label or combination of labels.
    • spec.values.meshConfig.extensionProviders specifies the metrics and tracing configurations for the control plane. Optional: If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.
  2. Add your tenant label to each one of your dataplane namespaces by running the following command for each dataplane namespace:

    $ oc label ns bookinfo tenant=tenant-a
    Note

    With OpenShift Service Mesh 2.6, namespaces were enrolled into the mesh by adding them to the ServiceMeshMemberRoll resource. In OpenShift Service Mesh 3, you must label each one of your dataplane namespaces to match your discoverySelectors fields.

Now you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways".

Procedure

  1. Find the current IstioRevision for your OpenShift Service Mesh 3.0 control plane by running the following command:

    $ oc get istios istio-tenant-a

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    istio-tenant-a   1           1       0        istio-tenant-a    Healthy   v1.24.3   30s
    Note

    The naming format of your revisions depends on which upgrade strategy you choose for your Istio instance.

  2. Copy the ACTIVE REVISION to use as your istio.io/rev label in the next step.
  3. Update injection labels on the dataplane namespace by running the following command:

    $ oc label ns bookinfo istio.io/rev=istio-tenant-a maistra.io/ignore-namespace="true" --overwrite=true

    This adds the following labels to the namespace:

    1. The istio.io/rev: istio-tenant-a label: Ensures that any new pods that get created in that namespace connect to the OpenShift Service Mesh 3.0 proxy.
    2. The maistra.io/ignore-namespace: "true" label: Disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace, so OpenShift Service Mesh 2.6 stops injecting proxies and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

      Note

      Once you apply the maistra.io/ignore-namespace label, any new pod that gets created in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.
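
After you complete the labeling steps, the bookinfo namespace metadata looks similar to the following sketch. The tenant and revision values are taken from the earlier examples and might differ in your environment:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    tenant: tenant-a                     # matches the discoverySelectors field
    istio.io/rev: istio-tenant-a         # new pods get the 3.0 proxy
    maistra.io/ignore-namespace: "true"  # stops 2.6 sidecar injection
```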

  4. Restart the workloads by using one of the following options:

    1. To restart all the workloads at once so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  5. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Check that your workload is connected to the new control plane.

    1. Fetch the list of proxies that are still connected to the OpenShift Service Mesh 2.6 control plane with the istioctl tool by running the following command:

      $ istioctl ps --istioNamespace istio-system-tenant-a --revision basic

      In the following example, basic is the name of your ServiceMeshControlPlane:

         NAME                                              CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                            VERSION
         details-v1-7b49464bc-zr7nr.bookinfo               Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         ratings-v1-d6f449f59-9rds2.bookinfo               Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         reviews-v1-686cd989df-9x59z.bookinfo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         reviews-v2-785b8b48fc-l7xkj.bookinfo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         reviews-v3-67889ffd49-7bhxn.bookinfo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
    2. View the list of proxies that have been migrated to the new OpenShift Service Mesh 3.0 control plane by running the following command:

      $ istioctl ps --istioNamespace istio-system-tenant-a --revision istio-tenant-a

      Example output:

      NAME                                         CLUSTER        CDS        LDS        EDS        RDS        ECDS     ISTIOD                      VERSION
      productpage-v1-7745c5cc94-wpvth.bookinfo     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED              istiod-5bbf98dccf-n8566     1.24.3
  2. Verify your application is still working correctly. For the bookinfo application, run the following command:

    $ oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage
Note

If you are using gateways, you must migrate them before you complete the migration process for your deployment and workloads. If you are not using gateways, and have verified your multitenant migration, you can proceed to complete the migration and remove OpenShift Service Mesh 2 resources.

The bookinfo example application is used for demonstration purposes with a minimal example for the Istio resource. For more information on configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "ServiceMeshControlPlane resource to Istio resource fields mapping".

You can follow these same steps with your own workloads.

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You created an IstioCNI resource.
  • You have the istioctl tool installed.
  • You are using the cert-manager and istio-csr tools in a multitenant deployment.
  • Your OpenShift Service Mesh 2 ServiceMeshControlPlane is configured with the cert-manager tool.

Procedure

  1. Check that your OpenShift Service Mesh 2 ServiceMeshControlPlane is configured with the cert-manager tool:

    You can see the following example configuration for reference:

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      ...
      security:
        certificateAuthority:
          cert-manager:
            address: cert-manager-istio-csr.istio-system.svc:443
          type: cert-manager
        dataPlane:
          mtls: true
        identity:
          type: ThirdParty
        manageNetworkPolicy: false
  2. Update the istio-csr deployment to include your OpenShift Service Mesh 3 control plane by running the following command:

      helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
          --install \
          --reuse-values \
          --namespace istio-system \
          --wait \
          --set "app.istio.revisions={basic,istio-tenant-a}"

    where:

    app.istio.revisions
    This field must include your OpenShift Service Mesh 3.0 control plane revision before you create your Istio resource so that proxies can communicate with the OpenShift Service Mesh 3.0 control plane.
  3. Create your Istio resource.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: istio-tenant-a
    spec:
      namespace: istio-system-tenant-a
      version: v1.24.3
      values:
        meshConfig:
          discoverySelectors:
            - matchLabels:
                tenant: tenant-a
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
        global:
          caAddress: cert-manager-istio-csr.istio-system.svc:443
        pilot:
          env:
            ENABLE_CA_SERVER: "false"
    • spec.namespace specifies the control plane namespace, which must be the same namespace as your ServiceMeshControlPlane resource. If you set the spec.namespace field in your Istio resource to a different namespace than your ServiceMeshControlPlane resource, the migration does not complete successfully.
    • spec.values.meshConfig.discoverySelectors specifies the label selector for your Istio resource. By default, control planes watch the entire cluster. When managing multiple control planes on a single cluster, you must narrow the scope of each control plane by setting discoverySelectors fields. In this example, the label tenant-a is used, but you can use any label or combination of labels.
    • spec.values.meshConfig.extensionProviders is an optional field. If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.
  4. Add your tenant label to each one of your dataplane namespaces by running the following command for each dataplane namespace:

    $ oc label ns bookinfo tenant=tenant-a
    Note

    With OpenShift Service Mesh 2.6, namespaces were enrolled into the mesh by adding them to the ServiceMeshMemberRoll resource. In OpenShift Service Mesh 3, you must label each one of your dataplane namespaces to match your discoverySelectors fields.

Now you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways".

Procedure

  1. Find the current IstioRevision for your OpenShift Service Mesh 3.0 control plane by running the following command:

    $ oc get istios istio-tenant-a

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    istio-tenant-a   1           1       0        istio-tenant-a    Healthy   v1.24.3   30s
    Note

    The naming format of your revisions depends on which upgrade strategy you choose for your Istio instance.

  2. Copy the ACTIVE REVISION to use as your istio.io/rev label in the next step.
  3. Update injection labels on the dataplane namespace by running the following command:

    $ oc label ns bookinfo istio.io/rev=istio-tenant-a maistra.io/ignore-namespace="true" --overwrite=true

    This adds the following labels to the namespace:

    1. The istio.io/rev: istio-tenant-a label: Ensures that any new pods that get created in that namespace connect to the OpenShift Service Mesh 3.0 proxy.
    2. The maistra.io/ignore-namespace: "true" label: Disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace, so OpenShift Service Mesh 2.6 stops injecting proxies and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

      Note

      Once you apply the maistra.io/ignore-namespace label, any new pod that gets created in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.

  4. Restart the workloads by using one of the following options:

    1. To restart all the workloads at once so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  5. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Check that your workload is connected to the new control plane.

    1. Fetch the list of proxies that are still connected to the OpenShift Service Mesh 2.6 control plane with the istioctl tool by running the following command:

      $ istioctl ps --istioNamespace istio-system-tenant-a --revision basic

      In the following example, basic is the name of your ServiceMeshControlPlane:

         NAME                                              CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                            VERSION
         details-v1-7b49464bc-zr7nr.bookinfo               Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         ratings-v1-d6f449f59-9rds2.bookinfo               Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         reviews-v1-686cd989df-9x59z.bookinfo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         reviews-v2-785b8b48fc-l7xkj.bookinfo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
         reviews-v3-67889ffd49-7bhxn.bookinfo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-basic-6c9f8d9894-sh6lx     1.20.8
    2. View the list of proxies that have been migrated to the new OpenShift Service Mesh 3.0 control plane by running the following command:

      $ istioctl ps --istioNamespace istio-system-tenant-a --revision istio-tenant-a

      Example output:

      NAME                                         CLUSTER        CDS        LDS        EDS        RDS        ECDS     ISTIOD                      VERSION
      productpage-v1-7745c5cc94-wpvth.bookinfo     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED              istiod-5bbf98dccf-n8566     1.24.3
  2. Verify your application is still working correctly. For the bookinfo application, run the following command:

    $ oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage
Note

If you are using gateways, you must migrate them before you complete the migration process for your deployment and workloads. After you have migrated your gateways, you must update the app.controller.configmapNamespaceSelector field in your istio-csr deployment. If you are not using gateways, you can complete your migration with cert-manager.
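
After you migrate your gateways, the istio-csr update might look like the following helm upgrade. This is a sketch: the release name and namespace follow the earlier example, and the service-mesh=enabled selector label is an assumption based on the network policy procedure:

```shell
# Restrict the namespaces in which istio-csr manages the istio-ca-root-cert
# ConfigMap. The selector label (service-mesh=enabled) is an assumption.
helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
    --install \
    --reuse-values \
    --namespace istio-system \
    --wait \
    --set "app.controller.configmapNamespaceSelector=service-mesh=enabled"
```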

Chapter 4. Cluster-wide migration guide

4.1. Cluster-wide migration guide

This guide is for users who are running a cluster-wide deployment of Red Hat OpenShift Service Mesh 2.6.14 and are migrating to OpenShift Service Mesh 3.0.

Important

You must complete the premigration checklists before you start migrating your deployment.

During the migration process, two cluster-wide control planes run in the same cluster while the data plane namespaces are gradually migrated to the Red Hat OpenShift Service Mesh 3.0 installation. One control plane is associated with the Red Hat OpenShift Service Mesh 2.6.14 installation and the other is associated with the OpenShift Service Mesh 3.0 installation. You must carefully plan the migration steps to avoid possible conflicts between the two control planes.

4.1.1.1. Root certificate

During the migration, both control planes must share a root certificate. To share a root certificate between both control planes, install the 3.0 control plane into the same namespace as the 2.6 control plane. The migration procedures show how to verify that the root certificate is shared.

4.1.1.2. Discovery selectors and namespace access

Both control planes must have access to all namespaces in the mesh. During the migration, some proxies are controlled by the 3.0 control plane while other proxies remain controlled by the 2.6 control plane. To ensure that mesh communication works during the migration, both control planes must detect the same set of services. Service discovery is provided by the istiod component, which runs in the control plane namespace.

In the OpenShift Service Mesh 3.0 installation, you can control how Istio discovers services by using discovery selectors. When you use discovery selectors, ensure that the discoverySelectors expression that is defined in the OpenShift Service Mesh 3.0 Istio resource matches the namespaces that comprise the OpenShift Service Mesh 2.6 mesh. You might need to add labels to the OpenShift Service Mesh 2.6 application namespaces to ensure that they are captured in the OpenShift Service Mesh 3.0 installation. For more information, see "Scoping the service mesh with DiscoverySelectors".
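For example, assuming that every 2.6 data plane namespace carries the istio-injection=enabled label, a matching discoverySelectors expression in the Istio resource might look like the following sketch:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: ossm-3
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
        # Matches every namespace labeled for 2.6 sidecar injection;
        # adjust the label to whatever your 2.6 mesh namespaces use
        - matchLabels:
            istio-injection: enabled
```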

Note

In OpenShift Service Mesh 2.6 installations, the maistra.io/member-of label is automatically created. This label cannot be used because it is automatically removed during the migration process.

4.1.1.3. Network policies

By default, OpenShift Service Mesh 2.6 manages network policies that block traffic to the 3.0 control plane.

During the migration, ensure that network policies for both control planes do not block traffic between the following entities:

  • The control plane and the data plane namespaces
  • The data plane namespaces and the control plane
  • The data plane namespaces themselves

In the premigration checklist, you are instructed to disable network policies. However, you can manually re-create them. Manually created network policies must allow traffic for both control planes. When a data plane namespace is migrated to 3.0, the maistra.io/member-of label is automatically removed. Do not use this label in network policies. For more information, see "Set up network policies to use during migration".

Incorrectly configured network policies can disrupt mesh traffic, so create network policies carefully while you run the migration. For more information, see "Set up network policies to use during migration".
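As one sketch, a manually created policy in a data plane namespace that admits traffic from the shared control plane namespace and from other mesh namespaces might look like the following; the mesh: "true" label is a hypothetical label that you would apply to every mesh namespace instead of relying on maistra.io/member-of:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mesh-traffic
  namespace: bookinfo
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Both control planes run in istio-system during the migration
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: istio-system
        # Other data plane namespaces; mesh: "true" is a hypothetical
        # label applied to every mesh namespace
        - namespaceSelector:
            matchLabels:
              mesh: "true"
```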

4.1.1.4. Sidecar injection

If both control planes try to perform sidecar injection, the proxy will not start and the migration cannot be completed. To ensure that only one control plane performs sidecar injection during the migration, use injection labels. For more information, see "Installing the Sidecar".

Note

During the migration, you must disable the 2.6 injector. Use the maistra.io/ignore-namespace: "true" label to prevent the 2.6 control plane from injecting a proxy in the namespace.

4.1.1.5. Label selection

For OpenShift Service Mesh 3.0, you must decide if you want to use the istio.io/rev label or the istio-injection label to configure sidecar injection. For more information, see "About sidecar injection".

In the OpenShift Service Mesh 2.6 installation, the member selection configuration in the ServiceMeshMemberRoll resource can impact how injection labels are used in the OpenShift Service Mesh 3.0 installation.

By default, in a 2.6 installation, the spec.memberSelectors field in the ServiceMeshMemberRoll resource is configured to match the istio-injection=enabled label and all of the data plane namespaces in a 2.6 installation have the istio-injection=enabled label applied. If you are using the default 2.6 installation settings, you can keep using that label or switch to the istio.io/rev label for the 3.0 installation.

If the spec.memberSelectors field in the ServiceMeshMemberRoll resource is not configured to match the istio-injection=enabled label and the 2.6 data plane namespace uses a custom label, you must add the istio.io/rev label or the istio-injection label during the migration. The custom labels defined in the spec.memberSelectors parameter of the ServiceMeshMemberRoll resource have no effect on sidecar injection in the OpenShift Service Mesh 3 installation and cannot be used.

If projects in the 2.6 installation were added to the mesh by manually creating the ServiceMeshMember resource, you must add the istio.io/rev or istio-injection label to the project namespaces during the migration.
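The resulting labels can be sketched as namespace metadata; my-app is a hypothetical project that was added to the 2.6 mesh through a ServiceMeshMember resource, and ossm-3-v1-24-3 is the revision name used in the examples in this guide:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                          # hypothetical project namespace
  labels:
    istio.io/rev: ossm-3-v1-24-3        # opts the namespace into 3.0 injection
    maistra.io/ignore-namespace: "true" # stops 2.6 injection in the namespace
```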

4.1.2. Cluster-wide migration methods

You can migrate the control plane from Red Hat OpenShift Service Mesh 2.6.14 to OpenShift Service Mesh 3.0 by using one of the following methods:

  • Applying the istio.io/rev label
  • Applying the istio-injection=enabled label
  • Performing a simple migration by using the istio-injection=enabled label

    Note

    The simple migration method might introduce traffic disruptions.

To select the best migration method for your OpenShift Service Mesh installation, you must understand the difference between the istio.io/rev and istio-injection labels. Read through all the migration methods before choosing the one that best fits your needs.

You can perform a canary upgrade with the gradual migration of data plane namespaces for a cluster-wide deployment by using the istio.io/rev label.

The bookinfo example application is used for demonstration purposes with a minimal example of the Istio resource. For more information about configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "Configuration fields mapping between Service Mesh 2 and Service Mesh 3".

You can follow these same steps with your own workloads.

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You have created an IstioCNI resource.
  • You have installed the istioctl tool.
  • You are running a cluster-wide Service Mesh control plane resource.
  • You have installed the bookinfo application.

Procedure

  1. Identify the namespaces that contain a 2.6 control plane by running the following command:

    $ oc get smcp -A

    Example output:

    NAMESPACE      NAME                   READY   STATUS            PROFILES      VERSION       AGE
    istio-system   install-istio-system   6/6     ComponentsReady   ["default"]   2.6.14        115m
  2. Create a YAML file named ossm-3.yaml that creates the Istio resource for the 3.0 installation in the same namespace as the ServiceMeshControlPlane resource for the 2.6 installation.

    Note

    In the following example configuration, the Istio control plane has access to all namespaces on the cluster. If you want to limit the namespaces the control plane has access to, you must define discovery selectors. All data plane namespaces that you plan to migrate from version 2.6 must be matched.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: ossm-3
    spec:
      updateStrategy:
        type: RevisionBased
      namespace: istio-system
      version: v1.24.3
      values:
        meshConfig:
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
    • metadata.name: Together with the updateStrategy and version fields, this field determines how the IstioRevision resource name is created. For more information, see "Identifying the revision name".
    • spec.namespace: The 3.0 and 2.6 control planes must run in the same namespace.
    • spec.values: If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.
  3. Apply the YAML file by running the following command:

    $ oc apply -f ossm-3.yaml
  4. Verify that the new istiod resource uses the existing root certificate by running the following command:

    $ oc logs deployments/istiod-ossm-3-v1-24-3 -n istio-system | grep 'Load signing key and cert from existing secret'

    Example output:

    2024-12-18T08:13:53.788959Z	info	pkica	Load signing key and cert from existing secret istio-system/istio-ca-secret

Now you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Revision tags are not used in this example for simplicity. When migrating large meshes, you can use revision tags to avoid re-labeling all namespaces during future version 3 updates.
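As a sketch, a revision tag is declared with the Sail Operator IstioRevisionTag resource; the tag name stable is a hypothetical choice, and the targetRef points at the Istio resource created in this guide:

```yaml
apiVersion: sailoperator.io/v1
kind: IstioRevisionTag
metadata:
  name: stable          # hypothetical tag name
spec:
  targetRef:
    kind: Istio
    name: ossm-3        # the Istio resource from this guide
```

Namespaces labeled istio.io/rev=stable then follow the tag, so a later version update only requires moving the tag rather than re-labeling every namespace.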

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways".

Procedure

  1. Find the current IstioRevision for your OpenShift Service Mesh 3.0 control plane by running the following command:

    $ oc get istios

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    ossm-3           1           1       0        ossm-3-v1-24-3    Healthy   v1.24.3   30s
  2. Copy the value in the ACTIVE REVISION column to use as your istio.io/rev label in the next step.

    Note

    The naming format of your revisions depends on which upgrade strategy you choose for your Istio instance.

  3. Update the injection labels on the dataplane namespace by running the following command:

    $ oc label ns bookinfo istio.io/rev=ossm-3-v1-24-3 maistra.io/ignore-namespace="true" istio-injection- --overwrite=true

    Running the command performs the following actions:

    1. Removes the istio-injection label: This label prevents the 3.0 control plane from injecting the proxy because the istio-injection label takes precedence over the istio.io/rev label.
    2. Adds the istio.io/rev=ossm-3-v1-24-3 label: This label ensures that any newly created or restarted pods in the namespace connect to the OpenShift Service Mesh 3.0 proxy.
    3. Adds the maistra.io/ignore-namespace: "true" label: This label disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace. With the label applied, OpenShift Service Mesh 2.6 stops injecting proxies in this namespace, and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

      Note

      After you apply the maistra.io/ignore-namespace label, any new pod that is created in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.

  4. Restart the workloads by using one of the following options:

    1. To restart all the workloads at once so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  5. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Ensure that expected workloads are managed by the new control plane by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                          CLUSTER        CDS             LDS             EDS             RDS             ECDS         ISTIOD                                           VERSION
    details-v1-7f46897b-d497c.bookinfo            Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    productpage-v1-74bfbd4d65-vsxqm.bookinfo      Kubernetes     SYNCED (4s)     SYNCED (4s)     SYNCED (3s)     SYNCED (4s)     IGNORED      istiod-ossm-3-v1-24-3-797bb4d78f-xpchx           1.24.3
    ratings-v1-559b64556-c5ppg.bookinfo           Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v1-847fb7c54d-qxt5d.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v2-5c7ff5b77b-8jbhd.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v3-5c5d764c9b-rrx8w.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8

    The previous output shows that the productpage-v1 deployment is the only deployment that restarted and was injected with the 3.0 proxy. Even if there are different versions of the proxies, communication between services still works.

  2. If the 2.6 installation has additional data plane namespaces, migrate the next namespace now.

    Note

    Do not remove the maistra.io/ignore-namespace="true" label until the 2.6 control plane is uninstalled.
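To track progress across a larger mesh, you can count the proxies that still report the 2.6 proxy version by filtering the last column of the istioctl ps output. The following sketch runs the filter against sample lines that mirror the verification output and prints 2 for this sample data; in a live cluster, pipe istioctl ps -n bookinfo into the same awk program:

```shell
# Count proxies still managed by the 2.6 control plane by matching the
# version column (the last field) of `istioctl ps` output
awk '$NF == "1.20.8" { n++ } END { print n+0 }' <<'EOF'
details-v1.bookinfo       Kubernetes  SYNCED  SYNCED  SYNCED  SYNCED  NOT SENT  istiod-install-istio-system  1.20.8
productpage-v1.bookinfo   Kubernetes  SYNCED  SYNCED  SYNCED  SYNCED  IGNORED   istiod-ossm-3-v1-24-3        1.24.3
ratings-v1.bookinfo       Kubernetes  SYNCED  SYNCED  SYNCED  SYNCED  NOT SENT  istiod-install-istio-system  1.20.8
EOF
```

When the count reaches zero, every proxy in the namespace has been migrated to the 3.0 control plane.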

Next steps

If you are using gateways, you must migrate them before you complete the migration process.

  • See: "Migrating gateways"

If you are not using gateways, and have verified your cluster-wide migration, you can proceed to complete the migration and remove OpenShift Service Mesh 2 resources.

  • See: "Completing the Migration"

You can perform a canary upgrade with the gradual migration of data plane namespaces for a cluster-wide deployment by using the istio.io/rev label.

The bookinfo example application is used for demonstration purposes with a minimal example of the Istio resource. For more information about configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "Configuration fields mapping between Service Mesh 2 and Service Mesh 3".

You can follow these same steps with your own workloads.

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You have created an IstioCNI resource.
  • You have installed the istioctl tool.
  • You are using the cert-manager and istio-csr tools in a cluster-wide deployment.
  • Your OpenShift Service Mesh 2.6.14 ServiceMeshControlPlane resource is configured with the cert-manager tool.
  • You have installed the bookinfo application.

Procedure

  1. Confirm that your OpenShift Service Mesh 2 ServiceMeshControlPlane resource is configured with the cert-manager tool.

    You can see the following example configuration for reference:

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      ...
      security:
        certificateAuthority:
          cert-manager:
            address: cert-manager-istio-csr.istio-system.svc:443
          type: cert-manager
        dataPlane:
          mtls: true
        identity:
          type: ThirdParty
        manageNetworkPolicy: false
  2. Update the istio-csr deployment to include your OpenShift Service Mesh 3 control plane by running the following command:

      $ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
          --install \
          --reuse-values \
          --namespace istio-system \
          --wait \
          --set "app.istio.revisions={basic,ossm-3-v1-24-3}"

    where:

    app.istio.revisions
    This field must include your OpenShift Service Mesh 3.0 control plane revision before you create your Istio resource so that proxies can properly communicate with the OpenShift Service Mesh 3.0 control plane.
  3. Identify the namespaces that contain a 2.6 control plane by running the following command:

    $ oc get smcp -A

    Example output:

    NAMESPACE      NAME                   READY   STATUS            PROFILES      VERSION       AGE
    istio-system   install-istio-system   6/6     ComponentsReady   ["default"]   2.6.14        115m
  4. Create a YAML file named ossm-3.yaml that creates the Istio resource for the 3.0 installation in the same namespace as the ServiceMeshControlPlane resource for the 2.6 installation.

    Note

    In the following example configuration, the Istio control plane has access to all namespaces on the cluster. If you want to limit the namespaces the control plane has access to, you must define discovery selectors. All data plane namespaces that you plan to migrate from version 2.6 must be matched.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: ossm-3
    spec:
      updateStrategy:
        type: RevisionBased
      namespace: istio-system
      version: v1.24.3
      values:
        meshConfig:
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
    • metadata.name: Together with the updateStrategy and version fields, this field determines how the IstioRevision resource name is created. For more information, see "Identifying the revision name".
    • spec.namespace: The 3.0 and 2.6 control planes must run in the same namespace.
    • spec.values: If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.
  5. Apply the YAML file by running the following command:

    $ oc apply -f ossm-3.yaml
  6. Verify that the new istiod resource uses the existing root certificate by running the following command:

    $ oc logs deployments/istiod-ossm-3-v1-24-3 -n istio-system | grep 'Load signing key and cert from existing secret'

    Example output:

    2024-12-18T08:13:53.788959Z	info	pkica	Load signing key and cert from existing secret istio-system/istio-ca-secret

Now you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Revision tags are not used in this example for simplicity. When migrating large meshes, you can use revision tags to avoid re-labeling all namespaces during future version 3 updates.

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways".

Procedure

  1. Find the current IstioRevision for your OpenShift Service Mesh 3.0 control plane by running the following command:

    $ oc get istios

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    ossm-3           1           1       0        ossm-3-v1-24-3    Healthy   v1.24.3   30s
  2. Copy the value in the ACTIVE REVISION column to use as your istio.io/rev label in the next step.

    Note

    The naming format of your revisions depends on which upgrade strategy you choose for your Istio instance.

  3. Update the injection labels on the dataplane namespace by running the following command:

    $ oc label ns bookinfo istio.io/rev=ossm-3-v1-24-3 maistra.io/ignore-namespace="true" istio-injection- --overwrite=true

    Running the command performs the following actions:

    1. Removes the istio-injection label: This label prevents the 3.0 control plane from injecting the proxy because the istio-injection label takes precedence over the istio.io/rev label.
    2. Adds the istio.io/rev=ossm-3-v1-24-3 label: This label ensures that any newly created or restarted pods in the namespace connect to the OpenShift Service Mesh 3.0 proxy.
    3. Adds the maistra.io/ignore-namespace: "true" label: This label disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace. With the label applied, OpenShift Service Mesh 2.6 stops injecting proxies in this namespace, and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

      Note

      After you apply the maistra.io/ignore-namespace label, any new pod that is created in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.

  4. Restart the workloads by using one of the following options:

    1. To restart all the workloads at once so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  5. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Ensure that expected workloads are managed by the new control plane by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                          CLUSTER        CDS             LDS             EDS             RDS             ECDS         ISTIOD                                           VERSION
    details-v1-7f46897b-d497c.bookinfo            Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    productpage-v1-74bfbd4d65-vsxqm.bookinfo      Kubernetes     SYNCED (4s)     SYNCED (4s)     SYNCED (3s)     SYNCED (4s)     IGNORED      istiod-ossm-3-v1-24-3-797bb4d78f-xpchx           1.24.3
    ratings-v1-559b64556-c5ppg.bookinfo           Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v1-847fb7c54d-qxt5d.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v2-5c7ff5b77b-8jbhd.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v3-5c5d764c9b-rrx8w.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8

    The previous output shows that the productpage-v1 deployment is the only deployment that restarted and was injected with the 3.0 proxy. Even if there are different versions of the proxies, communication between services still works.

  2. If the 2.6 installation has additional data plane namespaces, migrate the next namespace now.

    Note

    Do not remove the maistra.io/ignore-namespace="true" label until the 2.6 control plane is uninstalled.

Next steps

If you are using gateways, you must migrate them before you complete the migration process.

  • See: "Migrating gateways"

If you are not using gateways, and have verified your cluster-wide migration, you can proceed to complete the migration and remove OpenShift Service Mesh 2 resources.

  • See: "Completing the Migration"

You can perform a canary upgrade with the gradual migration of data plane namespaces for a cluster-wide deployment by using the istio-injection=enabled label and the default revision tag.

You must re-label all of the data plane namespaces. However, it is safe to restart any of the workloads at any point during the migration process.

The bookinfo application is used as an example for the Istio resource. For more information about configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "ServiceMeshControlPlane resource to Istio resource fields mapping".

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You have created an IstioCNI resource.
  • You have installed the istioctl tool.
  • You are running a cluster-wide Service Mesh control plane resource.
  • You have installed the bookinfo application.

Procedure

  1. Identify the namespaces that contain a 2.6 control plane by running the following command:

    $ oc get smcp -A

    Example output:

    NAMESPACE      NAME                   READY   STATUS            PROFILES      VERSION   AGE
    istio-system   install-istio-system   6/6     ComponentsReady   ["default"]   2.6.14    115m
  2. Create a YAML file named ossm-3.yaml. This procedure creates the Istio resource for the 3.0 installation in the same namespace as the ServiceMeshControlPlane resource for the 2.6 installation.

    Note

    In the following example configuration, the Istio control plane has access to all namespaces on the cluster. If you want to limit the namespaces the control plane has access to, you must define discovery selectors. You must match all the data plane namespaces that you plan to migrate from version 2.6.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: ossm-3
    spec:
      updateStrategy:
        type: RevisionBased
      namespace: istio-system
      version: v1.24.3
      values:
        meshConfig:
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
    • metadata.name specifies the name of the IstioRevision resource. The updateStrategy and version fields specify how the resource is updated. For more information, see "Identifying the revision name".
    • spec.namespace specifies the namespace where the 3.0 and 2.6 control planes must run.
    • spec.values specifies the configuration values for the 3.0 control plane. If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.

      Note

      To prevent the OpenShift Service Mesh 3.0 control plane from injecting proxies in the namespaces that have the istio-injection=enabled label applied and are still managed by OpenShift Service Mesh 2.6 control plane, do not use the default name for the Istio resource, and do not create the default revision tag in the following steps. You create the default revision tag later in this procedure.

  3. Apply the YAML file by running the following command:

    $ oc apply -f ossm-3.yaml

Verification

  1. Verify that the new istiod resource uses the existing root certificate by running the following command:

    $ oc logs deployments/istiod-ossm-3-v1-24-3 -n istio-system | grep 'Load signing key and cert from existing secret'

    Example output:

    2024-12-18T08:13:53.788959Z	info	pkica	Load signing key and cert from existing secret istio-system/istio-ca-secret

Now you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways".

Procedure

  1. Find the current IstioRevision resource for your OpenShift Service Mesh 3.0 control plane by running the following command:

    $ oc get istios

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    ossm-3           1           1       0        ossm-3-v1-24-3    Healthy   v1.24.3   30s
  2. Copy the ACTIVE REVISION value to use as your istio.io/rev label in the next step.

    Note

    The naming format of your revisions depends on which upgrade strategy you choose for your Istio instance.

  3. Update the injection labels on the data plane namespace by running the following command:

    $ oc label ns bookinfo istio.io/rev=ossm-3-v1-24-3 maistra.io/ignore-namespace="true" istio-injection- --overwrite=true

    The oc label command performs the following actions:

    1. Removes the istio-injection label: This label prevents the 3.0 control plane from injecting the proxy because the istio-injection label takes precedence over the istio.io/rev label. You must temporarily remove the istio-injection=enabled label because you cannot create the default IstioRevisionTag resource yet. Leaving the istio-injection=enabled label applied would prevent the 3.0 control plane from performing proxy injection.
    2. Adds the istio.io/rev=ossm-3-v1-24-3 label: This label ensures that any newly created or restarted pods in the namespace connect to the OpenShift Service Mesh 3.0 proxy.
    3. Adds the maistra.io/ignore-namespace: "true" label: This label disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace. With the label applied, OpenShift Service Mesh 2.6 stops injecting proxies in this namespace, and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

      Note

      After you apply the maistra.io/ignore-namespace label, any new pod that gets created in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.

  4. Restart the workloads by using one of the following options:

    1. To restart all the workloads at the same time so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  5. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Verify that the new control plane manages the expected workloads by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                          CLUSTER        CDS             LDS             EDS             RDS             ECDS         ISTIOD                                           VERSION
    details-v1-7f46897b-d497c.bookinfo            Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    productpage-v1-74bfbd4d65-vsxqm.bookinfo      Kubernetes     SYNCED (4s)     SYNCED (4s)     SYNCED (3s)     SYNCED (4s)     IGNORED      istiod-ossm-3-v1-24-3-797bb4d78f-xpchx           1.24.3
    ratings-v1-559b64556-c5ppg.bookinfo           Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v1-847fb7c54d-qxt5d.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v2-5c7ff5b77b-8jbhd.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v3-5c5d764c9b-rrx8w.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8

    The output shows that the productpage-v1 deployment is the only deployment that has been restarted and was injected with the 3.0 proxy. Even if there are different versions of the proxies, communication between the services still works.

  2. If the 2.6 installation has additional namespaces, migrate the next namespace now.

    Note

    Remove the maistra.io/ignore-namespace="true" label only after the 2.6 control plane has been uninstalled.

Next steps

If you are using gateways, you must migrate them before you complete the migration process.

  • See: "Migrating gateways"

If you are not using gateways, and have verified your cluster-wide migration, create a default revision tag and relabel namespaces.

You can create the default revision tag and relabel the namespaces after you have completed the OpenShift Service Mesh 2 to OpenShift Service Mesh 3 cluster-wide migration process by using the Istio injection label.

The bookinfo application is used as an example.

Prerequisites

  • You have completed the OpenShift Service Mesh 2 to OpenShift Service Mesh 3 cluster-wide migration process by using the Istio injection label.

Procedure

  1. Create a YAML file called rev-tag.yaml that defines the IstioRevisionTag resource:

    Example IstioRevisionTag resource:

    apiVersion: sailoperator.io/v1
    kind: IstioRevisionTag
    metadata:
      name: default
    spec:
      targetRef:
        kind: IstioRevision
        name: ossm-3-v1-24-3
  2. Apply the YAML file by running the following command:

    $ oc apply -f rev-tag.yaml
  3. Verify the status of the IstioRevisionTag resource by running the following command:

    $ oc get istiorevisiontags

    Example output:

    NAME       STATUS                    IN USE   REVISION        AGE
    default    NotReferencedByAnything   False    ossm-3-v1-24-3  18s
  4. Add the istio-injection=enabled label to the bookinfo namespace, and remove the istio.io/rev label by running the following command:

    $ oc label ns bookinfo istio-injection=enabled istio.io/rev-
    Note

    Remove the maistra.io/ignore-namespace="true" label only after the 2.6 control plane has been uninstalled.

  5. Restart the workloads by running the following command:

    $ oc rollout restart deployments -n bookinfo
    Note

    Repeat steps 4 and 5 for each namespace you are migrating.
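
When several namespaces need the same relabel-and-restart cycle, you can script it. This sketch only prints the commands for review before you run them; the namespace list is a placeholder:

```shell
# Print (do not run) the relabel and restart commands per namespace.
# Replace the list with your own data plane namespaces.
for ns in bookinfo app-a app-b; do
  echo "oc label ns $ns istio-injection=enabled istio.io/rev-"
  echo "oc rollout restart deployments -n $ns"
done
```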

Verification

  1. Verify that the IstioRevisionTag resource is in use by running the following command:

    $ oc get istiorevisiontags

    Example output:

    NAME      STATUS    IN USE   REVISION        AGE
    default   Healthy   True     ossm-3-v1-24-3  28s
  2. Ensure that expected workloads are managed by the new control plane by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                         CLUSTER        CDS              LDS              EDS             RDS              ECDS        ISTIOD                                     VERSION
    details-v1-79dfbd6fff-t5lzm.bookinfo         Kubernetes     SYNCED (57s)     SYNCED (57s)     SYNCED (3s)     SYNCED (57s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    details-v1-7cb48d8bb-6rjq8.bookinfo          Kubernetes     SYNCED (3s)      SYNCED (3s)      SYNCED (3s)     SYNCED (3s)      IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    productpage-v1-7d9cdf655d-cqk48.bookinfo     Kubernetes     SYNCED (10s)     SYNCED (10s)     SYNCED (3s)     SYNCED (10s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    ratings-v1-5b67b59fcb-w4whk.bookinfo         Kubernetes     SYNCED (18s)     SYNCED (18s)     SYNCED (3s)     SYNCED (18s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v1-585fc84dbb-fvm2h.bookinfo         Kubernetes     SYNCED (11s)     SYNCED (11s)     SYNCED (3s)     SYNCED (11s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v2-65cb66b45c-6ggp9.bookinfo         Kubernetes     SYNCED (57s)     SYNCED (57s)     SYNCED (3s)     SYNCED (57s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v2-698b86b848-v92xq.bookinfo         Kubernetes     SYNCED (3s)      SYNCED (3s)      SYNCED (3s)     SYNCED (3s)      IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v3-6cbc49c8c8-v4jck.bookinfo         Kubernetes     SYNCED (11s)     SYNCED (11s)     SYNCED (3s)     SYNCED (11s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3

Next steps

You can proceed to complete the migration and remove OpenShift Service Mesh 2 resources.

  • See: "Completing the Migration"
Important

Before creating a default revision tag and relabeling the namespaces, you must migrate all remaining workload namespaces, including gateways.

You can perform a canary upgrade with the gradual migration of data plane namespaces for a cluster-wide deployment by using the istio-injection=enabled label and the default revision tag.

You must re-label all of the data plane namespaces. However, it is safe to restart any of the workloads at any point during the migration process.

The bookinfo application is used as an example for the Istio resource. For more information about configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "ServiceMeshControlPlane resource to Istio resource fields mapping".

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You have created an IstioCNI resource.
  • You have installed the istioctl tool.
  • You are using the cert-manager and istio-csr tools in a cluster-wide deployment.
  • Your OpenShift Service Mesh 2.6.14 ServiceMeshControlPlane resource is configured with the cert-manager tool.
  • You have installed the bookinfo application.

Procedure

  1. Confirm that your OpenShift Service Mesh 2 ServiceMeshControlPlane resource is configured with the cert-manager tool.

    You can see the following example configuration for reference:

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      ...
      security:
        certificateAuthority:
          cert-manager:
            address: cert-manager-istio-csr.istio-system.svc:443
          type: cert-manager
        dataPlane:
          mtls: true
        identity:
          type: ThirdParty
        manageNetworkPolicy: false
  2. Update the istio-csr deployment to include your OpenShift Service Mesh 3 control plane by running the following command:

      helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
          --install \
          --reuse-values \
          --namespace istio-system \
          --wait \
          --set "app.istio.revisions={basic,ossm-3-v1-24-3}"

    where:

    app.istio.revisions
    This field must include your OpenShift Service Mesh 3.0 control plane revision before you create your Istio resource so that proxies can properly communicate with the OpenShift Service Mesh 3.0 control plane.
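
The same list can be kept in a Helm values file instead of a --set flag. The following is a sketch of the equivalent fragment, with key names taken from the --set path used above:

```yaml
# Equivalent of --set "app.istio.revisions={basic,ossm-3-v1-24-3}"
app:
  istio:
    revisions:
      - basic            # the 2.6 control plane revision
      - ossm-3-v1-24-3   # the 3.0 control plane revision
```
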
  3. Identify the namespaces that contain a 2.6 control plane by running the following command:

    $ oc get smcp -A

    Example output:

    NAMESPACE      NAME                   READY   STATUS            PROFILES      VERSION   AGE
    istio-system   install-istio-system   6/6     ComponentsReady   ["default"]   2.6.6     115m
  4. Create a YAML file named ossm-3.yaml. This procedure creates the Istio resource for the 3.0 installation in the same namespace as the ServiceMeshControlPlane resource for the 2.6 installation.

    Note

    In the following example configuration, the Istio control plane has access to all namespaces on the cluster. If you want to limit the namespaces the control plane has access to, you must define discovery selectors. The selectors must match all the data plane namespaces that you plan to migrate from version 2.6.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: ossm-3
    spec:
      updateStrategy:
        type: RevisionBased
      namespace: istio-system
      version: v1.24.3
      values:
        meshConfig:
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
    • metadata.name specifies the name of the Istio resource. The updateStrategy and version fields specify how the IstioRevision resource name is created. For more information, see "Identifying the revision name".
    • spec.namespace specifies the namespace where the 3.0 and 2.6 control planes must run.
    • spec.values specifies the configuration values for the 3.0 control plane. If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.

      Note

      To prevent the OpenShift Service Mesh 3.0 control plane from injecting proxies in the namespaces that have the istio-injection=enabled label applied and are still managed by OpenShift Service Mesh 2.6 control plane, do not use the default name for the Istio resource, and do not create the default revision tag in the following steps. You create the default revision tag later in this procedure.
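
If you limit the namespaces that the control plane can discover, the selectors go under meshConfig in the Istio resource shown above. The following is a minimal sketch, assuming bookinfo is the only data plane namespace; add a selector entry for every namespace you migrate:

```yaml
spec:
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            kubernetes.io/metadata.name: bookinfo
```
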

  5. Apply the YAML file by running the following command:

    $ oc apply -f ossm-3.yaml

Verification

  1. Verify that the new istiod resource uses the existing root certificate by running the following command:

    $ oc logs deployments/istiod-ossm-3-v1-24-3 -n istio-system | grep 'Load signing key and cert from existing secret'

    Example output:

    2024-12-18T08:13:53.788959Z	info	pkica	Load signing key and cert from existing secret istio-system/istio-ca-secret

Now you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways".

Procedure

  1. Find the current IstioRevision resource for your OpenShift Service Mesh 3.0 control plane by running the following command:

    $ oc get istios

    Example output:

    NAME             REVISIONS   READY   IN USE   ACTIVE REVISION   STATUS    VERSION   AGE
    ossm-3           1           1       0        ossm-3-v1-24-3    Healthy   v1.24.3   30s
  2. Copy the ACTIVE REVISION value to use as your istio.io/rev label in the next step.

    Note

    The naming format of your revisions depends on which upgrade strategy you choose for your Istio instance.
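
For the RevisionBased strategy used in this procedure, the active revision shown above follows a simple convention: the Istio resource name joined to the version, with dots replaced by dashes. The following is a local sketch of that convention, not an API call:

```shell
# Revision name convention observed in the oc get istios output:
# <Istio resource name>-<version, with dots replaced by dashes>
name=ossm-3
version=v1.24.3
revision="${name}-$(echo "$version" | tr . -)"
echo "$revision"
```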

  3. Update the injection labels on the data plane namespace by running the following command:

    $ oc label ns bookinfo istio.io/rev=ossm-3-v1-24-3 maistra.io/ignore-namespace="true" istio-injection- --overwrite=true

    The oc label command performs the following actions:

    1. Removes the istio-injection label: This label prevents the 3.0 control plane from injecting the proxy. The istio-injection label takes precedence over the istio.io/rev label. You must temporarily remove the istio-injection=enabled label because you cannot create the default IstioRevisionTag resource yet. Leaving the istio-injection=enabled label applied would prevent the 3.0 control plane from performing proxy injection.
    2. Adds the istio.io/rev=ossm-3-v1-24-3 label: This label ensures that any newly created or restarted pods in the namespace connect to the OpenShift Service Mesh 3.0 proxy.
    3. Adds the maistra.io/ignore-namespace: "true" label: This label disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace. With the label applied, OpenShift Service Mesh 2.6 stops injecting proxies in this namespace, and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

      Note

      After you apply the maistra.io/ignore-namespace label, any new pod that gets created in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.

  4. Restart the workloads by using one of the following options:

    1. To restart all the workloads at the same time so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  5. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Verify that the new control plane manages the expected workloads by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                          CLUSTER        CDS             LDS             EDS             RDS             ECDS         ISTIOD                                           VERSION
    details-v1-7f46897b-d497c.bookinfo            Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    productpage-v1-74bfbd4d65-vsxqm.bookinfo      Kubernetes     SYNCED (4s)     SYNCED (4s)     SYNCED (3s)     SYNCED (4s)     IGNORED      istiod-ossm-3-v1-24-3-797bb4d78f-xpchx           1.24.3
    ratings-v1-559b64556-c5ppg.bookinfo           Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v1-847fb7c54d-qxt5d.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v2-5c7ff5b77b-8jbhd.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v3-5c5d764c9b-rrx8w.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8

    The output shows that the productpage-v1 deployment is the only deployment that has been restarted and was injected with the 3.0 proxy. Even if there are different versions of the proxies, communication between the services still works.

  2. If the 2.6 installation has additional namespaces, migrate the next namespace now.

    Note

    Remove the maistra.io/ignore-namespace="true" label only after the 2.6 control plane has been uninstalled.

Next steps

If you are using gateways, you must migrate them before you complete the migration process.

  • See: "Migrating gateways"

If you are not using gateways, and have verified your cluster-wide migration, create a default revision tag and relabel namespaces.

Important

Before creating a default revision tag and relabeling the namespaces, you must migrate all remaining workload namespaces, including gateways.

You can create the default revision tag and relabel the namespaces after you have completed the OpenShift Service Mesh 2 to OpenShift Service Mesh 3 cluster-wide migration process by using the Istio injection label.

The bookinfo application is used as an example.

Prerequisites

  • You have completed the OpenShift Service Mesh 2 to OpenShift Service Mesh 3 cluster-wide migration process by using the Istio injection label.

Procedure

  1. Create a YAML file called rev-tag.yaml that defines the IstioRevisionTag resource:

    Example IstioRevisionTag resource:

    apiVersion: sailoperator.io/v1
    kind: IstioRevisionTag
    metadata:
      name: default
    spec:
      targetRef:
        kind: IstioRevision
        name: ossm-3-v1-24-3
  2. Apply the YAML file by running the following command:

    $ oc apply -f rev-tag.yaml
  3. Verify the status of the IstioRevisionTag resource by running the following command:

    $ oc get istiorevisiontags

    Example output:

    NAME       STATUS                    IN USE   REVISION        AGE
    default    NotReferencedByAnything   False    ossm-3-v1-24-3  18s
  4. Add the istio-injection=enabled label to the bookinfo namespace, and remove the istio.io/rev label by running the following command:

    $ oc label ns bookinfo istio-injection=enabled istio.io/rev-
    Note

    Remove the maistra.io/ignore-namespace="true" label only after the 2.6 control plane has been uninstalled.

  5. Restart the workloads by running the following command:

    $ oc rollout restart deployments -n bookinfo
    Note

    Repeat steps 4 and 5 for each namespace you are migrating.

Verification

  1. Verify that the IstioRevisionTag resource is in use by running the following command:

    $ oc get istiorevisiontags

    Example output:

    NAME      STATUS    IN USE   REVISION        AGE
    default   Healthy   True     ossm-3-v1-24-3  28s
  2. Ensure that expected workloads are managed by the new control plane by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                         CLUSTER        CDS              LDS              EDS             RDS              ECDS        ISTIOD                                     VERSION
    details-v1-79dfbd6fff-t5lzm.bookinfo         Kubernetes     SYNCED (57s)     SYNCED (57s)     SYNCED (3s)     SYNCED (57s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    details-v1-7cb48d8bb-6rjq8.bookinfo          Kubernetes     SYNCED (3s)      SYNCED (3s)      SYNCED (3s)     SYNCED (3s)      IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    productpage-v1-7d9cdf655d-cqk48.bookinfo     Kubernetes     SYNCED (10s)     SYNCED (10s)     SYNCED (3s)     SYNCED (10s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    ratings-v1-5b67b59fcb-w4whk.bookinfo         Kubernetes     SYNCED (18s)     SYNCED (18s)     SYNCED (3s)     SYNCED (18s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v1-585fc84dbb-fvm2h.bookinfo         Kubernetes     SYNCED (11s)     SYNCED (11s)     SYNCED (3s)     SYNCED (11s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v2-65cb66b45c-6ggp9.bookinfo         Kubernetes     SYNCED (57s)     SYNCED (57s)     SYNCED (3s)     SYNCED (57s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v2-698b86b848-v92xq.bookinfo         Kubernetes     SYNCED (3s)      SYNCED (3s)      SYNCED (3s)     SYNCED (3s)      IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3
    reviews-v3-6cbc49c8c8-v4jck.bookinfo         Kubernetes     SYNCED (11s)     SYNCED (11s)     SYNCED (3s)     SYNCED (11s)     IGNORED     istiod-ossm-3-v1-24-3-6595bf8695-s8ktn     1.24.3

You can perform a canary upgrade with the gradual migration of data plane namespaces for a cluster-wide deployment by using the simple migration method. In an OpenShift Service Mesh 2.6.14 installation, if the istio-injection=enabled label is already applied to the data plane namespaces, the simple migration method is the easiest way to migrate from the OpenShift Service Mesh 2 installation to the OpenShift Service Mesh 3 installation.

The simple migration method should not be used in production environments.

Note

Using the simple migration method to migrate from OpenShift Service Mesh 2 to OpenShift Service Mesh 3 might result in traffic disruption to the services running on a mesh. There are two methods to perform the cluster-wide migration without disrupting traffic. See "Migrating a cluster-wide deployment by using the istio injection label" or "Migrating a cluster-wide deployment by using the Istio revision label" for more information.

The bookinfo application is used as an example for the Istio resource. For more information about configuration differences between the OpenShift Service Mesh 2 ServiceMeshControlPlane resource and the OpenShift Service Mesh 3 Istio resource, see "ServiceMeshControlPlane resource to Istio resource fields mapping."

Prerequisites

  • You have deployed OpenShift Container Platform 4.14 or later.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have completed the premigration checklists.
  • You have the OpenShift Service Mesh 2.6.14 Operator installed.
  • You have the OpenShift Service Mesh 3 Operator installed.
  • You have created an IstioCNI resource.
  • You have installed the istioctl tool.
  • You are running a cluster-wide Service Mesh control plane resource.
  • You have installed the bookinfo application.

Procedure

  1. Identify the namespaces that contain a 2.6 control plane by running the following command:

    $ oc get smcp -A

    Example output:

    NAMESPACE      NAME                   READY   STATUS            PROFILES      VERSION   AGE
    istio-system   install-istio-system   6/6     ComponentsReady   ["default"]   2.6.6     115m
  2. Create a YAML file named ossm-3.yaml. This procedure creates the Istio resource for the 3.0 installation in the same namespace as the ServiceMeshControlPlane resource for the 2.6 installation.

    Note

    In the following example configuration, the Istio control plane has access to all namespaces on the cluster. If you want to limit the namespaces the control plane has access to, you must define discovery selectors. The selectors must match all the data plane namespaces that you plan to migrate from version 2.6.

    You can see the following example configuration for reference:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      updateStrategy:
        type: InPlace
      namespace: istio-system
      version: v1.24.3
      values:
        meshConfig:
          extensionProviders:
            - name: prometheus
              prometheus: {}
            - name: otel
              opentelemetry:
                port: 4317
                service: otel-collector.opentelemetrycollector-3.svc.cluster.local
    • metadata.name specifies the name of the Istio resource. The updateStrategy and version fields specify how the IstioRevision resource name is created. For more information, see "Identifying the revision name."
    • spec.namespace specifies the namespace in which the 3.0 and 2.6 control planes must run.
    • spec.values specifies the values for the Istio resource. If you are migrating metrics and tracing, update the extensionProviders fields according to your tracing and metrics configurations.

      Note

      If the Istio resource is named default, and the installation uses the InPlace update strategy, you can use the istio-injection=enabled label without creating an IstioRevisionTag resource. If you use a different name for the Istio resource or you use the RevisionBased update strategy, you must create the default IstioRevisionTag resource. For more information, see "Creating the default revision tag and relabeling the namespaces."

  3. Apply the YAML file by running the following command:

    $ oc apply -f ossm-3.yaml
    Note

    After you apply the YAML file, any time the workloads are restarted, both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 control planes will try to inject sidecars into all pods in namespaces that have the istio-injection=enabled label applied and into all pods that have the sidecar.istio.io/inject="true" label applied. This causes a traffic disruption. To prevent traffic disruption, restart the workloads only after the maistra.io/ignore-namespace: "true" label is added.

Verification

  1. Verify that the new istiod resource uses the existing root certificate by running the following command:

    $ oc logs deployments/istiod -n istio-system | grep 'Load signing key and cert from existing secret'

    Example output:

    2024-12-18T08:13:53.788959Z	info	pkica	Load signing key and cert from existing secret istio-system/istio-ca-secret

After migrating a cluster-wide deployment, you can migrate your workloads from the OpenShift Service Mesh 2.6 control plane to the OpenShift Service Mesh 3.0 control plane.

Note

You can migrate workloads and gateways separately, and in any order. For more information, see "Migrating gateways."

Procedure

  1. Add the maistra.io/ignore-namespace: "true" label to the data plane namespace by running the following command:

    $ oc label ns bookinfo maistra.io/ignore-namespace="true"

    The maistra.io/ignore-namespace: "true" label disables sidecar injection for OpenShift Service Mesh 2.6 proxies in the namespace. With the label applied, OpenShift Service Mesh 2.6 stops injecting proxies in this namespace, and any new proxies are injected by OpenShift Service Mesh 3.0. Without this label, the OpenShift Service Mesh 2.6 injection webhook tries to inject the pod, and the injected sidecar proxy refuses to start because it has both the OpenShift Service Mesh 2.6 and the OpenShift Service Mesh 3.0 Container Network Interface (CNI) annotations.

    Note

    After you apply the maistra.io/ignore-namespace label, any new pod that gets created or restarted in the namespace connects to the OpenShift Service Mesh 3.0 proxy. Workloads can still communicate with each other regardless of which control plane they are connected to.

  2. Restart the workloads by using one of the following options:

    1. To restart all the workloads at the same time so that the new pods are injected with the OpenShift Service Mesh 3.0 proxy, run the following command:

      $ oc rollout restart deployments -n bookinfo
    2. To restart each workload individually, run the following command for each workload:

      $ oc rollout restart deployments productpage-v1 -n bookinfo
  3. Wait for the productpage application to restart by running the following command:

    $ oc rollout status deployment productpage-v1 -n bookinfo

Verification

  1. Verify that the new control plane manages the expected workloads by running the following command:

    $ istioctl ps -n bookinfo

    Example output:

    NAME                                          CLUSTER        CDS             LDS             EDS             RDS             ECDS         ISTIOD                                           VERSION
    details-v1-7f46897b-d497c.bookinfo            Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    productpage-v1-74bfbd4d65-vsxqm.bookinfo      Kubernetes     SYNCED (4s)     SYNCED (4s)     SYNCED (3s)     SYNCED (4s)     IGNORED      istiod-797bb4d78f-xpchx                          1.24.3
    ratings-v1-559b64556-c5ppg.bookinfo           Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v1-847fb7c54d-qxt5d.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v2-5c7ff5b77b-8jbhd.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8
    reviews-v3-5c5d764c9b-rrx8w.bookinfo          Kubernetes     SYNCED          SYNCED          SYNCED          SYNCED          NOT SENT     istiod-install-istio-system-866b57d668-6lpcr     1.20.8

    The output shows that the productpage-v1 deployment is the only deployment that has been restarted and was injected with the 3.0 proxy. Even if there are different versions of the proxies, communication between the services still works.

  2. If the 2.6 installation has additional namespaces, migrate the next namespace now.

    Note

    Remove the maistra.io/ignore-namespace="true" label only after the 2.6 control plane has been uninstalled.

Next steps

If you are using gateways, you must migrate them before you complete the migration process.

  • See: "Migrating gateways"

If you are not using gateways, you can complete your migration.

  • See: "Completing the Migration"

Chapter 5. Migrating gateways

5.1. Migrating gateways

Migrate Red Hat OpenShift Service Mesh gateways from version 2 to version 3 by using either a canary or an in-place migration strategy to ensure continuous traffic management.

5.1.1. Gateway canary migration

Use the gateway canary migration method when you want a gradual rollout. It runs multiple gateway versions in parallel, giving you maximum control over your gateway rollout.

Note

The label for the gateway namespace differs between multitenant and cluster-wide meshes. To understand which labels to use in your specific migration case and the maistra.io/ignore-namespace: "true" label requirement, see the "Multitenant migration guide" or the "Cluster-wide migration guide" section.

Procedure

  1. Run the following command to label the gateway namespace to ensure that gateway injection is enabled from the new mesh and add the maistra.io/ignore-namespace: "true" label:

    $ oc label namespace <gateway_namespace> istio.io/rev=<istio_revision_name> maistra.io/ignore-namespace="true"
    1. Remove the istio-injection=enabled label, if needed.
  2. Deploy a canary gateway by using the following example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: istio-ingressgateway-canary
      namespace: istio-ingress
    spec:
       selector:
         matchLabels:
           istio: ingressgateway
       template:
         metadata:
           annotations:
             inject.istio.io/templates: gateway
           labels:
             istio: ingressgateway
             istio.io/rev: canary
         spec:
           containers:
           - name: istio-proxy
             image: auto
    • metadata.namespace specifies the namespace for your Deployment resource, which must be the same namespace as your existing gateway.
    • spec.selector.matchLabels.istio specifies the label selector for your Deployment resource that must match your existing gateway service selector.
    • spec.template.metadata.labels.istio.io/rev sets your OpenShift Service Mesh 3.0 control plane revision as the value of the istio.io/rev label.
  3. Ensure that the new gateway deployment is running with the new revision and is handling requests:

    1. Check that pods are running and in the Ready status.
    2. Check that the gateway is running the new revision by running the following command:

      $ istioctl ps -n istio-ingress
    3. Test a sample route through the gateway.
  4. Gradually shift traffic between deployments:

    1. Increase the replicas for the new gateway by running the following command:

      $ oc scale -n istio-ingress deployment/<new_gateway_deployment> --replicas <number_of_replicas>
    2. Decrease the replicas for the old gateway by running the following command:

      $ oc scale -n istio-ingress deployment/<old_gateway_deployment> --replicas <number_of_replicas>
    3. Repeat this process, incrementally adjusting the replica counts until the new gateway handles all traffic to the gateway Service.
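
The canary approach works because both gateway deployments carry the same istio: ingressgateway pod label, so the existing gateway Service keeps selecting pods from both, splitting traffic in proportion to the replica counts. A minimal sketch of such a Service follows; the name and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  # Matches the pod label on both the old and the canary deployments,
  # so traffic is distributed across both gateway versions.
  selector:
    istio: ingressgateway
  ports:
  - name: https
    port: 443
    targetPort: 8443
```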

5.1.2. Gateway in-place migration

If you do not need fine-grained control over your gateway migration, you can perform an in-place gateway migration. This method applies both to namespaces with dedicated gateways and to environments that use a centralized gateway shared across multiple namespaces.

Note

The label for the gateway namespace differs between multitenant and cluster-wide meshes. To understand which labels to use in your specific migration case and the maistra.io/ignore-namespace: "true" label requirement, see the "Multitenant migration guide" or the "Cluster-wide migration guide" section.

Procedure

  1. Run the following command to label the gateway namespace to ensure that gateway injection is enabled from the new mesh and add the maistra.io/ignore-namespace: "true" label:

    $ oc label namespace <gateway_namespace> istio.io/rev=<istio_revision_name> maistra.io/ignore-namespace="true"
    1. Remove the istio-injection=enabled label, if needed.
  2. Restart the gateway deployment by running the following command:

    $ oc -n <gateway_namespace> rollout restart deployment <gateway_name>

Verification

  1. Check that the gateway is running the new revision by running the following command:

    $ istioctl ps -n istio-ingress
  2. Test application-specific routes.

Chapter 6. Completing the migration

6.1. Completing the Migration

After you have migrated your workloads and gateways, you can uninstall OpenShift Service Mesh 2.6.14 and optionally verify the migration status of the data plane namespaces by using the Kiali Mesh page.

If you did not re-create your network policies before you migrated your deployment and workloads, you can re-create your network policies after migrating.

Prerequisites

  • You have migrated your deployment.
  • You have migrated your workloads.

Procedure

  1. Re-create necessary network policies in the new OpenShift Service Mesh 3 control plane namespace.
  2. Re-create network policies for each namespace that was part of the OpenShift Service Mesh 2 mesh.
  3. Update labels:

    1. Update the corresponding network policy selectors to match the new labels.

      Note

      Use a label scoped specifically to your mesh that you can reuse for discovery selectors.
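
For example, a re-created policy might admit traffic only from namespaces that carry such a mesh-scoped label. In the following sketch, the mesh: my-mesh label and the bookinfo namespace are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-mesh
  namespace: bookinfo
spec:
  podSelector: {}    # applies to all pods in the namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          mesh: my-mesh    # hypothetical mesh-scoped label; reuse it for discoverySelectors
```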

Prerequisites

  • You have migrated a multitenant deployment with the cert-manager and istio-csr tools.

Procedure

  1. Verify that your new injection label is present in all workload namespaces.
  2. Update the app.controller.configmapNamespaceSelector field by running the following command:

    $ helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \
       --install \
       --reuse-values \
       --namespace istio-system \
       --wait \
       --set "app.controller.configmapNamespaceSelector=tenant=tenant-a"

6.1.3. Remove the Service Mesh 2.6 control plane

After you have migrated all your workloads and gateways, you can remove the OpenShift Service Mesh 2.x control plane.

Prerequisites

  • You have completed migrating your workloads.
  • You have completed migrating your gateways.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.

Note

Depending on how you created the ServiceMeshMember and ServiceMeshMemberRoll resources, those resources might be removed automatically when you remove the ServiceMeshControlPlane resource.

Procedure

  1. Find all Service Mesh 2.6 resources by running the following command:

    $ oc get smcp,smm,smmr -A
  2. Remove all ServiceMeshControlPlane resources by running the following command:

    $ oc delete smcp --all -A
  3. Remove all ServiceMeshMemberRoll resources by running the following command:

    $ oc delete smmr --all -A
  4. Remove all ServiceMeshMember resources by running the following command:

    $ oc delete smm --all -A
  5. Verify that all resources were removed by running the following command:

    $ oc get smcp,smm,smmr -A

    Example output:

    No resources found

6.1.4. Remove the Service Mesh 2.6 Operator and CRDs

After you remove the Red Hat OpenShift Service Mesh 2 ServiceMeshControlPlane resource and all other OpenShift Service Mesh 2 resources, you can remove the OpenShift Service Mesh 2.6 Operator and custom resource definitions (CRDs).

Prerequisites

  • You have completed migrating your workloads.
  • You have completed migrating your gateways.
  • You have removed the OpenShift Service Mesh 2 ServiceMeshControlPlane resource.
  • You have removed all other OpenShift Service Mesh 2 resources.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.

Procedure

  1. Check that there are no Red Hat OpenShift Service Mesh 2.6 resources left by running the following command:

    $ oc get smcp,smm,smmr -A

    Example output:

    No resources found
  2. Remove the Operator:

    1. Find the name of the installed ClusterServiceVersion (CSV) by running the following command:

      $ csv=$(oc get subscription servicemeshoperator -n openshift-operators -o yaml | grep currentCSV | cut -f 2 -d ':')
    2. Delete the subscription by running the following command:

      $ oc delete subscription servicemeshoperator -n openshift-operators
    3. Delete the ClusterServiceVersion (CSV) by running the following command:

      $ oc delete clusterserviceversion $csv -n openshift-operators
  3. Remove maistra CRDs by running the following command:

    $ oc get crds -o name | grep ".*\.maistra\.io" | xargs -r -n 1 oc delete
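
You can preview the text processing in these commands offline. The following sketch simulates the CSV-name extraction and the maistra.io CRD filter on sample output; the CSV version and CRD names shown are assumptions for illustration:

```shell
#!/bin/sh
# Simulated fragment of the Subscription YAML; the CSV name is an assumption.
sub_yaml='status:
  currentCSV: servicemeshoperator.v2.6.14'

# Same extraction as in the procedure. Note that `cut -f 2 -d ':'` keeps the
# space after the colon; the unquoted $csv in `oc delete ... $csv` discards it.
csv=$(printf '%s\n' "$sub_yaml" | grep currentCSV | cut -f 2 -d ':')
echo "CSV to delete:$csv"

# Simulated output of `oc get crds -o name`: the filter matches only
# maistra.io CRDs, so Service Mesh 3 CRDs are left untouched.
printf '%s\n' \
  'customresourcedefinition.apiextensions.k8s.io/servicemeshcontrolplanes.maistra.io' \
  'customresourcedefinition.apiextensions.k8s.io/istios.sailoperator.io' \
  | grep ".*\.maistra\.io"
```

Because the grep pattern anchors on the .maistra.io suffix, only the Service Mesh 2 CRDs are passed to oc delete.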

6.1.5. Remove the Maistra labels

After you have removed all OpenShift Service Mesh 2 resources, the OpenShift Service Mesh 2 Operator, and the OpenShift Service Mesh 2 custom resource definitions (CRDs), you can remove the namespace labels created during the migration.

Prerequisites

  • You have completed migrating your workloads.
  • You have completed migrating your gateways.
  • You have removed the OpenShift Service Mesh 2 ServiceMeshControlPlane resource.
  • You have removed all other OpenShift Service Mesh 2 resources.
  • You have removed the OpenShift Service Mesh 2 Operator.
  • You have removed the OpenShift Service Mesh 2 CRDs.
  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.

Procedure

  1. Verify that all OpenShift Service Mesh 2.6 resources have been removed by running the following command:

    $ oc get smcp,smm,smmr -A

    Example output:

    No resources found
  2. Find namespaces with the maistra.io/ignore-namespace="true" label by running the following command:

    $ oc get namespace -l maistra.io/ignore-namespace="true"

    Example output:

    NAME       STATUS   AGE
    bookinfo   Active   127m
  3. Remove the label by running the following command:

    $ oc label namespace bookinfo maistra.io/ignore-namespace-

    Example output:

    namespace/bookinfo unlabeled

Chapter 7. Reference

7.1. Migrating references

Many configuration options in the OpenShift Service Mesh 2 ServiceMeshControlPlane resource have changed location in the OpenShift Service Mesh 3 Istio resource. The following tables offer guidance for creating a new Istio resource in OpenShift Service Mesh 3 based on your existing OpenShift Service Mesh 2 ServiceMeshControlPlane resource.

Many of the spec fields in the OpenShift Service Mesh 2 ServiceMeshControlPlane can be configured in the OpenShift Service Mesh 3 Istio resource.

The following tables offer guidance for configuring your Istio resource in OpenShift Service Mesh 3.
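
As an example of reading these mappings, the SMCP spec.security.trust.domain field maps to spec.values.meshConfig.trustDomain. The following sketch shows both forms; the resource names and the trust domain value are illustrative, and the apiVersion values assume the maistra.io/v2 and sailoperator.io/v1 APIs:

```yaml
# OpenShift Service Mesh 2: ServiceMeshControlPlane
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  security:
    trust:
      domain: example.com
---
# OpenShift Service Mesh 3: Istio resource with the equivalent setting
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  values:
    meshConfig:
      trustDomain: example.com
```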

7.1.1.1. Cluster configurations
SMCP 2.6 configuration → Istio 3.0 configuration

spec.cluster.multiCluster.enabled → spec.values.global.multiCluster.enabled
spec.cluster.multiCluster.meshNetworks → spec.values.global.meshNetworks
spec.cluster.multiCluster.meshNetworks.endpoints → spec.values.global.meshNetworks.endpoints
spec.cluster.multiCluster.meshNetworks.endpoints.fromCIDR → spec.values.global.meshNetworks.endpoints.fromCidr
spec.cluster.multiCluster.meshNetworks.endpoints.fromRegistry → spec.values.global.meshNetworks.endpoints.fromRegistry
spec.cluster.multiCluster.meshNetworks.gateways → spec.values.global.meshNetworks.gateways
spec.cluster.multiCluster.meshNetworks.gateways.address → spec.values.global.meshNetworks.gateways.address
spec.cluster.multiCluster.meshNetworks.gateways.port → spec.values.global.meshNetworks.gateways.port
spec.cluster.multiCluster.meshNetworks.gateways.registryServiceName → spec.values.global.meshNetworks.gateways.registryServiceName
spec.cluster.name → spec.values.global.multiCluster.clusterName
spec.cluster.network → spec.values.global.network

7.1.1.2. General configurations
SMCP 2.6 configuration → Istio 3.0 configuration

spec.general.logging.componentLevels → spec.values.global.logging.levels
spec.general.logging.logAsJSON → spec.values.global.logAsJson
spec.general.validationMessages → spec.values.global.istiod.enableAnalysis

7.1.1.3. MeshConfig configurations
SMCP 2.6 configuration → Istio 3.0 configuration

spec.values.meshConfig.discoverySelectors → spec.values.meshConfig.discoverySelectors
spec.values.meshConfig.extensionProviders → spec.values.meshConfig.extensionProviders

7.1.1.4. Mode configurations

The mode configurations in the OpenShift Service Mesh 2 ServiceMeshControlPlane resource were:

  • Multitenant
  • Cluster-wide
  • Federation

In OpenShift Service Mesh 3, the mode is not configured by using a single field in the Istio resource.

By default, the OpenShift Service Mesh 3 control plane has access to all namespaces, which is equivalent to cluster-wide mode in OpenShift Service Mesh 2. To achieve a configuration similar to multitenant mode in OpenShift Service Mesh 2, use the discoverySelectors field. For more information, see "Deploying multiple service meshes on a single cluster".
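
The following is a minimal sketch of an Istio resource that limits discovery to namespaces labeled tenant: tenant-a; the resource name, namespace, and label are assumptions and must match your environment, and the apiVersion assumes the sailoperator.io/v1 API:

```yaml
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: tenant-a
spec:
  namespace: istio-system
  values:
    meshConfig:
      # Restricts the control plane to namespaces with this label,
      # approximating multitenant mode from OpenShift Service Mesh 2.
      discoverySelectors:
      - matchLabels:
          tenant: tenant-a
```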

7.1.1.5. Profile configurations

The profile configuration options for OpenShift Service Mesh 3 are:

  • ambient
  • default
  • demo
  • empty
  • openshift-ambient
  • openshift
  • preview
  • stable
7.1.1.6. Proxy configurations
Access logging configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.accessLogging.envoyService.address → spec.values.meshConfig.defaultConfig.envoyAccessLogService.address
spec.proxy.accessLogging.envoyService.enabled → spec.values.meshConfig.enableEnvoyAccessLogService
spec.proxy.accessLogging.envoyService.tcpKeepalive → spec.values.meshConfig.defaultConfig.envoyAccessLogService.tcpKeepalive
spec.proxy.accessLogging.envoyService.tcpKeepalive.interval → spec.values.meshConfig.defaultConfig.envoyAccessLogService.tcpKeepalive.interval
spec.proxy.accessLogging.envoyService.tcpKeepalive.probes → spec.values.meshConfig.defaultConfig.envoyAccessLogService.tcpKeepalive.probes
spec.proxy.accessLogging.envoyService.tcpKeepalive.time → spec.values.meshConfig.defaultConfig.envoyAccessLogService.tcpKeepalive.time
spec.proxy.accessLogging.envoyService.tlsSettings → spec.values.meshConfig.defaultConfig.envoyAccessLogService.tlsSettings
spec.proxy.accessLogging.file.encoding → spec.values.meshConfig.accessLogEncoding
spec.proxy.accessLogging.file.format → spec.values.meshConfig.accessLogFormat
spec.proxy.accessLogging.file.name → spec.values.meshConfig.accessLogFile
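
For example, the three file-based access logging fields from the access logging table map to a meshConfig block like the following sketch; the values shown are illustrative:

```yaml
spec:
  values:
    meshConfig:
      accessLogFile: /dev/stdout   # was spec.proxy.accessLogging.file.name
      accessLogEncoding: JSON      # was spec.proxy.accessLogging.file.encoding
      accessLogFormat: |           # was spec.proxy.accessLogging.file.format
        [%START_TIME%] %REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%
```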

Basic proxy configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.adminPort → spec.values.meshConfig.defaultConfig.proxyAdminPort
spec.proxy.concurrency → spec.values.meshConfig.defaultConfig.concurrency

Envoy metrics service fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.envoyMetricsService.address → spec.values.meshConfig.defaultConfig.envoyMetricsService.address
spec.proxy.envoyMetricsService.enabled → spec.values.meshConfig.enableEnvoyAccessLogService
spec.proxy.envoyMetricsService.tcpKeepalive → spec.values.meshConfig.defaultConfig.envoyMetricsService.tcpKeepalive
spec.proxy.envoyMetricsService.tlsSettings → spec.values.meshConfig.defaultConfig.envoyMetricsService.tlsSettings

Injection configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.injection.alwaysInjectSelector → spec.values.sidecarInjectorWebhook.alwaysInjectSelector
spec.proxy.injection.neverInjectSelector → spec.values.sidecarInjectorWebhook.neverInjectSelector
spec.proxy.injection.injectedAnnotations → spec.values.sidecarInjectorWebhook.injectedAnnotations
spec.proxy.injection.autoInject → spec.values.global.proxy.autoInject

Proxy logging configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.logging.componentLevels → spec.values.global.proxy.componentLogLevel
spec.proxy.logging.level → spec.values.global.logging.level

Proxy networking configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.networking.clusterDomain → spec.values.global.proxy.clusterDomain
spec.proxy.networking.connectionTimeout → spec.values.meshConfig.connectTimeout
spec.proxy.networking.dns.refreshRate → spec.values.meshConfig.dnsRefreshRate
spec.proxy.networking.dns.searchSuffixes → spec.values.global.podDNSSearchNamespaces
spec.proxy.networking.initialization.initContainer.runtime.imageName → spec.values.global.proxy_init.image
spec.proxy.networking.initialization.initContainer.runtime.imagePullPolicy → spec.values.global.imagePullPolicy
spec.proxy.networking.initialization.initContainer.runtime.imagePullSecrets → spec.values.global.imagePullSecrets
spec.proxy.networking.initialization.initContainer.runtime.imageRegistry → spec.values.global.hub
spec.proxy.networking.initialization.initContainer.runtime.imageTag → spec.values.global.tag
spec.proxy.networking.initialization.initContainer.runtime.resources → spec.values.global.proxy_init.resources
spec.proxy.networking.maxConnectionAge → spec.values.pilot.keepaliveMaxServerConnectionAge
spec.proxy.networking.protocol.timeout → spec.values.meshConfig.protocolDetectionTimeout

Traffic control configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.networking.trafficControl.inbound.excludedPorts → spec.values.global.proxy.excludeInboundPorts
spec.proxy.networking.trafficControl.inbound.includedPorts → spec.values.global.proxy.includeInboundPorts
spec.proxy.networking.trafficControl.inbound.interceptionMode → spec.values.meshConfig.defaultConfig.interceptionMode
spec.proxy.networking.trafficControl.outbound.excludedIPRanges → spec.values.global.proxy.excludeIPRanges
spec.proxy.networking.trafficControl.outbound.excludedPorts → spec.values.global.proxy.excludeOutboundPorts
spec.proxy.networking.trafficControl.outbound.includedIPRanges → spec.values.global.proxy.includeIPRanges
spec.proxy.networking.trafficControl.outbound.policy → spec.values.meshConfig.outboundTrafficPolicy.mode

Proxy runtime configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.proxy.runtime.container.env → spec.values.meshConfig.defaultConfig.proxyMetadata
spec.proxy.runtime.container.imageName → spec.values.global.proxy.image
spec.proxy.runtime.container.imagePullPolicy → spec.values.global.imagePullPolicy
spec.proxy.runtime.container.imagePullSecrets → spec.values.global.imagePullSecrets
spec.proxy.runtime.container.imageRegistry → spec.values.global.hub
spec.proxy.runtime.container.imageTag → spec.values.global.tag
spec.proxy.runtime.container.resources → spec.values.global.proxy.resources
spec.proxy.runtime.readiness.failureThreshold → spec.values.global.proxy.readinessFailureThreshold
spec.proxy.runtime.readiness.initialDelaySeconds → spec.values.global.proxy.readinessInitialDelaySeconds
spec.proxy.runtime.readiness.periodSeconds → spec.values.global.proxy.readinessPeriodSeconds
spec.proxy.runtime.readiness.rewriteApplicationProbes → spec.values.sidecarInjectorWebhook.rewriteAppHTTPProbe
spec.proxy.runtime.readiness.statusPort → spec.values.global.proxy.statusPort

7.1.1.7. Runtime configurations
Container configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.runtime.components.container.env → spec.values.pilot.env
spec.runtime.components.container.imageName → spec.values.pilot.image
spec.runtime.components.container.imagePullPolicy → spec.values.global.imagePullPolicy
spec.runtime.components.container.imagePullSecrets → spec.values.global.imagePullSecrets
spec.runtime.components.container.imageRegistry → spec.values.global.hub
spec.runtime.components.container.imageTag → spec.values.pilot.tag
spec.runtime.components.container.resources → spec.values.pilot.resources

Deployment configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.runtime.components.deployment.autoScaling.enabled → spec.values.pilot.autoscaleEnabled
spec.runtime.components.deployment.autoScaling.maxReplicas → spec.values.pilot.autoscaleMax
spec.runtime.components.deployment.autoScaling.minReplicas → spec.values.pilot.autoscaleMin
spec.runtime.components.deployment.autoScaling.targetCPUUtilizationPercentage → spec.values.pilot.cpu.targetAverageUtilization
spec.runtime.components.deployment.replicas → spec.values.pilot.replicaCount
spec.runtime.components.deployment.strategy.rollingUpdate.maxSurge → spec.values.pilot.rollingMaxSurge
spec.runtime.components.deployment.strategy.rollingUpdate.maxUnavailable → spec.values.pilot.rollingMaxUnavailable

Pod configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.runtime.components.pod.affinity → spec.values.pilot.affinity
spec.runtime.components.pod.affinity.nodeAffinity → spec.values.pilot.affinity.nodeAffinity
spec.runtime.components.pod.affinity.podAffinity → spec.values.pilot.affinity.podAffinity
spec.runtime.components.pod.affinity.podAntiAffinity → spec.values.pilot.affinity.podAntiAffinity
spec.runtime.components.pod.metadata.annotations → spec.values.pilot.podAnnotations
spec.runtime.components.pod.metadata.labels → spec.values.pilot.podLabels
spec.runtime.components.pod.nodeSelector → spec.values.pilot.nodeSelector
spec.runtime.components.pod.tolerations → spec.values.pilot.tolerations

Defaults configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.runtime.defaults.container.imagePullPolicy → spec.values.global.imagePullPolicy
spec.runtime.defaults.container.imagePullSecrets → spec.values.global.imagePullSecrets
spec.runtime.defaults.container.imageRegistry → spec.values.global.hub
spec.runtime.defaults.container.imageTag → spec.values.global.tag
spec.runtime.defaults.container.resources → spec.values.global.defaultResources
spec.runtime.defaults.deployment.podDisruption.enabled → spec.values.global.defaultPodDisruptionBudget.enabled
spec.runtime.defaults.pod.nodeSelector → spec.values.global.defaultNodeSelector
spec.runtime.defaults.pod.tolerations → spec.values.global.defaultTolerations

7.1.1.8. Security configurations
Certificate Authority (CA) fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.security.certificateAuthority.cert-manager → spec.values.meshConfig.ca AND spec.values.global.pilotCertProvider
spec.security.certificateAuthority.cert-manager.address → spec.values.meshConfig.ca.address
spec.security.certificateAuthority.custom.address → spec.values.meshConfig.ca.address

Istiod CA fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.security.certificateAuthority.istiod.type → spec.values.global.pilotCertProvider

Control plane security fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.security.controlPlane.certProvider → spec.values.global.pilotCertProvider
spec.security.controlPlane.mtls → spec.values.meshConfig.enableAutoMtls
spec.security.controlPlane.tls.cipherSuites → spec.values.meshConfig.tlsDefaults.cipherSuites
spec.security.controlPlane.tls.ecdhCurves → spec.values.meshConfig.tlsDefaults.ecdhCurves
spec.security.controlPlane.tls.minProtocolVersion → spec.values.meshConfig.tlsDefaults.minProtocolVersion

Data plane security fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.security.dataPlane.automtls → spec.values.meshConfig.enableAutoMtls
spec.security.dataPlane.mtls → spec.values.meshConfig.meshMTLS

Identity configuration fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.security.identity.thirdParty.audience → spec.values.global.sds.token.aud

Other security fields
SMCP 2.6 configuration → Istio 3.0 configuration

spec.security.jwksResolverCA → spec.values.pilot.jwksResolverExtraRootCA
spec.security.trust.domain → spec.values.meshConfig.trustDomain
spec.security.trust.additionalDomains → spec.values.meshConfig.trustDomainAliases

7.1.1.9. Tracing configurations
SMCP 2.6 configuration → Istio 3.0 configuration

spec.tracing.sampling → spec.values.pilot.traceSampling

The following tables list OpenShift Service Mesh 2 ServiceMeshControlPlane configuration fields that are not supported in Red Hat OpenShift Service Mesh 3. This does not necessarily mean that the functionality has been removed. In some cases, such as with add-ons, you must install and configure the application separately.

7.1.2.1. Unsupported add-on configurations

Add-ons, such as Red Hat OpenShift distributed tracing platform, Kiali Operator provided by Red Hat, and others, are managed and configured separately in OpenShift Service Mesh 3. For more information, see "Observability and Service Mesh".

  • spec.addons.3scale
  • spec.addons.grafana
  • spec.addons.jaeger
  • spec.addons.kiali
  • spec.addons.prometheus
  • spec.addons.stackdriver

7.1.2.2. Unsupported cluster configurations
  • spec.cluster.meshExpansion.ilbGateway
  • spec.cluster.multiCluster.meshNetworks.gateways.service

7.1.2.3. Unsupported Gateways configurations

Gateways are managed separately in OpenShift Service Mesh 3.

7.1.2.4. Unsupported policy configurations
  • spec.policy.type
  • spec.policy.mixer
  • spec.policy.remote

7.1.2.5. Unsupported Proxy configurations
Unsupported Proxy networking configuration fields
  • spec.proxy.networking.initialization.type
  • spec.proxy.networking.initialization.initContainer.runtime.env
  • spec.proxy.networking.protocol.autoDetect
  • spec.proxy.networking.protocol.inbound
  • spec.proxy.networking.protocol.outbound

7.1.2.6. Unsupported runtime configurations
Unsupported deployment configuration fields
  • spec.runtime.components.deployment.strategy.type

Unsupported defaults configuration fields
  • spec.runtime.defaults.deployment.podDisruption.maxUnavailable
  • spec.runtime.defaults.deployment.podDisruption.minAvailable

7.1.2.7. Unsupported security configurations
Unsupported certificate Authority (CA) fields
  • spec.security.certificateAuthority.cert-manager.pilotSecretName
  • spec.security.certificateAuthority.cert-manager.rootCAConfigMapName

Unsupported Istiod CA fields
  • spec.security.certificateAuthority.istiod.privateKey.rootCADir
  • spec.security.certificateAuthority.istiod.selfSigned.checkPeriod
  • spec.security.certificateAuthority.istiod.selfSigned.enableJitter
  • spec.security.certificateAuthority.istiod.selfSigned.gracePeriod
  • spec.security.certificateAuthority.istiod.selfSigned.ttl
  • spec.security.certificateAuthority.istiod.workloadCertTTLDefault
  • spec.security.certificateAuthority.istiod.workloadCertTTLMax

Unsupported control plane security fields
  • spec.security.controlPlane.tls.maxProtocolVersion

Unsupported identity configuration fields
  • spec.security.identity.thirdParty.issuer
  • spec.security.identity.type

7.1.2.8. Unsupported Telemetry configurations
  • spec.telemetry.type
  • spec.telemetry.mixer
  • spec.telemetry.remote

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.