
Chapter 3. Updating OpenShift Service Mesh in ambient mode


Update Red Hat OpenShift Service Mesh in ambient mode by transitioning the control plane and waypoint proxies to new revisions while maintaining Layer 7 (L7) functionality and resource compatibility.

3.1. About the update strategies in ambient mode

In ambient mode, components update directly through InPlace updates. Unlike sidecar mode, ambient mode allows you to move application pods to an upgraded ztunnel proxy without restarting or rescheduling the pods.

Update sequence

To update in ambient mode, use the following sequence, as shown in the sketch after this list:

  1. Istio control plane: Update the patch version in the Istio resource.
  2. Istio CNI: Update to the same patch version as the control plane.
  3. ZTunnel: Update to the same patch version as the control plane.
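
For example, a minimal sketch of this sequence, assuming that each Sail Operator resource is named default (as in the examples later in this chapter) and that the target patch version is 1.28.5:

    $ oc patch istio default --type merge -p '{"spec":{"version":"1.28.5"}}'
    $ oc patch istiocni default --type merge -p '{"spec":{"version":"1.28.5"}}'
    $ oc patch ztunnel default --type merge -p '{"spec":{"version":"1.28.5"}}'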

3.2. Updating waypoint proxies with InPlace strategy in ambient mode

During an InPlace update in ambient mode, waypoint proxies are updated to the latest control plane version without restarting application workloads, because waypoints are deployed and managed as separate Gateway API resources that scale and upgrade independently.
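
For reference, a waypoint is a Gateway API resource similar to the following sketch; the waypoint name and the bookinfo namespace are illustrative:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: waypoint
      namespace: bookinfo
    spec:
      gatewayClassName: istio-waypoint
      listeners:
      - name: mesh
        port: 15008
        protocol: HBONE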

Prerequisites

  • You have updated the Istio control plane with InPlace update strategy.

Procedure

  • Confirm that the waypoint proxy was updated to the new proxy version by running the following command:

    $ istioctl proxy-status | grep waypoint

    You should see an output similar to the following example:

    waypoint-5d9c8b7f9-abc12.bookinfo     SYNCED     SYNCED     SYNCED     SYNCED     istiod-6cf8d4f9cb-wm7x6.istio-system     1.28.5

    The command queries the Istio control plane and verifies that the waypoint proxy connects and synchronizes. The output lists the waypoint proxy name and namespace, the synchronization status for each configuration type, the connected istiod pod, and the Istio version of the running proxy. Columns showing SYNCED confirm that the waypoint proxy is successfully receiving configuration from the control plane.

3.3. Updating waypoint proxies with RevisionBased strategy in ambient mode

In ambient mode, you can update waypoint proxies by using the RevisionBased update strategy. During the migration period, the proxies remain compatible with both the old and the new control plane revisions and automatically connect to the active control plane revision.

Note

Keep waypoint proxies within one minor version of the control plane (same version or n–1). This recommendation aligns with the support policy of Istio, which states that data plane components must not run ahead of the control plane version. Apply the same versioning guidance to Istio Container Network Interface (CNI) and Ztunnel components. For more details, see the "Istio Supported Releases" documentation.
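
A minimal sketch of an Istio resource that uses the RevisionBased strategy, assuming the resource is named default and the control plane runs in the istio-system namespace:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: 1.28.5
      namespace: istio-system
      updateStrategy:
        type: RevisionBased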

Prerequisites

  • You have updated the Istio control plane with RevisionBased update strategy.

Procedure

  1. After the new Istio control plane revision is ready, verify that the waypoint proxy pods are running by entering the following command:

    $ oc get pods -n bookinfo -l gateway.networking.k8s.io/gateway-name=waypoint

    You should see an output similar to the following example:

    NAME                       READY   STATUS    RESTARTS   AGE
    waypoint-5d9c8b7f9-abc12   1/1     Running   0          5m
  2. Confirm that the waypoint proxy is updated to the latest version by running the following command:

    $ istioctl proxy-status | grep waypoint

    You should see an output similar to the following example:

    waypoint-5d9c8b7f9-abc12.bookinfo     SYNCED     SYNCED     SYNCED     SYNCED     istiod-1-28-5-7b9f8c5d6-xyz78.istio-system     1.28.5

    The command queries the Istio control plane and verifies that the waypoint proxy is connected to the new revision. The output lists the revision-specific istiod pod (for example, istiod-1-28-5) and shows that the waypoint proxy is running the updated version, 1.28.5. The revision-specific name in the ISTIOD column confirms that the waypoint proxy has successfully migrated to the new control plane revision.

3.4. Verifying Layer 7 (L7) features with traffic routing

After updating the waypoint proxies, verify that Layer 7 (L7) features function as expected. If you use traffic routing rules such as HTTPRoute, confirm that they continue to enforce the intended behavior.

Prerequisites

  • You have updated the waypoint proxies.
  • You have deployed the bookinfo application.
  • You have created an HTTPRoute resource.

Procedure

  1. Optional: Create the HTTPRoute resource if it does not already exist by running the following command:

    $ oc apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: reviews
      namespace: bookinfo
    spec:
      parentRefs:
      - group: ""
        kind: Service
        name: reviews
        port: 9080
      rules:
      - backendRefs:
        - name: reviews-v1
          port: 9080
          weight: 90
        - name: reviews-v2
          port: 9080
          weight: 10
    EOF
  2. Verify that the HTTPRoute rules distribute traffic correctly by running the following command:

    $ for i in {1..10}; do
        oc exec "$(
          oc get pod \
            -l app=productpage \
            -n bookinfo \
            -o jsonpath='{.items[0].metadata.name}'
        )" \
        -c productpage \
        -n bookinfo -- \
        curl -s http://reviews:9080/reviews/0 | grep -o "reviews-v[0-9]"
      done

    The output should reflect the traffic distribution defined in your HTTPRoute. For example, with a 90/10 weight split between reviews-v1 and reviews-v2, you should observe about nine requests routed to reviews-v1 and one request routed to reviews-v2. The exact ratio can vary slightly due to load-balancing behavior, but should closely match the configured weights over multiple test runs.
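
    To tally the split over a larger sample, you can run a variant of the loop that counts the responses. This sketch assumes the productpage-v1 deployment name from the bookinfo sample:

    $ for i in {1..100}; do
        oc exec deploy/productpage-v1 -n bookinfo -c productpage -- \
          curl -s http://reviews:9080/reviews/0
      done | grep -o "reviews-v[0-9]" | sort | uniq -c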

3.5. Verifying Layer 7 (L7) features with authorization policies

After updating the waypoint proxies, verify that the Layer 7 (L7) authorization policies are enforced correctly. In this example, the AuthorizationPolicy resource named productpage-waypoint allows only requests from the default/sa/curl service account to send GET requests to the productpage service.

Prerequisites

  • You have updated the waypoint proxies.
  • You have created an application pod that uses the service account specified in the AuthorizationPolicy resource.
  • You have created an AuthorizationPolicy resource.

Procedure

  1. Optional: Create the AuthorizationPolicy resource if it does not already exist by running the following command:

    $ oc apply -f - <<EOF
    apiVersion: security.istio.io/v1
    kind: AuthorizationPolicy
    metadata:
      name: productpage-waypoint
      namespace: bookinfo
    spec:
      targetRefs:
      - kind: Service
        group: ""
        name: productpage
      action: ALLOW
      rules:
      - from:
        - source:
            principals:
            - cluster.local/ns/default/sa/curl
        to:
        - operation:
            methods: ["GET"]
    EOF
  2. Verify that services not included in the allow list, such as the ratings service, are denied access by running the following command:

    $ oc exec "$(
      oc get pod \
        -l app=ratings \
        -n bookinfo \
        -o jsonpath='{.items[0].metadata.name}'
    )" \
    -c ratings \
    -n bookinfo -- \
    curl -sS productpage:9080/productpage
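
    You should see an output similar to the following example; RBAC: access denied is the typical Envoy response for a request that an authorization policy rejects:

    RBAC: access denied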

    The request is denied because the ratings service is not included in the authorization policy's allow list. Only the curl pod that uses the default/sa/curl service account can access the productpage service.

  3. Verify that the curl service can access the productpage service with GET requests by running the following command:

    $ oc exec "$(
      oc get pod \
        -l app=curl \
        -n default \
        -o jsonpath='{.items[0].metadata.name}'
    )" \
    -c curl \
    -n default -- \
    curl -sS http://productpage.bookinfo:9080/productpage | grep -o "<title>.*</title>"
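
    You should see an output similar to the following example; the title is from the bookinfo sample application:

    <title>Simple Bookstore App</title>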

    The request succeeds because the curl pod satisfies the authorization policy rules. It uses the cluster.local/ns/default/sa/curl principal and performs a GET operation, both of which the policy allows. The successful response containing the page title confirms that the waypoint proxy correctly enforces L7 authorization rules and allows valid traffic.

3.6. Updating cross-namespace waypoint

If you are using cross-namespace waypoints, verify that the istio.io/use-waypoint-namespace and istio.io/use-waypoint labels are correctly applied to the relevant namespaces before updating.

Procedure

  1. Verify that the namespace has the waypoint labels by running the following command:

    $ oc get ns bookinfo --show-labels | grep waypoint
  2. If the labels are missing or incorrect, re-apply them:

    1. Apply the istio.io/use-waypoint-namespace label by running the following command:

      $ oc label ns bookinfo istio.io/use-waypoint-namespace=foo --overwrite
    2. Apply the istio.io/use-waypoint label by running the following command:

      $ oc label ns bookinfo istio.io/use-waypoint=waypoint-foo --overwrite
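
After re-applying the labels, you can confirm both values in one step; this sketch assumes the bookinfo namespace and the example label values shown above:

    $ oc get ns bookinfo -o jsonpath='{.metadata.labels.istio\.io/use-waypoint}{"\n"}{.metadata.labels.istio\.io/use-waypoint-namespace}{"\n"}'

You should see an output similar to the following example:

    waypoint-foo
    foo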

3.7. About Ztunnel update lifecycle

Understand the Ztunnel rolling update process and how it affects connection persistence during a proxy restart.

Ztunnel operates at Layer 4 (L4) of the Open Systems Interconnection (OSI) model and proxies TCP traffic. Ztunnel cannot transfer connection state to another process, so upgrading the Ztunnel DaemonSet affects all traffic on at least one node at a time. By default, the Ztunnel DaemonSet uses a RollingUpdate strategy. During a restart, each node goes through the following phases; a command for watching the rollout follows the list:

  • Startup: A new Ztunnel pod starts on the node while the old pod continues running.
  • Readiness: The new Ztunnel establishes listeners in each pod on the node and marks itself as ready. For a brief period, both instances run simultaneously, and new connections may be handled by either one.
  • Draining: Kubernetes sends a SIGTERM to the old Ztunnel, which begins the draining process. The old instance closes its listeners so that only the new Ztunnel accepts new connections. At all times, at least one Ztunnel remains available to handle incoming connections.
  • Connection processing: The old Ztunnel continues processing existing connections until the terminationGracePeriodSeconds expires.
  • Termination: Once the terminationGracePeriodSeconds expires, the old Ztunnel forcefully terminates any remaining connections.
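
To watch the rolling update progress node by node, you can follow the DaemonSet rollout. This sketch assumes the ztunnel DaemonSet name and namespace used elsewhere in this chapter:

    $ oc rollout status ds/ztunnel -n ztunnel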

3.7.1. Configuring the Ztunnel termination grace period

Configure a higher termination grace period in the ZTunnel custom resource (CR) so that active application connections close gracefully during a rolling update.

Procedure

  • Update the terminationGracePeriodSeconds value in the ZTunnel CR to a higher value, similar to the following example:

    apiVersion: sailoperator.io/v1
    kind: ZTunnel
    metadata:
      name: default
    spec:
      version: 1.28.5
      namespace: ztunnel
      values:
        ztunnel:
          terminationGracePeriodSeconds: 300
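
    After the change rolls out, you can confirm that the value reached the DaemonSet pod template; this sketch assumes the ztunnel DaemonSet name and namespace:

    $ oc get ds ztunnel -n ztunnel -o jsonpath='{.spec.template.spec.terminationGracePeriodSeconds}'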

3.7.2. Updating Ztunnel using node draining

Drain nodes to force long-lived TCP connections to reconnect through a new Ztunnel instance, without risking traffic loss because the node is empty during the proxy swap.

Procedure

  1. Configure the OnDelete update strategy in the ZTunnel custom resource (CR) so that pods update to the new version only after you delete them manually, similar to the following example:

    apiVersion: sailoperator.io/v1
    kind: ZTunnel
    metadata:
      name: default
    spec:
      version: 1.28.5
      namespace: ztunnel
      values:
        ztunnel:
          updateStrategy:
            type: OnDelete
  2. Update the version field in the ZTunnel CR to the target version.
  3. Drain a node to force all applications to move to other nodes, allowing their long-lived connections to close gracefully based on their terminationGracePeriodSeconds.
  4. Delete the old Ztunnel pod on the empty node and wait for the new pod to start.
  5. Mark the node as schedulable again. Applications that return to the node automatically use the new Ztunnel.
  6. Repeat steps 3 through 5 for all remaining nodes in the cluster. A sketch of one iteration follows this procedure.
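
A sketch of one drain cycle for a single node; the <node-name> placeholder and the app=ztunnel pod label are assumptions based on the default Ztunnel deployment:

    $ oc adm drain <node-name> --ignore-daemonsets --delete-emptydir-data
    $ oc delete pod -n ztunnel -l app=ztunnel --field-selector spec.nodeName=<node-name>
    $ oc adm uncordon <node-name>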