Chapter 3. Updating OpenShift Service Mesh in ambient mode
Update Red Hat OpenShift Service Mesh in ambient mode by transitioning the control plane and waypoint proxies to new revisions while maintaining Layer 7 (L7) functionality and resource compatibility.
3.1. About the update strategies in ambient mode
In ambient mode, components update directly through InPlace updates. Unlike sidecar mode, ambient mode allows you to move application pods to an upgraded ztunnel proxy without restarting or rescheduling the pods.
- Update sequence
To update in ambient mode, use the following sequence:
- Istio control plane: Update the patch version in the Istio resource.
- Istio CNI: Update to the same patch version as the control plane.
- ZTunnel: Update to the same patch version as the control plane.
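The sequence above amounts to keeping the spec.version field aligned across all three resources. The following fragments are an illustrative sketch: the 1.28.5 version string follows the examples used elsewhere in this chapter, and the resource names and namespaces are assumed defaults, not values from this document.

```yaml
# Illustrative sketch: update spec.version on each resource in order,
# keeping all three at the same patch version. Names and namespaces
# are assumptions based on common defaults.
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: 1.28.5
  namespace: istio-system
---
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: 1.28.5
  namespace: istio-cni
---
apiVersion: sailoperator.io/v1
kind: ZTunnel
metadata:
  name: default
spec:
  version: 1.28.5
  namespace: ztunnel
```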
3.2. Updating waypoint proxies with InPlace strategy in ambient mode
During an InPlace update in ambient mode, waypoint proxies are updated to the latest control plane version without restarting application workloads, because they are deployed and managed as separate Gateway API resources that scale and upgrade independently.
Prerequisites
- You have updated the Istio control plane with the InPlace update strategy.
Procedure
Confirm that the waypoint proxy was updated to the new proxy version by running the following command:

$ istioctl proxy-status | grep waypoint

You should see output similar to the following example:

waypoint-5d9c8b7f9-abc12.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-6cf8d4f9cb-wm7x6.istio-system 1.28.5

This command queries the Istio control plane and verifies that the waypoint proxy connects and synchronizes. The output lists the waypoint proxy name and namespace, the synchronization status for each configuration type, the connected istiod pod, and the Istio version of the running proxy. Columns showing SYNCED confirm that the waypoint proxy is successfully receiving configuration from the control plane.
3.3. Updating waypoint proxies with RevisionBased strategy in ambient mode
In ambient mode, you can update waypoint proxies by using the RevisionBased update strategy. During the migration period, the proxies remain compatible with many control plane versions and automatically connect to the active control plane revision.
Keep waypoint proxies within one minor version of the control plane (same version or n–1). This recommendation aligns with the support policy of Istio, which states that data plane components must not run ahead of the control plane version. Apply the same versioning guidance to Istio Container Network Interface (CNI) and Ztunnel components. For more details, see the "Istio Supported Releases" documentation.
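To make the n–1 window concrete, the following hypothetical shell helper checks whether a data plane version is within the supported skew of the control plane. The function name and version strings are illustrative; this is not part of any Istio tooling.

```shell
# Hypothetical helper: succeeds (exit 0) when the data plane is at the same
# minor version as the control plane, or exactly one minor version behind.
within_skew() {
  cp_major=${1%%.*}; cp_rest=${1#*.}; cp_minor=${cp_rest%%.*}
  dp_major=${2%%.*}; dp_rest=${2#*.}; dp_minor=${dp_rest%%.*}
  # Major versions must match exactly.
  [ "$cp_major" -eq "$dp_major" ] || return 1
  # The data plane must not be ahead, and at most one minor behind.
  skew=$((cp_minor - dp_minor))
  [ "$skew" -ge 0 ] && [ "$skew" -le 1 ]
}

within_skew 1.28.5 1.27.3 && echo "within supported skew"
```

For example, a 1.27.x waypoint proxy is supported under a 1.28.x control plane, but a 1.26.x proxy is not.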
Prerequisites
- You have updated the Istio control plane with the RevisionBased update strategy.
Procedure
After the new Istio control plane revision is ready, verify that the waypoint proxy pods are running by entering the following command:

$ oc get pods -n bookinfo -l gateway.networking.k8s.io/gateway-name=waypoint

You should see output similar to the following example:

NAME                       READY   STATUS    RESTARTS   AGE
waypoint-5d9c8b7f9-abc12   1/1     Running   0          5m

Confirm that the waypoint proxy is updated to the latest version by running the following command:

$ istioctl proxy-status | grep waypoint

You should see output similar to the following example:

waypoint-5d9c8b7f9-abc12.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-1-27-3-7b9f8c5d6-xyz78.istio-system 1.28.5

This command queries the Istio control plane and verifies that the waypoint proxy is connected to the new revision. The output lists the revision-specific istiod pod (for example, istiod-1-27-3) and shows that the waypoint proxy is running the updated version, 1.28.5. The revision-specific name in the ISTIOD column confirms that the waypoint proxy has successfully migrated to the new control plane revision.
3.4. Verifying Layer 7 (L7) features with traffic routing
After updating the waypoint proxies, verify that Layer 7 (L7) features function as expected. If you use traffic routing rules such as HTTPRoute, confirm that they continue to enforce the intended behavior.
Prerequisites
- You have updated the waypoint proxies.
- You have deployed the bookinfo application.
- You have created an HTTPRoute resource.
Procedure
Optional: Create the HTTPRoute resource if it does not already exist by running the following command:

$ oc apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
  namespace: bookinfo
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10
EOF

Verify that the HTTPRoute rules distribute traffic correctly by running the following command:

$ for i in {1..10}; do
    kubectl exec "$(kubectl get pod \
      -l app=productpage \
      -n bookinfo \
      -o jsonpath='{.items[0].metadata.name}')" \
      -c istio-proxy \
      -n bookinfo -- \
      curl -s http://reviews:9080/reviews/0 | grep -o "reviews-v[0-9]"
  done

The output should reflect the traffic distribution defined in your HTTPRoute. For example, with a 90/10 weight split between reviews-v1 and reviews-v2, you should observe about nine requests routed to reviews-v1 and one request routed to reviews-v2. The exact ratio can vary slightly due to load-balancing behavior, but should closely match the configured weights over multiple test runs.
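Rather than counting the ten output lines by hand, you can tally them with standard shell tools. The results variable below is a simulated capture of the loop's output matching the expected 90/10 split; in practice, pipe the verification loop directly into sort | uniq -c.

```shell
# Simulated capture of the verification loop's output (9x v1, 1x v2).
results='reviews-v1
reviews-v1
reviews-v1
reviews-v1
reviews-v1
reviews-v1
reviews-v1
reviews-v1
reviews-v1
reviews-v2'

# Count how many requests each backend served.
printf '%s\n' "$results" | sort | uniq -c
```

With this sample, the tally shows 9 hits for reviews-v1 and 1 for reviews-v2, matching the configured weights.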
3.5. Verifying Layer 7 (L7) features with authorization policies
After updating the waypoint proxies, verify that the Layer 7 (L7) authorization policies are enforced correctly. In this example, the AuthorizationPolicy resource named productpage-waypoint allows only requests from the default/sa/curl service account to send GET requests to the productpage service.
Prerequisites
- You have updated the waypoint proxies.
- You have created an application pod that uses the service account referenced in the AuthorizationPolicy resource.
- You have created an AuthorizationPolicy resource.
Procedure
Optional: Create the AuthorizationPolicy resource if it does not already exist by running the following command:

$ oc apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-waypoint
  namespace: bookinfo
spec:
  targetRefs:
  - kind: Service
    group: ""
    name: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/curl
    to:
    - operation:
        methods: ["GET"]
EOF

Verify that services not included in the allow list, such as the ratings service, are denied access by running the following command:

$ oc exec "$(kubectl get pod \
  -l app=ratings \
  -n bookinfo \
  -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings \
  -n bookinfo -- \
  curl -sS productpage:9080/productpage

The request is denied because the ratings service is not included in the authorization policy's allow list. Only the curl pod, which uses the curl service account in the default namespace, can access the productpage service.

Verify that the curl service can access the productpage service with GET requests by running the following command:

$ oc exec "$(kubectl get pod \
  -l app=curl \
  -n default \
  -o jsonpath='{.items[0].metadata.name}')" \
  -c curl \
  -n default -- \
  curl -sS http://productpage.bookinfo:9080/productpage | grep -o "<title>.*</title>"

The request succeeds because the curl service meets the authorization policy rules. It uses the cluster.local/ns/default/sa/curl principal and performs a GET operation, both allowed by the policy. The successful response containing the page title confirms that the waypoint proxy correctly enforces L7 authorization rules and allows valid traffic.
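The policy's decision logic can be sketched as a toy shell function. This mirrors the ALLOW rule above for illustration only; it is not how the waypoint proxy actually evaluates requests.

```shell
# Toy model of the productpage-waypoint ALLOW rule: a request is permitted
# only when it comes from the curl service account principal and uses GET.
is_allowed() {
  principal=$1
  method=$2
  [ "$principal" = "cluster.local/ns/default/sa/curl" ] && [ "$method" = "GET" ]
}

is_allowed "cluster.local/ns/default/sa/curl" GET && echo "allowed"
is_allowed "cluster.local/ns/bookinfo/sa/ratings" GET || echo "denied"
```

The first call succeeds because both the principal and the method match the rule; the second fails because the ratings principal is not in the allow list.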
3.6. Updating cross-namespace waypoint
If you are using cross-namespace waypoints, verify that the istio.io/use-waypoint-namespace and istio.io/use-waypoint labels are correctly applied to the relevant namespaces before updating.
Verify that the namespace has the waypoint labels by running the following command:

$ oc get ns bookinfo --show-labels | grep waypoint

If the namespace does not have the labels, or if a label is wrong, reapply the labels:

Apply the istio.io/use-waypoint-namespace label by running the following command:

$ oc label ns bookinfo istio.io/use-waypoint-namespace=foo --overwrite

Apply the istio.io/use-waypoint label by running the following command:

$ oc label ns bookinfo istio.io/use-waypoint=waypoint-foo --overwrite
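Applied together, the two label commands leave the namespace looking like the following fragment. The foo and waypoint-foo values are the example values from the commands above; substitute the namespace and name of your own waypoint.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    istio.io/use-waypoint: waypoint-foo
    istio.io/use-waypoint-namespace: foo
```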
3.7. About Ztunnel update lifecycle
Understand the Ztunnel rolling update process, and how the process affects connection persistence during a proxy restart.
Ztunnel operates at Layer 4 (L4) of the Open Systems Interconnection (OSI) model and proxies TCP traffic. Ztunnel cannot transfer connection states to another process. Upgrading the Ztunnel DaemonSet affects all traffic on at least one node at a time. By default, the Ztunnel DaemonSet uses a RollingUpdate strategy. During a restart, each node goes through the following phases:
- Startup: A new Ztunnel pod starts on the node while the old pod continues running.
- Readiness: The new Ztunnel establishes listeners in each pod on the node and marks itself as ready. For a brief period, both instances run simultaneously, and new connections may be handled by either one.
- Draining: Kubernetes sends a SIGTERM to the old Ztunnel, which begins the draining process. The old instance closes its listeners so that only the new Ztunnel accepts new connections. At all times, at least one Ztunnel remains available to handle incoming connections.
- Connection processing: The old Ztunnel continues processing existing connections until the terminationGracePeriodSeconds expires.
- Termination: Once the terminationGracePeriodSeconds expires, the old Ztunnel forcefully terminates any remaining connections.
3.7.1. Configuring the Ztunnel termination grace period
Configure a longer termination grace period in the ZTunnel custom resource (CR) to ensure that active application connections close gracefully during a rolling update.
Procedure
Update the terminationGracePeriodSeconds value in the ZTunnel CR to a higher value, similar to the following example:

apiVersion: sailoperator.io/v1
kind: ZTunnel
metadata:
  name: default
spec:
  version: 1.28.5
  namespace: ztunnel
  values:
    ztunnel:
      terminationGracePeriodSeconds: 300
3.7.2. Updating Ztunnel using node draining
Drain nodes to force long-lived TCP connections to reconnect through a new Ztunnel instance, without risking traffic loss because the node is empty during the proxy swap.
Procedure
1. Configure the OnDelete update strategy in the ZTunnel custom resource (CR) so that the update to the new version requires manual pod deletion, similar to the following example:

   apiVersion: sailoperator.io/v1
   kind: ZTunnel
   metadata:
     name: default
   spec:
     version: 1.28.5
     namespace: ztunnel
     values:
       ztunnel:
         updateStrategy:
           type: OnDelete

2. Update the version field in the ZTunnel CR to the target version.
3. Drain a node to force all applications to move to other nodes, allowing their long-lived connections to close gracefully based on their terminationGracePeriodSeconds.
4. Delete the old Ztunnel pod on the empty node and wait for the new pod to start.
5. Mark the node as schedulable. Applications that return to the node automatically use the new Ztunnel.
6. Repeat steps 3 through 5 for all remaining nodes in the cluster.