About


Red Hat OpenShift Service Mesh 3.0.0tp1

About OpenShift Service Mesh

Red Hat OpenShift Documentation Team

Abstract

This document provides an overview of OpenShift Service Mesh features.

Chapter 1. About OpenShift Service Mesh

Red Hat OpenShift Service Mesh, which is based on the open source Istio project, addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application.

1.1. Introduction to Red Hat OpenShift Service Mesh

Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the application code. The mesh introduces an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication.

Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services.

1.2. Core features

Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services:

  • Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
  • Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness.
  • Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
  • Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.

Chapter 2. Understanding OpenShift Service Mesh

Red Hat OpenShift Service Mesh is composed of two parts:

  • Red Hat OpenShift Service Mesh resources
  • Kiali provided by Red Hat

Kiali provided by Red Hat is composed of three parts:

  • Kiali Operator provided by Red Hat
  • Kiali Server
  • OpenShift Service Mesh Console (OSSMC) plugin

OpenShift Service Mesh integrates with the following:

  • Observability components such as:

    • OpenShift Monitoring
    • Red Hat OpenShift distributed tracing platform
    • Red Hat OpenShift distributed tracing data collection Operator
  • cert-manager
  • Argo Rollouts

2.1. Red Hat OpenShift Service Mesh resources

Red Hat OpenShift Service Mesh Operator manages the lifecycle of your Istio control planes. Instead of creating a new configuration schema, OpenShift Service Mesh Operator APIs are built around Istio’s Helm chart APIs.

Note
  • Though Red Hat OpenShift Service Mesh APIs are built around Istio’s Helm chart APIs, Helm charts are not supported.
  • All installation and configuration options that are exposed by Istio’s Helm charts are available through the Red Hat OpenShift Service Mesh Custom Resource Definition (CRD) values fields.

2.1.1. Istio resource

The Istio resource is used to manage your Istio control planes. It is a cluster-wide resource, because the Istio control plane operates in and requires access to the entire cluster.

To select a namespace to run the control plane pods in, you can use the spec.namespace field.

Note

The spec.namespace field is immutable: in order to move a control plane to another namespace, you must remove the Istio resource and recreate it with a different spec.namespace.

You can access all Istio custom resource definition (CRD) options through spec.values fields:

Example Istio resource CRD

apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v1.22.3
  namespace: istio-system
  updateStrategy:
    type: InPlace
  values:
    pilot:
      resources:
        requests:
          cpu: 100m
          memory: 1024Mi

You can run the following command to see all the customization options:

$ oc explain istios.spec.values

To support canary updates of the control plane, OpenShift Service Mesh includes support for multiple Istio versions. You can select a version by setting spec.version to the version you would like to install, prefixed with a v. You can update to a new version just by changing this field.
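
For example, a minimal sketch of changing the control plane version on an existing Istio resource named default (substitute the version that you want to install):

$ oc patch istio default --type merge --patch '{"spec":{"version":"v<NEW_VERSION>"}}'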

OpenShift Service Mesh supports two different update strategies for your control planes:

InPlace
The OpenShift Service Mesh Operator immediately replaces your existing control plane resources with the ones for the new version.
RevisionBased
Uses Istio’s canary update mechanism by creating a second control plane; you migrate your workloads to the new control plane to complete the update, as shown in the example that follows.
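
For example, a minimal sketch of an Istio resource that uses the RevisionBased update strategy (the version and namespace are copied from the earlier example; adjust them for your environment):

Example Istio resource using the RevisionBased update strategy

apiVersion: sailoperator.io/v1alpha1
kind: Istio
metadata:
  name: default
spec:
  version: v1.22.3
  namespace: istio-system
  updateStrategy:
    type: RevisionBased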

After creating an Istio resource, OpenShift Service Mesh generates a revision name for the resource based on the updateStrategy, and creates a corresponding IstioRevision.

2.1.2. IstioRevision resource

The IstioRevision resource is a cluster-wide resource and the lowest-level API that OpenShift Service Mesh provides. It is usually not created by the user, but by the Operator itself. Its schema closely resembles that of the Istio resource, but instead of representing the state of a control plane you want to be present in your cluster, it represents a revision of that control plane.

A revision of the control plane is an instance of Istio with a specific version and revision name. You can use the revision name to add workloads or entire namespaces to the mesh, for example, by using the istio.io/rev=<REVISION_NAME> label.
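
For example, a minimal sketch of adding a namespace to the mesh by its revision label (the bookinfo namespace is hypothetical; substitute the revision name that the Operator generated for your control plane):

$ oc label namespace bookinfo istio.io/rev=<REVISION_NAME>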

You can think of the relationship between the Istio and IstioRevision resources as similar to the relationship between Kubernetes' replica set and pod: a replica set can be created by users and results in the automatic creation of pods, which will trigger the instantiation of your containers.

Similarly, users create an Istio resource which instructs the OpenShift Service Mesh Operator to create a matching IstioRevision resource, which then in turn triggers the creation of the Istio control plane. To do that, the OpenShift Service Mesh Operator will copy all of your relevant configuration from the Istio resource to the IstioRevision resource.

2.1.3. IstioCNI resource

The lifecycle of Istio’s Container Network Interface (CNI) plugin is managed separately when using OpenShift Service Mesh Operator. To install Istio’s CNI plugin, you create an IstioCNI resource.

The IstioCNI resource is a cluster-wide resource as it installs a daemon set that operates on all nodes of your cluster. You can select a version by setting the spec.version field, as you can see in the example that follows. To update the CNI plugin, change the version field to the version you want to install. Like the Istio resource, it also has a values field that exposes all of the options provided in the istio-cni chart:

Example IstioCNI resource

apiVersion: sailoperator.io/v1alpha1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v1.22.3
  namespace: istio-cni
  values:
    cni:
      cniConfDir: /etc/cni/net.d
      excludeNamespaces:
      - kube-system

2.2. Red Hat OpenShift Service Mesh and Kiali

Kiali provided by Red Hat is based on the open source Kiali project (see the Kiali project) and is composed of three parts:

  • Kiali Operator provided by Red Hat
  • Kiali Server
  • OpenShift Service Mesh Console (OSSMC) plugin

Working together, they form the user interface (UI) for OpenShift Service Mesh. Kiali provides visibility into your service mesh by showing you the microservices and how they are connected.

Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh.

Kiali provides an interactive graph view of your mesh namespaces in near real time, giving visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, such as applications, services, and workloads, and can display the interactions with contextual information and charts on the selected graph node or edge.

Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and so on. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection into the Kiali console.
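
The Kiali Operator provided by Red Hat installs and manages the Kiali Server based on a Kiali resource. The following is a hedged sketch of a minimal Kiali resource; the kiali.io/v1alpha1 API and the fields shown are assumptions based on the upstream Kiali Operator and might differ in your installed version:

Example Kiali resource

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  auth:
    strategy: openshift
  deployment:
    view_only_mode: false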

2.2.1. Kiali architecture

Kiali Server (back end)
This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali Server does not need storage. When deploying the Server to a cluster, configurations are set in config maps and secrets.
Kiali console (front end)
The Kiali console is a web application. The console queries the Kiali Server for data to present it to the user.

In addition, Kiali depends on external services and components provided by the container application platform and Istio.

Red Hat OpenShift Service Mesh (Istio)
Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the Red Hat OpenShift Service Mesh cluster API.
Prometheus
A dedicated Prometheus instance is optional. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali’s features will not work without Prometheus.
OpenShift Container Platform API
Kiali uses the OpenShift Container Platform API to fetch and resolve service mesh configurations. For example, Kiali queries the cluster API to retrieve definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on.
Tracing
Tracing is optional, but when you install Red Hat OpenShift distributed tracing platform and Kiali is configured, the Kiali console includes a tab to display distributed tracing data, as well as tracing integration on the graph itself. Note that tracing data is not available if you disable Istio’s distributed tracing feature. Also note that users must have access to the namespaces for which they want to see tracing data.
Grafana
Grafana is optional. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that Grafana is not supported as part of OpenShift Container Platform or OpenShift Service Mesh.

2.2.2. Kiali features

The Kiali console is integrated with OpenShift Service Mesh and provides the following capabilities:

Health
Quickly identify issues with applications, services, or workloads.
Topology
Visualize how your applications, services, or workloads communicate through the Kiali graph.
Metrics
Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js, Quarkus, Spring Boot, Thorntail, and Vert.x. You can also create your own custom dashboards.
Tracing
Integration with Red Hat OpenShift distributed tracing platform (Tempo) lets you follow the path of a request through various microservices that make up an application.
Validations
Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on).
Configuration
Optional ability to create, update, and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console.

2.2.3. OpenShift Service Mesh Console (OSSMC) plugin

The OpenShift Service Mesh Console (OSSMC) plugin is an OpenShift Container Platform plugin for Red Hat OpenShift Service Mesh. It integrates much of the Kiali interface into the OpenShift Container Platform web console, adding both a Service Mesh main menu option with dedicated screens and Service Mesh tabs throughout the console.

The OSSMC plugin is installed by using the Kiali Operator provided by Red Hat and requires the Kiali Server component. The OSSMC plugin has its own custom resource (CR) configuration.
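
The following is a hedged sketch of that CR; the OSSMConsole kind and the fields shown are assumptions based on the upstream Kiali Operator and might differ in your installed version:

Example OSSMC plugin custom resource

apiVersion: kiali.io/v1alpha1
kind: OSSMConsole
metadata:
  name: ossmconsole
  namespace: istio-system
spec:
  kiali:
    serviceName: kiali
    serviceNamespace: istio-system
    servicePort: 20001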

2.3. Red Hat OpenShift Service Mesh and Observability

Red Hat OpenShift Service Mesh integrates with Red Hat Observability components, including:

OpenShift Monitoring

Monitoring stack components are deployed by default in every OpenShift Container Platform installation and are managed by the Cluster Monitoring Operator (CMO). These components include Prometheus, Alertmanager, Thanos Querier, and so on. The CMO also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters.

When you have added your application to the mesh, you can monitor the in-cluster health and performance of your applications running on OpenShift Container Platform with metrics and customized alerts for CPU and memory usage, network connectivity, and other resource usage.
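
For example, a hedged sketch of a PodMonitor that lets user workload monitoring scrape Istio proxy metrics from an application namespace (the bookinfo namespace is hypothetical, and the sketch assumes that monitoring for user-defined projects is enabled; the /stats/prometheus path and the http-envoy-prom port are the Istio sidecar’s Prometheus endpoint):

Example PodMonitor for Istio proxy metrics

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: istio-proxies-monitor
  namespace: bookinfo
spec:
  selector:
    matchExpressions:
    - key: istio-prometheus-ignore
      operator: DoesNotExist
  podMetricsEndpoints:
  - path: /stats/prometheus
    port: http-envoy-prom
    interval: 30s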

Red Hat OpenShift distributed tracing platform

Red Hat OpenShift Service Mesh uses Red Hat OpenShift distributed tracing platform to allow developers to view call flows in a microservice application.

Integrating Red Hat OpenShift distributed tracing platform with Red Hat OpenShift Service Mesh consists of two parts: Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection.

Red Hat OpenShift distributed tracing platform (Tempo)

Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Grafana Tempo project.

For more information about distributed tracing platform (Tempo), its features, installation, and configuration, see: Red Hat OpenShift distributed tracing platform (Tempo).

Red Hat OpenShift distributed tracing data collection

Is based on the open source OpenTelemetry project, which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat OpenShift distributed tracing data collection product provides support for deploying and managing the OpenTelemetry Collector and for simplifying workload instrumentation. See the OpenTelemetry project.

The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs. See OpenTelemetry Collector.
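
For example, a hedged sketch of an OpenTelemetryCollector resource that receives OTLP trace data from the mesh and forwards it to a Tempo instance (the Tempo endpoint and the namespaces are hypothetical; the fields follow the opentelemetry.io/v1beta1 API and might differ in your installed version):

Example OpenTelemetryCollector resource

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: istio-system
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      otlp:
        endpoint: tempo-sample-distributor.tempo.svc.cluster.local:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]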

For more information about distributed tracing data collection, its features, installation, and configuration, see: Red Hat OpenShift distributed tracing data collection.

2.4. Red Hat OpenShift Service Mesh and cert-manager

The cert-manager tool is a solution for X.509 certificate management on Kubernetes. It delivers a unified API to integrate applications with private or public key infrastructure (PKI), such as Vault, Google Cloud Certificate Authority Service, Let’s Encrypt, and other providers.

The cert-manager tool ensures that the certificates are valid and up-to-date by attempting to renew the certificates at a configured time before they expire.

For Istio users, cert-manager also provides integration with istio-csr, which is a certificate authority (CA) server that handles certificate signing requests (CSR) from Istio proxies. The server then delegates signing to cert-manager, which forwards CSRs to the configured CA server.
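
For example, a hedged sketch of a cert-manager Issuer that istio-csr could delegate signing to (the Issuer name and the istio-ca-key-pair secret are hypothetical; the secret must contain your CA certificate and key):

Example cert-manager Issuer

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: istio-ca
  namespace: istio-system
spec:
  ca:
    secretName: istio-ca-key-pair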

2.5. Red Hat OpenShift Service Mesh and Argo Rollouts

Red Hat OpenShift Service Mesh, when used with Argo Rollouts, provides more advanced routing capabilities by using Istio, and does not require the configuration of a sidecar container.

You can use OpenShift Service Mesh to split traffic between two application versions.

  • Canary version: A new version of an application to which you gradually route traffic.
  • Stable version: The current version of an application. After the canary version is stable and has all the user traffic directed to it, it becomes the new stable version. The previous stable version is discarded.

The Istio support within Argo Rollouts uses the Gateway and VirtualService resources to handle traffic routing.

  • Gateway: You can use a Gateway to manage inbound and outbound traffic for your mesh. The gateway is the entry point of OpenShift Service Mesh and handles traffic requests sent to an application.
  • VirtualService: Defines traffic routing rules and the percentage of traffic that goes to the underlying services, such as the stable and canary services, as shown in the example after this list.
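
The following is a hedged sketch of a Gateway-attached VirtualService that splits traffic between a stable service and a canary service (the host, gateway, and service names are hypothetical; Argo Rollouts adjusts the weights as a rollout progresses):

Example VirtualService for traffic splitting

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rollouts-demo
  namespace: bookinfo
spec:
  hosts:
  - rollouts-demo.example.com
  gateways:
  - rollouts-demo-gateway
  http:
  - route:
    - destination:
        host: rollouts-demo-stable
      weight: 90
    - destination:
        host: rollouts-demo-canary
      weight: 10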


Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.