
Chapter 4. Kiali Operator provided by Red Hat


4.1. Using Kiali Operator provided by Red Hat

Once you have added your application to the mesh, you can use Kiali Operator provided by Red Hat to view the data flow through your application.

4.1.1. About Kiali

You can use Kiali Operator provided by Red Hat to view configurations, monitor traffic, and analyze traces in a single console. Kiali Operator provided by Red Hat derives its core functionality from the open source Kiali project.

Kiali Operator provided by Red Hat is the management console for Red Hat OpenShift Service Mesh. It provides dashboards, observability, and robust configuration and validation capabilities. It shows the structure of your service mesh by inferring traffic topology and displays the health of your mesh. Kiali provides detailed metrics, powerful validation, access to Grafana, and strong integration with the Red Hat OpenShift distributed tracing platform (Tempo).

4.1.2. About Kiali and Istio ambient mode

When running in Istio ambient mode, Kiali introduces new behaviors and visualizations to support the ambient data plane. The following information describes key aspects of Kiali in this context:

Access requirements
Kiali requires access to the ztunnel namespace to detect whether ambient mode is enabled. Without this access, Kiali does not display ambient-related features.
Visualizations and features
Kiali displays ambient badges for namespaces and workloads you enrolled in the ambient mesh, enabling quick identification.
Traffic graph adjustments

Ambient mode introduces new telemetry sources. Kiali collects and displays metrics from both ztunnel and waypoint proxies to give complete visibility into mesh traffic. You can focus on ambient-specific traffic sources by using new filters and selectors in Kiali. Kiali provides a display option for visualizing waypoint nodes in the traffic graph.

The traffic graph changes based on the ambient enrollment:

  • Without waypoint proxies, the traffic graph displays only Layer 4 (L4) traffic.
  • With waypoint proxies, the graph includes Layer 7 (L7) traffic and might also include L4 traffic.
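
Enrollment itself, which the badges and graph changes above reflect, happens at the namespace level in ambient mode. The following is a minimal sketch of an enrolled namespace, assuming the upstream Istio istio.io/dataplane-mode label and a hypothetical bookinfo namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo                       # hypothetical namespace
  labels:
    istio.io/dataplane-mode: ambient   # ztunnel captures all workloads in this namespace
```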
Workload proxy logs
Kiali aggregates and filters logs from both ztunnel and waypoint proxies. This unified view simplifies troubleshooting by showing only the relevant log entries for each workload.
Distributed tracing
Tracing data is available only after you deploy waypoint proxies, because waypoint services generate the traces. Kiali automatically correlates workload traces with their associated waypoint proxies.
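
Because traces appear only once a waypoint exists, it helps to see what deploying one involves. In upstream Istio, a waypoint is a Kubernetes Gateway API resource that uses the istio-waypoint GatewayClass; the following sketch shows the kind of resource that istioctl waypoint apply generates (the name and namespace are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint                       # hypothetical name
  namespace: bookinfo                  # hypothetical ambient-enrolled namespace
  labels:
    istio.io/waypoint-for: service     # this waypoint handles service-addressed traffic
spec:
  gatewayClassName: istio-waypoint     # Istio provisions the waypoint proxy for this Gateway
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE                    # the secure tunneling protocol used by ambient mode
```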
Dedicated pages for ambient components

Analyze ambient components separately from workloads and services on the following dedicated pages:

  • Waypoint pages display detailed information about captured workloads.
  • Ztunnel pages focus on telemetry, metrics, and diagnostics, based on data from istioctl utilities.

Kiali integration with ambient mode ensures full observability for workloads running in the ambient mesh and simplifies operational monitoring and troubleshooting tasks.

4.1.3. Installing the Kiali Operator provided by Red Hat

The following steps show how to install the Kiali Operator provided by Red Hat.

Warning

Do not install the Community version of the Operator. The Community version is not supported.

Prerequisites

  • You have access to the Red Hat OpenShift Service Mesh web console.

Procedure

  1. Log in to the Red Hat OpenShift Service Mesh web console.
  2. Navigate to Operators → OperatorHub.
  3. Type Kiali into the filter box to find the Kiali Operator provided by Red Hat.
  4. Click Kiali Operator provided by Red Hat to display information about the Operator.
  5. Click Install.
  6. On the Operator Installation page, select the stable Update Channel.
  7. Select All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.
  8. Select the Automatic Approval Strategy.

    Note

    The Manual approval strategy requires a user with appropriate credentials to approve the Operator installation and subscription process.

  9. Click Install.
  10. The Installed Operators page displays the Kiali Operator’s installation progress.

4.1.4. Configuring OpenShift Monitoring with Kiali

The following steps show how to integrate the Kiali Operator provided by Red Hat with user-workload monitoring.

Prerequisites

  • You have installed Red Hat OpenShift Service Mesh.
  • You have enabled user-workload monitoring. See "Enabling monitoring for user-defined projects".
  • You have configured OpenShift Monitoring with Service Mesh. See "Configuring OpenShift Monitoring with Service Mesh".
  • You have Kiali Operator provided by Red Hat 2.4 installed.

Procedure

  1. Create a ClusterRoleBinding resource for Kiali similar to the following example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kiali-monitoring-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-monitoring-view
    subjects:
    - kind: ServiceAccount
      name: kiali-service-account
      namespace: istio-system
  2. Create a Kiali resource and point it to your Istio instance similar to the following example:

    apiVersion: kiali.io/v1alpha1
    kind: Kiali
    metadata:
      name: kiali-user-workload-monitoring
      namespace: istio-system
    spec:
      external_services:
        prometheus:
          auth:
            type: bearer
            use_kiali_token: true
          thanos_proxy:
            enabled: true
          url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
  3. When the Kiali resource is ready, get the Kiali URL from the Route by running the following command:

    $ echo "https://$(oc get routes -n istio-system kiali -o jsonpath='{.spec.host}')"
  4. Follow the URL to open Kiali in your web browser.
  5. Navigate to the Traffic Graph tab to check the traffic in the Kiali UI.

4.1.5. Integrating Red Hat OpenShift distributed tracing platform with Kiali Operator provided by Red Hat

You can integrate Red Hat OpenShift distributed tracing platform with Kiali Operator provided by Red Hat, which enables the following features:

  • Display trace overlays and details on the graph.
  • Display scatterplot charts and in-depth trace and span information on detail pages.
  • Integrate span information in logs and metric charts.
  • Provide links to the external tracing UI.

4.1.5.1. Configuring Red Hat OpenShift distributed tracing platform with Kiali Operator provided by Red Hat

Analyze service communication and troubleshoot request flows within the mesh by viewing distributed traces directly in the Kiali console.

Prerequisites

  • You have installed Red Hat OpenShift Service Mesh.
  • You have configured distributed tracing platform with Red Hat OpenShift Service Mesh.

Procedure

  1. Update the Kiali resource spec configuration for tracing:

    Example Kiali resource spec configuration for tracing:

    spec:
      external_services:
        tracing:
          enabled: true
          provider: tempo
          use_grpc: false
          internal_url: https://tempo-sample-gateway.tempo.svc.cluster.local:8080/api/traces/v1/default/tempo
          external_url: https://tempo-sample-gateway-tempo.apps-crc.testing/api/traces/v1/default/search
          health_check_url: https://tempo-sample-gateway-tempo.apps-crc.testing/api/traces/v1/default/tempo/api/echo
          auth:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            insecure_skip_verify: false
            type: bearer
            use_kiali_token: true
          tempo_config:
            url_format: "jaeger"
    • spec.external_services.tracing.enabled specifies whether you have enabled tracing.
    • spec.external_services.tracing.provider specifies either distributed tracing platform (Tempo) or distributed tracing platform (Jaeger). The distributed tracing platform can expose a Jaeger API or a Tempo API.
    • spec.external_services.tracing.internal_url specifies the internal URL for the Tempo API. When you deploy the distributed tracing platform in multitenancy, include the tenant name in the URL path of the internal_url parameter. In this example, default represents the tenant name.
    • spec.external_services.tracing.external_url specifies the external URL for the Jaeger UI. When you deploy the distributed tracing platform in multitenancy, the gateway creates the route. Otherwise, you must create the route in the Tempo namespace. You can manually create the route for the tempo-sample-query-frontend service or update the Tempo custom resource with .spec.template.queryFrontend.jaegerQuery.ingress.type: route.
    • spec.external_services.tracing.health_check_url specifies the health check URL. Not required by default. When you deploy the distributed tracing platform in multitenancy, it does not expose the default health check URL. This is an example of a valid health URL.
    • spec.external_services.tracing.auth specifies the configuration used when the access URL is HTTPS or requires authentication. Not required by default.
    • spec.external_services.tracing.tempo_config.url_format specifies the configuration that defaults to grafana. Not required by default. Change to jaeger if the Kiali View in tracing link redirects to the Jaeger console UI.
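
For the non-multitenant case mentioned above, the route for the Jaeger UI can be created by the Tempo Operator itself rather than manually. The following is a minimal sketch of the relevant TempoStack fragment, assuming the tempo-sample instance name implied by the URLs in this example:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample            # produces services named tempo-sample-*
  namespace: tempo
spec:
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          type: route     # the Tempo Operator creates an OpenShift route for the Jaeger UI
```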
  2. Save the updated spec in kiali_cr.yaml.
  3. Run the following command to apply the configuration:

    $ oc patch -n istio-system kiali kiali --type merge -p "$(cat kiali_cr.yaml)"

    Example output:

     kiali.kiali.io/kiali patched

Verification

  1. Run the following command to get the Kiali route:

    $ oc get route kiali -n istio-system
  2. Navigate to the Kiali UI.
  3. Navigate to the Traces tab of a workload to see traces in the Kiali UI.

4.1.6. External Kiali deployment model

Large mesh deployments can separate mesh operation from mesh observability by deploying Kiali away from the mesh. This separation provides dedicated management of observability, reduced resource consumption on mesh clusters, centralized visibility, and improved security isolation.

The external deployment model requires a minimum of two clusters:

  • Management cluster: The home cluster where you deploy Kiali.
  • Mesh clusters: The remote clusters where you deploy the service mesh.

In this model, Kiali is not co-located with an Istio control plane. You can also colocate other observability tools, such as Prometheus, on the management cluster to improve metric query performance.

4.1.6.1. Installing Kiali Operator on remote clusters

In an external deployment, you must install the Kiali Operator on all clusters, including those where Kiali is not deployed, to ensure the creation of required namespaces and remote cluster resources.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  • You have Istio installed in a multi-cluster configuration on each cluster.
  • You have configured a metrics store so that Kiali can query metrics from all the clusters. Kiali queries metrics and traces from their endpoints.
  • You have the necessary secrets for Kiali to access remote clusters.
  • You have set the clustering.ignore_home_cluster field to true in the Kiali custom resource (CR).
  • You have given a unique cluster name for the Kiali home cluster in .spec.kubernetes_config.cluster_name specification. In an external deployment, you must manually set this name because there is no colocated Istio control plane to offer it.
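
The last two prerequisites translate into Kiali CR settings similar to the following fragment (the cluster name is a hypothetical example; in an external deployment you choose it yourself):

```yaml
spec:
  clustering:
    ignore_home_cluster: true        # the home cluster hosts Kiali but no mesh workloads
  kubernetes_config:
    cluster_name: kiali-management   # hypothetical unique name for the Kiali home cluster
```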

Procedure

  1. Deploy the Kiali Operator on all clusters using the procedure "Installing Kiali in a multi-cluster mesh".
  2. For clusters where Kiali is not deployed, configure the Kiali CR to create only the remote cluster resources by setting the spec.deployment.remote_cluster_resources_only field to true, similar to the following example:

    apiVersion: kiali.io/v1alpha1
    kind: Kiali
    metadata:
      name: kiali
      namespace: istio-system
    spec:
      version: default
      auth:
      deployment:
        remote_cluster_resources_only: true
  3. Ensure the Kiali namespace and instance name are consistent across all clusters. If you change the default namespace (istio-system) or instance name (kiali), you must apply the same values to the following Kiali CR settings on every cluster:

    • spec.deployment.namespace
    • spec.deployment.instance_name

4.2. OpenShift Service Mesh Console plugin

The OpenShift Service Mesh Console (OSSMC) plugin extends the OpenShift Container Platform web console with a Service Mesh menu and enhanced tabs for workloads and services.

4.2.1. About OpenShift Service Mesh Console plugin

The OpenShift Service Mesh Console (OSSMC) plugin is an extension to OpenShift Container Platform web console that provides visibility into your Service Mesh.

Warning

The OSSMC plugin supports only one Kiali instance, regardless of its project access scope.

The OSSMC plugin provides a new category, Service Mesh, in the main OpenShift Container Platform web console navigation with the following menu options:

Overview
Provides a summary of your mesh, displayed as cards that represent the namespaces in the mesh.
Traffic Graph
Provides a full topology view of your mesh, represented by nodes and edges. Each node represents a component of the mesh and each edge represents traffic flowing through the mesh between components.
Istio config
Provides a list of all Istio configuration files in your mesh, with a column that provides a quick way to know if the configuration for each resource is valid.
Mesh
Provides detailed information about the Istio infrastructure status. It shows an infrastructure topology view with core and add-on components, their health, and how they connect to each other.

In the web console Workloads details page, the OSSMC plugin adds a Service Mesh tab that has the following subtabs:

Overview
Shows a summary of the selected workload, including a localized topology graph showing the workload with all inbound and outbound edges and nodes.
Traffic
Shows information about all inbound and outbound traffic to the workload.
Logs
Shows the logs for the workload’s containers. You can see container logs individually ordered by log time and how the Envoy sidecar proxy logs relate to your workload’s application logs. You can enable tracing span integration to see logs that correspond to specific trace spans.
Metrics
Shows inbound and outbound metric graphs in the corresponding subtabs. All the workload metrics are here, providing a detailed view of the performance of your workload. You can enable the tracing span integration to see spans that occurred at the same time as the metrics. With the span marker in the graph, you can see the specific spans associated with that time frame.
Traces
Provides a chart showing the trace spans collected over the given time frame. The trace spans show the lowest-level detail within your workload application. The trace details further show heatmaps that offer a comparison of one span in relation to other requests and spans in the same time frame.
Envoy
Shows information about the Envoy sidecar configuration.

In the web console Networking details page, the OSSMC plugin adds a Service Mesh tab similar to the Workloads details page.

In the web console Projects details page, the OSSMC plugin adds a Service Mesh tab that provides traffic graph information about that project. It is the same information shown in the Traffic Graph page but specific to that project.

4.2.2. About installing OpenShift Service Mesh Console plugin

Install the OSSMC plugin by creating an OSSMConsole resource with the Kiali Operator to enable integrated service mesh management within the OpenShift console.

You must install the latest version of the Kiali Operator, even when installing an earlier OSSMC plugin version, because it includes the latest z-stream release.

OSSM version compatibility

    OSSM    OSSMC plugin    Kiali Server    OpenShift Container Platform
    3.1     v2.11           v2.11           4.16+
    3.0     v2.4            v2.4            4.15+
    2.6     v1.73           v1.73           4.15-4.18
    2.5     v1.73           v1.73           4.14-4.18

You can install the OSSMC plugin by using the OpenShift Container Platform web console or the OpenShift CLI (oc).

Note

The OSSMC plugin is supported only on OpenShift Container Platform 4.15 and later. For OpenShift Container Platform 4.14 users, only the standalone Kiali console is accessible.

4.2.2.1. Installing OSSMC plugin by using the OpenShift Container Platform web console

You can install the OpenShift Service Mesh Console (OSSMC) plugin by using the OpenShift Container Platform web console.

Prerequisites

  • You have administrator access to the OpenShift Container Platform web console.
  • You have installed the OpenShift Service Mesh (OSSM).
  • You have installed the Istio control plane from OSSM 3.0.
  • You have installed the Kiali Server 2.4.

Procedure

  1. Navigate to Installed Operators.
  2. Click Kiali Operator provided by Red Hat.
  3. Click Create instance on the Red Hat OpenShift Service Mesh Console tile. Alternatively, click the Create OSSMConsole button under the OpenShift Service Mesh Console tab.
  4. Use the Create OSSMConsole form to create an instance of the OSSMConsole custom resource (CR). Name and Version are the required fields.

    Note

    The Version field must match the spec.version field in your Kiali custom resource (CR). If the Version value is the string default, the Kiali Operator installs the OpenShift Service Mesh Console (OSSMC) with the same version as the Operator. The spec.version field requires the v prefix in the version number. The version number must include only the major and minor version numbers (not the patch number); for example: v1.73.

  5. Click Create.

Verification

  1. Wait for the web console to confirm the OSSMC plugin installation and prompt you to refresh.
  2. Verify that the Service Mesh category shows up in the main OpenShift Container Platform web console navigation.

4.2.2.2. Installing OSSMC plugin by using the CLI

You can install the OpenShift Service Mesh Console (OSSMC) plugin by using the OpenShift CLI.

Prerequisites

  • You have access to the OpenShift CLI (oc) on the cluster as an administrator.
  • You have installed the OpenShift Service Mesh (OSSM).
  • You have installed the Istio control plane from OSSM 3.0.
  • You have installed the Kiali Server 2.4.

Procedure

  1. Create an OSSMConsole custom resource (CR) to install the plugin by running the following command:

    $ cat <<EOM | oc apply -f -
    apiVersion: kiali.io/v1alpha1
    kind: OSSMConsole
    metadata:
      namespace: openshift-operators
      name: ossmconsole
    spec:
      version: default
    EOM
    Note

    The OpenShift Service Mesh Console (OSSMC) version must match the Kiali Server version. If the spec.version field value is the string default or is not specified, the Kiali Operator installs OSSMC with the same version as the Operator. The spec.version field requires the v prefix in the version number. The version number must include only the major and minor version numbers (not the patch number); for example: v1.73.

    The plugin resources deploy in the same namespace as the OSSMConsole CR.

  2. Optional: If you installed more than one Kiali Server in the cluster, specify the spec.kiali setting in the OSSMConsole CR similar to the following example:

    $ cat <<EOM | oc apply -f -
    apiVersion: kiali.io/v1alpha1
    kind: OSSMConsole
    metadata:
      namespace: openshift-operators
      name: ossmconsole
    spec:
      kiali:
        serviceName: kiali
        serviceNamespace: istio-system-two
        servicePort: 20001
    EOM

Verification

  1. Go to the OpenShift Container Platform web console.
  2. Verify that the Service Mesh category shows up in the main OpenShift Container Platform web console navigation.
  3. Wait for the web console to confirm the OSSMC plugin installation and prompt you to refresh.

4.2.2.3. About uninstalling OpenShift Service Mesh Console plugin

You can uninstall the OSSMC plugin by using the OpenShift Container Platform web console or the OpenShift CLI (oc).

You must uninstall the OSSMC plugin before removing the Kiali Operator. Deleting the Operator first might leave OSSMC and Kiali CRs stuck, requiring manual removal of the finalizer. Use the following command with <custom_resource_type> as kiali or ossmconsole to remove the finalizer, if needed:

$ oc patch <custom_resource_type> <custom_resource_name> -n <custom_resource_namespace> -p '{"metadata":{"finalizers": []}}' --type=merge

4.2.2.4. Uninstalling OSSMC plugin by using the web console

You can uninstall the OpenShift Service Mesh Console (OSSMC) plugin by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Installed Operators.
  2. Click Kiali Operator.
  3. Select the OpenShift Service Mesh Console tab.
  4. Click the Delete OSSMConsole option in the menu for the OSSMConsole entry.
  5. Confirm that you want to delete the plugin.

4.2.2.5. Uninstalling OSSMC plugin by using the CLI

You can uninstall the OpenShift Service Mesh Console (OSSMC) plugin by using the OpenShift CLI (oc).

Procedure

  • Remove the OSSMC custom resource (CR) by running the following command:

    $ oc delete ossmconsoles <custom_resource_name> -n <custom_resource_namespace>

Verification

  • Verify that you deleted all the CRs from all namespaces by running the following command:

    $ for r in $(oc get ossmconsoles --ignore-not-found=true --all-namespaces -o custom-columns=NS:.metadata.namespace,N:.metadata.name --no-headers | sed 's/  */:/g'); do oc delete ossmconsoles -n $(echo $r|cut -d: -f1) $(echo $r|cut -d: -f2); done