Chapter 3. Distributed tracing and Service Mesh


3.1. Configuring Red Hat OpenShift distributed tracing platform with Service Mesh

Integrating Red Hat OpenShift distributed tracing platform with Red Hat OpenShift Service Mesh is made up of two parts: Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection.

Red Hat OpenShift distributed tracing platform (Tempo)

Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. Tempo is based on the open source Grafana Tempo project.

For more information about distributed tracing platform (Tempo), its features, installation, and configuration, see: Red Hat OpenShift distributed tracing platform (Tempo).

Red Hat OpenShift distributed tracing data collection

Is based on the open source OpenTelemetry project, which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat OpenShift distributed tracing data collection product supports deploying and managing the OpenTelemetry Collector and simplifies workload instrumentation.

The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.

For more information about distributed tracing data collection, its features, installation, and configuration, see: Red Hat OpenShift distributed tracing data collection.

3.1.1. Configuring Red Hat OpenShift distributed tracing data collection with Service Mesh

You can integrate Red Hat OpenShift Service Mesh with Red Hat OpenShift distributed tracing data collection to instrument, generate, collect, and export OpenTelemetry traces, metrics, and logs to analyze and understand your software’s performance and behavior.

Prerequisites

Procedure

  1. Navigate to the Red Hat OpenShift distributed tracing data collection Operator and install the OpenTelemetryCollector resource in the istio-system namespace:

    Example OpenTelemetry Collector in istio-system namespace

    kind: OpenTelemetryCollector
    apiVersion: opentelemetry.io/v1beta1
    metadata:
      name: otel
      namespace: istio-system
    spec:
      observability:
        metrics: {}
      deploymentUpdateStrategy: {}
      config:
        exporters:
          otlp:
            endpoint: 'tempo-sample-distributor.tempo.svc.cluster.local:4317'
            tls:
              insecure: true
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: '0.0.0.0:4317'
              http: {}
        service:
          pipelines:
            traces:
              exporters:
                - otlp
              receivers:
                - otlp
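In the pipeline above, the OTLP receiver is wired directly to the OTLP exporter. A batch processor is a standard Collector component that is commonly inserted between the two to buffer telemetry before export; the following sketch shows how the `config` section above could be extended (this is an optional refinement, not required for the integration):

```yaml
config:
  processors:
    batch: {}                # buffers telemetry and exports it in batches
  service:
    pipelines:
      traces:
        receivers:
          - otlp
        processors:
          - batch            # runs between receiver and exporter
        exporters:
          - otlp
```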

  2. Configure Red Hat OpenShift Service Mesh to enable tracing, and define the distributed tracing data collection tracing providers in your meshConfig:

    Example enabling tracing and defining tracing providers

    apiVersion: sailoperator.io/v1alpha1
    kind: Istio
    metadata:
    #  ...
      name: default
    spec:
      namespace: istio-system
    #  ...
      values:
        meshConfig:
          enableTracing: true
          extensionProviders:
          - name: otel-tracing
            opentelemetry:
              port: 4317
              service: otel-collector.istio-system.svc.cluster.local 1

    1
    The service field is the OpenTelemetry collector service in the istio-system namespace.
  3. Create an Istio Telemetry resource to enable tracers defined in spec.values.meshConfig.extensionProviders:

    Example Istio Telemetry resource

    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: otel-demo
      namespace: istio-system
    spec:
      tracing:
        - providers:
            - name: otel-tracing
          randomSamplingPercentage: 100

    Note

    After you verify that you can see traces, lower the randomSamplingPercentage value, or remove it to use the default, to reduce the number of sampled requests.
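For example, the same Telemetry resource with the sampling rate lowered to 10% (the exact percentage is a deployment choice, not prescribed here):

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: otel-demo
  namespace: istio-system
spec:
  tracing:
    - providers:
        - name: otel-tracing
      randomSamplingPercentage: 10  # sample 10% of requests; tune to your traffic volume
```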

  4. Create the bookinfo namespace by running the following command:

    $ oc create ns bookinfo
  5. Depending on the update strategy you are using, enable sidecar injection in the namespace by running the appropriate commands:

    1. If you are using the InPlace update strategy, run the following command:

      $ oc label namespace bookinfo istio-injection=enabled
    2. If you are using the RevisionBased update strategy, run the following commands:

      1. Display the revision name by running the following command:

        $ oc get istiorevisions.sailoperator.io

        Example output

        NAME              TYPE    READY   STATUS    IN USE   VERSION   AGE
        default-v1-23-0   Local   True    Healthy   True     v1.23.0   3m33s

      2. Label the namespace with the revision name to enable sidecar injection by running the following command:

        $ oc label namespace bookinfo istio.io/rev=default-v1-23-0
  6. Deploy the bookinfo application in the bookinfo namespace by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
  7. Send traffic to the productpage pod to generate traces:

    $ oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage
  8. Validate the integration by running the following command to retrieve the route to the UI, where you can view the traces:

    $ oc get routes -n tempo tempo-sample-query-frontend
    Note

    The OpenShift route for Jaeger UI must be created in the Tempo namespace. You can either manually create it for the tempo-sample-query-frontend service, or update the Tempo custom resource with .spec.template.queryFrontend.jaegerQuery.ingress.type: route.
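A sketch of the corresponding change to the Tempo custom resource, assuming a TempoStack named sample in the tempo namespace (which matches the tempo-sample-* service names used above):

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample
  namespace: tempo
spec:
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          type: route  # have the Operator create the OpenShift route for the Jaeger UI
```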
