Chapter 4. Configuring Red Hat OpenShift distributed tracing data collection with Service Mesh


You can integrate Red Hat OpenShift Service Mesh with Red Hat OpenShift distributed tracing data collection to instrument, generate, collect, and export OpenTelemetry traces, metrics, and logs to analyze and understand your software’s performance and behavior.

Prerequisites

Procedure

  1. Navigate to the Red Hat OpenShift distributed tracing data collection Operator and create an OpenTelemetryCollector resource in the istio-system namespace:

    Example OpenTelemetry Collector in istio-system namespace

    kind: OpenTelemetryCollector
    apiVersion: opentelemetry.io/v1beta1
    metadata:
      name: otel
      namespace: istio-system
    spec:
      observability:
        metrics: {}
      deploymentUpdateStrategy: {}
      config:
        exporters:
          otlp:
            endpoint: 'tempo-sample-distributor.tempo.svc.cluster.local:4317'
            tls:
              insecure: true
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: '0.0.0.0:4317'
              http: {}
        service:
          pipelines:
            traces:
              exporters:
                - otlp
              receivers:
                - otlp
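
    Optionally, you can save the example to a file and apply it with the CLI, then confirm that the Operator created the collector Deployment. The file name otel-collector.yaml is an assumption; the Deployment name otel-collector is derived by the Operator from the resource name otel:

    $ oc apply -f otel-collector.yaml
    $ oc get deployment otel-collector -n istio-system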

  2. Configure Red Hat OpenShift Service Mesh to enable tracing, and define the distributed tracing data collection tracing provider in meshConfig:

    Example enabling tracing and defining tracing providers

    apiVersion: sailoperator.io/v1alpha1
    kind: Istio
    metadata:
    #  ...
      name: default
    spec:
      namespace: istio-system
    #  ...
      values:
        meshConfig:
          enableTracing: true
          extensionProviders:
          - name: otel-tracing
            opentelemetry:
              port: 4317
              service: otel-collector.istio-system.svc.cluster.local 1

    1
    The service field is the OpenTelemetry collector service in the istio-system namespace.
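
    Before you apply the change, you can confirm that the service referenced by the provider exists. The service name otel-collector is the one that the Operator creates for the otel resource from the previous step:

    $ oc get service otel-collector -n istio-system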
  3. Create an Istio Telemetry resource to enable the tracers defined in spec.values.meshConfig.extensionProviders:

    Example Istio Telemetry resource

    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: otel-demo
      namespace: istio-system
    spec:
      tracing:
        - providers:
            - name: otel-tracing
          randomSamplingPercentage: 100

    Note

    After you verify that you can see traces, lower the randomSamplingPercentage value, or remove it to use the default, to reduce the number of sampled requests.
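
    For example, the following patch lowers sampling to 10 percent; the value 10 is only an illustration, so choose a rate that suits your traffic volume:

    $ oc patch telemetry otel-demo -n istio-system --type merge -p '{"spec":{"tracing":[{"providers":[{"name":"otel-tracing"}],"randomSamplingPercentage":10}]}}'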

  4. Deploy the bookinfo application in the bookinfo namespace:

    $ oc create ns bookinfo
    $ oc label ns bookinfo istio.io/rev=<revision-name> 1
    $ oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
    1
    If you named your Istio resource default and are using the InPlace upgrade strategy, use oc label ns bookinfo istio-injection=enabled.
    Note

    To find your <revision-name>, run the following command:

    $ oc get istiorevisions.sailoperator.io

    Sample output:

    NAME              TYPE    READY   STATUS    IN USE   VERSION   AGE
    default-v1-23-0   Local   True    Healthy   False    v1.23.0   3m33s
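
    With the sample output above, the label command would look like the following; substitute your own revision name:

    $ oc label ns bookinfo istio.io/rev=default-v1-23-0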

  5. Send traffic to the productpage pod to generate traces:

    $ oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage
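
    A single request might not produce many spans, so a short loop such as the following generates more traces; the request count of 20 is arbitrary:

    $ for i in $(seq 1 20); do oc exec -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl -s -o /dev/null localhost:9080/productpage; done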
  6. Validate the integration by running the following command to get the route to the Jaeger UI, where you can view the traces:

    $ oc get routes -n tempo tempo-sample-query-frontend
    Note

    The OpenShift route for Jaeger UI must be created in the Tempo namespace. You can either manually create it for the tempo-sample-query-frontend service, or update the Tempo custom resource with .spec.template.queryFrontend.jaegerQuery.ingress.type: route.
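
    A minimal sketch of the second option, assuming a TempoStack resource named sample in the tempo namespace, which matches the service names used in this example:

    Example TempoStack exposing the Jaeger UI through a route

    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoStack
    metadata:
      name: sample
      namespace: tempo
    spec:
    #  ...
      template:
        queryFrontend:
          jaegerQuery:
            enabled: true
            ingress:
              type: route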
