Chapter 5. Using the Red Hat build of OpenTelemetry


You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack.

5.1. Forwarding traces to a TempoStack by using the OpenTelemetry Collector

To forward traces to a TempoStack, deploy and configure the OpenTelemetry Collector. This procedure deploys the OpenTelemetry Collector in the deployment mode with the specified receivers, processors, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources.

Prerequisites

  • The Red Hat build of OpenTelemetry Operator is installed.
  • The Tempo Operator is installed.
  • A TempoStack is deployed on the cluster.

Procedure

  1. Create a service account for the OpenTelemetry Collector.

    Example ServiceAccount

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment

  2. Create a cluster role for the service account.

    Example ClusterRole

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] 1 2
      verbs: ["get", "watch", "list"]

    1
    The k8sattributes processor requires permissions for the pods and namespaces resources.
    2
    The resourcedetection processor requires permissions for the infrastructures and infrastructures/status resources.
  3. Bind the cluster role to the service account.

    Example ClusterRoleBinding

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: otel-collector-example
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
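
    After you apply the cluster role and binding, you can verify the granted permissions with the oc auth can-i command, for example by checking the pods resource that the k8sattributes processor reads:

    $ oc auth can-i list pods --as=system:serviceaccount:otel-collector-example:otel-collector-deployment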

  4. Create the YAML file to define the OpenTelemetryCollector custom resource (CR).

    Example OpenTelemetryCollector

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
    spec:
      mode: deployment
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc:
              thrift_binary:
              thrift_compact:
              thrift_http:
          opencensus:
          otlp:
            protocols:
              grpc:
              http:
          zipkin:
        processors:
          batch:
          k8sattributes:
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlp:
            endpoint: "tempo-simplest-distributor:4317" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger, opencensus, otlp, zipkin] 2
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlp]

    1
    The Collector exporter is configured to export OTLP and points to the already created Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example.
    2
    The Collector is configured with receivers for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.
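
    After you apply the OpenTelemetryCollector CR, you can verify that the Operator created the Collector workload. This check assumes that the Operator derives the otel-collector resource names from the otel CR name:

    $ oc get deployment otel-collector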
Tip

You can deploy tracegen as a test:

apiVersion: batch/v1
kind: Job
metadata:
  name: tracegen
spec:
  template:
    spec:
      containers:
        - name: tracegen
          image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/tracegen:latest
          command:
            - "./tracegen"
          args:
            - -otlp-endpoint=otel-collector:4317
            - -otlp-insecure
            - -duration=30s
            - -workers=1
      restartPolicy: Never
  backoffLimit: 4
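
When the job completes, you can inspect its output before looking for the generated traces in the TempoStack instance. For example:

$ oc logs job/tracegen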

5.2. Sending traces and metrics to the OpenTelemetry Collector

Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.

5.2.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection

You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection.

The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector.

Prerequisites

  • The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.
  • You have access to the cluster through the web console or the OpenShift CLI (oc):

    • You are logged in to the web console as a cluster administrator with the cluster-admin role.
    • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
    • For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Procedure

  1. Create a project for an OpenTelemetry Collector instance.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  2. Create a service account.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-sidecar
      namespace: observability
  3. Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-sidecar
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
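
    You can verify the granted permissions with the oc auth can-i command, for example by checking the infrastructures resource in the config.openshift.io API group, which the resourcedetection processor reads:

    $ oc auth can-i get infrastructures.config.openshift.io --as=system:serviceaccount:observability:otel-collector-sidecar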
  4. Deploy the OpenTelemetry Collector as a sidecar.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      serviceAccount: otel-collector-sidecar
      mode: sidecar
      config: |
        receivers:
          otlp:
            protocols:
              grpc:
              http:
        processors:
          batch:
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
            timeout: 2s
        exporters:
          otlp:
            endpoint: "tempo-<example>-gateway:8090" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: [memory_limiter, resourcedetection, batch]
              exporters: [otlp]
    1
    This points to the Gateway of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
  5. Create your deployment using the otel-collector-sidecar service account.
  6. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance.
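
    A minimal sketch of such a Deployment, assuming a hypothetical application named my-app; the annotation is placed on the pod template metadata so that the Operator's admission webhook injects the Collector sidecar into the pods it creates:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app # hypothetical application name
      namespace: observability
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            sidecar.opentelemetry.io/inject: "true" # triggers sidecar injection for pods created from this template
        spec:
          serviceAccountName: otel-collector-sidecar
          containers:
          - name: my-app
            image: <application_image> # replace with your application image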

5.2.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection

You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables.

Prerequisites

  • The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.
  • You have access to the cluster through the web console or the OpenShift CLI (oc):

    • You are logged in to the web console as a cluster administrator with the cluster-admin role.
    • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
    • For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Procedure

  1. Create a project for an OpenTelemetry Collector instance.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  2. Create a service account.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment
      namespace: observability
  3. Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
  4. Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      mode: deployment
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc:
              thrift_binary:
              thrift_compact:
              thrift_http:
          opencensus:
          otlp:
            protocols:
              grpc:
              http:
          zipkin:
        processors:
          batch:
          k8sattributes:
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlp:
            endpoint: "tempo-<example>-distributor:4317" 1
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger, opencensus, otlp, zipkin]
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlp]
    1
    This points to the distributor service of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
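
    After the Collector is running, you can look up the service that the Operator creates for it; this is the OTLP endpoint that your instrumented applications target in the next step. The otel-collector service name is an assumption based on the otel CR name:

    $ oc get service otel-collector -n observability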
  5. Set the environment variables in the container with your instrumented application.

    Name: OTEL_SERVICE_NAME
    Description: Sets the value of the service.name resource attribute.
    Default value: ""

    Name: OTEL_EXPORTER_OTLP_ENDPOINT
    Description: Base endpoint URL for any signal type, with an optionally specified port number.
    Default value: https://localhost:4317

    Name: OTEL_EXPORTER_OTLP_CERTIFICATE
    Description: Path to the certificate file for the TLS credentials of the gRPC client.
    Default value: No default value.

    Name: OTEL_TRACES_SAMPLER
    Description: Sampler to be used for traces.
    Default value: parentbased_always_on

    Name: OTEL_EXPORTER_OTLP_PROTOCOL
    Description: Transport protocol for the OTLP exporter.
    Default value: grpc

    Name: OTEL_EXPORTER_OTLP_TIMEOUT
    Description: Maximum time interval for the OTLP exporter to wait for each batch export.
    Default value: 10s

    Name: OTEL_EXPORTER_OTLP_INSECURE
    Description: Disables client transport security for gRPC requests. An HTTPS schema overrides it.
    Default value: false
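
    A minimal sketch of these variables in a pod spec, assuming an instrumented application container and the otel-collector service that the Operator creates in the observability namespace; the service name, port, and application names are assumptions, not part of the procedure above:

    spec:
      containers:
      - name: <application_container>
        image: <application_image>
        env:
        - name: OTEL_SERVICE_NAME
          value: my-service # hypothetical value for the service.name resource attribute
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector.observability.svc.cluster.local:4317 # assumes the service name derives from the otel CR
        - name: OTEL_EXPORTER_OTLP_INSECURE
          value: "true" # matches the insecure TLS setting in the Collector example above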
