
Chapter 2. Installing


Installing the Red Hat build of OpenTelemetry involves the following steps:

  1. Installing the Red Hat build of OpenTelemetry Operator.
  2. Creating a namespace for an OpenTelemetry Collector instance.
  3. Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance.
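
Before you begin, you can optionally confirm that the Operator package is available in the cluster catalog. The following check is a minimal sketch: it assumes the default Red Hat catalog in the openshift-marketplace namespace, and the opentelemetry-product package name matches the Subscription used in the CLI procedure later in this chapter:

  $ oc get packagemanifests -n openshift-marketplace opentelemetry-product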

2.1. Installing the Red Hat build of OpenTelemetry from the web console

You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console.

Prerequisites

  • You are logged in to the web console as a cluster administrator with the cluster-admin role.
  • For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.

Procedure

  1. Install the Red Hat build of OpenTelemetry Operator:

    1. Go to Operators → OperatorHub and search for Red Hat build of OpenTelemetry Operator.
    2. Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat → Install → Install → View Operator.

      Important

      This installs the Operator with the default presets:

      • Update channel → stable
      • Installation mode → All namespaces on the cluster
      • Installed Namespace → openshift-operators
      • Update approval → Automatic
    3. In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
  2. Create a project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home → Projects → Create Project.
  3. Create an OpenTelemetry Collector instance.

    1. Go to Operators → Installed Operators.
    2. Select OpenTelemetry Collector → Create OpenTelemetry Collector → YAML view.
    3. In the YAML view, customize the OpenTelemetryCollector custom resource (CR):

      Example OpenTelemetryCollector CR

      apiVersion: opentelemetry.io/v1alpha1
      kind: OpenTelemetryCollector
      metadata:
        name: otel
        namespace: <project_of_opentelemetry_collector_instance>
      spec:
        mode: deployment
        config: |
          receivers: # 1
            otlp:
              protocols:
                grpc:
                http:
            jaeger:
              protocols:
                grpc: {}
                thrift_binary: {}
                thrift_compact: {}
                thrift_http: {}
            zipkin: {}
          processors: # 2
            batch: {}
            memory_limiter:
              check_interval: 1s
              limit_percentage: 50
              spike_limit_percentage: 30
          exporters: # 3
            debug: {}
          service:
            pipelines:
              traces:
                receivers: [otlp,jaeger,zipkin]
                processors: [memory_limiter,batch]
                exporters: [debug]

      1 For details, see the "Receivers" page.
      2 For details, see the "Processors" page.
      3 For details, see the "Exporters" page.
    4. Select Create.

Verification

  1. Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance.
  2. Go to Operators → Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready.
  3. Go to Workloads → Pods to verify that all the component pods of the OpenTelemetry Collector instance are running.
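
If you prefer to double-check from a terminal, a quick spot check with the OpenShift CLI can confirm that the Collector pods are running. The label selector below matches the one used in the CLI verification steps later in this chapter; replace the placeholder with the project of your OpenTelemetry Collector instance:

    $ oc get pods -n <project_of_opentelemetry_collector_instance> \
        -l app.kubernetes.io/managed-by=opentelemetry-operator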

2.2. Installing the Red Hat build of OpenTelemetry by using the CLI

You can install the Red Hat build of OpenTelemetry from the command line.

Prerequisites

  • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

    Tip
    • Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
    • Run oc login:

      $ oc login --username=<your_username>
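
    • Optionally, verify that your session has cluster-wide administrative permissions. This check is a minimal sketch and should return yes for an account with the cluster-admin role:

      $ oc auth can-i '*' '*' --all-namespaces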

Procedure

  1. Install the Red Hat build of OpenTelemetry Operator:

    1. Create a project for the Red Hat build of OpenTelemetry Operator by running the following command:

      $ oc apply -f - << EOF
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        labels:
          kubernetes.io/metadata.name: openshift-opentelemetry-operator
          openshift.io/cluster-monitoring: "true"
        name: openshift-opentelemetry-operator
      EOF
    2. Create an Operator group by running the following command:

      $ oc apply -f - << EOF
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-opentelemetry-operator
        namespace: openshift-opentelemetry-operator
      spec:
        upgradeStrategy: Default
      EOF
    3. Create a subscription by running the following command:

      $ oc apply -f - << EOF
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: opentelemetry-product
        namespace: openshift-opentelemetry-operator
      spec:
        channel: stable
        installPlanApproval: Automatic
        name: opentelemetry-product
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
    4. Check the Operator status by running the following command and verifying that the PHASE of the Operator's ClusterServiceVersion (CSV) is Succeeded:

      $ oc get csv -n openshift-opentelemetry-operator
  2. Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step:

    • To create a project without metadata, run the following command:

      $ oc new-project <project_of_opentelemetry_collector_instance>
    • To create a project with metadata, run the following command:

      $ oc apply -f - << EOF
      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        name: <project_of_opentelemetry_collector_instance>
      EOF
  3. Create an OpenTelemetry Collector instance in the project that you created for it.

    Note

    You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster.

    1. Customize the OpenTelemetryCollector custom resource (CR):

      Example OpenTelemetryCollector CR

      apiVersion: opentelemetry.io/v1alpha1
      kind: OpenTelemetryCollector
      metadata:
        name: otel
        namespace: <project_of_opentelemetry_collector_instance>
      spec:
        mode: deployment
        config: |
          receivers: # 1
            otlp:
              protocols:
                grpc:
                http:
            jaeger:
              protocols:
                grpc: {}
                thrift_binary: {}
                thrift_compact: {}
                thrift_http: {}
            zipkin: {}
          processors: # 2
            batch: {}
            memory_limiter:
              check_interval: 1s
              limit_percentage: 50
              spike_limit_percentage: 30
          exporters: # 3
            debug: {}
          service:
            pipelines:
              traces:
                receivers: [otlp,jaeger,zipkin]
                processors: [memory_limiter,batch]
                exporters: [debug]

      1 For details, see the "Receivers" page.
      2 For details, see the "Processors" page.
      3 For details, see the "Exporters" page.
    2. Apply the customized CR by running the following command:

      $ oc apply -f - << EOF
      <OpenTelemetryCollector_custom_resource>
      EOF
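
      For example, if you saved the customized CR from the previous step to a local file (the file name below is illustrative), you can apply it from that file instead of using a here-document:

      $ oc apply -f otel-collector.yaml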

Verification

  1. Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command:

    $ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
  2. Get the OpenTelemetry Collector service by running the following command:

    $ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
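
Optionally, send a test request to confirm that the Collector accepts OTLP data. The following smoke test is a minimal sketch: it assumes that the Service created by the Operator follows the usual <name>-collector pattern (otel-collector for the example CR) and that the OTLP/HTTP receiver listens on its default port 4318. Use the Service name returned by the previous command.

    # Forward the Collector's OTLP/HTTP port to your workstation.
    $ oc port-forward -n <project_of_opentelemetry_collector_instance> service/otel-collector 4318:4318

    # In a second terminal, post an empty OTLP/JSON trace payload.
    # An HTTP 200 response indicates that the otlp receiver is reachable and accepting data.
    $ curl -s -o /dev/null -w "%{http_code}\n" \
        -X POST -H "Content-Type: application/json" \
        -d '{"resourceSpans":[]}' \
        http://localhost:4318/v1/traces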

2.3. Using taints and tolerations

To schedule the OpenTelemetry pods on dedicated nodes, see the article How to deploy the different OpenTelemetry components on infra nodes using nodeSelector and tolerations in OpenShift 4.
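
A minimal configuration sketch is shown below. It assumes that the OpenTelemetryCollector CR accepts nodeSelector and tolerations fields under spec, and that your infrastructure nodes carry the commonly used node-role.kubernetes.io/infra label and taint; adapt the key and effect to your cluster as described in the linked article.

  apiVersion: opentelemetry.io/v1alpha1
  kind: OpenTelemetryCollector
  metadata:
    name: otel
    namespace: <project_of_opentelemetry_collector_instance>
  spec:
    mode: deployment
    nodeSelector:
      node-role.kubernetes.io/infra: ""      # assumed infra node label
    tolerations:
    - key: node-role.kubernetes.io/infra     # assumed infra node taint key
      operator: Exists
      effect: NoSchedule
    # Add the config block from the earlier OpenTelemetryCollector examples here.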

2.4. Creating the required RBAC resources automatically

Some Collector components require additional RBAC resources. You can grant the Red Hat build of OpenTelemetry Operator the permissions to create these resources automatically.

Procedure

  • Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create the required RBAC resources automatically:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: generate-processors-rbac
    rules:
    - apiGroups:
      - rbac.authorization.k8s.io
      resources:
      - clusterrolebindings
      - clusterroles
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: generate-processors-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: generate-processors-rbac
    subjects:
    - kind: ServiceAccount
      name: opentelemetry-operator-controller-manager
      namespace: openshift-opentelemetry-operator
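
    For example, assuming you save the two manifests above to a local file named generate-processors-rbac.yaml (the file name is illustrative), you can apply them and confirm that the role and binding exist:

    $ oc apply -f generate-processors-rbac.yaml
    $ oc get clusterrole,clusterrolebinding generate-processors-rbac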

2.5. Additional resources
