Chapter 1. Installing
Installing the Red Hat build of OpenTelemetry involves installing the Red Hat build of OpenTelemetry Operator, creating a namespace for an OpenTelemetry Collector instance, and creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance.
1.1. Installing the Red Hat build of OpenTelemetry from the web console
You can install the Red Hat build of OpenTelemetry from the OpenShift Container Platform web console.
Prerequisites
- You are logged in to the web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
Install the Red Hat build of OpenTelemetry Operator:
In the web console, search for Red Hat build of OpenTelemetry Operator.
Tip: In OpenShift Container Platform 4.19 or earlier, go to Operators → OperatorHub. In OpenShift Container Platform 4.20 or later, go to Ecosystem → Software Catalog.
Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat, then select Install → Install → View Operator.
Important: This installs the Operator with the default presets:
- Update channel: stable
- Installation mode: All namespaces on the cluster
- Installed Namespace: openshift-opentelemetry-operator
- Update approval: Automatic
- In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
- Create a permitted project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home → Projects → Create Project. Project names beginning with the openshift- prefix are not permitted.
- Create an OpenTelemetry Collector instance:
- Go to Ecosystem → Installed Operators.
- Select OpenTelemetry Collector → Create OpenTelemetry Collector → YAML view. In the YAML view, customize the OpenTelemetryCollector custom resource (CR):

Example OpenTelemetryCollector CR:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <permitted_project_of_opentelemetry_collector_instance>
spec:
  mode: <deployment_mode>
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
      jaeger:
        protocols:
          grpc: {}
          thrift_binary: {}
          thrift_compact: {}
          thrift_http: {}
      zipkin: {}
    processors:
      batch: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp,jaeger,zipkin]
          processors: [memory_limiter,batch]
          exporters: [debug]
```

where:
- namespace: Project that you have chosen for the OpenTelemetryCollector deployment. Project names beginning with the openshift- prefix are not permitted.
- mode: Deployment mode with the following supported values: the default deployment, daemonset, statefulset, or sidecar. For details, see Deployment Modes.
- receivers: For details, see Receivers.
- processors: For details, see Processors.
- exporters: For details, see Exporters.
- Select Create.
Verification
- Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance.
- Go to Ecosystem → Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready.
- Go to Workloads → Pods to verify that all the component pods of the OpenTelemetry Collector instance are running.
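The mode field in the CR defaults to deployment. As an illustrative sketch of one of the other supported modes, a daemonset-mode Collector runs one pod on every schedulable node, which is a common pattern for collecting node-local telemetry. The name and namespace below are example values, not defaults:

```yaml
# Sketch only: the same kind of OpenTelemetryCollector CR deployed as a
# DaemonSet instead of the default Deployment. Name and namespace are examples.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-node-agent
  namespace: observability
spec:
  mode: daemonset   # one Collector pod per node, instead of the default "deployment"
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
```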
1.2. Installing the Red Hat build of OpenTelemetry by using the CLI
You can install the Red Hat build of OpenTelemetry from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
Tip:
- Your OpenShift CLI (oc) version must be up to date and match your OpenShift Container Platform version.
- You can verify your session by running oc whoami.
Procedure
Install the Red Hat build of OpenTelemetry Operator:
Create a project for the Red Hat build of OpenTelemetry Operator by running the following command:
```
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-opentelemetry-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-opentelemetry-operator
EOF
```

Create an Operator group by running the following command:
```
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-opentelemetry-operator
  namespace: openshift-opentelemetry-operator
spec:
  upgradeStrategy: Default
EOF
```

Create a subscription by running the following command:
```
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product
  namespace: openshift-opentelemetry-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```

Check the Operator status by running the following command:
$ oc get csv -n openshift-opentelemetry-operator
Create a permitted project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step:
To create a permitted project without metadata, run the following command:
```
$ oc new-project <permitted_project_of_opentelemetry_collector_instance>
```

where:
- <permitted_project_of_opentelemetry_collector_instance>: Project names beginning with the openshift- prefix are not permitted.
To create a permitted project with metadata, run the following command:
```
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <permitted_project_of_opentelemetry_collector_instance>
EOF
```

where:
- name: Project names beginning with the openshift- prefix are not permitted.
Create an OpenTelemetry Collector instance in the project that you created for it.
NoteYou can create multiple OpenTelemetry Collector instances in separate projects on the same cluster.
Customize the OpenTelemetryCollector custom resource (CR):

Example OpenTelemetryCollector CR:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <permitted_project_of_opentelemetry_collector_instance>
spec:
  mode: <deployment_mode>
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
      jaeger:
        protocols:
          grpc: {}
          thrift_binary: {}
          thrift_compact: {}
          thrift_http: {}
      zipkin: {}
    processors:
      batch: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp,jaeger,zipkin]
          processors: [memory_limiter,batch]
          exporters: [debug]
```

where:
- namespace: Project that you have chosen for the OpenTelemetryCollector deployment. Project names beginning with the openshift- prefix are not permitted.
- mode: Deployment mode with the following supported values: the default deployment, daemonset, statefulset, or sidecar. For details, see Deployment Modes.
- receivers: For details, see Receivers.
- processors: For details, see Processors.
- exporters: For details, see Exporters.
Apply the customized CR by running the following command:
```
$ oc apply -f - << EOF
<OpenTelemetryCollector_custom_resource>
EOF
```
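As a concrete illustration of this step, a minimal here-document that creates a Collector with only the OTLP receiver and debug exporter might look like the following. The project name my-otel-project is an example, not a default:

```
$ oc apply -f - << EOF
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: my-otel-project   # example project name
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
EOF
```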
Verification
Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command:

```
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
```

Get the OpenTelemetry Collector service by running the following command:

```
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
```
1.3. Using taints and tolerations
To schedule the OpenTelemetry pods on dedicated nodes, see How to deploy the different OpenTelemetry components on infra nodes by using a node selector and tolerations in OpenShift 4.
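As a hedged sketch of that approach, the OpenTelemetryCollector CR exposes the standard pod scheduling fields nodeSelector and tolerations. The node label and taint key below are common conventions for infra nodes, but verify them against your cluster's actual taints; the name and namespace are examples:

```yaml
# Sketch only: pin the Collector to infra nodes and tolerate their taint.
# Label, taint key, name, and namespace are example values.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  nodeSelector:
    node-role.kubernetes.io/infra: ""   # schedule only on nodes with this label
  tolerations:
  - key: node-role.kubernetes.io/infra  # tolerate the taint applied to infra nodes
    operator: Exists
    effect: NoSchedule
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```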
1.4. Creating the required RBAC resources automatically
Some Collector components require RBAC resources, which the Red Hat build of OpenTelemetry Operator can create automatically if you grant its service account the necessary permissions.
Procedure
Add the following permissions to the opentelemetry-operator-controller-manager service account so that the Red Hat build of OpenTelemetry Operator can create them automatically:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: generate-processors-rbac
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: generate-processors-rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: generate-processors-rbac
subjects:
- kind: ServiceAccount
  name: opentelemetry-operator-controller-manager
  namespace: openshift-opentelemetry-operator
```
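With that ClusterRole in place, the Operator can generate the RBAC needed by components that query the Kubernetes API. As an illustrative sketch (names and namespace are examples, not part of the product documentation), a Collector using the k8sattributes processor, which reads pod and namespace metadata to enrich telemetry, could be declared as:

```yaml
# Sketch only: a Collector whose k8sattributes processor needs RBAC that the
# Operator, granted the permissions above, now creates automatically.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-k8s
  namespace: observability   # example namespace
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    processors:
      k8sattributes: {}   # annotates spans with pod and namespace attributes
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [k8sattributes]
          exporters: [debug]
```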