Chapter 3. Distributed tracing platform (Tempo)
3.1. Installing
Installing the distributed tracing platform (Tempo) involves the following steps:
- Installing the Tempo Operator.
- Setting up a supported object store and creating a secret for the object store credentials.
- Configuring the permissions and tenants.
- Depending on your use case, installing your choice of deployment:
  - Microservices-mode TempoStack instance
  - Monolithic-mode TempoMonolithic instance
3.1.1. Installing the Tempo Operator
You can install the Tempo Operator by using the web console or the command line.
3.1.1.1. Installing the Tempo Operator by using the web console
You can install the Tempo Operator from the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".
Warning: Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo).
Procedure
- Go to Operators → OperatorHub and search for Tempo Operator.
- Select the Tempo Operator that is provided by Red Hat.
  Important: The following selections are the default presets for this Operator:
  - Update channel: stable
  - Installation mode: All namespaces on the cluster
  - Installed Namespace: openshift-tempo-operator
  - Update approval: Automatic
- Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
- Select Install → Install → View Operator.
Verification
- In the Details tab of the page of the installed Operator, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
3.1.1.2. Installing the Tempo Operator by using the CLI
You can install the Tempo Operator from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
  Tip:
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run oc login:
    $ oc login --username=<your_username>
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".
  Warning: Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo).
Procedure
Create a project for the Tempo Operator by running the following command:
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-tempo-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-tempo-operator
EOF
Create an Operator group by running the following command:
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-tempo-operator
  namespace: openshift-tempo-operator
spec:
  upgradeStrategy: Default
EOF
Create a subscription by running the following command:
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: tempo-product
  namespace: openshift-tempo-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: tempo-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Verification
Check the Operator status by running the following command:
$ oc get csv -n openshift-tempo-operator
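To check only the installation phase, a jsonpath variant of the same command (a sketch) prints each ClusterServiceVersion with its phase; the Tempo Operator entry should report Succeeded:

$ oc get csv -n openshift-tempo-operator \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'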
3.1.2. Object storage setup
You can use the following configuration parameters when setting up a supported object storage.
Using object storage requires setting up a supported object store and creating a secret for the object store credentials before deploying a TempoStack
or TempoMonolithic
instance.
Storage provider | Secret parameters
---|---
MinIO | See MinIO Operator. name: <secret_name>, endpoint: <minio_bucket_endpoint>, bucket: <minio_bucket_name>, access_key_id: <minio_access_key_id>, access_key_secret: <minio_access_key_secret>
Amazon S3 | name: <secret_name>, endpoint: <s3_bucket_endpoint>, bucket: <s3_bucket_name>, access_key_id: <s3_access_key_id>, access_key_secret: <s3_access_key_secret>
Amazon S3 with Security Token Service (STS) | name: <secret_name>, bucket: <s3_bucket_name>, region: <s3_region>, role_arn: <s3_role_arn>
Microsoft Azure Blob Storage | name: <secret_name>, container: <azure_blob_storage_container_name>, account_name: <azure_blob_storage_account_name>, account_key: <azure_blob_storage_account_key>
Google Cloud Storage on Google Cloud Platform (GCP) | name: <secret_name>, bucketname: <google_cloud_storage_bucket_name>, key.json: <path_to_key.json>
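If you prefer creating the credentials secret from the CLI rather than from a YAML manifest, the following sketch applies to S3-compatible storage such as MinIO; the secret name and placeholders are illustrative, and the key names follow the secret examples used throughout this chapter:

$ oc -n <project_of_tempostack_instance> create secret generic <storage_secret_name> \
  --from-literal=endpoint="<object_storage_endpoint>" \
  --from-literal=bucket="<bucket_name>" \
  --from-literal=access_key_id="<access_key_id>" \
  --from-literal=access_key_secret="<access_key_secret>"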
3.1.2.1. Setting up the Amazon S3 storage with the Security Token Service
You can set up the Amazon S3 storage with the Security Token Service (STS) by using the AWS Command Line Interface (AWS CLI).
The Amazon S3 storage with the Security Token Service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have installed the latest version of the AWS CLI.
Procedure
- Create an AWS S3 bucket.
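For example, with the AWS CLI (a sketch; the bucket name and region are placeholders, and for us-east-1 you omit the --create-bucket-configuration flag):

$ aws s3api create-bucket --bucket <s3_bucket_name> --region <s3_region> \
  --create-bucket-configuration LocationConstraint=<s3_region>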
Create the following trust.json file for the AWS IAM policy that will set up a trust relationship for the AWS IAM role, created in the next step, with the service account of the TempoStack instance:

{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Federated": "arn:aws:iam::${<aws_account_id>}:oidc-provider/${<oidc_provider>}" 1
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
          "StringEquals": {
            "${OIDC_PROVIDER}:sub": [
              "system:serviceaccount:${<openshift_project_for_tempostack>}:tempo-${<tempostack_cr_name>}", 2
              "system:serviceaccount:${<openshift_project_for_tempostack>}:tempo-${<tempostack_cr_name>}-query-frontend"
            ]
          }
        }
      }
    ]
}
- 1
- OIDC provider that you have configured on the OpenShift Container Platform. You can get the configured OIDC provider value also by running the following command:
$ oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's~http[s]*://~~g'
- 2
- Namespace in which you intend to create the TempoStack instance.
Create an AWS IAM role by attaching the trust.json policy file that you created:

$ aws iam create-role \
  --role-name "tempo-s3-access" \
  --assume-role-policy-document "file:///tmp/trust.json" \
  --query Role.Arn \
  --output text
Attach an AWS IAM policy to the created role:
$ aws iam attach-role-policy \
  --role-name "tempo-s3-access" \
  --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess"
In the OpenShift Container Platform, create an object storage secret with keys as follows:
apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  bucket: <s3_bucket_name>
  region: <s3_region>
  role_arn: <s3_role_arn>
type: Opaque
Additional resources
- AWS Identity and Access Management Documentation (AWS documentation)
- AWS Command Line Interface Documentation (AWS documentation)
- Configuring an OpenID Connect identity provider
- Identify AWS resources with Amazon Resource Names (ARNs) (AWS documentation)
3.1.2.2. Setting up IBM Cloud Object Storage
You can set up IBM Cloud Object Storage by using the OpenShift CLI (oc).
Prerequisites
- You have installed the latest version of the OpenShift CLI (oc). For more information, see "Getting started with the OpenShift CLI" in Configure: CLI tools.
- You have installed the latest version of the IBM Cloud Command Line Interface (ibmcloud). For more information, see "Getting started with the IBM Cloud CLI" in IBM Cloud Docs.
- You have configured IBM Cloud Object Storage. For more information, see "Choosing a plan and creating an instance" in IBM Cloud Docs.
- You have an IBM Cloud Platform account.
- You have ordered an IBM Cloud Object Storage plan.
- You have created an instance of IBM Cloud Object Storage.
Procedure
- On IBM Cloud, create an object store bucket.
On IBM Cloud, create a service key for connecting to the object store bucket by running the following command:
$ ibmcloud resource service-key-create <tempo_bucket> Writer \
  --instance-name <tempo_bucket> --parameters '{"HMAC":true}'
On IBM Cloud, create a secret with the bucket credentials by running the following command:
$ oc -n <namespace> create secret generic <ibm_cos_secret> \
  --from-literal=bucket="<tempo_bucket>" \
  --from-literal=endpoint="<ibm_bucket_endpoint>" \
  --from-literal=access_key_id="<ibm_bucket_access_key>" \
  --from-literal=access_key_secret="<ibm_bucket_secret_key>"
On OpenShift Container Platform, create an object storage secret with keys as follows:
apiVersion: v1
kind: Secret
metadata:
  name: <ibm_cos_secret>
stringData:
  bucket: <tempo_bucket>
  endpoint: <ibm_bucket_endpoint>
  access_key_id: <ibm_bucket_access_key>
  access_key_secret: <ibm_bucket_secret_key>
type: Opaque
On OpenShift Container Platform, set the storage section in the TempoStack custom resource as follows:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
# ...
spec:
# ...
  storage:
    secret:
      name: <ibm_cos_secret> 1
      type: s3
# ...
- 1
- Name of the secret that contains the IBM Cloud Storage access and secret keys.
Additional resources
- Getting started with the OpenShift CLI
- Getting started with the IBM Cloud CLI (IBM Cloud Docs)
- Choosing a plan and creating an instance (IBM Cloud Docs)
- Getting started with IBM Cloud Object Storage: Before you begin (IBM Cloud Docs)
3.1.3. Configuring the permissions and tenants
Before installing a TempoStack
or TempoMonolithic
instance, you must define one or more tenants and configure their read and write access. You can configure such an authorization setup by using a cluster role and cluster role binding for the Kubernetes Role-Based Access Control (RBAC). By default, no users are granted read or write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
The OpenTelemetry Collector of the Red Hat build of OpenTelemetry can send trace data to a TempoStack
or TempoMonolithic
instance by using the service account with RBAC for writing the data.
Component | Tempo Gateway service | OpenShift OAuth | TokenReview API | SubjectAccessReview API |
---|---|---|---|---|
Authentication | X | X | X | |
Authorization | X | | | X |
3.1.3.1. Configuring the read permissions for tenants
You can configure the read permissions for tenants from the Administrator view of the web console or from the command line.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
Define the tenants by adding the tenantName and tenantId parameters with your values of choice to the TempoStack custom resource (CR):

Tenant example in a TempoStack CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: redmetrics
spec:
# ...
  tenants:
    mode: openshift
    authentication:
      - tenantName: dev 1
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 2
# ...
Add the tenants to a cluster role with the read (get) permissions to read traces.

Example RBAC configuration in a ClusterRole resource

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tempostack-traces-reader
rules:
  - apiGroups:
      - 'tempo.grafana.com'
    resources: 1
      - dev
      - prod
    resourceNames:
      - traces
    verbs:
      - 'get' 2
Grant authenticated users the read permissions for trace data by defining a cluster role binding for the cluster role from the previous step.
Example RBAC configuration in a ClusterRoleBinding resource

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-reader
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated 1
- 1
- Grants all authenticated users the read permissions for trace data.
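If you prefer the CLI, an equivalent binding can be created imperatively; this is a sketch that reuses the cluster role name from the example above:

$ oc adm policy add-cluster-role-to-group tempostack-traces-reader system:authenticated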
3.1.3.2. Configuring the write permissions for tenants
You can configure the write permissions for tenants from the Administrator view of the web console or from the command line.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have installed the OpenTelemetry Collector and configured it to use an authorized service account with permissions. For more information, see "Creating the required RBAC resources automatically" in the Red Hat build of OpenTelemetry documentation.
Procedure
Create a service account for use with OpenTelemetry Collector.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
  namespace: <project_of_opentelemetry_collector_instance>
Add the tenants to a cluster role with the write (create) permissions to write traces.

Example RBAC configuration in a ClusterRole resource

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tempostack-traces-write
rules:
  - apiGroups:
      - 'tempo.grafana.com'
    resources: 1
      - dev
    resourceNames:
      - traces
    verbs:
      - 'create' 2
Grant the OpenTelemetry Collector the write permissions by defining a cluster role binding to attach the OpenTelemetry Collector service account.
Example RBAC configuration in a ClusterRoleBinding resource

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-write
subjects:
  - kind: ServiceAccount
    name: otel-collector 1
    namespace: otel
- 1
- The service account that you created in a previous step. The client uses it when exporting trace data.
Configure the OpenTelemetryCollector custom resource as follows:
- Add the bearertokenauth extension and a valid token to the tracing pipeline service.
- Add the tenant name in the otlp/otlphttp exporters as the X-Scope-OrgID headers.
- Enable TLS with a valid certificate authority file.

Sample OpenTelemetry CR configuration

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: cluster-collector
  namespace: <project_of_tempostack_instance>
spec:
  mode: deployment
  serviceAccount: otel-collector 1
  config:
    extensions:
      bearertokenauth: 2
        filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" 3
    exporters:
      otlp/dev: 4
        endpoint: sample-gateway.tempo.svc.cluster.local:8090
        tls:
          insecure: false
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 5
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: "dev" 6
      otlphttp/dev: 7
        endpoint: https://sample-gateway.<project_of_tempostack_instance>.svc.cluster.local:8080/api/traces/v1/dev
        tls:
          insecure: false
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: "dev"
    service:
      extensions: [bearertokenauth]
      pipelines:
        traces:
          exporters: [otlp/dev] 8
# ...
- 1
- Service account configured with write permissions.
- 2
- Bearer Token extension to use service account token.
- 3
- The service account token. The client sends the token to the tracing pipeline service as the bearer token header.
- 4
- Specify either the OTLP gRPC Exporter (otlp/dev) or the OTLP HTTP Exporter (otlphttp/dev).
- 5
- Enabled TLS with a valid service CA file.
- 6
- Header with tenant name.
- 7
- Specify either the OTLP gRPC Exporter (otlp/dev) or the OTLP HTTP Exporter (otlphttp/dev).
- 8
- The exporter that you specified in the exporters section of the CR.
Additional resources
3.1.4. Installing a TempoStack instance
You can install a TempoStack instance by using the web console or command line.
3.1.4.1. Installing a TempoStack instance by using the web console
You can install a TempoStack instance from the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".
  Warning: Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo).
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
- Go to Home → Projects → Create Project to create a project of your choice for the TempoStack instance that you will create in a subsequent step.
- Go to Workloads → Secrets → Create → From YAML to create a secret for your object storage bucket in the project that you created for the TempoStack instance. For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
Create a TempoStack instance.
Note: You can create multiple TempoStack instances in separate projects on the same cluster.

- Go to Operators → Installed Operators.
- Select TempoStack → Create TempoStack → YAML view.
- In the YAML view, customize the TempoStack custom resource (CR):

Example TempoStack CR for AWS S3 and MinIO storage and two tenants

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack 1
metadata:
  name: simplest
  namespace: <project_of_tempostack_instance> 2
spec:
  storage: 3
    secret: 4
      name: <secret_name> 5
      type: <secret_provider> 6
  storageSize: <value>Gi 7
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  tenants:
    mode: openshift 8
    authentication: 9
      - tenantName: dev 10
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 11
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true 12
    queryFrontend:
      jaegerQuery:
        enabled: true 13
- 1
- This CR creates a TempoStack deployment, which is configured to receive Jaeger Thrift over HTTP and the OpenTelemetry Protocol (OTLP).
- 2
- The namespace that you have chosen for the TempoStack deployment.
- 3
- Specifies the storage for storing traces.
- 4
- The secret you created in step 2 for the object storage that had been set up as one of the prerequisites.
- 5
- The value of the name field in the metadata section of the secret. For example: minio.
- 6
- The accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. For example: s3.
- 7
- The size of the persistent volume claim for the Tempo Write-Ahead Logging (WAL). The default is 10Gi. For example: 1Gi.
- 8
- The value must be openshift.
- 9
- The list of tenants.
- 10
- The tenant name, which is to be provided in the X-Scope-OrgId header when ingesting the data.
- 11
- The unique identifier of the tenant. Must be unique throughout the lifecycle of the TempoStack deployment. The distributed tracing platform (Tempo) uses this ID to prefix objects in the object storage. You can reuse the value of the UUID or tempoName field.
- 12
- Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
- 13
- Exposes the Jaeger UI, which visualizes the data, via a route.
- Select Create.
Verification
- Use the Project: dropdown list to select the project of the TempoStack instance.
- Go to Operators → Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready.
- Go to Workloads → Pods to verify that all the component pods of the TempoStack instance are running.
- Access the Tempo console:
  - Go to Networking → Routes and Ctrl+F to search for tempo.
  - In the Location column, open the URL to access the Tempo console.
    Note: The Tempo console initially shows no trace data following the Tempo console installation.
3.1.4.2. Installing a TempoStack instance by using the CLI
You can install a TempoStack instance from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
  Tip:
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run the oc login command:
    $ oc login --username=<your_username>
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".
  Warning: Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo).
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
Run the following command to create a project of your choice for the TempoStack instance that you will create in a subsequent step:
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_tempostack_instance>
EOF
In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command:
$ oc apply -f - << EOF
<object_storage_secret>
EOF
For more information, see "Object storage setup".
Example secret for Amazon S3 and MinIO storage
apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
Create a TempoStack instance in the project that you created for it:
Note: You can create multiple TempoStack instances in separate projects on the same cluster.
Customize the TempoStack custom resource (CR):

Example TempoStack CR for AWS S3 and MinIO storage and two tenants

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack 1
metadata:
  name: simplest
  namespace: <project_of_tempostack_instance> 2
spec:
  storage: 3
    secret: 4
      name: <secret_name> 5
      type: <secret_provider> 6
  storageSize: <value>Gi 7
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  tenants:
    mode: openshift 8
    authentication: 9
      - tenantName: dev 10
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 11
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true 12
    queryFrontend:
      jaegerQuery:
        enabled: true 13
- 1
- This CR creates a TempoStack deployment, which is configured to receive Jaeger Thrift over HTTP and the OpenTelemetry Protocol (OTLP).
- 2
- The namespace that you have chosen for the TempoStack deployment.
- 3
- Specifies the storage for storing traces.
- 4
- The secret you created in step 2 for the object storage that had been set up as one of the prerequisites.
- 5
- The value of the name field in the metadata section of the secret. For example: minio.
- 6
- The accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. For example: s3.
- 7
- The size of the persistent volume claim for the Tempo Write-Ahead Logging (WAL). The default is 10Gi. For example: 1Gi.
- 8
- The value must be openshift.
- 9
- The list of tenants.
- 10
- The tenant name, which is to be provided in the X-Scope-OrgId header when ingesting the data.
- 11
- The unique identifier of the tenant. Must be unique throughout the lifecycle of the TempoStack deployment. The distributed tracing platform (Tempo) uses this ID to prefix objects in the object storage. You can reuse the value of the UUID or tempoName field.
- 12
- Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
- 13
- Exposes the Jaeger UI, which visualizes the data, via a route.
Apply the customized CR by running the following command:
$ oc apply -f - << EOF
<tempostack_cr>
EOF
Verification
Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command:

$ oc get tempostacks.tempo.grafana.com simplest -o yaml
Verify that all the TempoStack component pods are running by running the following command:
$ oc get pods
Access the Tempo console:
Query the route details by running the following command:
$ oc get route
Open https://<route_from_previous_step> in a web browser.

Note: The Tempo console initially shows no trace data following the Tempo console installation.
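Optionally, you can confirm end-to-end ingestion by generating a few test spans with the telemetrygen utility from the OpenTelemetry Collector contrib project. This is a sketch: the collector service name and project are assumptions, and the collector must already be configured with write permissions as described in "Configuring the write permissions for tenants".

$ telemetrygen traces \
  --otlp-endpoint otel-collector.<project_of_opentelemetry_collector_instance>.svc:4317 \
  --otlp-insecure \
  --traces 10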
3.1.5. Installing a TempoMonolithic instance
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance by using the web console or command line.
The TempoMonolithic
custom resource (CR) creates a Tempo deployment in monolithic mode. All components of the Tempo deployment, such as the compactor, distributor, ingester, querier, and query frontend, are contained in a single container.
A TempoMonolithic instance supports storing traces in in-memory storage, a persistent volume, or object storage.
Tempo deployment in monolithic mode is preferred for a small deployment, demonstration, testing, and as a migration path of the Red Hat OpenShift distributed tracing platform (Jaeger) all-in-one deployment.
The monolithic deployment of Tempo does not scale horizontally. If you require horizontal scaling, use the TempoStack
CR for a Tempo deployment in microservices mode.
3.1.5.1. Installing a TempoMonolithic instance by using the web console
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance from the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
- Go to Home → Projects → Create Project to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step.
- Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage.
  Important: Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage.
  Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this in Workloads → Secrets → Create → From YAML. For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
Create a TempoMonolithic instance:
Note: You can create multiple TempoMonolithic instances in separate projects on the same cluster.

- Go to Operators → Installed Operators.
- Select TempoMonolithic → Create TempoMonolithic → YAML view.
- In the YAML view, customize the TempoMonolithic custom resource (CR).

Example TempoMonolithic CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic 1
metadata:
  name: <metadata_name>
  namespace: <project_of_tempomonolithic_instance> 2
spec:
  storage: 3
    traces:
      backend: <supported_storage_type> 4
      size: <value>Gi 5
      s3: 6
        secret: <secret_name> 7
        tls: 8
          enabled: true
          caName: <ca_certificate_configmap_name> 9
  jaegerui:
    enabled: true 10
    route:
      enabled: true 11
  resources: 12
    total:
      limits:
        memory: <value>Gi
        cpu: <value>m
  multitenancy:
    enabled: true
    mode: openshift
    authentication: 13
      - tenantName: dev 14
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 15
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
- 1
- This CR creates a TempoMonolithic deployment with trace ingestion in the OTLP protocol.
- 2
- The namespace that you have chosen for the TempoMonolithic deployment.
- 3
- Specifies the storage for storing traces.
- 4
- Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv. The accepted values for object storage are s3, gcs, or azure, depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
- 5
- Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi. For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi. For object storage, this means the size of the persistent volume claim for the Tempo Write-Ahead Logging (WAL), where the default is 10Gi.
- 6
- Optional: For object storage, the type of object storage. The accepted values are s3, gcs, and azure, depending on the used object store type.
- 7
- Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup".
- 8
- Optional.
- 9
- Optional: Name of a ConfigMap object that contains a CA certificate.
- 10
- Exposes the Jaeger UI, which visualizes the data, via a route.
- 11
- Enables creation of a route for the Jaeger UI.
- 12
- Optional.
- 13
- Lists the tenants.
- 14
- The tenant name from the X-Scope-OrgId header when ingesting the data.
- 15
- The unique identifier of the tenant. Must be unique throughout the lifecycle of the TempoMonolithic deployment. This ID will be added as a prefix to the objects in the object storage. You can reuse the value of the UUID or tempoName field.
- Select Create.
Verification
- Use the Project: dropdown list to select the project of the TempoMonolithic instance.
- Go to Operators → Installed Operators to verify that the Status of the TempoMonolithic instance is Condition: Ready.
- Go to Workloads → Pods to verify that the pod of the TempoMonolithic instance is running.
- Access the Jaeger UI:
  - Go to Networking → Routes and Ctrl+F to search for jaegerui.
    Note: The Jaeger UI uses the tempo-<metadata_name_of_TempoMonolithic_CR>-jaegerui route.
  - In the Location column, open the URL to access the Jaeger UI.
- When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_TempoMonolithic_CR>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_TempoMonolithic_CR>:4318 (OTLP/HTTP) endpoints inside the cluster.
- The Tempo API is available at the tempo-<metadata_name_of_TempoMonolithic_CR>:3200 endpoint inside the cluster.
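For example, an OpenTelemetry Collector exporters section that targets these in-cluster endpoints might look as follows. This is a sketch that assumes an instance named sample; additional TLS and bearer-token settings apply when multitenancy or receiver TLS is enabled:

exporters:
  otlp:
    endpoint: tempo-sample:4317          # OTLP/gRPC endpoint of the TempoMonolithic instance
    tls:
      insecure: true                     # quick test only; configure TLS and authentication for real deployments
  otlphttp:
    endpoint: http://tempo-sample:4318   # OTLP/HTTP endpoint of the TempoMonolithic instance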
3.1.5.2. Installing a TempoMonolithic instance by using the CLI
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
  Tip:
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run the oc login command:
    $ oc login --username=<your_username>
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
Run the following command to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step:
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_tempomonolithic_instance>
EOF
Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage.
Important: Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage.
Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this by running the following command:
$ oc apply -f - << EOF
<object_storage_secret>
EOF
For more information, see "Object storage setup".
Example secret for Amazon S3 and MinIO storage
apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
Create a TempoMonolithic instance in the project that you created for it.
Tip: You can create multiple TempoMonolithic instances in separate projects on the same cluster.
Customize the TempoMonolithic custom resource (CR).

Example TempoMonolithic CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic 1
metadata:
  name: <metadata_name>
  namespace: <project_of_tempomonolithic_instance> 2
spec:
  storage: 3
    traces:
      backend: <supported_storage_type> 4
      size: <value>Gi 5
      s3: 6
        secret: <secret_name> 7
        tls: 8
          enabled: true
          caName: <ca_certificate_configmap_name> 9
  jaegerui:
    enabled: true 10
    route:
      enabled: true 11
  resources: 12
    total:
      limits:
        memory: <value>Gi
        cpu: <value>m
  multitenancy:
    enabled: true
    mode: openshift
    authentication: 13
      - tenantName: dev 14
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 15
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
- 1
- This CR creates a TempoMonolithic deployment with trace ingestion in the OTLP protocol.
- 2
- The namespace that you have chosen for the TempoMonolithic deployment.
- 3
- Specifies the storage for storing traces.
- 4
- Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv. The accepted values for object storage are s3, gcs, or azure, depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
- 5
- Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi. For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi. For object storage, this means the size of the persistent volume claim for the Tempo Write-Ahead Logging (WAL), where the default is 10Gi.
- 6
- Optional: For object storage, the type of object storage. The accepted values are s3, gcs, and azure, depending on the used object store type.
- 7
- Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup".
- 8
- Optional.
- 9
- Optional: Name of a ConfigMap object that contains a CA certificate.
- 10
- Exposes the Jaeger UI, which visualizes the data, via a route.
- 11
- Enables creation of a route for the Jaeger UI.
- 12
- Optional.
- 13
- Lists the tenants.
- 14
- The tenant name from the X-Scope-OrgId header when ingesting the data.
- 15
- The unique identifier of the tenant. Must be unique throughout the lifecycle of the TempoMonolithic deployment. This ID will be added as a prefix to the objects in the object storage. You can reuse the value of the UUID or tempoName field.
Apply the customized CR by running the following command:
$ oc apply -f - << EOF
<tempomonolithic_cr>
EOF
Verification
Verify that the status of all TempoMonolithic components is Running and the conditions are type: Ready by running the following command:

$ oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml
Run the following command to verify that the pod of the TempoMonolithic instance is running:
$ oc get pods
Access the Jaeger UI:
Query the route details for the tempo-<metadata_name_of_tempomonolithic_cr>-jaegerui route by running the following command:

$ oc get route
- Open https://<route_from_previous_step> in a web browser.
When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_tempomonolithic_cr>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_tempomonolithic_cr>:4318 (OTLP/HTTP) endpoints inside the cluster.
The Tempo API is available at the tempo-<metadata_name_of_tempomonolithic_cr>:3200 endpoint inside the cluster.
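As a quick in-cluster smoke test (a sketch; run it from any pod in the same project that has curl available, and note that the gateway fronts the API instead when multitenancy is enabled), you can probe the readiness endpoint of the Tempo API:

$ curl -s http://tempo-<metadata_name_of_tempomonolithic_cr>:3200/ready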
3.1.6. Additional resources
3.2. Configuring
The Tempo Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings for creating and deploying the distributed tracing platform (Tempo) resources. You can install the default configuration or modify the file.
3.2.1. Configuring back-end storage
For information about configuring the back-end storage, see Understanding persistent storage and the relevant configuration section for your chosen storage option.
3.2.2. Introduction to TempoStack configuration parameters
The TempoStack
custom resource (CR) defines the architecture and settings for creating the distributed tracing platform (Tempo) resources. You can modify these parameters to customize your implementation to your business needs.
Example TempoStack CR

apiVersion: tempo.grafana.com/v1alpha1 1
kind: TempoStack 2
metadata: 3
  name: <name> 4
spec: 5
  storage: {} 6
  resources: {} 7
  replicationFactor: 1 8
  retention: {} 9
  template:
    distributor: {} 10
    ingester: {} 11
    compactor: {} 12
    querier: {} 13
    queryFrontend: {} 14
    gateway: {} 15
  limits: 16
    global:
      ingestion: {} 17
      query: {} 18
  observability: 19
    grafana: {}
    metrics: {}
    tracing: {}
  search: {} 20
  managementState: managed 21
- 1
- API version to use when creating the object.
- 2
- Defines the kind of Kubernetes object to create.
- 3
- Data that uniquely identifies the object, including a name string, UID, and optional namespace. OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created.
- Name of the TempoStack instance.
- 5
- Contains all of the configuration parameters of the TempoStack instance. When a common definition for all Tempo components is required, define it in the spec section. When the definition relates to an individual component, place it in the spec.template.<component> section.
- 6
- Storage is specified at instance deployment. See the installation page for information about storage options for the instance.
- 7
- Defines the compute resources for the Tempo container.
- 8
- Integer value for the number of ingesters that must acknowledge the data from the distributors before accepting a span.
- 9
- Configuration options for retention of traces.
- 10
- Configuration options for the Tempo distributor component.
- 11
- Configuration options for the Tempo ingester component.
- 12
- Configuration options for the Tempo compactor component.
- 13
- Configuration options for the Tempo querier component.
- 14
- Configuration options for the Tempo query-frontend component.
- 15
- Configuration options for the Tempo gateway component.
- 16
- Limits ingestion and query rates.
- 17
- Defines ingestion rate limits.
- 18
- Defines query rate limits.
- 19
- Configures operands to handle telemetry data.
- 20
- Configures search capabilities.
- 21
- Defines whether or not this CR is managed by the Operator. The default value is managed.
Parameter | Description | Values | Default value
---|---|---|---
apiVersion: | API version to use when creating the object. | tempo.grafana.com/v1alpha1 | tempo.grafana.com/v1alpha1
kind: | Defines the kind of the Kubernetes object to create. | TempoStack | |
metadata: | Data that uniquely identifies the object, including a name string, UID, and optional namespace. | OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. | |
name: | Name for the object. | Name of your TempoStack instance. | |
spec: | Specification for the object to be created. | Contains all of the configuration parameters for your TempoStack instance. When a common definition for all Tempo components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec.template.<component> node. | N/A
resources: | Resources assigned to the TempoStack instance. | |
storageSize: | Storage size for ingester PVCs. | |
replicationFactor: | Configuration for the replication factor. | |
retention: | Configuration options for retention of traces. | |
storage: | Configuration options that define the storage. | |
template.distributor: | Configuration options for the Tempo distributor. | |
template.ingester: | Configuration options for the Tempo ingester. | |
template.compactor: | Configuration options for the Tempo compactor. | |
template.querier: | Configuration options for the Tempo querier. | |
template.queryFrontend: | Configuration options for the Tempo query frontend. | |
template.gateway: | Configuration options for the Tempo gateway. | |
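Of these parameters, retention is one of the most commonly tuned. A minimal sketch follows; the 48h value is only an illustrative assumption:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  retention:
    global:
      traces: 48h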
Additional resources
3.2.3. Query configuration options
Two components of the distributed tracing platform (Tempo), the querier and query frontend, manage queries. You can configure both of these components.
The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at GET /querier/api/traces/<trace_id>
, but it is not expected to be used directly. Queries must be sent to the query frontend.
Parameter | Description | Values
---|---|---
nodeSelector | The simple form of the node-selection constraint. | type: object
replicas | The number of replicas to be created for the component. | type: integer; format: int32
tolerations | Component-specific pod tolerations. | type: array
The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: GET /api/traces/<trace_id>
. Internally, the query frontend component splits the blockID
space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries.
Parameter | Description | Values
---|---|---
queryFrontend | Configuration of the query frontend component. | type: object
nodeSelector | The simple form of the node selection constraint. | type: object
replicas | The number of replicas to be created for the query frontend component. | type: integer; format: int32
tolerations | Pod tolerations specific to the query frontend component. | type: array
jaegerQuery | The options specific to the Jaeger Query component. | type: object
enabled | When enabled, creates the Jaeger Query component. | type: boolean
ingress | The options for the Jaeger Query ingress. | type: object
annotations | The annotations of the ingress object. | type: object
host | The hostname of the ingress object. | type: string
ingressClassName | The name of an IngressClass cluster resource. Defines which ingress controller serves this ingress resource. | type: string
route | The options for the OpenShift route. | type: object
termination | The termination type. The default is edge. | type: string (enum: insecure, edge, passthrough, reencrypt)
type | The type of ingress for the Jaeger Query UI. The supported types are ingress and route. | type: string (enum: ingress, route)
monitorTab | The monitor tab configuration. | type: object
enabled | Enables the monitor tab in the Jaeger console. The prometheusEndpoint parameter must be configured. | type: boolean
prometheusEndpoint | The endpoint to the Prometheus instance that contains the span rate, error, and duration (RED) metrics. For example, https://thanos-querier.openshift-monitoring.svc.cluster.local:9092. | type: string
Example configuration of the query frontend component in a TempoStack CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage:
    secret:
      name: minio
      type: s3
  storageSize: 200M
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          route:
            termination: edge
          type: route
Additional resources
3.2.4. Configuring the Monitor tab in Jaeger UI
You can have the request rate, error, and duration (RED) metrics extracted from traces and visualized through the Jaeger Console in the Monitor tab of the OpenShift Container Platform web console. The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by Prometheus, which you can deploy in your user-workload monitoring stack. The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them.
Prerequisites
- You have configured the permissions and tenants for the distributed tracing platform (Tempo). For more information, see "Configuring the permissions and tenants".
Procedure
In the OpenTelemetryCollector custom resource of the OpenTelemetry Collector, enable the Spanmetrics Connector (spanmetrics), which derives metrics from traces and exports the metrics in the Prometheus format.

Example OpenTelemetryCollector custom resource for span RED

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  observability:
    metrics:
      enableMetrics: true 1
  config:
    connectors:
      spanmetrics: 2
        metrics_flush_interval: 15s
    receivers:
      otlp: 3
        protocols:
          grpc: {}
          http: {}
    exporters:
      prometheus: 4
        endpoint: 0.0.0.0:8889
        add_metric_suffixes: false
        resource_to_telemetry_conversion:
          enabled: true 5
      otlp:
        auth:
          authenticator: bearertokenauth
        endpoint: tempo-redmetrics-gateway.mynamespace.svc.cluster.local:8090
        headers:
          X-Scope-OrgID: dev
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
          insecure: false
    extensions:
      bearertokenauth:
        filename: /var/run/secrets/kubernetes.io/serviceaccount/token
    service:
      extensions:
        - bearertokenauth
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp, spanmetrics] 6
        metrics:
          receivers: [spanmetrics] 7
          exporters: [prometheus]
# ...
- 1
- Creates the ServiceMonitor custom resource to enable scraping of the Prometheus exporter.
- The Spanmetrics connector receives traces and exports metrics.
- 3
- The OTLP receiver to receive spans in the OpenTelemetry protocol.
- 4
- The Prometheus exporter is used to export metrics in the Prometheus format.
- 5
- The resource attributes are dropped by default.
- 6
- The Spanmetrics connector is configured as an exporter in the traces pipeline.
- 7
- The Spanmetrics connector is configured as a receiver in the metrics pipeline.
In the TempoStack custom resource, enable the Monitor tab and set the Prometheus endpoint to the Thanos querier service to query the data from your user-defined monitoring stack.

Example TempoStack custom resource with the enabled Monitor tab

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: redmetrics
spec:
  storage:
    secret:
      name: minio-test
      type: s3
  storageSize: 1Gi
  tenants:
    mode: openshift
    authentication:
      - tenantName: dev
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
  template:
    gateway:
      enabled: true
    queryFrontend:
      jaegerQuery:
        monitorTab:
          enabled: true 1
          prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2
          redMetricsNamespace: "" 3
# ...
- 1
- Enables the monitoring tab in the Jaeger console.
- 2
- The service name for Thanos Querier from user-workload monitoring.
- 3
- Optional: The metrics namespace on which the Jaeger query retrieves the Prometheus metrics. Include this line only if you are using an OpenTelemetry Collector version earlier than 0.109.0. If you are using an OpenTelemetry Collector version 0.109.0 or later, omit this line.
Optional: Use the span RED metrics generated by the spanmetrics connector with alerting rules. For example, for alerts about a slow service or to define service level objectives (SLOs), the connector creates a duration_bucket histogram and the calls counter metric. These metrics have labels that identify the service, API name, operation type, and other attributes.

Table 3.6. Labels of the metrics created in the spanmetrics connector

Label | Description | Values
---|---|---
service_name | Service name set by the otel_service_name environment variable. | frontend
span_name | Name of the operation. | /, /customer
span_kind | Identifies the server, client, messaging, or internal operation. | SPAN_KIND_SERVER, SPAN_KIND_CLIENT, SPAN_KIND_PRODUCER, SPAN_KIND_CONSUMER, SPAN_KIND_INTERNAL
Example PrometheusRule custom resource that defines an alerting rule for SLO when not serving 95% of requests within 2000ms on the front-end service

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: span-red
spec:
  groups:
    - name: server-side-latency
      rules:
        - alert: SpanREDFrontendAPIRequestLatency
          expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name="frontend", span_kind="SPAN_KIND_SERVER"}[5m])) by (le, service_name, span_name)) > 2000 1
          labels:
            severity: Warning
          annotations:
            summary: "High request latency on {{$labels.service_name}} and {{$labels.span_name}}"
            description: "{{$labels.instance}} has 95th request latency above 2s (current value: {{$value}}s)"
- 1
- The expression for checking if 95% of the front-end server response time values are below 2000 ms. The time range ([5m]) must be at least four times the scrape interval and long enough to accommodate a change in the metric.
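For ad hoc exploration in Observe → Metrics, the same metrics also support simple rate queries. A sketch of a per-operation request-rate query, assuming the metric is exported as calls as in the example above (with add_metric_suffixes: false; otherwise it appears as calls_total):

sum(rate(calls{service_name="frontend", span_kind="SPAN_KIND_SERVER"}[5m])) by (span_name)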
Additional resources
3.2.5. Configuring the receiver TLS
The custom resource of your TempoStack or TempoMonolithic instance supports configuring the TLS for receivers by using user-provided certificates or OpenShift’s service serving certificates.
3.2.5.1. Receiver TLS configuration for a TempoStack instance
You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform.
To provide a TLS certificate in a secret, configure it in the TempoStack custom resource.

Note: This feature is not supported with the enabled Tempo Gateway.

TLS for receivers and using a user-provided certificate in a secret

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
# ...
spec:
# ...
  template:
    distributor:
      tls:
        enabled: true 1
        certName: <tls_secret> 2
        caName: <ca_name> 3
# ...
Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform.
Note: Mutual TLS authentication (mTLS) is not supported with this feature.

TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
# ...
spec:
# ...
  template:
    distributor:
      tls:
        enabled: true 1
# ...
- 1
- Sufficient configuration for the TLS at the Tempo Distributor.
Additional resources
3.2.5.2. Receiver TLS configuration for a TempoMonolithic instance
You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform.
To provide a TLS certificate in a secret, configure it in the TempoMonolithic custom resource.

Note: This feature is not supported with the enabled Tempo Gateway.

TLS for receivers and using a user-provided certificate in a secret

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
# ...
spec:
# ...
  ingestion:
    otlp:
      grpc:
        tls:
          enabled: true 1
          certName: <tls_secret> 2
          caName: <ca_name> 3
# ...
Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform.
Note: Mutual TLS authentication (mTLS) is not supported with this feature.

TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
# ...
spec:
# ...
  ingestion:
    otlp:
      grpc:
        tls:
          enabled: true
      http:
        tls:
          enabled: true 1
# ...
- 1
- Minimal configuration for the TLS at the Tempo Distributor.
Additional resources
3.2.6. Using taints and tolerations
To schedule the TempoStack pods on dedicated nodes, see How to deploy the different TempoStack components on infra nodes using nodeSelector and tolerations in OpenShift 4.
3.2.7. Configuring monitoring and alerts
The Tempo Operator supports monitoring and alerts about each TempoStack component such as distributor, ingester, and so on, and exposes upgrade and operational metrics about the Operator itself.
3.2.7.1. Configuring the TempoStack metrics and alerts
You can enable metrics and alerts of TempoStack instances.
Prerequisites
- Monitoring for user-defined projects is enabled in the cluster.
Procedure
To enable metrics of a TempoStack instance, set the spec.observability.metrics.createServiceMonitors field to true:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: <name>
spec:
  observability:
    metrics:
      createServiceMonitors: true
To enable alerts for a TempoStack instance, set the spec.observability.metrics.createPrometheusRules field to true:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: <name>
spec:
  observability:
    metrics:
      createPrometheusRules: true
Verification
You can use the Administrator view of the web console to verify successful configuration:
- Go to Observe → Targets, filter for Source: User, and check that ServiceMonitors in the format tempo-<instance_name>-<component> have the Up status.
- To verify that alerts are set up correctly, go to Observe → Alerting → Alerting rules, filter for Source: User, and check that the Alert rules for the TempoStack instance components are available.
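From the CLI, an equivalent quick check (a sketch) lists the monitoring objects that the Operator generates for the instance:

$ oc get servicemonitors,prometheusrules -n <project_of_tempostack_instance>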
Additional resources
3.2.7.2. Configuring the Tempo Operator metrics and alerts
When installing the Tempo Operator from the web console, you can select the Enable Operator recommended cluster monitoring on this Namespace checkbox, which enables creating metrics and alerts of the Tempo Operator.
If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the Tempo Operator.
Procedure
- Add the openshift.io/cluster-monitoring: "true" label in the project where the Tempo Operator is installed, which is openshift-tempo-operator by default.
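For example, from the CLI, assuming the default Operator namespace:

$ oc label namespace openshift-tempo-operator openshift.io/cluster-monitoring="true"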
Verification
You can use the Administrator view of the web console to verify successful configuration:
- Go to Observe → Targets, filter for Source: Platform, and search for tempo-operator, which must have the Up status.
- To verify that alerts are set up correctly, go to Observe → Alerting → Alerting rules, filter for Source: Platform, and locate the Alert rules for the Tempo Operator.
3.3. Troubleshooting
You can diagnose and fix issues in TempoStack or TempoMonolithic instances by using various troubleshooting methods.
3.3.1. Collecting diagnostic data from the command line
When submitting a support case, it is helpful to provide Red Hat Support with diagnostic information about your cluster. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as TempoStack or TempoMonolithic, and the created resources such as Deployment, Pod, or ConfigMap. The oc adm must-gather tool creates a new pod that collects this data.
Procedure
From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data:

$ oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- \
  /usr/bin/must-gather --operator-namespace <operator_namespace> 1
- 1
- The default namespace where the Operator is installed is openshift-tempo-operator.
Verification
- Verify that the new directory is created and contains the collected data.
3.4. Upgrading
For version upgrades, the Tempo Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.
The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators.
When the Tempo Operator is upgraded to the new version, it scans for running TempoStack instances that it manages and upgrades them to the version corresponding to the Operator’s new version.
3.4.1. Additional resources
3.5. Removing
The steps for removing the Red Hat OpenShift distributed tracing platform (Tempo) from an OpenShift Container Platform cluster are as follows:
- Shut down all distributed tracing platform (Tempo) pods.
- Remove any TempoStack instances.
- Remove the Tempo Operator.
3.5.1. Removing by using the web console
You can remove a TempoStack instance in the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
- Go to Operators → Installed Operators → Tempo Operator → TempoStack.
- To remove the TempoStack instance, select Delete TempoStack → Delete.
- Optional: Remove the Tempo Operator.
3.5.2. Removing by using the CLI
You can remove a TempoStack instance on the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
  Tip:
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run oc login:
    $ oc login --username=<your_username>
Procedure
Get the name of the TempoStack instance by running the following command:
$ oc get deployments -n <project_of_tempostack_instance>
Remove the TempoStack instance by running the following command:
$ oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>
- Optional: Remove the Tempo Operator.
Verification
Run the following command to verify that the TempoStack instance is not found in the output, which indicates its successful removal:
$ oc get deployments -n <project_of_tempostack_instance>
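If you also choose to remove the Tempo Operator from the command line, the following sketch shows the usual Operator Lifecycle Manager cleanup; it assumes the default installation namespace, and the exact ClusterServiceVersion name comes from the oc get csv output:

$ oc delete subscription tempo-product -n openshift-tempo-operator
$ oc get csv -n openshift-tempo-operator
$ oc delete csv <tempo_operator_csv_name> -n openshift-tempo-operator
$ oc delete project openshift-tempo-operator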