Chapter 3. Installing the Distributed Tracing Platform
For information about installing the deprecated Distributed Tracing Platform (Jaeger), see Installing in the Distributed Tracing Platform (Jaeger) documentation.
Installing the Distributed Tracing Platform involves the following steps:
- Installing the Tempo Operator.
- Setting up a supported object store and creating a secret for the object store credentials.
- Configuring the permissions and tenants.
- Installing your choice of deployment, depending on your use case:
  - Microservices-mode TempoStack instance
  - Monolithic-mode TempoMonolithic instance
3.1. Installing the Tempo Operator
You can install the Tempo Operator by using the web console or the command line.
3.1.1. Installing the Tempo Operator by using the web console
You can install the Tempo Operator from the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".
Warning: Object storage is required and not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
Procedure
- Go to Operators → OperatorHub and search for Tempo Operator.
- Select the Tempo Operator that is provided by Red Hat.

  Important: The following selections are the default presets for this Operator:
  - Update channel → stable
  - Installation mode → All namespaces on the cluster
  - Installed Namespace → openshift-tempo-operator
  - Update approval → Automatic

- Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
- Select Install → Install → View Operator.
Verification
- In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
3.1.2. Installing the Tempo Operator by using the CLI
You can install the Tempo Operator from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

  Tip: Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version. Run oc login:

  $ oc login --username=<your_username>

- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".

Warning: Object storage is required and not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
Procedure
- Create a project for the Tempo Operator by running the following command:

  $ oc apply -f - << EOF
  apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    labels:
      kubernetes.io/metadata.name: openshift-tempo-operator
      openshift.io/cluster-monitoring: "true"
    name: openshift-tempo-operator
  EOF

- Create an Operator group by running the following command:

  $ oc apply -f - << EOF
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: openshift-tempo-operator
    namespace: openshift-tempo-operator
  spec:
    upgradeStrategy: Default
  EOF

- Create a subscription by running the following command:

  $ oc apply -f - << EOF
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: tempo-product
    namespace: openshift-tempo-operator
  spec:
    channel: stable
    installPlanApproval: Automatic
    name: tempo-product
    source: redhat-operators
    sourceNamespace: openshift-marketplace
  EOF

Verification

- Check the Operator status by running the following command:

  $ oc get csv -n openshift-tempo-operator
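A successful installation reports the ClusterServiceVersion in the Succeeded phase. The output is similar to the following; the exact name and version depend on the release and are shown here only as placeholders:

```
NAME                    DISPLAY          VERSION   REPLACES   PHASE
tempo-operator.v0.x.x   Tempo Operator   0.x.x                Succeeded
```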
3.2. Object storage setup
You can use the following configuration parameters when setting up a supported object store.

Using object storage requires setting up a supported object store and creating a secret for the object store credentials before deploying a TempoStack or TempoMonolithic instance.
Storage provider | Secret parameters
---|---
MinIO | See MinIO Operator.
Amazon S3 | 
Amazon S3 with Security Token Service (STS) | 
Microsoft Azure Blob Storage | 
Google Cloud Storage on Google Cloud Platform (GCP) | 
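As an illustration of the secret shape for S3-compatible storage with static credentials, the following sketch uses the same key names as the IBM Cloud Object Storage procedure later in this chapter, which also targets an S3-compatible store. Verify the exact keys for your provider and Tempo Operator version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tempo-storage-secret  # example name
stringData:
  bucket: <bucket_name>
  endpoint: <endpoint_url>
  access_key_id: <access_key>
  access_key_secret: <secret_key>
type: Opaque
```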
3.2.1. Setting up the Amazon S3 storage with the Security Token Service
You can set up the Amazon S3 storage with the Security Token Service (STS) and AWS Command Line Interface (AWS CLI). Optionally, you can also use the Cloud Credential Operator (CCO).
Using the Distributed Tracing Platform with the Amazon S3 storage and STS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have installed the latest version of the AWS CLI.
- If you intend to use the CCO, you have installed and configured the CCO in your cluster.
Procedure
- Create an AWS S3 bucket.
- Create the following trust.json file for the AWS Identity and Access Management (AWS IAM) policy, to set up a trust relationship between the AWS IAM role, which you will create in the next step, and the service account of either the TempoStack or TempoMonolithic instance:

  {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::<aws_account_id>:oidc-provider/<oidc_provider>" 1
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "<oidc_provider>:sub": [
                "system:serviceaccount:<openshift_project_for_tempo>:tempo-<tempo_custom_resource_name>", 2
                "system:serviceaccount:<openshift_project_for_tempo>:tempo-<tempo_custom_resource_name>-query-frontend"
              ]
            }
          }
        }
      ]
  }

  1 The OpenID Connect (OIDC) provider that you have configured on the OpenShift Container Platform.
  2 The namespace in which you intend to create either a TempoStack or TempoMonolithic instance. Replace <tempo_custom_resource_name> with the metadata name that you define in your TempoStack or TempoMonolithic custom resource.
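You can fill in the trust.json placeholders however you prefer. As a purely illustrative sketch (the account ID, OIDC provider, project, and CR name below are hypothetical examples, and only a one-line excerpt of the template is used), a sed command can substitute them:

```shell
# Illustrative only: substitute trust.json placeholders with sed.
# All values below are hypothetical examples; use your own.
AWS_ACCOUNT_ID="123456789012"
OIDC_PROVIDER="oidc.example.com/abcd1234"
TEMPO_PROJECT="openshift-tempo"
TEMPO_CR_NAME="sample"

# Minimal one-line excerpt of the template; use your full trust.json instead.
cat > /tmp/trust.json.template <<'EOF'
"system:serviceaccount:<openshift_project_for_tempo>:tempo-<tempo_custom_resource_name>"
EOF

sed -e "s|<aws_account_id>|$AWS_ACCOUNT_ID|g" \
    -e "s|<oidc_provider>|$OIDC_PROVIDER|g" \
    -e "s|<openshift_project_for_tempo>|$TEMPO_PROJECT|g" \
    -e "s|<tempo_custom_resource_name>|$TEMPO_CR_NAME|g" \
    /tmp/trust.json.template > /tmp/trust.json

cat /tmp/trust.json
# → "system:serviceaccount:openshift-tempo:tempo-sample"
```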
  Tip: You can also get the value for the OIDC provider by running the following command:

  $ oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's~http[s]*://~~g'

- Create an AWS IAM role by attaching the created trust.json policy file. You can do this by running the following command:

  $ aws iam create-role \
        --role-name "tempo-s3-access" \
        --assume-role-policy-document "file:///tmp/trust.json" \
        --query Role.Arn \
        --output text

- Attach an AWS IAM policy to the created AWS IAM role. You can do this by running the following command:

  $ aws iam attach-role-policy \
        --role-name "tempo-s3-access" \
        --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess"

- If you are not using the CCO, skip this step. If you are using the CCO, configure the cloud provider environment for the Tempo Operator by running the following command:

  $ oc patch subscription <tempo_operator_sub> \
        -n <tempo_operator_namespace> \
        --type='merge' -p '{"spec": {"config": {"env": [{"name": "ROLEARN", "value": "'"<role_arn>"'"}]}}}'

- In the OpenShift Container Platform, create an object storage secret with keys as follows:

  apiVersion: v1
  kind: Secret
  metadata:
    name: <secret_name>
  stringData:
    bucket: <s3_bucket_name>
    region: <s3_region>
    role_arn: <s3_role_arn>
  type: Opaque

- When the object storage secret is created, update the relevant custom resource of the Distributed Tracing Platform instance as follows:

  Example TempoStack custom resource

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoStack
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
  # ...
    storage:
      secret:
        name: <secret_name>
        type: s3
        credentialMode: token-cco
  # ...

  Example TempoMonolithic custom resource

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoMonolithic
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
  # ...
    storage:
      traces:
        backend: s3
        s3:
          secret: <secret_name>
          credentialMode: token-cco
  # ...
3.2.2. Setting up the Azure storage with the Security Token Service
You can set up the Azure storage with the Security Token Service (STS) by using the Azure Command Line Interface (Azure CLI).
Using the Distributed Tracing Platform with the Azure storage and STS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have installed the latest version of the Azure CLI.
- You have created an Azure storage account.
- You have created an Azure blob storage container.
Procedure
- Create an Azure managed identity by running the following command:

  $ az identity create \
        --name <identity_name> \
        --resource-group <resource_group> \
        --location <region> \
        --subscription <subscription_id>

- Create a federated identity credential for the OpenShift Container Platform service account for use by all components of the Distributed Tracing Platform except the Query Frontend. You can do this by running the following command:

  $ az identity federated-credential create \ 1
        --name <credential_name> \ 2
        --identity-name <identity_name> \
        --resource-group <resource_group> \
        --issuer <oidc_provider> \ 3
        --subject <tempo_service_account_subject> \ 4
        --audiences <audience> 5

  1 Federated identity credentials allow OpenShift Container Platform service accounts to authenticate as an Azure managed identity without storing secrets or using an Azure service principal identity.
  2 The name you have chosen for the federated credential.
  3 The URL of the OpenID Connect (OIDC) provider for your cluster.
  4 The service account subject for your cluster in the following format: system:serviceaccount:<namespace>:tempo-<tempostack_instance_name>.
  5 The expected audience, which is used for validating the issued tokens for the federated identity credential. This is commonly set to api://AzureADTokenExchange.

  Tip: You can get the URL of the OpenID Connect (OIDC) issuer for your cluster by running the following command:

  $ oc get authentication cluster -o json | jq -r .spec.serviceAccountIssuer
- Create a federated identity credential for the OpenShift Container Platform service account for use by the Query Frontend component of the Distributed Tracing Platform. You can do this by running the following command:

  $ az identity federated-credential create \ 1
        --name <credential_name>-frontend \ 2
        --identity-name <identity_name> \
        --resource-group <resource_group> \
        --issuer <cluster_issuer> \
        --subject <tempo_service_account_query_frontend_subject> \ 3
        --audiences <audience> | jq

  1 Federated identity credentials allow OpenShift Container Platform service accounts to authenticate as an Azure managed identity without storing secrets or using an Azure service principal identity.
  2 The name you have chosen for the frontend federated identity credential.
  3 The service account subject for your cluster in the following format: system:serviceaccount:<namespace>:tempo-<tempostack_instance_name>-query-frontend.

- Assign the Storage Blob Data Contributor role to the Azure service principal identity of the created Azure managed identity. You can do this by running the following command:

  $ az role assignment create \
        --assignee <assignee_name> \ 1
        --role "Storage Blob Data Contributor" \
        --scope "/subscriptions/<subscription_id>"

  1 The Azure service principal identity of the Azure managed identity that you created in step 1.

  Tip: You can get the <assignee_name> value by running the following command:

  $ az ad sp list --all --filter "servicePrincipalType eq 'ManagedIdentity'" | jq -r --arg idName <identity_name> '.[] | select(.displayName == $idName) | .appId'
- Fetch the client ID of the Azure managed identity that you created in step 1 by running the following command:

  $ CLIENT_ID=$(az identity show \
        --name <identity_name> \
        --resource-group <resource_group> \
        --query clientId \
        -o tsv)

- Create an OpenShift Container Platform secret for the Azure workload identity federation (WIF). You can do this by running the following command:

  $ oc create -n <tempo_namespace> secret generic azure-secret \
        --from-literal=container=<azure_storage_azure_container> \
        --from-literal=account_name=<azure_storage_azure_accountname> \
        --from-literal=client_id=<client_id> \
        --from-literal=audience=<audience> \
        --from-literal=tenant_id=<tenant_id>

- When the object storage secret is created, update the relevant custom resource of the Distributed Tracing Platform instance as follows:

  Example TempoStack custom resource

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoStack
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
  # ...
    storage:
      secret:
        name: <secret_name> 1
        type: azure
  # ...

  1 The secret that you created in the previous step.

  Example TempoMonolithic custom resource

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoMonolithic
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
  # ...
    storage:
      traces:
        backend: azure
        azure:
          secret: <secret_name> 1
  # ...

  1 The secret that you created in the previous step.
3.2.3. Setting up the Google Cloud storage with the Security Token Service
You can set up the Google Cloud Storage (GCS) with the Security Token Service (STS) by using the Google Cloud CLI.
Using the Distributed Tracing Platform with the GCS and STS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have installed the latest version of the Google Cloud CLI.
Procedure
- Create a GCS bucket on the Google Cloud Platform (GCP).
- Create or reuse a service account with Google's Identity and Access Management (IAM) by running the following command:

  $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts create <iam_service_account_name> \
        --display-name="Tempo Account" \
        --project <project_id> \
        --format='value(email)' \
        --quiet)

- Bind the required GCP roles to the created service account at the project level. You can do this by running the following command:

  $ gcloud projects add-iam-policy-binding <project_id> \
        --member "serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
        --role "roles/storage.objectAdmin"

- Retrieve the POOL_ID value of the Google Cloud Workload Identity Pool that is associated with the cluster. How you can retrieve this value depends on your environment, so the following command is only an example:

  $ OIDC_ISSUER=$(oc get authentication.config cluster -o jsonpath='{.spec.serviceAccountIssuer}') \
    && POOL_ID=$(echo "$OIDC_ISSUER" | awk -F'/' '{print $NF}' | sed 's/-oidc$//')

- Add the IAM policy bindings. You can do this by running the following commands:

  $ gcloud iam service-accounts add-iam-policy-binding "$SERVICE_ACCOUNT_EMAIL" \ 1
        --role="roles/iam.workloadIdentityUser" \
        --member="principal://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/subject/system:serviceaccount:<tempo_namespace>:tempo-<tempo_name>" \
        --project=<project_id> \
        --quiet \
    && gcloud iam service-accounts add-iam-policy-binding "$SERVICE_ACCOUNT_EMAIL" \
        --role="roles/iam.workloadIdentityUser" \
        --member="principal://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/subject/system:serviceaccount:<tempo_namespace>:tempo-<tempo_name>-query-frontend" \
        --project=<project_id> \
        --quiet \
    && gcloud storage buckets add-iam-policy-binding "gs://$BUCKET_NAME" \
        --role="roles/storage.admin" \
        --member="serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
        --condition=None

  1 The $SERVICE_ACCOUNT_EMAIL is the output of the command in step 2.
- Create a credential file for the key.json key of the storage secret for use by the TempoStack custom resource. You can do this by running the following command:

  $ gcloud iam workload-identity-pools create-cred-config \
        "projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>" \
        --service-account="$SERVICE_ACCOUNT_EMAIL" \
        --credential-source-file=/var/run/secrets/storage/serviceaccount/token \
        --credential-source-type=text \
        --output-file=<output_file_path>

- Get the correct audience by running the following command:

  $ gcloud iam workload-identity-pools providers describe "$PROVIDER_NAME" --format='value(oidc.allowedAudiences[0])'

- Create a storage secret for the Distributed Tracing Platform by running the following command:

  $ oc -n <tempo_namespace> create secret generic gcs-secret \
        --from-literal=bucketname="<bucket_name>" \
        --from-literal=audience="<audience>" \
        --from-file=key.json=<output_file_path>

- When the object storage secret is created, update the relevant custom resource of the Distributed Tracing Platform instance as follows:

  Example TempoStack custom resource

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoStack
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
  # ...
    storage:
      secret:
        name: <secret_name> 1
        type: gcs
  # ...

  1 The secret that you created in the previous step.

  Example TempoMonolithic custom resource

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoMonolithic
  metadata:
    name: <name>
    namespace: <namespace>
  spec:
  # ...
    storage:
      traces:
        backend: gcs
        gcs:
          secret: <secret_name> 1
  # ...

  1 The secret that you created in the previous step.
3.2.4. Setting up IBM Cloud Object Storage
You can set up IBM Cloud Object Storage by using the OpenShift CLI (oc).
Prerequisites
- You have installed the latest version of the OpenShift CLI (oc). For more information, see "Getting started with the OpenShift CLI" in Configure: CLI tools.
- You have installed the latest version of the IBM Cloud Command Line Interface (ibmcloud). For more information, see "Getting started with the IBM Cloud CLI" in IBM Cloud Docs.
- You have configured IBM Cloud Object Storage. For more information, see "Choosing a plan and creating an instance" in IBM Cloud Docs.
- You have an IBM Cloud Platform account.
- You have ordered an IBM Cloud Object Storage plan.
- You have created an instance of IBM Cloud Object Storage.
Procedure
- On IBM Cloud, create an object store bucket.
- On IBM Cloud, create a service key for connecting to the object store bucket by running the following command:

  $ ibmcloud resource service-key-create <tempo_bucket> Writer \
        --instance-name <tempo_bucket> --parameters '{"HMAC":true}'

- On OpenShift Container Platform, create a secret with the bucket credentials by running the following command:

  $ oc -n <namespace> create secret generic <ibm_cos_secret> \
        --from-literal=bucket="<tempo_bucket>" \
        --from-literal=endpoint="<ibm_bucket_endpoint>" \
        --from-literal=access_key_id="<ibm_bucket_access_key>" \
        --from-literal=access_key_secret="<ibm_bucket_secret_key>"

- Alternatively, on OpenShift Container Platform, create the object storage secret with keys as follows:

  apiVersion: v1
  kind: Secret
  metadata:
    name: <ibm_cos_secret>
  stringData:
    bucket: <tempo_bucket>
    endpoint: <ibm_bucket_endpoint>
    access_key_id: <ibm_bucket_access_key>
    access_key_secret: <ibm_bucket_secret_key>
  type: Opaque

- On OpenShift Container Platform, set the storage section in the TempoStack custom resource as follows:

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoStack
  # ...
  spec:
  # ...
    storage:
      secret:
        name: <ibm_cos_secret> 1
        type: s3
  # ...

  1 Name of the secret that contains the IBM Cloud Storage access and secret keys.
3.3. Configuring the permissions and tenants
Before installing a TempoStack or TempoMonolithic instance, you must define one or more tenants and configure their read and write access. You can configure such an authorization setup by using a cluster role and cluster role binding for Kubernetes Role-Based Access Control (RBAC). By default, no users are granted read or write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
The OpenTelemetry Collector of the Red Hat build of OpenTelemetry can send trace data to a TempoStack or TempoMonolithic instance by using the service account with RBAC for writing the data.
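As a sketch of what that looks like on the Collector side (the gateway endpoint, tenant name dev, and token file paths below are illustrative and must match your deployment; the required RBAC is covered in "Configuring the write permissions for tenants"), the exporter combines a bearertokenauth extension with an X-Scope-OrgID header that names the tenant:

```yaml
# Fragment of an OpenTelemetryCollector config (illustrative values).
extensions:
  bearertokenauth:
    # Service account token mounted into the Collector pod.
    filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
exporters:
  otlp:
    # Hypothetical Tempo gateway endpoint; replace with your own.
    endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090
    tls:
      insecure: false
      ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
    auth:
      authenticator: bearertokenauth
    headers:
      # Tenant name as defined in the TempoStack tenants section.
      X-Scope-OrgID: "dev"
```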
Component | Tempo Gateway service | OpenShift OAuth | TokenReview API | SubjectAccessReview API
---|---|---|---|---
Authentication | X | X | X | 
Authorization | X | | | X
3.3.1. Configuring the read permissions for tenants
You can configure the read permissions for tenants from the Administrator view of the web console or from the command line.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure
- Define the tenants by adding the tenantName and tenantId parameters with your values of choice to the TempoStack custom resource (CR):

  Tenant example in a TempoStack CR

  apiVersion: tempo.grafana.com/v1alpha1
  kind: TempoStack
  metadata:
    name: redmetrics
  spec:
  # ...
    tenants:
      mode: openshift
      authentication:
        - tenantName: dev
          tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
  # ...

- Add the tenants to a cluster role with the read (get) permissions to read traces:

  Example RBAC configuration in a ClusterRole resource

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: tempostack-traces-reader
  rules:
    - apiGroups:
        - 'tempo.grafana.com'
      resources:
        - dev
        - prod
      resourceNames:
        - traces
      verbs:
        - 'get'

- Grant authenticated users the read permissions for trace data by defining a cluster role binding for the cluster role from the previous step:

  Example RBAC configuration in a ClusterRoleBinding resource

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: tempostack-traces-reader
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: tempostack-traces-reader
  subjects:
    - kind: Group
      apiGroup: rbac.authorization.k8s.io
      name: system:authenticated 1

  1 Grants all authenticated users the read permissions for trace data.
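The tenantId in the example above is simply a unique identifier string. As an illustrative way to produce one (not a requirement of the Operator), you can generate a random UUID on Linux:

```shell
# Generate a random UUID suitable as a tenantId value (Linux).
# Reads the kernel's UUID generator; uuidgen works as well.
TENANT_ID=$(cat /proc/sys/kernel/random/uuid)
echo "$TENANT_ID"
```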
3.3.2. Configuring the write permissions for tenants
You can configure the write permissions for tenants from the Administrator view of the web console or from the command line.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have installed the OpenTelemetry Collector and configured it to use an authorized service account with permissions. For more information, see "Creating the required RBAC resources automatically" in the Red Hat build of OpenTelemetry documentation.
Procedure
Create a service account for use with the OpenTelemetry Collector.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
  namespace: <project_of_opentelemetry_collector_instance>

Add the tenants to a cluster role with the write (create) permissions to write traces.

Example RBAC configuration in a ClusterRole resource

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tempostack-traces-write
rules:
  - apiGroups:
      - 'tempo.grafana.com'
    resources:
      - dev
    resourceNames:
      - traces
    verbs:
      - 'create'

Grant the OpenTelemetry Collector the write permissions by defining a cluster role binding to attach the OpenTelemetry Collector service account.
Example RBAC configuration in a ClusterRoleBinding resource

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-write
subjects:
  - kind: ServiceAccount
    name: otel-collector # 1
    namespace: otel

1. The service account that you created in a previous step. The client uses it when exporting trace data.
Configure the OpenTelemetryCollector custom resource as follows:

- Add the bearertokenauth extension and a valid token to the tracing pipeline service.
- Add the tenant name in the otlp/otlphttp exporters as the X-Scope-OrgID header. Enable TLS with a valid certificate authority file.
Sample OpenTelemetryCollector CR configuration

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: cluster-collector
  namespace: <project_of_tempostack_instance>
spec:
  mode: deployment
  serviceAccount: otel-collector # 1
  config: |
    extensions:
      bearertokenauth: # 2
        filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" # 3
    exporters:
      otlp/dev: # 4
        endpoint: sample-gateway.tempo.svc.cluster.local:8090
        tls:
          insecure: false
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" # 5
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: "dev" # 6
      otlphttp/dev: # 7
        endpoint: https://sample-gateway.<project_of_tempostack_instance>.svc.cluster.local:8080/api/traces/v1/dev
        tls:
          insecure: false
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
        auth:
          authenticator: bearertokenauth
        headers:
          X-Scope-OrgID: "dev"
    service:
      extensions: [bearertokenauth]
      pipelines:
        traces:
          exporters: [otlp/dev] # 8
# ...

1. The service account configured with the write permissions.
2. The Bearer Token extension, which uses the service account token.
3. The service account token. The client sends the token to the tracing pipeline service as the bearer token header.
4. Specify either the OTLP gRPC Exporter (otlp/dev) or the OTLP HTTP Exporter (otlphttp/dev).
5. Enables TLS with a valid service CA file.
6. The header with the tenant name.
7. Specify either the OTLP gRPC Exporter (otlp/dev) or the OTLP HTTP Exporter (otlphttp/dev).
8. The exporter that you specified in the exporters section of the CR.
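To make the exchange concrete, the following Python sketch (illustrative only; the gateway URL, token, and payload are placeholders) shows the two headers that this collector configuration attaches to each OTLP/HTTP export request: the service account token as a bearer token and the tenant name as X-Scope-OrgID:

```python
import urllib.request

# Illustrative sketch only; the collector adds these headers for you.
# Placeholder token; inside a pod it would be read from
# /var/run/secrets/kubernetes.io/serviceaccount/token.
token = "<service_account_token>"

req = urllib.request.Request(
    "https://sample-gateway.example.svc.cluster.local:8080/api/traces/v1/dev",
    data=b"<otlp_payload>",  # placeholder for the serialized OTLP payload
    method="POST",
)
req.add_header("Authorization", f"Bearer {token}")  # added by the bearertokenauth extension
req.add_header("X-Scope-OrgID", "dev")              # routes the request to the "dev" tenant
```

The gateway authenticates the bearer token against the cluster and then authorizes the write against the tenant named in X-Scope-OrgID, using the RBAC rules defined above.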
3.4. Installing a TempoStack instance
You can install a TempoStack instance by using the web console or command line.
3.4.1. Installing a TempoStack instance by using the web console
You can install a TempoStack instance from the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".
Warning: Object storage is required and not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
- Go to Home → Projects → Create Project to create a project of your choice for the TempoStack instance that you will create in a subsequent step.
- Go to Workloads → Secrets → Create From YAML to create a secret for your object storage bucket in the project that you created for the TempoStack instance. For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque

- Create a TempoStack instance.
Note: You can create multiple TempoStack instances in separate projects on the same cluster.
- Go to Operators → Installed Operators.
- Select TempoStack → Create TempoStack → YAML view.
- In the YAML view, customize the TempoStack custom resource (CR):

Example TempoStack CR for AWS S3 and MinIO storage and two tenants

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack # 1
metadata:
  name: simplest
  namespace: <project_of_tempostack_instance> # 2
spec: # 3
  storage: # 4
    secret: # 5
      name: <secret_name> # 6
      type: <secret_provider> # 7
  storageSize: <value>Gi # 8
  resources: # 9
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  tenants:
    mode: openshift # 10
    authentication: # 11
      - tenantName: dev # 12
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" # 13
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true # 14
    queryFrontend:
      jaegerQuery:
        enabled: true # 15

1. This CR creates a TempoStack deployment, which is configured to receive Jaeger Thrift over HTTP and the OpenTelemetry Protocol (OTLP).
2. The namespace that you have chosen for the TempoStack deployment.
3. Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
4. Specifies the storage for storing traces.
5. The secret that you created in step 2 for the object storage that was set up as one of the prerequisites.
6. The value of the name field in the metadata section of the secret. For example: minio.
7. The accepted values are azure for Azure Blob Storage, gcs for Google Cloud Storage, and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. For example: s3.
8. The size of the persistent volume claim for the Tempo write-ahead log (WAL). The default is 10Gi. For example: 1Gi.
9. Optional.
10. The value must be openshift.
11. The list of tenants.
12. The tenant name, which is used as the value for the X-Scope-OrgId HTTP header.
13. The unique identifier of the tenant, which must be unique throughout the lifecycle of the TempoStack deployment. The Distributed Tracing Platform uses this ID to prefix objects in the object storage. You can reuse the value of the UUID or tempoName field.
14. Enables a gateway that performs authentication and authorization.
15. Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
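Per callout 13, each tenantId must remain unique for the lifetime of the deployment because it prefixes objects in the object store. The following hypothetical Python helper (not part of the Operator) sketches one way to generate the authentication list:

```python
import uuid

# Hypothetical helper: build the spec.tenants.authentication list for a
# TempoStack CR, assigning each tenant a fresh UUID.
def make_tenants(names):
    return [{"tenantName": name, "tenantId": str(uuid.uuid4())} for name in names]

tenants = make_tenants(["dev", "prod"])

# tenantId values must be unique across tenants and stable for the
# lifetime of the deployment, so persist them once generated.
ids = [t["tenantId"] for t in tenants]
assert len(set(ids)) == len(ids)
```

Because the IDs prefix stored objects, regenerating them later would orphan previously written trace data; record the generated values in the CR and keep them unchanged.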
- Select Create.
Verification
- Use the Project: dropdown list to select the project of the TempoStack instance.
- Go to Operators → Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready.
- Go to Workloads → Pods to verify that all the component pods of the TempoStack instance are running.
- Access the Tempo console:
  - Go to Networking → Routes and press Ctrl+F to search for tempo.
  - In the Location column, open the URL to access the Tempo console.

    Note: The Tempo console initially shows no trace data following the Tempo console installation.
3.4.2. Installing a TempoStack instance by using the CLI
You can install a TempoStack instance from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

  Tip:
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run the oc login command:

    $ oc login --username=<your_username>
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".

Warning: Object storage is required and not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
Run the following command to create a project of your choice for the TempoStack instance that you will create in a subsequent step:

$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_tempostack_instance>
EOF

In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command:

$ oc apply -f - << EOF
<object_storage_secret>
EOF

For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque

Create a TempoStack instance in the project that you created for it:
Note: You can create multiple TempoStack instances in separate projects on the same cluster.
Customize the TempoStack custom resource (CR):

Example TempoStack CR for AWS S3 and MinIO storage and two tenants

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack # 1
metadata:
  name: simplest
  namespace: <project_of_tempostack_instance> # 2
spec: # 3
  storage: # 4
    secret: # 5
      name: <secret_name> # 6
      type: <secret_provider> # 7
  storageSize: <value>Gi # 8
  resources: # 9
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  tenants:
    mode: openshift # 10
    authentication: # 11
      - tenantName: dev # 12
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" # 13
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true # 14
    queryFrontend:
      jaegerQuery:
        enabled: true # 15

1. This CR creates a TempoStack deployment, which is configured to receive Jaeger Thrift over HTTP and the OpenTelemetry Protocol (OTLP).
2. The namespace that you have chosen for the TempoStack deployment.
3. Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
4. Specifies the storage for storing traces.
5. The secret that you created in step 2 for the object storage that was set up as one of the prerequisites.
6. The value of the name field in the metadata section of the secret. For example: minio.
7. The accepted values are azure for Azure Blob Storage, gcs for Google Cloud Storage, and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. For example: s3.
8. The size of the persistent volume claim for the Tempo write-ahead log (WAL). The default is 10Gi. For example: 1Gi.
9. Optional.
10. The value must be openshift.
11. The list of tenants.
12. The tenant name, which is used as the value for the X-Scope-OrgId HTTP header.
13. The unique identifier of the tenant, which must be unique throughout the lifecycle of the TempoStack deployment. The Distributed Tracing Platform uses this ID to prefix objects in the object storage. You can reuse the value of the UUID or tempoName field.
14. Enables a gateway that performs authentication and authorization.
15. Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
Apply the customized CR by running the following command:

$ oc apply -f - << EOF
<tempostack_cr>
EOF
Verification
- Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command:

  $ oc get tempostacks.tempo.grafana.com simplest -o yaml

- Verify that all the TempoStack component pods are running by running the following command:

  $ oc get pods

- Access the Tempo console:
  - Query the route details by running the following command:

    $ oc get route

  - Open https://<route_from_previous_step> in a web browser.

    Note: The Tempo console initially shows no trace data following the Tempo console installation.
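If you prefer to script the readiness check rather than reading the YAML by eye, you can query JSON output (for example, `oc get tempostacks.tempo.grafana.com simplest -o json`) and inspect `status.conditions`. A sketch with a hard-coded sample payload standing in for a live cluster:

```python
import json

# Sample payload standing in for the output of:
#   oc get tempostacks.tempo.grafana.com simplest -o json
sample = json.loads("""
{
  "status": {
    "conditions": [
      {"type": "Ready", "status": "True"}
    ]
  }
}
""")

def is_ready(obj):
    """True if any status condition has type Ready with status "True"."""
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in obj.get("status", {}).get("conditions", [])
    )

print(is_ready(sample))
```

The exact set of condition types reported by the Operator may vary; the Ready condition checked here is the one named in the verification step above.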
3.5. Installing a TempoMonolithic instance
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance by using the web console or command line.
The TempoMonolithic custom resource (CR) creates a Tempo deployment in monolithic mode. All components of the Tempo deployment, such as the compactor, distributor, ingester, querier, and query frontend, are contained in a single container.

A TempoMonolithic instance supports storing traces in in-memory storage, a persistent volume, or object storage.

Tempo deployment in monolithic mode is preferred for small deployments, demonstrations, testing, and as a migration path for the Red Hat OpenShift Distributed Tracing Platform (Jaeger) all-in-one deployment.

The monolithic deployment of Tempo does not scale horizontally. If you require horizontal scaling, use the TempoStack CR for a Tempo deployment in microservices mode.
3.5.1. Installing a TempoMonolithic instance by using the web console
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance from the Administrator view of the web console.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
- Go to Home → Projects → Create Project to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step.
- Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage.

  Important: Object storage is not included with the Distributed Tracing Platform and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage.

  Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this in Workloads → Secrets → Create From YAML. For more information, see "Object storage setup".

  Example secret for Amazon S3 and MinIO storage

  apiVersion: v1
  kind: Secret
  metadata:
    name: minio-test
  stringData:
    endpoint: http://minio.minio.svc:9000
    bucket: tempo
    access_key_id: tempo
    access_key_secret: <secret>
  type: Opaque

- Create a TempoMonolithic instance:
Note: You can create multiple TempoMonolithic instances in separate projects on the same cluster.
- Go to Operators → Installed Operators.
- Select TempoMonolithic → Create TempoMonolithic → YAML view.
- In the YAML view, customize the TempoMonolithic custom resource (CR).

Example TempoMonolithic CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic # 1
metadata:
  name: <metadata_name>
  namespace: <project_of_tempomonolithic_instance> # 2
spec: # 3
  storage: # 4
    traces:
      backend: <supported_storage_type> # 5
      size: <value>Gi # 6
      s3: # 7
        secret: <secret_name> # 8
      tls: # 9
        enabled: true
        caName: <ca_certificate_configmap_name> # 10
  jaegerui:
    enabled: true # 11
    route:
      enabled: true # 12
  resources: # 13
    total:
      limits:
        memory: <value>Gi
        cpu: <value>m
  multitenancy:
    enabled: true
    mode: openshift
    authentication: # 14
      - tenantName: dev # 15
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" # 16
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"

1. This CR creates a TempoMonolithic deployment with trace ingestion in the OTLP protocol.
2. The namespace that you have chosen for the TempoMonolithic deployment.
3. Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
4. Specifies the storage for storing traces.
5. The type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv. The accepted values for object storage are s3, gcs, or azure, depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
6. The memory size: for in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi. For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi. For object storage, this means the size of the persistent volume claim for the Tempo write-ahead log (WAL), where the default is 10Gi.
7. Optional: For object storage, the type of object storage. The accepted values are s3, gcs, and azure, depending on the used object store type.
8. Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup".
9. Optional.
10. Optional: Name of a ConfigMap object that contains a CA certificate.
11. Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
12. Enables creation of a route for the Jaeger UI.
13. Optional.
14. Lists the tenants.
15. The tenant name, which is used as the value for the X-Scope-OrgId HTTP header.
16. The unique identifier of the tenant, which must be unique throughout the lifecycle of the TempoMonolithic deployment. This ID is added as a prefix to the objects in the object storage. You can reuse the value of the UUID or tempoName field.
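Callouts 5 and 6 tie the meaning of the size field to the chosen backend. The defaults they describe can be summarized in a small, hypothetical Python lookup (illustrative only; not part of the Operator):

```python
# Hypothetical summary of the defaults from callouts 5 and 6:
# "memory" -> 2Gi tmpfs volume (non-persistent; dev/test only);
# "pv" -> 10Gi persistent volume claim;
# "s3"/"gcs"/"azure" -> 10Gi persistent volume claim for the Tempo WAL.
DEFAULT_SIZE = {
    "memory": "2Gi",
    "pv": "10Gi",
    "s3": "10Gi",
    "gcs": "10Gi",
    "azure": "10Gi",
}

def default_size(backend: str) -> str:
    """Return the default `size` for a supported storage backend."""
    try:
        return DEFAULT_SIZE[backend]
    except KeyError:
        raise ValueError(f"unsupported backend: {backend}") from None
```

Note that for the object-storage backends the size governs the WAL claim, not the capacity of the bucket, which is managed by the object storage provider.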
- Select Create.
Verification
- Use the Project: dropdown list to select the project of the TempoMonolithic instance.
- Go to Operators → Installed Operators to verify that the Status of the TempoMonolithic instance is Condition: Ready.
- Go to Workloads → Pods to verify that the pod of the TempoMonolithic instance is running.
- Access the Jaeger UI:
  - Go to Networking → Routes and press Ctrl+F to search for jaegerui.

    Note: The Jaeger UI uses the tempo-<metadata_name_of_TempoMonolithic_CR>-jaegerui route.

  - In the Location column, open the URL to access the Jaeger UI.
- When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_TempoMonolithic_CR>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_TempoMonolithic_CR>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_TempoMonolithic_CR>:3200 endpoint inside the cluster.
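The in-cluster endpoints above all derive from the CR's metadata name by a fixed tempo-<name>:<port> pattern. A brief Python sketch of that naming (illustrative; the ports are the ones documented above):

```python
# Derive the in-cluster endpoints of a TempoMonolithic instance from the
# metadata.name of its CR, following the tempo-<name>:<port> pattern.
def endpoints(metadata_name: str) -> dict:
    service = f"tempo-{metadata_name}"
    return {
        "otlp_grpc": f"{service}:4317",  # OTLP/gRPC trace ingestion
        "otlp_http": f"{service}:4318",  # OTLP/HTTP trace ingestion
        "tempo_api": f"{service}:3200",  # Tempo API
    }

print(endpoints("sample"))
```

For example, a CR named `sample` exposes OTLP/gRPC at tempo-sample:4317 within the cluster; clients in other namespaces would append the service's namespace and cluster domain as usual.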
3.5.2. Installing a TempoMonolithic instance by using the CLI
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance from the command line.
Prerequisites
- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

  Tip:
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run the oc login command:

    $ oc login --username=<your_username>
- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
Procedure
Run the following command to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step:

$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_tempomonolithic_instance>
EOF

Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage.

Important: Object storage is not included with the Distributed Tracing Platform and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this by running the following command:

$ oc apply -f - << EOF
<object_storage_secret>
EOF

For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque

Create a TempoMonolithic instance in the project that you created for it.

Tip: You can create multiple TempoMonolithic instances in separate projects on the same cluster.
Customize the TempoMonolithic custom resource (CR).

Example TempoMonolithic CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic # 1
metadata:
  name: <metadata_name>
  namespace: <project_of_tempomonolithic_instance> # 2
spec: # 3
  storage: # 4
    traces:
      backend: <supported_storage_type> # 5
      size: <value>Gi # 6
      s3: # 7
        secret: <secret_name> # 8
      tls: # 9
        enabled: true
        caName: <ca_certificate_configmap_name> # 10
  jaegerui:
    enabled: true # 11
    route:
      enabled: true # 12
  resources: # 13
    total:
      limits:
        memory: <value>Gi
        cpu: <value>m
  multitenancy:
    enabled: true
    mode: openshift
    authentication: # 14
      - tenantName: dev # 15
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" # 16
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"

1. This CR creates a TempoMonolithic deployment with trace ingestion in the OTLP protocol.
2. The namespace that you have chosen for the TempoMonolithic deployment.
3. Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
4. Specifies the storage for storing traces.
5. The type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv. The accepted values for object storage are s3, gcs, or azure, depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
6. The memory size: for in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi. For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi. For object storage, this means the size of the persistent volume claim for the Tempo write-ahead log (WAL), where the default is 10Gi.
7. Optional: For object storage, the type of object storage. The accepted values are s3, gcs, and azure, depending on the used object store type.
8. Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup".
9. Optional.
10. Optional: Name of a ConfigMap object that contains a CA certificate.
11. Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
12. Enables creation of a route for the Jaeger UI.
13. Optional.
14. Lists the tenants.
15. The tenant name, which is used as the value for the X-Scope-OrgId HTTP header.
16. The unique identifier of the tenant, which must be unique throughout the lifecycle of the TempoMonolithic deployment. This ID is added as a prefix to the objects in the object storage. You can reuse the value of the UUID or tempoName field.
Apply the customized CR by running the following command:

$ oc apply -f - << EOF
<tempomonolithic_cr>
EOF
Verification
- Verify that the status of all TempoMonolithic components is Running and the conditions are type: Ready by running the following command:

  $ oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml

- Verify that the pod of the TempoMonolithic instance is running by running the following command:

  $ oc get pods

- Access the Jaeger UI:
  - Query the route details for the tempo-<metadata_name_of_tempomonolithic_cr>-jaegerui route by running the following command:

    $ oc get route

  - Open https://<route_from_previous_step> in a web browser.
- When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_tempomonolithic_cr>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_tempomonolithic_cr>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_tempomonolithic_cr>:3200 endpoint inside the cluster.