Chapter 3. Distributed tracing and Service Mesh
3.1. Configuring Red Hat OpenShift distributed tracing platform with Service Mesh
Integrate Red Hat OpenShift distributed tracing platform with Red Hat OpenShift Service Mesh by using Red Hat OpenShift distributed tracing platform (Tempo) for trace storage and Red Hat OpenShift distributed tracing data collection for standardized telemetry data collection and processing.
3.1.1. About Red Hat OpenShift distributed tracing platform and Red Hat OpenShift Service Mesh
Red Hat OpenShift distributed tracing platform integrates with Red Hat OpenShift Service Mesh through two components: Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection.
- Red Hat OpenShift distributed tracing platform (Tempo)
Provides a distributed tracing platform for monitoring and troubleshooting transactions in complex distributed systems. Tempo derives its core functionality from the open source Grafana Tempo project.
For more information about distributed tracing platform (Tempo), its features, installation, and configuration, see "Red Hat OpenShift distributed tracing platform (Tempo)".
- Red Hat OpenShift distributed tracing data collection
Derives its core functionality from the open source "OpenTelemetry project", which aims to offer unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat OpenShift distributed tracing data collection product provides support for deploying and managing the OpenTelemetry Collector and for simplifying the instrumentation of workloads.
The "OpenTelemetry Collector" can receive, process, and forward telemetry data in many formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.
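The pipeline model described above can be sketched as a small, standalone Collector configuration. This fragment is illustrative only and is not part of the procedure below; the batch processor and debug exporter are example choices made here to show the receive-process-export flow:

```yaml
# Sketch of a Collector pipeline: receivers ingest telemetry,
# processors transform it, and exporters forward it onward.
receivers:
  otlp:                 # accept OTLP data over gRPC
    protocols:
      grpc: {}
processors:
  batch: {}             # group spans into batches before export
exporters:
  debug: {}             # write received telemetry to the Collector log
service:
  pipelines:
    traces:             # a traces pipeline wiring the pieces together
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```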
For more information about distributed tracing data collection, its features, installation, and configuration, see "Red Hat OpenShift distributed tracing data collection".
3.1.2. Configuring Red Hat OpenShift distributed tracing data collection with Service Mesh
You can integrate Red Hat OpenShift Service Mesh with Red Hat OpenShift distributed tracing data collection to instrument, generate, collect, and export OpenTelemetry traces, metrics, and logs to analyze and understand the performance and behavior of the software.
Prerequisites
- You have installed the Tempo Operator. For more information, see "Installing the Tempo Operator".
- You have installed the Red Hat OpenShift distributed tracing data collection Operator. For more information, see "Installing the Red Hat build of OpenTelemetry".
- You have installed a TempoStack instance and configured it in a tempo namespace. For more information, see "Installing a TempoStack instance".
- You have created an Istio instance.
- You have created an IstioCNI instance.
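For orientation, the TempoStack prerequisite above might look like the following minimal sketch. The instance name sample and the object storage secret name are assumptions for illustration; with the name sample, the Tempo Operator creates services such as tempo-sample-distributor, which this procedure references later:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample          # assumed name; yields tempo-sample-* service names
  namespace: tempo
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: tempo-storage-secret   # assumed secret with object storage credentials
      type: s3
```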
Procedure
1. Navigate to the Red Hat OpenShift distributed tracing data collection Operator and install the OpenTelemetryCollector resource in the istio-system namespace:

   Example OpenTelemetry Collector in the istio-system namespace

   kind: OpenTelemetryCollector
   apiVersion: opentelemetry.io/v1beta1
   metadata:
     name: otel
     namespace: istio-system
   spec:
     observability:
       metrics: {}
     deploymentUpdateStrategy: {}
     config:
       exporters:
         otlp:
           endpoint: 'tempo-sample-distributor.tempo.svc.cluster.local:4317'
           tls:
             insecure: true
       receivers:
         otlp:
           protocols:
             grpc:
               endpoint: '0.0.0.0:4317'
             http: {}
       service:
         pipelines:
           traces:
             exporters:
               - otlp
             receivers:
               - otlp

2. Configure Red Hat OpenShift Service Mesh to enable tracing, and define the distributed tracing data collection tracing provider in your meshConfig:

   Example enabling tracing and defining tracing providers

   apiVersion: sailoperator.io/v1
   kind: Istio
   metadata:
     # ...
     name: default
   spec:
     namespace: istio-system
     # ...
     values:
       meshConfig:
         enableTracing: true
         extensionProviders:
           - name: otel
             opentelemetry:
               port: 4317
               service: otel-collector.istio-system.svc.cluster.local

   spec.values.meshConfig.extensionProviders.opentelemetry.service is the OpenTelemetry Collector service in the istio-system namespace.
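A later step creates a Telemetry resource that enables tracing only. As a hedged variant, a single Telemetry resource can also enable the Prometheus metrics provider alongside the otel tracing provider by setting disabled: false in a metrics override; this sketch assumes the built-in provider name prometheus:

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: otel-demo
  namespace: istio-system
spec:
  metrics:
    - providers:
        - name: prometheus    # assumed built-in Prometheus metrics provider
      overrides:
        - disabled: false     # keep metrics generation enabled
  tracing:
    - providers:
        - name: otel
      randomSamplingPercentage: 100
```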
3. Create an Istio Telemetry resource to enable the tracers defined in spec.values.meshConfig.extensionProviders:

   Example Istio Telemetry resource

   apiVersion: telemetry.istio.io/v1
   kind: Telemetry
   metadata:
     name: otel-demo
     namespace: istio-system
   spec:
     tracing:
       - providers:
           - name: otel
         randomSamplingPercentage: 100

   Note: You can use a single Istio Telemetry resource for both the Prometheus metrics provider and a tracing provider by setting spec.metrics.overrides.disabled to false, which enables the Prometheus metrics provider. This step is optional, and you can skip it if you configured metrics through the OpenShift Cluster Monitoring method described in the earlier step.

4. Create the bookinfo namespace by running the following command:

   $ oc create ns bookinfo

5. Depending on the update strategy you are using, enable sidecar injection in the namespace by running the appropriate commands:
   - If you are using the InPlace update strategy, run the following command:

     $ oc label namespace bookinfo istio-injection=enabled

   - If you are using the RevisionBased update strategy, run the following commands:

     a. Display the revision name by running the following command:

        $ oc get istiorevisions.sailoperator.io

        You should see output similar to the following example:

        NAME      TYPE    READY   STATUS    IN USE   VERSION   AGE
        default   Local   True    Healthy   True     v1.24.3   3m33s

     b. Label the namespace with the revision name to enable sidecar injection by running the following command:

        $ oc label namespace bookinfo istio.io/rev=default
6. Deploy the bookinfo application in the bookinfo namespace by running the following command:

   $ oc apply -f https://raw.githubusercontent.com/openshift-service-mesh/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo

7. Generate traffic to the productpage pod to produce traces by running the following command:

   $ oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage

8. Validate the integration by running the following command to find the route to the Jaeger UI, where you can see the traces:
   $ oc get routes -n tempo tempo-sample-query-frontend

   Note: You must create the OpenShift route for the Jaeger UI in the Tempo namespace. You can either create it manually for the tempo-sample-query-frontend service, or update the Tempo custom resource with .spec.template.queryFrontend.jaegerQuery.ingress.type: route.
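The route mentioned in the note can be requested through the Tempo custom resource itself. The following is a sketch of the relevant fragment, assuming the TempoStack instance is named sample in the tempo namespace:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample
  namespace: tempo
spec:
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true          # expose the Jaeger UI on the query frontend
        ingress:
          type: route          # have the operator create an OpenShift route
```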