Chapter 2. About the Distributed Tracing Platform
2.1. Key concepts in distributed tracing
Every time a user takes an action in an application, the architecture executes a request that may require dozens of different services to participate in producing a response. Red Hat OpenShift Distributed Tracing Platform lets you perform distributed tracing, which records the path of a request through the various microservices that make up an application.
Distributed tracing is a technique that ties together information about different units of work, usually executed in different processes or hosts, to understand a whole chain of events in a distributed transaction. Developers can use distributed tracing to visualize call flows in large microservice architectures. It is valuable for understanding serialization, parallelism, and sources of latency.
Red Hat OpenShift Distributed Tracing Platform records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace consists of one or more spans.
A span represents a logical unit of work in Red Hat OpenShift Distributed Tracing Platform that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships.
As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use Red Hat OpenShift Distributed Tracing Platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.
With Distributed Tracing Platform, you can perform the following functions:
- Monitor distributed transactions
- Optimize performance and latency
- Perform root cause analysis
You can combine Distributed Tracing Platform with other relevant components of the OpenShift Container Platform:
- Red Hat build of OpenTelemetry for forwarding traces to a TempoStack instance
- Distributed tracing UI plugin of the Cluster Observability Operator (COO)
2.2. Red Hat OpenShift Distributed Tracing Platform features
Red Hat OpenShift Distributed Tracing Platform provides the following capabilities:
- Integration with Kiali – When properly configured, you can view Distributed Tracing Platform data from the Kiali console.
- High scalability – The Distributed Tracing Platform back end is designed to have no single points of failure and to scale with the business needs.
- Distributed Context Propagation – Enables you to connect data from different components together to create a complete end-to-end trace.
- Backwards compatibility with Zipkin – Red Hat OpenShift Distributed Tracing Platform has APIs that enable it to be used as a drop-in replacement for Zipkin, although Red Hat does not support Zipkin compatibility in this release.
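Distributed context propagation, mentioned above, typically works by passing a trace context header between services, for example the W3C Trace Context `traceparent` header used by OpenTelemetry. The following sketch shows the idea; the helper function names are illustrative, not part of any platform API.

```python
import re
import secrets

# Sketch of W3C Trace Context propagation, which connects spans emitted by
# different services into one end-to-end trace.
# traceparent format: version-traceid-parentid-flags
def make_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header: str):
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    return m.groups() if m else None

# Service A starts a trace and forwards the header with its outgoing request;
# service B parses it and continues the same trace under a new child span.
trace_id = secrets.token_hex(16)   # 32 hex characters
span_id = secrets.token_hex(8)     # 16 hex characters
header = make_traceparent(trace_id, span_id)
parsed = parse_traceparent(header)
assert parsed is not None and parsed[0] == trace_id
```

Because every downstream service reuses the propagated trace ID, the back end can stitch all spans into a complete end-to-end trace.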
2.3. Red Hat OpenShift Distributed Tracing Platform architecture
Red Hat OpenShift Distributed Tracing Platform is made up of several components that work together to collect, store, and display tracing data.
Red Hat OpenShift Distributed Tracing Platform - This component is based on the open source Grafana Tempo project.
- Gateway – The Gateway handles authentication, authorization, and forwarding of requests to the Distributor or Query front-end service.
- Distributor – The Distributor accepts spans in multiple formats, including Jaeger, OpenTelemetry, and Zipkin. It routes spans to Ingesters by hashing the traceID and using a distributed consistent hash ring.
- Ingester – The Ingester batches a trace into blocks, creates bloom filters and indexes, and then flushes it all to the back end.
- Query Frontend – The Query Frontend shards the search space for an incoming query and sends the query to the Queriers. The Query Frontend deployment exposes the Jaeger UI through the Tempo Query sidecar.
- Querier – The Querier is responsible for finding the requested trace ID in either the Ingesters or the back-end storage. Depending on parameters, it can query the Ingesters and pull Bloom indexes from the back end to search blocks in object storage.
- Compactor – The Compactor streams blocks to and from the back-end storage to reduce the total number of blocks.
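The Distributor's trace-ID-based routing can be illustrated with a simplified consistent hash ring. Tempo's actual ring (token ownership, replication, instance health) is considerably more elaborate; this sketch only shows why hashing the trace ID sends every span of a trace to the same Ingester. All names here are hypothetical.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

# Simplified consistent hash ring: each ingester owns several virtual
# tokens on the ring, and a trace ID is routed to the first token
# clockwise from its hash.
class HashRing:
    def __init__(self, ingesters, vnodes=64):
        self.ring = sorted(
            (_hash(f"{name}-{i}"), name)
            for name in ingesters
            for i in range(vnodes)
        )
        self.tokens = [token for token, _ in self.ring]

    def route(self, trace_id: str) -> str:
        idx = bisect.bisect(self.tokens, _hash(trace_id)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["ingester-0", "ingester-1", "ingester-2"])
# Routing is deterministic per trace ID, so all spans of one trace land on
# the same ingester and the trace is assembled in one place before being
# flushed to the back end.
assert ring.route("abc123") == ring.route("abc123")
```

Virtual tokens (`vnodes`) spread each Ingester across the ring so load stays balanced when instances join or leave.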
Red Hat build of OpenTelemetry - This component is based on the open source OpenTelemetry project.
- OpenTelemetry Collector – The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open source observability data formats, for example, Jaeger and Prometheus, and can send data to one or more open source or commercial back ends. The Collector is the default destination to which instrumentation libraries export their telemetry data.
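A minimal Collector pipeline that receives OTLP traces and forwards them to a Tempo back end might look like the following sketch. The endpoint name is an assumption and depends on how your TempoStack instance is deployed; substitute the address of your own distributor service.

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

exporters:
  otlp:
    # Hypothetical endpoint; substitute your TempoStack distributor address.
    endpoint: tempo-sample-distributor:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```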