Chapter 1. Connectivity Link observability
You can use the Connectivity Link observability features to observe and monitor your gateways, applications, and APIs on OpenShift Container Platform.
1.1. Connectivity Link observability features
Connectivity Link uses metrics exposed by Connectivity Link components, Gateway API state metrics, and standard metrics exposed by Envoy to build a set of template dashboards and alerts. Envoy is part of OpenShift Service Mesh; in this case, it runs as a gateway deployment.
You can download Kuadrant community-based templates to integrate with Grafana, Prometheus, and Alertmanager deployments, or use those templates as starting points that you can change for your specific needs. Use the secure images available in the Red Hat Catalog at: Red Hat Connectivity Link.
Connectivity Link includes the following observability features:
- Metrics: Prometheus metrics for monitoring gateway and policy performance
- Tracing: Distributed tracing with Red Hat build of OpenTelemetry support for request flows
- Access Logs: Envoy access logs with request correlation and structured logging
- Dashboards: Pre-built Grafana dashboards for visualization
1.2. Configure your observability monitoring stack
You can prepare your monitoring stack to give yourself insight into your gateways, applications, and APIs by setting up dashboards and alerts on your OpenShift Container Platform cluster. You must configure your stack on each OpenShift Container Platform cluster that you want to use Connectivity Link on.
The example dashboards and alerts for observing Connectivity Link functionality use low-level CPU and network metrics from the user monitoring stack in OpenShift Container Platform and resource-state metrics from Gateway API and Connectivity Link resources. The user monitoring stack in OpenShift Container Platform is based on the Prometheus open source project.
The following procedure is an example only and is not intended for production use.
Prerequisites
- You installed Connectivity Link.
- You set up metrics, such as Prometheus.
- You installed and configured Grafana on your OpenShift Container Platform cluster.
- You cloned the Kuadrant Operator GitHub repository.
Procedure
Verify that user workload monitoring is configured correctly in your OpenShift Container Platform cluster as follows:

$ kubectl get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath='{.data.config\.yaml}' | grep enableUserWorkload

The expected output is enableUserWorkload: true.

Install the Connectivity Link, Gateway, and Grafana component metrics and configuration as follows:

$ kubectl apply -k https://github.com/Kuadrant/kuadrant-operator/config/install/configure/observability?ref=v1.2.0

From the root directory of your Kuadrant Operator repository, configure the OpenShift Container Platform thanos-querier instance as a data source in Grafana as follows:

TOKEN="Bearer $(oc whoami -t)"
HOST="$(kubectl -n openshift-monitoring get route thanos-querier -o jsonpath='https://{.status.ingress[].host}')"
echo "TOKEN=$TOKEN" > config/observability/openshift/grafana/datasource.env
echo "HOST=$HOST" >> config/observability/openshift/grafana/datasource.env
kubectl apply -k config/observability/openshift/grafana

Configure the example Grafana dashboards as follows:

$ kubectl apply -k https://github.com/Kuadrant/kuadrant-operator/examples/dashboards?ref=v1.3.0
1.3. Enabling observability for Connectivity Link

By enabling observability monitoring, you can view context, historical trends, and alerts based on the metrics you configured. After you have configured your monitoring stack, use the following procedure to expose metrics endpoints, deploy monitoring resources, and configure the Envoy gateway.
When you enable observability monitoring, the following events occur:
- Connectivity Link creates ServiceMonitor and PodMonitor resources for its components in the namespace where Connectivity Link is installed.
- A single set of monitors is created in each gateway namespace to scrape metrics from any gateways.
- Monitors also scrape metrics from the corresponding gateway system namespace, generally the istio-system namespace.
You can delete and re-create monitors as required. Monitors are only ever created or deleted, never updated or reverted. The following procedure is optional: you can instead create your own ServiceMonitor or PodMonitor definitions, or configure Prometheus metrics directly.
To use Connectivity Link observability dashboards, you must enable observability on each OpenShift Container Platform cluster that Connectivity Link runs on.
Prerequisites
- You installed Connectivity Link.
- You have administrator access to your OpenShift Container Platform cluster.
- You configured observability metrics.
Procedure
To enable default observability for Connectivity Link and any gateways, set the spec.observability.enable parameter value to true in your Kuadrant custom resource (CR):

Example Kuadrant CR

You can also set spec.observability.enable to false and create your own ServiceMonitor or PodMonitor definitions, or configure Prometheus directly.
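A minimal sketch of the Kuadrant CR described above; the apiVersion, name, and namespace are assumptions that may differ in your installation:

```yaml
apiVersion: kuadrant.io/v1beta1   # assumed API version
kind: Kuadrant
metadata:
  name: kuadrant                  # assumed name
  namespace: kuadrant-system      # assumed namespace
spec:
  observability:
    enable: true                  # set to false to manage monitors yourself
```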
Verification
Check the created monitors by running the following command:
$ kubectl get servicemonitor,podmonitor -A -l kuadrant.io/observability=true
1.4. Example dashboards and alerts

Connectivity Link includes starting points for monitoring your Connectivity Link deployment with ready-to-use example dashboards and alerts. You can customize these dashboards and alerts to fit your environment.
Dashboards are organized with different metrics for different use cases.
1.4.1. Platform engineer Grafana dashboard
The platform engineer dashboard displays the following details:
- Policy compliance and governance
- Resource consumption
- Error rates
- Request latency and throughput
- Multi-window, multi-burn-rate alert templates for API error rates and latency
- Multicluster split
1.4.2. Application developer Grafana dashboard
The application developer dashboard is less focused on policies than the platform engineer dashboard and is more focused on APIs and applications. For example:
- Request latency and throughput per API
- Total requests and error rates by API path
1.4.3. Business user Grafana dashboard
The business user dashboard includes the following details:
- Requests per second per API
- Increase or decrease in rates of API usage over specified times
1.4.4. Grafana dashboards available to import
The Connectivity Link example dashboards are uploaded to the Grafana dashboards website. You can import the following dashboards into your Grafana deployment on OpenShift Container Platform:
| Name | Dashboard ID |
|---|---|
| | 21538 |
| | 20982 |
| | 20981 |
| | 22695 |
1.4.5. Importing dashboards in Grafana
As an infrastructure engineer, you can manually select and import dashboards into Grafana to conduct rapid prototyping or emergency troubleshooting, test community dashboards, or perfect a dashboard that you intend to automate for another team.
You must perform these steps on each OpenShift Container Platform cluster that you want to use Connectivity Link on.
Prerequisites
- You configured your monitoring stack and other observability resources as needed.
- You installed Connectivity Link.
- You have administrator access to a running OpenShift Container Platform cluster.
Procedure
Click Dashboards > New > Import, and use one of the following options:
- Upload a dashboard JSON file.
- Enter a dashboard ID obtained from the Grafana dashboards website.
- Enter JSON content directly.
For more information, see the Grafana documentation on how to import dashboards.
1.4.6. Automating dashboard imports

As an infrastructure engineer, automating the import of observability dashboards can give you more consistency, version control, and operational velocity. Automation gives you the benefits of monitoring-as-code, helps you keep Operators updated and clusters identical, and supports multi-tenancy.
You can use a GrafanaDashboard resource to reference a ConfigMap. Data sources are configured as template variables, automatically integrating with your existing data sources. The metrics for these dashboards are sourced from Prometheus.

Important: For some example dashboard panels to work correctly, HTTPRoutes in Connectivity Link must include a service and deployment label with a value that matches the name of the service and deployment being routed to, for example, service=my-app and deployment=my-app. This allows low-level Istio and Envoy metrics to be joined with Gateway API state metrics.

If you do not want to use the GUI, you can automate dashboard provisioning in Grafana by adding JSON files to a ConfigMap object that you must mount at /etc/grafana/provisioning/dashboards.
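For example, a GrafanaDashboard resource that references a ConfigMap might look like the following sketch; the Grafana Operator v5 API is assumed, and the resource names and instance labels are hypothetical:

```yaml
apiVersion: grafana.integreatly.org/v1beta1   # assumed Grafana Operator v5 API
kind: GrafanaDashboard
metadata:
  name: platform-engineer-dashboard          # hypothetical name
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana                    # must match the labels on your Grafana instance
  configMapRef:
    name: platform-engineer-dashboard-cm     # ConfigMap holding the dashboard JSON
    key: dashboard.json
```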
1.4.7. About configuring Prometheus alerts
As an infrastructure engineer, configuring Prometheus alerts in OpenShift Container Platform is a proactive way to tune alerts so that you can ensure platform stability. For example, you can set alert triggers for automated incident detection, usage, and cluster health.
- You can integrate the Connectivity Link example alerts into Prometheus as PrometheusRule resources, and then adjust the alert thresholds to suit your specific operational needs.
- For details on how to configure Prometheus alerts, see Configuring alerts and notifications for user workload monitoring.
- Service Level Objective (SLO) alerts generated by using the Sloth GitHub project are also included. You can use these alerts to integrate with the SLO Grafana dashboard, which uses generated labels to provide a comprehensive overview of your SLOs.
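As an illustration only, a PrometheusRule resource with a hypothetical error-rate alert might look like the following sketch; the metric, expression, and threshold are assumptions to adapt to your own SLOs:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kuadrant-example-alerts        # hypothetical name
  namespace: kuadrant-system           # assumed namespace
spec:
  groups:
  - name: api-availability
    rules:
    - alert: HighApiErrorRate
      # Fires when more than 5% of requests return 5xx over 5 minutes (illustrative expression)
      expr: |
        sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
          / sum(rate(istio_requests_total[5m])) > 0.05
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: More than 5% of requests are failing with 5xx responses
```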
1.5. Tracing in Connectivity Link
Connectivity Link supports tracing at both the control plane and data-plane levels. Connectivity Link exports control-plane traces to your OpenTelemetry Collector so that you can observe reconciliation loops and internal operations. This is useful for debugging controller behavior, understanding operator performance, and tracking policy lifecycle events.
Data-plane tracing traces actual user requests through the gateway and policy enforcement components. You can see request flows through Istio, Authorino, Limitador, and the wasm-shim module. Data-plane tracing is useful for debugging request-level issues and policy enforcement.
To use tracing, you must configure both types of tracing. For the data plane, you configure the Kuadrant custom resource (CR). For control plane tracing, you must configure each component separately, such as the kuadrant-operator, authorino-operator, and limitador-operator deployments. This configuration sends traces to the same collector, providing a complete view of your Connectivity Link system from policy reconciliation to request processing.
1.5.1. Correlating control plane and data plane traces
Even though control plane and data plane traces are separate, you can correlate them. For example, create a RateLimitPolicy to understand how traces work together to show all events.
Create a RateLimitPolicy at 15:30:00, then view the control plane trace to see the following events:

- Policy reconciliation completed at 15:30:05.
- Limitador configuration updated.
- wasm-shim configuration updated.

Next, send a test request at 15:30:10, then view the data plane trace to see the following events:

- Request processed through the wasm-shim module.
- Rate limit check sent to Limitador.
- Response returned.
You can use a similar pattern of action for any events that you want to correlate manually. This type of correlation is useful in development environments.
1.5.2. Control-plane tracing environment variables
You can enable control plane tracing in Connectivity Link by setting OpenTelemetry environment variables in the deployment. The method for setting the variables depends on your deployment approach, for example, whether you used Operator Lifecycle Manager (OLM) or YAML manifests.
Control plane traces appear under the service name kuadrant-operator in the Grafana dashboard.
| Variable | Description | Default |
|---|---|---|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint URL, for example, `http://otel-collector:4318` | Tracing disabled |
| `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | Override endpoint specifically for traces | Uses `OTEL_EXPORTER_OTLP_ENDPOINT` |
| `OTEL_EXPORTER_OTLP_INSECURE` | Use insecure connection to collector; set to `true` to skip TLS verification | `false` |
| `OTEL_SERVICE_NAME` | Service name for traces | `kuadrant-operator` |
| `OTEL_SERVICE_VERSION` | Service version for telemetry data | Empty |
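For example, these variables can be set on the operator Deployment; the following fragment is a sketch, and the container name and collector endpoint are assumptions for your environment:

```yaml
# Fragment of a kuadrant-operator Deployment spec; container name and endpoint are assumptions
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector.observability.svc.cluster.local:4318
        - name: OTEL_EXPORTER_OTLP_INSECURE
          value: "true"              # development only
        - name: OTEL_SERVICE_NAME
          value: kuadrant-operator
```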
1.5.3. Enabling data plane tracing

Enable data plane tracing in OpenShift Service Mesh with the Kuadrant CR. You must perform these steps on each OpenShift Container Platform cluster that you want to use Connectivity Link on.
Prerequisites
- You installed Connectivity Link.
- You have administrator access to a running OpenShift Container Platform cluster.
- You have Red Hat OpenShift Distributed Tracing Platform installed and configured to support OpenTelemetry.
Procedure
Enable tracing in OpenShift Service Mesh by configuring your Telemetry custom resource (CR) as follows:

Example OpenShift Service Mesh Telemetry CR with tracing
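A sketch of such a Telemetry CR, assuming an extension provider named otel and 100% sampling for demonstration purposes:

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system            # assumed mesh root namespace
spec:
  tracing:
  - providers:
    - name: otel                     # must match the extension provider in the Istio CR
    randomSamplingPercentage: 100    # sample every request; lower this in production
```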
Apply the configuration by running the following command:

$ kubectl apply -f mesh-default.yaml

Configure a tracing extension provider for OpenShift Service Mesh in your Istio CR by adding a list value to the spec.values.meshConfig.extensionProviders parameter. Ensure that you also add the otel port and service information:

Example Istio CR with tracing extension provider
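A sketch of such an Istio CR; the Sail Operator API version and the collector Service name and namespace are assumptions:

```yaml
apiVersion: sailoperator.io/v1       # assumed Sail Operator API version
kind: Istio
metadata:
  name: default
spec:
  values:
    meshConfig:
      extensionProviders:
      - name: otel
        opentelemetry:
          port: 4317                 # gRPC OTLP port
          service: otel-collector.observability.svc.cluster.local  # assumed collector Service
```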
Important: If you are configuring the collector Service manually, you must set the OpenTelemetry Collector protocol in the Service CR port name and appProtocol fields. For example, when using gRPC, the port name should begin with grpc- or the appProtocol should be grpc.

Apply the configuration by running the following command:

$ kubectl apply -f istio.yaml

Optional: If you want to collect Authorino and Limitador traces in a different location than your Kuadrant traces, complete the following steps:

Enable request tracing in your Authorino custom resource (CR) and send authentication and authorization traces to the central collector as follows:

Example Authorino CR with request tracing
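A sketch of the Authorino CR tracing settings; the collector address is an assumption:

```yaml
apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
spec:
  tracing:
    endpoint: rpc://otel-collector.observability.svc.cluster.local:4317  # assumed collector address
    insecure: true    # development only; set to false in production
```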
Set insecure to true to skip TLS certificate verification in development environments. Set it to false for production environments.

Apply the configuration by running the following command:

$ kubectl apply -f authorino.yaml

Enable request tracing in your Limitador CR and send rate limit traces to the central collector as follows:

Example Limitador CR with request tracing
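A sketch of the Limitador CR tracing settings, assuming the same collector address; the tracing field names are assumptions based on the surrounding text:

```yaml
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador
spec:
  tracing:
    endpoint: rpc://otel-collector.observability.svc.cluster.local:4317  # assumed collector address
    insecure: true    # development only; set to false in production
```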
Set insecure to true to skip TLS certificate verification in development environments. Set it to false for production environments.

Apply the configuration by running the following command:

$ kubectl apply -f limitador.yaml

Important: Trace IDs do not propagate to WebAssembly modules in OpenShift Service Mesh. This means that requests passed to Limitador do not have the relevant parent trace ID. However, if the trace initiation point is outside OpenShift Service Mesh, the parent trace ID is available to Limitador and included in traces. This impacts correlating traces from Limitador with traces from Authorino, the gateway, and any other components in the request path.

Configure data-plane tracing in the Kuadrant CR by providing the collector endpoint as shown in the following example:

Example Kuadrant CR

where:

- spec.observability.tracing.defaultEndpoint: The URL of the tracing collector backend, that is, the OpenTelemetry endpoint. The following protocols are supported:
  - rpc:// for gRPC OTLP, port 4317
  - http:// for HTTP OTLP, port 4318
- spec.observability.tracing.insecure: Set to true to skip TLS certificate verification in development environments. Set to false for production environments.

Important: Point to the collector service, such as the Distributed Tracing Platform collector, not the query service. The collector receives traces from your applications. The query service is only for viewing traces in the GUI.
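Putting this together, the kuadrant.yaml applied in the next step might look like the following sketch; the collector address, name, and namespace are assumptions:

```yaml
apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant                 # assumed name
  namespace: kuadrant-system     # assumed namespace
spec:
  observability:
    tracing:
      defaultEndpoint: rpc://otel-collector.observability.svc.cluster.local:4317  # assumed collector
      insecure: true             # development only
```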
Apply the configuration by running the following command:

$ kubectl apply -f kuadrant.yaml
Verification
Verify that the CR applied successfully by listing the objects of that Kind by running the following command:

$ kubectl get kuadrant
1.5.4. Troubleshooting by using traces and logs
You can use a tracing user interface such as Grafana to search for OpenShift Service Mesh and Connectivity Link trace information by trace ID. You can get the trace ID from logs, or from a header in a sample request that you want to troubleshoot. You can also search for recent traces, filtering by the service that you want to focus on.
If you centrally aggregate logs by using tools such as Grafana Loki and Promtail, you can jump between trace information and the relevant logs for that service.
By using a combination of tracing and logs, you can visualize and troubleshoot request timing issues and narrow them down to specific services. This method gives you even more insight and a more complete picture of your user traffic when you combine it with Connectivity Link metrics and dashboards.
1.5.5. Viewing rate-limit logging with trace IDs
You can enable request logging with trace IDs to get more information about requests when you use the Limitador component of Connectivity Link for rate limiting. To do this, you must increase the log level.
Prerequisites
- You installed Connectivity Link.
- You have administrator access to a running OpenShift Container Platform cluster.
- You configured Grafana dashboards.
- You have Red Hat OpenShift Distributed Tracing Platform installed and configured to support OpenTelemetry.
Procedure
Set the verbosity to 3 or higher in your Limitador custom resource (CR) as follows:

Example Limitador CR
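A sketch of such a Limitador CR; only the verbosity value comes from the text above, and the name is an assumption:

```yaml
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador        # assumed name
spec:
  verbosity: 3           # 3 or higher enables request logging with trace IDs
```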
Example log entry with the traceparent field holding the trace ID:

"Request received: Request { metadata: MetadataMap { headers: {"te": "trailers", "grpc-timeout": "5000m", "content-type": "application/grpc", "traceparent": "00-4a2a933a23df267aed612f4694b32141-00f067aa0ba902b7-01", "x-envoy-internal": "true", "x-envoy-expected-rq-timeout-ms": "5000"} }, message: RateLimitRequest { domain: "default/toystore", descriptors: [RateLimitDescriptor { entries: [Entry { key: "limit.general_user__f5646550", value: "1" }, Entry { key: "metadata.filter_metadata.envoy\\.filters\\.http\\.ext_authz.identity.userid", value: "alice" }], limit: None }], hits_addend: 1 }, extensions: Extensions }"
1.6. Configuring access logs
You can configure Envoy access logs in OpenShift Service Mesh so that you can track a single request across multiple services and components by using a unique identifier.
Prerequisites
- You installed Connectivity Link.
- You have a running OpenShift Container Platform cluster.
- You have administrator access to the OpenShift Container Platform cluster.
Procedure
Enable mesh-wide, default-format access logs by using the Istio Telemetry API. Use the following example as a starting point:
Example Telemetry API config
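A minimal sketch of such a Telemetry API config, assuming Istio's built-in envoy access log provider and a mesh root namespace of istio-system:

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system     # assumed mesh root namespace
spec:
  accessLogging:
  - providers:
    - name: envoy             # built-in Envoy file access log provider
```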
You might also use istio-system as your namespace, depending on your setup.

For better parsing and integration with log aggregation systems, enable JSON-formatted access logs and log only errors, as shown in the following example:
Example JSON config
To enable logging for a specific workload and add filtering, use the following example:
Example JSON workload config
Tip: The expression field uses Common Expression Language (CEL). You can use CEL-based filters to avoid excessive and meaningless logs.

If you are using the Sail Operator, check which Istio resource is active in your cluster by running the following command:

$ kubectl get istio -A

The expected output is a list of your mesh deployments, such as default or prod-mesh, and their current status.

Configure the Istio mesh with a custom access log provider to enable JSON encoding:
Next steps
- Filter your access logs to focus on the errors you need to see.
- Enable request, log, and tracing correlation.
1.6.1. Filtering access logs
You can filter your access logs to reduce extra messages and focus on the issues and errors that are relevant to your use case.
Prerequisites
- You installed Connectivity Link.
- You have a running OpenShift Container Platform cluster.
- You have administrator access to the OpenShift Container Platform cluster.
- You enabled access logs.
Procedure
Configure your Telemetry custom resource (CR) to only log errors by using a CEL filter expression.

Configure your Telemetry custom resource (CR) to only log specific routes by using a CEL filter expression.

Configure your Telemetry custom resource (CR) to exclude health checks by using a CEL filter expression.
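For the first case, a sketch of a Telemetry CR that logs only error responses; the resource name is hypothetical and the CEL expression is an illustrative assumption:

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: error-logs-only       # hypothetical name
  namespace: istio-system     # assumed mesh root namespace
spec:
  accessLogging:
  - providers:
    - name: envoy
    filter:
      expression: response.code >= 400   # log only error responses
```

The same filter.expression field can carry route-matching or health-check-excluding CEL conditions for the other two cases.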
1.6.2. Common access log format variables
You can quickly set up Envoy logs by using the most common format variables so that you get exactly the data you want.
Example configuration snippet

envoyFileAccessLog:
  path: /dev/stdout
  logFormat:
    text: "[%START_TIME%] %REQ(X-REQUEST-ID)% %RESP(HEADER)% %RESPONSE_FLAGS%\n"
| Variable | Description |
|---|---|
| `%START_TIME%` | Request start time |
| `%REQ(HEADER)%` | Request header value, such as `%REQ(X-REQUEST-ID)%` |
| `%RESP(HEADER)%` | Response header value |
| `%PROTOCOL%` | Protocol, such as HTTP/1.1, HTTP/2 |
| `%RESPONSE_CODE%` | HTTP response code |
| `%RESPONSE_FLAGS%` | Response flags indicating issues, such as UH, UF |
| `%BYTES_RECEIVED%` | Bytes received from client |
| `%BYTES_SENT%` | Bytes sent to client |
| `%DURATION%` | Total request duration in milliseconds |
| `%UPSTREAM_HOST%` | Upstream host address |
| `%UPSTREAM_CLUSTER%` | Upstream cluster name |
| `%ROUTE_NAME%` | Route name that matched |
1.7. About using access logs for request correlation
Access logs give you detailed information about each request processed by the gateway, including timing, response codes, and request identifiers. For example, you can correlate requests across gateways, Authorino, Limitador, and backend services.
You can correlate request information with traces and application logs for a variety of uses. Request correlation uses x-request-id headers. These headers are automatically generated by Envoy for each incoming request. For example:
- Access logs show the x-request-id.
- Traces include the x-request-id as a span attribute.
- Use a dashboard to jump from logs to traces and vice versa.
The following fields are the most important access-log fields for request correlation:
- request_id (%REQ(X-REQUEST-ID)%): The unique request identifier generated by Envoy.
- start_time (%START_TIME%): The request start time for time-based correlation.
- route_name (%ROUTE_NAME%): The route that matched the request, which is useful for policy debugging.
1.7.1. Setting up access log and tracing correlation
You can use access logs and tracing together to correlate requests. When you correlate request IDs, you can search for an ID once and see the entire journey from the initial access through to an event that you are investigating.
You can see the exact timing of a request as it entered and left each service. If you have configured user or organization-based IDs, you can also determine who a problem is affecting so that you can prioritize your response.
The following configuration example tells WASM filters to log the x-request-id header value and enables request correlation across Envoy, Authorino, Limitador, and WASM logs.
Prerequisites
- You installed Connectivity Link.
- You have a running OpenShift Container Platform cluster.
- You have administrator access to the OpenShift Container Platform cluster.
- You enabled access logs and tracing.
Procedure
To enable request correlation across Connectivity Link components, configure the httpHeaderIdentifier in the Kuadrant CR.

You can correlate logs across all components by using the x-request-id, as shown in the following examples:

View the following Envoy access log entry:

Correlate the following Authorino log entry with the Envoy access log:
{"level":"info","ts":"2026-01-23T15:45:12.350Z","request_id":"a1b2c3d4-e5f6-7890-abcd-ef1234567890","msg":"auth check succeeded","identity":"alice"}

Correlate the following Limitador log entry with the Envoy and Authorino logs:
Request received: ... "x-request-id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890" ...
When you combine the three logs, the story of this request_id is:

- At 15:45:12, a user named alice requested the users' API, /api/users. You can also see the request_id of a1b2c3d4-e5f6-7890-abcd-ef1234567890.
- The request hit the toystore-route in Envoy.
- Envoy paused the request and checked authentication with Authorino, as shown by the level: "info" entry.
- Authorino verified Alice's identity: auth check succeeded, identity: alice.
- Simultaneously, Limitador noted the request to ensure that Alice did not exceed her allowed limit.
- Finally, Envoy allowed the traffic through, resulting in a 200 response code.