Chapter 1. Connectivity Link observability


You can use the Connectivity Link observability features to observe and monitor your gateways, applications, and APIs on OpenShift Container Platform.

1.2. Configuring observability metrics

The Connectivity Link example dashboards and alerts use metrics exported by Connectivity Link, Gateway API, and OpenShift Container Platform components to provide insight into your gateways, applications, and APIs. This section explains how to configure these metrics and set up these dashboards and alerts on your OpenShift Container Platform cluster.

The example dashboards and alerts for observing Connectivity Link functionality use low-level CPU and network metrics from the user monitoring stack in OpenShift Container Platform and resource state metrics from Gateway API and Connectivity Link resources. The user monitoring stack in OpenShift Container Platform is based on the Prometheus open source project.

Note

You must perform these steps on each OpenShift Container Platform cluster that you want to use Connectivity Link on.

Prerequisites

  • You installed Connectivity Link.
  • You set up monitoring for OpenShift Container Platform user-defined projects.
  • You installed and configured Grafana on your OpenShift Container Platform cluster.
  • You have cloned the Kuadrant Operator GitHub repository.

Procedure

  1. Verify that user workload monitoring is configured correctly in your OpenShift Container Platform cluster as follows:

    $ kubectl get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath='{.data.config\.yaml}'|grep enableUserWorkload

    The expected output is enableUserWorkload: true.
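    If the output is empty or shows enableUserWorkload: false, user workload monitoring can be enabled with a cluster-monitoring-config ConfigMap such as the following sketch:

    ```yaml
    # Enables monitoring of user-defined projects in OpenShift Container Platform.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
    ```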

  2. Install the Connectivity Link, Gateway, and Grafana component metrics and configuration as follows:

    $ kubectl apply -k https://github.com/Kuadrant/kuadrant-operator/config/install/configure/observability?ref=v1.1.0
  3. From the root directory of your Kuadrant Operator repository, configure the OpenShift Container Platform thanos-querier instance as a data source in Grafana as follows:

    $ TOKEN="Bearer $(oc whoami -t)"
    $ HOST="$(kubectl -n openshift-monitoring get route thanos-querier -o jsonpath='https://{.status.ingress[].host}')"
    $ echo "TOKEN=$TOKEN" > config/observability/openshift/grafana/datasource.env
    $ echo "HOST=$HOST" >> config/observability/openshift/grafana/datasource.env
    $ kubectl apply -k config/observability/openshift/grafana
  4. Configure the example Grafana dashboards as follows:

    $ kubectl apply -k https://github.com/Kuadrant/kuadrant-operator/examples/dashboards?ref=v1.1.0

1.3. Enabling observability in Connectivity Link

When user monitoring is enabled, enabling observability for Connectivity Link integrates its components, including any gateways, with the Prometheus Operator installed in your OpenShift Container Platform cluster.

This feature works by creating a set of ServiceMonitors and PodMonitors, which instruct Prometheus to scrape metrics from Connectivity Link and Gateway components. These scraped metrics are then used in the Connectivity Link example dashboards and alerts.

Important

You must perform these steps on each OpenShift Container Platform cluster that you want to use Connectivity Link on.

Prerequisites

  • You configured observability metrics.

Procedure

  1. To enable observability for Kuadrant and any gateways, set enable: true in the observability section in your Kuadrant Custom Resource:

    apiVersion: kuadrant.io/v1beta1
    kind: Kuadrant
    metadata:
      name: kuadrant-sample
    spec:
      observability:
        enable: true

    When enabled, Connectivity Link creates ServiceMonitors and PodMonitors for its components in the Connectivity Link Operator namespace. A single set of monitors is also created in each Gateway namespace to scrape metrics from any gateways, and in the corresponding Gateway system namespace, for example, the istio-system namespace.

Verification

  1. Check the created monitors by running the following command:

    $ kubectl get servicemonitor,podmonitor -A -l kuadrant.io/observability=true
    Note

    You can delete and re-create monitors as required. Monitors are only ever created or deleted, and not updated or reverted. If you decide the default monitors are not suitable, you can set enable: false and create your own ServiceMonitor or PodMonitor definitions, or configure Prometheus directly.
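    If you disable the default monitors, a custom ServiceMonitor might look like the following minimal sketch. The my-gateway-metrics name, namespace, port name, and label selector are hypothetical and must match the metrics Service of your own gateway:

    ```yaml
    # Minimal sketch of a custom ServiceMonitor; adjust names, namespace,
    # and selector to match your gateway metrics Service.
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: my-gateway-metrics      # hypothetical name
      namespace: my-gateway-ns      # replace with your Gateway namespace
    spec:
      selector:
        matchLabels:
          app: my-gateway           # hypothetical label on the metrics Service
      endpoints:
      - port: metrics               # name of the Service port exposing metrics
        path: /metrics
    ```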

1.4. Configuring Grafana dashboards and alerts

Connectivity Link provides ready-to-use example dashboards and alerts as starting points for monitoring your Connectivity Link deployment, which you can customize to fit your environment.

The Connectivity Link example dashboards are uploaded to the Grafana dashboards website. You can import the following dashboards into your Grafana deployment on OpenShift Container Platform:

Table 1.1. Connectivity Link example dashboards in Grafana

Name                         Dashboard ID
App Developer Dashboard      21538
Platform Engineer Dashboard  20982
Business User Dashboard      20981

Important

You must perform these steps on each OpenShift Container Platform cluster that you want to use Connectivity Link on.

Prerequisites

  • You configured observability metrics.

Procedure

  1. Import dashboards in Grafana:

    1. Click Dashboards > New > Import, and use one of the following options:

      • Enter the dashboard ID from Table 1.1 to import the dashboard from grafana.com.
      • Upload or paste the dashboard JSON.

  2. Import dashboards automatically in OpenShift Container Platform:

    1. Automate dashboard provisioning in Grafana by adding the dashboard JSON files to a ConfigMap, which must be mounted at /etc/grafana/provisioning/dashboards.

      Tip

      Alternatively, to avoid adding ConfigMap volume mounts in your Grafana deployment, you can use a GrafanaDashboard resource to reference a ConfigMap. For an example, see Dashboard from ConfigMap in the Grafana documentation.

      Data sources are configured as template variables, automatically integrating with your existing data sources. The metrics for these dashboards are sourced from Prometheus.

      Note

      For some example dashboard panels to work correctly, HTTPRoutes in Connectivity Link must include a service and deployment label with a value that matches the name of the service and deployment being routed to, for example, service=my-app and deployment=my-app. This allows low-level Istio and Envoy metrics to be joined with Gateway API state metrics.
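      For example, an HTTPRoute that routes to a my-app Service might carry matching labels as in the following sketch, where my-app and the external gateway reference are placeholder names:

      ```yaml
      # Sketch of an HTTPRoute with labels matching its backend Service and
      # Deployment names, so dashboard panels can join metrics correctly.
      apiVersion: gateway.networking.k8s.io/v1
      kind: HTTPRoute
      metadata:
        name: my-app                # hypothetical route name
        labels:
          service: my-app           # matches the name of the routed Service
          deployment: my-app        # matches the name of the backing Deployment
      spec:
        parentRefs:
        - name: external            # hypothetical Gateway name
          namespace: gateway-system
        rules:
        - backendRefs:
          - name: my-app
            port: 8080
      ```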

  3. Configure Prometheus alerts:

    1. Integrate the Kuadrant example alerts into Prometheus as PrometheusRule resources, and then adjust the alert thresholds to suit your specific operational needs.
    2. Service Level Objective (SLO) alerts generated by using the Sloth GitHub project are also included. You can use these alerts with the SLO Grafana dashboard, which uses the generated labels to provide a comprehensive overview of your SLOs.
    3. For details on how to configure Prometheus alerts, see the OpenShift Container Platform documentation on managing alerting rules.
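    As a minimal sketch, a PrometheusRule alert on gateway error rate might look as follows. The rule name, namespace, threshold, and the use of the Istio istio_requests_total metric are illustrative assumptions, not part of the shipped examples:

    ```yaml
    # Hypothetical alert: fire when more than 5% of gateway requests
    # return 5xx responses over a 5-minute window.
    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: gateway-error-rate      # hypothetical name
      namespace: gateway-system     # replace with your Gateway namespace
    spec:
      groups:
      - name: gateway-availability
        rules:
        - alert: GatewayHighErrorRate
          expr: |
            sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
              / sum(rate(istio_requests_total[5m])) > 0.05
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: More than 5% of gateway requests are failing
    ```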

1.5. Configuring tracing in Connectivity Link

You can enable tracing in OpenShift Service Mesh and in the Authorino and Limitador components of Connectivity Link by directing traces to a central collector, which improves observability and troubleshooting.

Important

You must perform these steps on each OpenShift Container Platform cluster that you want to use Connectivity Link on.

Prerequisites

  • You configured Grafana dashboards.
  • You have a trace collector such as Tempo or Jaeger installed and configured to support OpenTelemetry.

Procedure

  1. Enable tracing in OpenShift Service Mesh by configuring your Telemetry custom resource as follows:

    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: gateway-system
    spec:
      tracing:
      - providers:
        - name: tempo-otlp
        randomSamplingPercentage: 100
  2. Configure a tracing extensionProvider for OpenShift Service Mesh in your Istio custom resource as follows:

    apiVersion: operator.istio.io/v1alpha1
    kind: Istio
    metadata:
      name: default
    spec:
      namespace: gateway-system
      values:
        meshConfig:
          defaultConfig:
            tracing: {}
          enableTracing: true
          extensionProviders:
          - name: tempo-otlp
            opentelemetry:
              port: 4317
              service: tempo.tempo.svc.cluster.local
    Important

    You must set the OpenTelemetry collector protocol in the service port name or appProtocol fields as described in the OpenShift Service Mesh documentation. For example, when using gRPC, the port name should begin with grpc- or the appProtocol should be grpc.
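    For example, a collector Service satisfying this requirement might declare its OTLP gRPC port as in the following sketch. The tempo Service name and namespace match the extensionProviders example above; the pod selector label is an assumption:

    ```yaml
    # Sketch of a collector Service whose port name (grpc- prefix) or
    # appProtocol field declares the gRPC protocol for OTLP traffic.
    apiVersion: v1
    kind: Service
    metadata:
      name: tempo
      namespace: tempo
    spec:
      selector:
        app: tempo          # hypothetical pod label
      ports:
      - name: grpc-otlp     # the grpc- prefix signals the protocol
        appProtocol: grpc   # alternatively, declare the protocol here
        port: 4317
        targetPort: 4317
    ```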

  3. Enable request tracing in your Authorino Custom Resource (CR) and send authentication and authorization traces to the central collector as follows:

    apiVersion: operator.authorino.kuadrant.io/v1beta1
    kind: Authorino
    metadata:
      name: authorino
    spec:
      tracing:
        endpoint: rpc://tempo.tempo.svc.cluster.local:4317
  4. Enable request tracing in your Limitador CR and send rate limit traces to the central collector as follows:

    apiVersion: limitador.kuadrant.io/v1alpha1
    kind: Limitador
    metadata:
      name: limitador
    spec:
      tracing:
        endpoint: rpc://tempo.tempo.svc.cluster.local:4317
    Tip

    Ensure that the tracing collector is the same one that OpenShift Service Mesh is sending traces to so that they can all be correlated later.

    When the changes are applied, the Authorino and Limitador components are redeployed with tracing enabled.

    Important

    Trace IDs are currently not propagated to WASM modules in OpenShift Service Mesh. This means that requests passed to Limitador do not have the relevant parent trace ID. However, if the trace initiation point is outside Service Mesh, the parent trace ID is available to Limitador and is included in traces. This affects the correlation of traces from Limitador with traces from Authorino, the Gateway, and any other components in the request path.

1.6. Troubleshooting by using traces and logs

You can use a tracing user interface such as Jaeger or Grafana to search for OpenShift Service Mesh and Red Hat Connectivity Link trace information by trace ID. You can get the trace ID from logs, or from a header in a sample request that you want to troubleshoot. You can also search for recent traces, filtering by the service that you want to focus on.

1.6.1. Viewing Red Hat Connectivity Link traces

The following example trace in the Grafana user interface shows the total request time from the Istio-based Gateway, the time to check and update the rate limit count in Limitador, and the time to check authentication and authorization in Authorino:

Figure 1.4. Example Connectivity Link trace in Grafana

Connectivity Link tracing in Grafana

1.6.2. Viewing rate limit logging with trace IDs

When using the Limitador component of Red Hat Connectivity Link for rate limiting, you can enable request logging with trace IDs to get more information on requests. This requires the log level to be increased to at least debug, so you must set the verbosity to 3 or higher in your Limitador custom resource as follows:

apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador
spec:
  verbosity: 3

Example log entry with the traceparent field holding the trace ID:

"Request received: Request { metadata: MetadataMap { headers: {"te": "trailers", "grpc-timeout": "5000m", "content-type": "application/grpc", "traceparent": "00-4a2a933a23df267aed612f4694b32141-00f067aa0ba902b7-01", "x-envoy-internal": "true", "x-envoy-expected-rq-timeout-ms": "5000"} }, message: RateLimitRequest { domain: "default/toystore", descriptors: [RateLimitDescriptor { entries: [Entry { key: "limit.general_user__f5646550", value: "1" }, Entry { key: "metadata.filter_metadata.envoy\\.filters\\.http\\.ext_authz.identity.userid", value: "alice" }], limit: None }], hits_addend: 1 }, extensions: Extensions }"
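To search for this request in a tracing user interface, extract the 32-character trace ID, which is the second dash-separated field of the traceparent value, for example:

```shell
# W3C traceparent value copied from the log entry above:
# <version>-<trace-id>-<span-id>-<flags>
traceparent="00-4a2a933a23df267aed612f4694b32141-00f067aa0ba902b7-01"

# The trace ID is the second dash-separated field.
trace_id="$(echo "$traceparent" | cut -d- -f2)"
echo "$trace_id"    # 4a2a933a23df267aed612f4694b32141
```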

If you centrally aggregate logs by using tools such as Grafana Loki and Promtail, you can jump between trace information and the relevant logs for that service.

By using a combination of tracing and logs, you can visualize and troubleshoot request timing issues and drill down to specific services. This method becomes even more powerful when combined with Connectivity Link metrics and dashboards to get a more complete picture of your user traffic.
