Introduction to Connectivity Link
Multicloud application connectivity and API management
Abstract
Preface
Providing feedback on Red Hat documentation
Red Hat appreciates your feedback on product documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to help the documentation team address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following link: Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
- The URL of the page where you found the issue.
- A detailed description of the issue. You can leave the information in other fields at their default values.
- In the Reporter field, enter your Jira user name.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. Red Hat Connectivity Link overview
Red Hat Connectivity Link is a modular and flexible solution for application connectivity, policy management, and API management in multicloud and hybrid cloud environments. You can use Connectivity Link to secure, protect, connect, and observe your APIs, applications, and infrastructure.
Connectivity Link is based on the Kuadrant community project, and is targeted at the specific user roles of platform engineer, application developer, and business user.
1.1. Red Hat Connectivity Link architecture
The following diagram shows a high-level overview of the Connectivity Link architecture and its main features and technologies:
Figure 1.1. Connectivity Link architecture overview

Connectivity Link provides a control plane for configuring and deploying ingress Gateways based on the Kubernetes Gateway API standard. This control plane provides Kubernetes-native APIs for platform engineers to configure Gateways with TLS policies for certificate management, authentication and authorization policies, rate limiting policies, along with DNS policies for multicluster load balancing, health checks, and remediation.
Connectivity Link also provides data plane policies for application developers to secure and protect applications and APIs with authentication, authorization, and rate limiting. In addition, Connectivity Link provides templates for observability dashboards, metrics, tracing, and alerts for all user roles.
Connectivity Link supports OpenShift Service Mesh 3.0 as the Gateway API provider, which is based on the Istio community project.
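As a sketch of what the control plane configures, a platform engineer might define an ingress Gateway by using the standard Gateway API resource. The namespace, gatewayClassName, hostname, and Secret name below are illustrative assumptions, not fixed values:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external                  # Referenced later by policy targetRef fields
  namespace: ingress-gateway      # Hypothetical namespace
spec:
  gatewayClassName: istio         # Assumes OpenShift Service Mesh (Istio) as the provider
  listeners:
    - name: api
      hostname: "*.example.com"   # Placeholder wildcard host
      port: 443
      protocol: HTTPS
      tls:
        mode: Terminate
        certificateRefs:
          - name: api-cert        # TLS Secret, which a TLS policy can populate
```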
Chapter 2. Connectivity Link features
Connectivity Link includes the following main features to secure, protect, connect, and observe your cloud applications and APIs:
- Multicloud application connectivity
DNS provider integrations:
- Amazon Route 53
- Google Cloud DNS
- Microsoft Azure DNS
- CoreDNS (Developer Preview only)
- High availability and disaster recovery
- Global load balancing
- Application portability
- Application connectivity configuration
- Endpoint health and status checks
- Automatic TLS certificate generation
- Universal authentication
- Kubernetes ingress policy management
- Global DNS policy
- TLS policy
- Auth policy
- Rate limiting policy
- Traffic weighting and distribution
- User role-based design
- Multicluster administration
- Observability dashboards and alerts
- OpenShift web console dynamic plug-in
- Composable API management
- API security and governance
- Advanced API metrics collection
- API-level policies for authentication, authorization, and rate limiting
- Flexible integration with open source tooling
Chapter 3. Connectivity Link technologies and patterns
The main technologies and patterns provided by Connectivity Link include the following:
- Gateway API
Gateways play an essential role in application connectivity and security. In Kubernetes-based environments, Gateway API is the new standard for deploying ingress Gateways and managing application networking.
Gateway API provides standardized APIs for ingress traffic management and supports multiple protocols. Gateway API is role-oriented by design, and provides configuration flexibility and portability. You can use Gateway API to set up ingress policies that are identical and consistent on each OpenShift cluster, with minimal effort.
Figure 3.1. Gateway API user persona-based design
Typically, the infrastructure owner is responsible for the infrastructure that hosts multiple clusters, for example, based on a cloud provider such as Amazon Web Services or Google Cloud Platform.
The platform engineer is responsible for managing the clusters to meet user requirements, for example, managing Gateways, policies, network access, and application permissions. The application developer is responsible for creating and managing the applications running in a cluster, for example, managing application authentication, rate limits, timeouts, and routing to backend services.
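To illustrate this division of responsibility, an application developer might attach a route to a platform engineer's Gateway by using a standard HTTPRoute resource. The route, Gateway, namespace, hostname, and Service names below are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: toystore                  # Hypothetical application route
spec:
  parentRefs:
    - name: external              # Gateway managed by the platform engineer
      namespace: ingress-gateway  # Hypothetical Gateway namespace
  hostnames:
    - api.example.com             # Placeholder hostname
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /toys
      backendRefs:
        - name: toystore          # Backend Service managed by the developer
          port: 8080
```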
- Policy-based configuration
By using Connectivity Link policies defined as Kubernetes custom resource definitions (CRDs), platform engineers and application developers can easily secure, protect, and connect their applications and infrastructure. Connectivity Link provides policies for managing TLS, authentication and authorization, rate limiting, and DNS.
The policy attachment pattern provides a way to add behavior to a Kubernetes object by using configuration that cannot be described in the object's spec field. Policy attachments also provide the concept of defaults and overrides, which allows different roles to operate with policy APIs at different levels of the object hierarchy. These policies are then merged according to specific rules and strategies to form an effective policy.
The following simple example of a rate limiting policy configures a limit of 5 requests per 10 seconds for every listener defined in the target Gateway that does not have its own rate limiting policy defined:
Simple rate limiting policy example
```yaml
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: gw-rlp
spec:
  targetRef: # Specifies Gateway API policy attachment
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external
  defaults: # Means it can be overridden
    limits: # Limitador component configuration
      "global":
        rates:
          - limit: 5
            window: 10s
```
- WebAssembly plug-in
Unlike other connectivity management systems, Connectivity Link is not a standalone Gateway. Connectivity Link is a WebAssembly (WASM) plug-in, which is developed for the Envoy proxy. This means that users of OpenShift Service Mesh, Istio, or Envoy for ingress do not require major changes to their existing ingress objects and policies to begin using Connectivity Link.
The WebAssembly plug-in design also means that Connectivity Link is lightweight, fast, hardware independent, non-intrusive, and secure.
- Multicluster configuration mirroring
Connectivity Link uses multicluster configuration mirroring across multicloud and hybrid cloud environments to ensure that you can deploy your routing, configuration, and policies wherever they are required. You are no longer required to set different policies in different ways based on the cloud service provider. Instead, you can configure and deploy your policies in a consistent way with Connectivity Link.
You can also ensure that your development, test, and production environments are set in the same way to prevent surprises later. In this way, Connectivity Link provides consistency, simplicity, unified experience, global administration, and security compliance.
Figure 3.2. Multicluster configuration mirroring across multicloud and hybrid cloud environments
- API connectivity and API management
Connectivity Link provides a next-generation approach to API management that extends beyond traditional API management capabilities provided by other products.
API management requires connectivity, and Connectivity Link provides scalable multicluster and multi-Gateway connectivity management, along with API management features such as API observability, authentication, and rate limiting.
Figure 3.3. Connectivity Link API management and connectivity
Chapter 4. Connectivity Link policy APIs and observability
This section describes the Connectivity Link core policy APIs and observability features that you can use to secure, protect, connect, and observe your cloud applications and APIs.
4.1. Connectivity Link policy APIs
- Secure your applications with TLSPolicy
- Lightweight wrapper API to manage TLS for targeted Gateways.
- Automatically provision TLS certificates based on the Gateway listener hosts by using integration with cert-manager and ACME providers such as Let’s Encrypt.
- Configure secrets so that the Gateway automatically retrieves them when ready.
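A minimal TLSPolicy might look like the following sketch. The API version shown and the issuer name are assumptions and may differ in your Connectivity Link release; the policy targets a hypothetical external Gateway and delegates certificate issuance to a cert-manager ClusterIssuer:

```yaml
apiVersion: kuadrant.io/v1alpha1  # Check the API version shipped with your release
kind: TLSPolicy
metadata:
  name: gw-tls
spec:
  targetRef:                      # Gateway API policy attachment
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external                # Hypothetical Gateway name
  issuerRef:                      # cert-manager issuer, for example Let's Encrypt
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt             # Hypothetical issuer name
```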
- Protect your applications with AuthPolicy
- Apply authentication and authorization across all or specific listeners in a Gateway, or at the HTTPRoute or HTTPRouteRule level.
- Use the hierarchical and role-based concept of defaults and overrides to improve collaboration and ensure compliance.
- Leverage dedicated authentication providers such as Red Hat build of Keycloak.
- Apply fine-grained authorization requirements based on request and metadata attributes.
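For example, the following AuthPolicy sketch protects a hypothetical HTTPRoute with API key authentication. The route name, label selector, and header prefix are illustrative assumptions:

```yaml
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: toystore-auth
spec:
  targetRef:                      # Gateway API policy attachment
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore                # Hypothetical route name
  rules:
    authentication:
      "api-key-users":
        apiKey:
          selector:               # Matches the Secrets that hold the API keys
            matchLabels:
              app: toystore
        credentials:
          authorizationHeader:
            prefix: APIKEY        # Clients send "Authorization: APIKEY <key>"
```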
- Protect your applications with RateLimitPolicy
- Apply rate limiting rules across all listeners in a Gateway or at the HTTPRoute or HTTPRouteRule level.
- Use the role-based and hierarchical concept of defaults and overrides to improve collaboration and ensure compliance.
- Configure limits conditionally based on metadata and request data.
- Share counters by using a backend store in multicluster environments.
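Complementing the Gateway-level defaults shown earlier, an application developer might apply a route-level RateLimitPolicy with a per-user counter. The route name and counter expression below are illustrative, and the counter syntax may differ by release:

```yaml
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: toystore-rlp
spec:
  targetRef:                      # Gateway API policy attachment
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore                # Hypothetical route name
  limits:
    "per-user":
      rates:
        - limit: 10
          window: 60s
      counters:                   # One counter per authenticated user
        - expression: auth.identity.userid
```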
- Connect your applications with DNSPolicy
- Standard API that is not based on custom annotations.
- Automatically populate DNS records based on listener hosts and addresses expressed by Gateway API resources.
- Configure multicluster connectivity and routing options such as geographic and weighted responses.
- Leverage common cloud DNS providers: Amazon Route 53, Microsoft Azure DNS, Google Cloud DNS, or CoreDNS.
- Configure health checks to enable DNS failover.
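Putting these options together, a DNSPolicy sketch might resemble the following. The provider Secret name, weight, geo code, and health check values are assumptions for illustration:

```yaml
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: gw-dns
spec:
  targetRef:                      # Gateway API policy attachment
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external                # Hypothetical Gateway name
  providerRefs:
    - name: aws-credentials       # Secret with DNS provider credentials, e.g. Route 53
  loadBalancing:
    weight: 120                   # Weighted responses across clusters
    geo: EU                       # Provider-specific geographic routing code
    defaultGeo: true
  healthCheck:
    path: /health                 # Endpoint probed to enable DNS failover
    failureThreshold: 3
```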
4.2. Connectivity Link observability
Connectivity Link uses Kuadrant-maintained Gateway API state metrics, metrics exposed by Connectivity Link components, and standard metrics exposed by Envoy to build a set of example template alerts and dashboards. You can download and use these Kuadrant community templates to integrate with Grafana, Prometheus, and Alertmanager deployments, or use them as starting points to modify for your specific needs.
Figure 4.1. Platform engineer Grafana dashboard

The platform engineer dashboard displays details such as the following:
- Policy compliance and governance.
- Resource consumption.
- Error rates.
- Request latency and throughput.
- Multi-window, multi-burn alert templates for API error rates and latency.
- Multicluster split.
Figure 4.2. Application developer Grafana dashboard

The application developer dashboard is less focused on policies than the platform engineer dashboard and is more focused on APIs and applications. For example, this includes details such as request latency and throughput per API, and total requests and error rates by API path.
Figure 4.3. Business user Grafana dashboard

The business user dashboard includes details such as the following:
- Requests per second per API.
- Increase or decrease in rates of API usage over specified times.
Chapter 5. Connectivity Link benefits
Connectivity Link provides the following main business benefits:
- User-role oriented
Gateway API is composed of API resources that correspond to the organizational roles of infrastructure owner, cluster operator, and application developer. Infrastructure owners and cluster operators are platform engineers who define how shared infrastructure can be used by many different non-coordinating application development teams.
Application developers are responsible for creating and managing applications running in a cluster. For example, this includes creating APIs and managing application timeouts, request matching, and path routing to backends.
- Kubernetes-native
- Connectivity Link is designed to use Kubernetes-native features for resource efficiency and optimal use. These features can run on any public or private OpenShift cluster, offering multicloud and hybrid-cloud behavior by default. OpenShift is proven to be scalable, resilient, and highly available.
- Expressive configuration
- Gateway API resources provide built-in capabilities for header-based matching, traffic weighting, and other features that are currently possible in existing ingress standards only through custom annotations and custom code. This allows for more intelligent routing, security, and isolation of specific routes without writing custom code.
- Portability
- Gateway API is an open source standard with many implementations, which is designed by using the concept of flexible conformance. This promotes a highly portable core API that still has the flexibility and extensibility to support native capabilities of the environment and implementation. This enables the concepts and core resources to be consistent across implementations and environments, reducing complexity and increasing familiarity.
- Hybrid cloud and multicloud
Connectivity Link includes the flexibility to deploy the same application to any OpenShift cluster hosted on a public or private cloud. This removes a singular dependency or a single point of failure by being tied to a specific cloud provider.
For example, if one cloud provider is having network issues, you can switch your deployment and traffic to another cloud provider to minimize the impact on your customers. This provides high availability and disaster recovery, ensures that you are prepared for the unexpected, and helps your platforms and applications remain resilient with uninterrupted service.
- Infrastructure as code
- You can define your infrastructure by using code to ensure that it is version controlled, tested, and easily replicated. Automated scaling leverages OpenShift auto-scaling features to dynamically adjust resources based on workload demand. This also includes the ability to implement robust monitoring and logging solutions to gain full visibility into your OpenShift clusters.
- Modular and flexible
The highly flexible and modular Connectivity Link architecture enables you to use the technologies and tools that you already have in place, while also allowing you to plug into the connectivity management platform for maximum effectiveness. This includes technologies and tools such as the following:
- Cloud service providers: Amazon Web Services, Google Cloud Platform, Microsoft Azure
- DNS providers: Amazon Route 53, Google Cloud DNS, Microsoft Azure DNS, CoreDNS
- Gateway API controllers: OpenShift, OpenShift Service Mesh
- Metrics and alerts: Prometheus, Thanos, Kiali
- Dashboards: Grafana, Red Hat Developer Hub
- GitOps and automation: Red Hat Ansible Automation Platform, OpenShift GitOps, GitHub
- Additional integrations: Red Hat build of Keycloak, Red Hat Service Interconnect
Chapter 6. Connectivity Link user workflows
Connectivity Link includes the following main user persona roles:
- Platform engineer
- Application developer
- Business user
6.1. Platform engineer workflow
Platform engineers use Connectivity Link to set up ingress Gateways on OpenShift clusters in specific regions. They then ensure that all policies are configured identically on all Gateways for consistency.
For example, platform engineers configure DNS policies to ensure that customers in Brazil are routed to the South American data center, and that other customers around the world are routed to the appropriate environment. They also configure TLS, authentication and authorization, and rate limiting policies to ensure that Gateway security, performance, and monitoring conform to the correct standards.
The following diagrams show a high-level overview of the Connectivity Link platform engineer workflow:
Figure 6.1. Connectivity Link platform engineer sets up Gateways

As a platform engineer, you start by creating one or more Gateways, if they do not already exist.
Figure 6.2. Connectivity Link platform engineer configures Gateway policies

You can connect Gateways by creating a DNS policy and configuring a global load balancing strategy. DNS records are reconciled with your cloud DNS provider automatically, whether in a single-cluster or multicluster environment.
You can secure Gateways by using a TLS policy that automatically generates certificate requests for the hostnames specified in your Gateway. This includes support for the main ACME providers such as Let’s Encrypt. You can also set up application security defaults and overrides by using authentication and authorization policies and rate limiting policies.
In addition, you can observe your connectivity and runtime metrics by using Grafana-based dashboards and alerts. For example, this includes metrics for policy compliance and governance, resource consumption, error rates, request latency and throughput, multicluster split, and so on.
6.2. Application developer workflow
Application developers use Connectivity Link to deploy applications and APIs on OpenShift clusters and Gateways that have already been set up by platform engineers. Application developers ensure that applications and APIs are protected by the required authentication and authorization, and configure rate limits on API requests. They also set up application routes and API definitions and publish them to the cluster.
Application developers use Grafana dashboards to view API metrics, such as uptime, requests per second, latency, and errors per minute, to ensure that APIs meet performance and availability benchmarks achieved by other data centers. The following diagram shows a high-level overview of the Connectivity Link application developer workflow:
Figure 6.3. Connectivity Link application developer configures policies for applications and APIs

As an application developer, you can route to applications from the Gateway and configure protection for your services with route-level authentication, external authorization, and rate limiting. You can also monitor workloads and the status of OpenShift resources by using Grafana-based observability dashboards, metrics, tracing, and alerts.
6.3. Business user workflow
Business users such as account managers and application owners use Grafana-based observability dashboards to monitor the status of applications and APIs in data centers in specific regions, and work with customers on specific performance metrics. They view API metrics, such as uptime, requests per second, latency, and errors per minute, to ensure that APIs meet the performance and availability benchmarks that are expected by customers.
Business users also communicate with engineering teams if customers experience any issues that can be resolved by platform engineers or application developers.
Appendix A. Using your Red Hat subscription
Red Hat Connectivity Link is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Managing your subscriptions
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
- In the menu bar, click Subscriptions to view and manage your subscriptions.
Revised on 2025-05-26 14:00:57 UTC