Chapter 1. Introduction to Connectivity Link
Red Hat Connectivity Link is a control plane for configuring the Gateway API data plane in OpenShift Container Platform clusters. You can use it to apply authentication, rate limiting, and DNS policies to gateway resources.
1.1. About Red Hat Connectivity Link
You can use Connectivity Link to connect, secure, observe, and protect all of your service endpoints, in multicloud and hybrid cloud environments. Red Hat Connectivity Link is a single control plane that you use to apply policies to Gateway API resources in OpenShift Container Platform clusters. Gateway API is structured to meet the different needs of organizational teams.
First, configure and deploy ingress gateways with the role-oriented resources and components of the Kubernetes Gateway API. Then, use Connectivity Link to attach policies to Gateway API resources. Attaching policies means that you can avoid embedding networking code in your applications, and use an infrastructure-as-code approach instead.
Example policy types:
Configure gateways with TLS policies for:
- Certificate management
- Authentication
- Authorization
- Rate limiting
Integrate DNS policies for:
- Multicluster load balancing
- Health checks
- Remediation
In addition, you can use the following included templates to observe what is happening in your environment:
- Observability dashboards
- Observability metrics
- Tracing
- Alerts
1.1.1. Red Hat Connectivity Link architecture
The following diagram shows a high-level overview of the Connectivity Link architecture with its main features and technologies:
Figure 1.1. Connectivity Link architecture
Connectivity Link supports OpenShift Service Mesh 3.2 as the Gateway API provider.
1.2. Connectivity Link benefits
Connectivity Link provides the following main benefits:
- Kubernetes-native
- Connectivity Link uses Kubernetes-native features for efficient resource usage. These features can run on any public or private OpenShift Container Platform cluster, offering multicloud and hybrid cloud behavior by default.
- Hybrid cloud and multicloud
Connectivity Link includes the flexibility to deploy the same application to any OpenShift Container Platform cluster hosted on a public or private cloud. This removes the dependency on a specific cloud provider.
For example, if one cloud provider is having network issues, you can switch your deployment and traffic to another cloud provider to minimize the impact on your customers. This ability provides high availability and disaster recovery. It also means that your platforms and applications can remain resilient, because you are prepared for the unexpected and can maintain uninterrupted service.
- Use infrastructure as code
- You can define your infrastructure by using code to ensure that it is version controlled, tested, and easily replicated. OpenShift Container Platform auto-scaling features dynamically adjust resources based on workload demand. You can also use Connectivity Link to implement robust monitoring and logging solutions to gain full visibility into your OpenShift Container Platform clusters.
- Use the tools you have
You can use the technologies and tools that you already have in place with Connectivity Link. For example, the following are services and tools you can use:
- Cloud service providers: Amazon Web Services, Google Cloud, Microsoft Azure
- DNS providers: Amazon Route 53, Google Cloud DNS, Microsoft Azure DNS, CoreDNS
- Gateway API controllers: OpenShift Container Platform, Red Hat OpenShift Service Mesh
- Metrics and alerts: Prometheus, Thanos, Kiali
- Dashboards: Grafana, Red Hat Developer Hub
- GitOps and automation: Red Hat Ansible Automation Platform, OpenShift Container Platform GitOps, GitHub
- Additional integrations: Red Hat build of Keycloak, Red Hat Service Interconnect
- Gateway API
Using Gateway API to set up ingress policies on each OpenShift Container Platform cluster means that ingress configuration can be identical across clusters and implemented simultaneously.
In Kubernetes-based environments, Gateway API is the standard for deploying ingress gateways and managing application networking. Gateway API provides standardized APIs for ingress traffic management and support for many protocols.
- Observability
- Connectivity Link uses Kuadrant-maintained Gateway API state metrics, metrics exposed by Connectivity Link components, and standard metrics exposed by Envoy to build a set of example template alerts and dashboards. You can download and use these Kuadrant community templates to integrate with Grafana, Prometheus, and Alertmanager deployments, or use them as starting points that you can tailor for your specific needs.
1.3. Connectivity Link features
Connectivity Link includes the following features:
- Multicloud application connectivity
- DNS provider integrations
- High availability and disaster recovery
- Global load balancing
- Application portability
- Application connectivity configuration
- Endpoint health and status checks
- Automatic TLS certificate generation
- Universal authentication
- Kubernetes ingress policy management
- Global DNS policy
- TLS policy
- Auth policy
- Rate-limiting policy
- Token rate-limiting policy
- Traffic weighting and distribution
- User-role-based design
- Multicluster administration
- Observability dashboards and alerts
- OpenShift Container Platform web console dynamic plugin
- Composable API management
- API security and governance
- Advanced API metrics collection
- API-level policies for authentication, authorization, and rate limiting
- Flexible integration with open source tools
1.4. Connectivity Link user workflows
Similar to Gateway API, Connectivity Link is designed with specific user roles in mind, such as:
- Platform or infrastructure engineer
- Application developer
- Business user
Each persona has a different way to work with Connectivity Link, depending on how that person is interacting with your OpenShift Container Platform clusters.
1.4.1. Platform engineer workflow
As a platform engineer or infrastructure provider, you can use Connectivity Link to set up ingress gateways on OpenShift Container Platform clusters in specific regions. You can ensure that all policies are configured identically on all gateways for consistency.
For example, configure DNS policies to ensure that customers in Brazil are routed to the South American data center, and that other customers around the world are routed to the appropriate environment. You can also configure TLS, authentication and authorization, and rate-limiting policies to ensure that gateway security, performance, and monitoring all conform to your standards.
The following diagrams show a high-level overview of the Connectivity Link platform engineer workflow:
Figure 1.2. Connectivity Link platform engineer sets up gateways
As a platform engineer, you must start by creating at least one gateway. If gateways exist, you can move on to configuring policies for your gateways.
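For example, a minimal gateway resource can be sketched as follows. This sketch assumes an `istio` gateway class provided by OpenShift Service Mesh; the names and hostname are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external                # example name; choose your own
  namespace: ingress-gateway
spec:
  gatewayClassName: istio       # assumes OpenShift Service Mesh as the Gateway API provider
  listeners:
  - name: https
    hostname: "*.example.com"   # placeholder domain
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: example-com-tls   # Secret created manually or by a TLS policy
```

After the gateway is programmed, policies can target it by name to configure DNS, TLS, authentication, and rate limiting.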
Figure 1.3. Connectivity Link platform engineer configures gateway policies
- Connect gateways
- You can connect gateways by creating a DNS policy and configuring a global load balancing strategy. DNS records are reconciled with your cloud DNS provider automatically, whether in a single-cluster or multicluster environment.
- Secure gateways
- You can secure gateways by using a TLS policy that automatically generates certificate requests for the hostnames specified in your gateway. This includes support for the main ACME providers such as Let’s Encrypt. You can also set up application security defaults and overrides by using authentication and authorization policies and rate-limiting policies.
- Observe gateways
- In addition, you can observe your connectivity and runtime metrics by using Grafana-based dashboards and alerts. For example, this includes metrics for policy compliance and governance, resource consumption, error rates, request latency and throughput, multicluster split, and so on.
1.4.2. Application developer workflow
As an application developer, you can use Connectivity Link to deploy applications and APIs on OpenShift Container Platform clusters and gateways that are set up by platform engineers.
- Protect applications
- You can route to applications from the gateway, and configure protection for your services with route-level authentication, external authorization, and rate limiting.
- Create application routes and definitions
- Set up application routes and API definitions and publish them to the cluster.
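As an illustration, a route that publishes an API behind an existing gateway can be sketched as follows; the gateway, hostname, and backend names are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: toystore               # example application route
spec:
  parentRefs:
  - name: external             # gateway created by the platform engineer
    namespace: ingress-gateway
  hostnames:
  - api.example.com            # placeholder hostname
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /toys
    backendRefs:
    - name: toystore           # backing Service for the API
      port: 8080
```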
- Observe application and API performance
- You can monitor workloads, OpenShift Container Platform resource metrics, and tracing by using Grafana-based observability dashboards and alerts. View API metrics, such as uptime, requests per second, latency, and errors per minute, to ensure that APIs meet performance and availability benchmarks achieved by other data centers.
The following diagram shows a high-level overview of the Connectivity Link application developer workflow:
Figure 1.4. Connectivity Link application developer configures policies for applications and APIs
1.4.3. Business user workflow
Business users, such as account managers and application owners, can use the data from Connectivity Link to work with customers.
You can use Grafana-based observability dashboards to monitor the status of applications and APIs in data centers in specific regions, and work with customers on specific performance metrics.
Specifically, you can view API metrics, such as uptime, requests per second, latency, and errors per minute, to ensure that APIs meet the performance and availability benchmarks that your customers require.
1.5. Using Connectivity Link technologies and patterns
You can use the following technologies and patterns with Connectivity Link:
- Policy-based configuration
You can use the Connectivity Link policy attachment pattern to add behavior to a Kubernetes object by using configuration that cannot be described in the object's `spec` field.
With policy attachments comes the concept of defaults and overrides. These defaults and overrides mean that you can configure different roles to operate with policy APIs at different levels of the object hierarchy. These policies are then merged with specific rules and strategies to form an effective policy that can be used across your organization.
- WebAssembly plugin
- As a WebAssembly (WASM) plugin developed for the Envoy proxy, Connectivity Link is lightweight, hardware independent, non-intrusive, and secure. This means that clusters that are using OpenShift Service Mesh, Istio, or Envoy for ingress do not require major changes to their existing ingress objects and policies to begin using Connectivity Link.
- Multicluster configuration mirroring
You can configure and deploy your policies across different cloud service providers in a consistent way with Connectivity Link. Connectivity Link uses multicluster configuration mirroring across multicloud and hybrid cloud environments. You can deploy your routing, configuration, and policies wherever you need them through one interface.
This means that your development, test, and production environments can be consistent. Use Connectivity Link to supply unified experiences, global administration, and security compliance.
The following image shows Connectivity Link multicluster configuration mirroring:
Figure 1.5. Multicluster configuration mirroring across multicloud and hybrid cloud environments
- API connectivity and API management
Connectivity Link provides scalable multicluster and multi-gateway connectivity management, along with API management features such as API observability, authentication, and rate limiting.
Figure 1.6. Connectivity Link API management and connectivity
1.6. Connectivity Link policy APIs
Understand how and when you can use the Connectivity Link core policies and observability features with your cloud applications and APIs.
- Secure your applications with `TLSPolicy`:
  - `TLSPolicy` is a lightweight wrapper API to manage TLS for targeted gateways.
  - Automatically provision TLS certificates based on the gateway listener hosts by using integration with cert-manager and ACME providers such as Let's Encrypt.
  - Configure secrets so that the gateway automatically retrieves them.
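A minimal sketch of a `TLSPolicy`, assuming a cert-manager `ClusterIssuer` named `lets-encrypt` already exists; the resource names are placeholders, and field names may vary between Connectivity Link versions:

```yaml
apiVersion: kuadrant.io/v1
kind: TLSPolicy
metadata:
  name: external-tls
spec:
  targetRef:                  # attaches the policy to an existing gateway
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external
  issuerRef:                  # cert-manager issuer used for ACME certificate requests
    group: cert-manager.io
    kind: ClusterIssuer
    name: lets-encrypt
```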
- Protect your applications with `AuthPolicy`:
  - Use `AuthPolicy` objects to apply authentication and authorization across your selected listeners in a gateway or at the `HTTPRoute` or `HTTPRouteRule` level.
  - Use the hierarchical and role-based concept of defaults and overrides to improve collaboration and ensure compliance.
  - Use dedicated OIDC authentication providers such as Red Hat build of Keycloak.
  - Apply fine-grained authorization requirements based on request and metadata attributes.
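For illustration, an `AuthPolicy` that protects a route with API key authentication can be sketched as follows; the route and label names are placeholders, and field names may vary between versions:

```yaml
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: toystore-auth
spec:
  targetRef:                  # attach at the HTTPRoute level
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  rules:
    authentication:
      "api-key-users":        # named authentication rule
        apiKey:
          selector:
            matchLabels:
              app: toystore   # selects the Secrets that hold the API keys
        credentials:
          authorizationHeader:
            prefix: APIKEY    # clients send "Authorization: APIKEY <key>"
```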
- Protect your applications with `RateLimitPolicy`:
  - Apply rate-limiting rules across all listeners in a gateway or at the `HTTPRoute` or `HTTPRouteRule` level.
  - Use the role-based and hierarchical concept of defaults and overrides to improve collaboration and ensure compliance.
  - Configure limits conditionally based on metadata and request data.
  - Share counters by using a backend store in multicluster environments.
For example, the following rate-limiting policy configures a limit of 5 requests per 10 seconds for every listener defined in the target gateway that does not have its own rate-limiting policy defined:
Rate-limiting policy example
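A sketch of such a policy, based on the Kuadrant `RateLimitPolicy` API; the gateway and limit names are placeholders, and field names may vary between versions:

```yaml
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: gateway-rate-limits
spec:
  targetRef:            # attach to the gateway so the limit covers its listeners
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external
  defaults:             # defaults apply only where no more specific policy exists
    limits:
      "global":
        rates:
        - limit: 5      # 5 requests ...
          window: 10s   # ... per 10 seconds
```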
- Connect your applications with `DNSPolicy`:
  - `DNSPolicy` is a standard API that is not based on custom annotations.
  - Automatically populate DNS records based on listener hosts and addresses expressed by Gateway API resources.
  - Configure multicluster connectivity and routing options, for example, geographic and weighted responses.
  - Use common cloud DNS providers: Amazon Route 53, Microsoft Azure DNS, Google Cloud DNS, or CoreDNS.
  - Configure health checks to enable DNS failover.
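For illustration, a `DNSPolicy` with geographic load balancing and a health check can be sketched as follows; the secret and gateway names are placeholders, and field names may vary between versions:

```yaml
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: external-dns
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external
  providerRefs:
  - name: aws-credentials   # Secret with cloud DNS provider credentials
  loadBalancing:
    geo: GEO-NA             # geographic code for this cluster's records
    weight: 120             # relative weight for weighted responses
    defaultGeo: true        # also serve clients outside any configured geo
  healthCheck:
    path: /health           # probe endpoint used to enable DNS failover
    failureThreshold: 3
    interval: 5m
```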
- Observe your ingress traffic
- You can use the Connectivity Link observability features to observe and monitor your gateways, applications, and APIs on OpenShift Container Platform. You can download and use community-based templates to integrate with Grafana, Prometheus, and Alertmanager deployments, or use these templates as starting points to modify for your specific needs.
1.7. Supported configurations with Red Hat Connectivity Link
Connectivity Link must run on a supported combination of OpenShift Container Platform with Red Hat OpenShift Service Mesh as the Gateway API provider, and use the cert-manager Operator for Red Hat OpenShift. Red Hat provides both production and development support for supported configurations and tested integrations according to your subscription agreement.
1.7.1. Supported OpenShift Container Platform version configurations
| Red Hat Connectivity Link | Red Hat OpenShift Container Platform | Red Hat OpenShift Dedicated | Red Hat OpenShift Service on AWS | Microsoft Azure Red Hat OpenShift |
|---|---|---|---|---|
| Version 1.3 | 4.21, 4.20, 4.19 | 4.21, 4.20, 4.19 | 4.21, 4.20, 4.19 | 4.19 |
| Version 1.2 | 4.20, 4.19, 4.18 | 4.20, 4.19, 4.18 | 4.20, 4.19, 4.18 | 4.17 |
For Microsoft Azure, see the Support lifecycle for Azure Red Hat OpenShift 4.
1.7.2. Supported Operators
| Red Hat Connectivity Link | Red Hat OpenShift Service Mesh | cert-manager Operator for Red Hat OpenShift |
|---|---|---|
| Version 1.3 | 3.2 | 1.18 |
| Version 1.2 | 3.1 | 1.17 |
1.7.3. Supported cloud providers
All versions of Connectivity Link support the following platforms as backing cloud providers for OpenShift Container Platform:
- Amazon Web Services
- Google Cloud Platform
- Microsoft Azure
For more information, see the documentation for your chosen cloud provider.
1.7.4. Supported cloud DNS providers
For DNS policies, all versions of Connectivity Link support the following cloud DNS providers:
- Amazon Route 53
- Google Cloud Platform DNS
- Microsoft Azure DNS
For more information, see the documentation for your chosen cloud DNS provider.
1.7.5. Supported on-premise DNS providers
You can use CoreDNS to configure an on-cluster DNS zone. For more information, see About using on-premise DNS with CoreDNS.
1.7.6. Supported data stores for rate limiting
For rate limiting policies, Connectivity Link supports the following Redis-based data stores for rate limit counters in multicluster environments:
| Red Hat Connectivity Link | Redis Enterprise or Cloud | Amazon ElastiCache | Dragonfly Community or Cloud |
|---|---|---|---|
| Version 1.3 | latest | latest | latest |
| Version 1.2 | latest | latest | latest |
For more information, see the documentation for your chosen Redis-based data store.
1.7.7. Supported identity access management
For authentication policies, Connectivity Link supports API keys and the following products:
| Red Hat Connectivity Link Version | Red Hat build of Keycloak |
|---|---|
| Version 1.3 | Version 26.4 |
| Version 1.2 | Version 26.4 |
For more information, see Supported Configurations for Red Hat build of Keycloak.