Configuring and deploying Gateway policies with Connectivity Link
Secure, protect, and connect APIs on OpenShift
Abstract
Preface
Providing feedback on Red Hat documentation
Red Hat appreciates your feedback on product documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to help the documentation team to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following link: Create issue.
- In the Summary text box, enter a brief description of the issue.
In the Description text box, provide the following information:
- The URL of the page where you found the issue.
- A detailed description of the issue. You can leave the information in other fields at their default values.
- In the Reporter field, enter your Jira user name.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. Secure, protect, and connect APIs on OpenShift with Connectivity Link
This guide shows how you can use Connectivity Link on OpenShift to secure, protect, and connect an API exposed by a Gateway that uses Kubernetes Gateway API. This guide applies to the platform engineer and application developer user roles in Connectivity Link.
In multicluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.
1.1. Connectivity Link capabilities in multicluster environments
You can leverage Connectivity Link capabilities in single or multiple OpenShift clusters. The following features are designed to work across multiple clusters as well as in a single-cluster environment:
- Multicluster ingress: Connectivity Link provides multicluster ingress connectivity using DNS to bring traffic to your Gateways by using a strategy defined in a DNSPolicy.
- Global rate limiting: Connectivity Link can enable global rate limiting use cases when configured to use a shared Redis-based store for counters based on limits defined by a RateLimitPolicy.
- Global auth: You can configure a Connectivity Link AuthPolicy to leverage external auth providers to ensure that different clusters exposing the same API can authenticate and authorize in the same way.
- Automatic TLS certificate generation: You can configure a TLSPolicy to automatically provision TLS certificates based on Gateway listener hosts by using integration with cert-manager and ACME providers such as Let’s Encrypt.
- Integration with federated metrics stores: Connectivity Link has example dashboards and metrics for visualizing your Gateways and observing traffic hitting those Gateways across multiple clusters.
1.2. Connectivity Link user role workflows
- Platform engineer: This guide shows how platform engineers can deploy Gateways that provide secure communication and are protected and ready for use by application development teams to deploy APIs.
Platform engineers can use Connectivity Link in clusters in different geographic regions to bring specific traffic to geo-located Gateways. This approach reduces latency, distributes load, and protects and secures with global rate limiting and auth policies.
- Application developer: This guide shows how application developers can override the Gateway-level global auth and rate limiting policies to configure application-level auth and rate limiting requirements for specific users.
1.3. Deployment management tooling
The examples in this guide use kubectl commands for simplicity. However, working with multiple clusters is complex, and it is best to use a tool such as OpenShift GitOps, based on Argo CD, to manage the deployment of resources to multiple clusters.
Chapter 2. Check your Connectivity Link installation and permissions
This guide assumes that you have successfully installed Connectivity Link on at least one OpenShift cluster, and that you have the correct user permissions.
Prerequisites
- You completed the Connectivity Link installation steps on one or more clusters, as described in Installing Connectivity Link on OpenShift.
- You have the kubectl or oc command installed.
- You have write access to the OpenShift namespaces used in this guide.
- You have an AWS account with Amazon Route 53 and a DNS zone for the examples in this guide. Connectivity Link also supports Google Cloud DNS and Microsoft Azure DNS.
Optional:
- For rate limiting in a multicluster environment, you have installed Connectivity Link on more than one cluster and have a shared accessible Redis-based datastore. For more details, see Installing Connectivity Link on OpenShift.
- For observability, OpenShift user workload monitoring is configured to remote write to a central storage system such as Thanos, as described in the Connectivity Link Observability Guide.
Chapter 3. Set up your environment
This section shows how you can set up your environment variables and deploy the example Toystore application on your OpenShift cluster.
Prerequisites
Procedure
Set the following environment variables, which are used for convenience in this guide:
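For example, you might export values similar to the following. The example values shown here (api-gateway, external, toystore, lets-encrypt) are illustrative only, and the AWS placeholders must be replaced with your own credentials and zone details:

export KUADRANT_GATEWAY_NS=api-gateway
export KUADRANT_GATEWAY_NAME=external
export KUADRANT_DEVELOPER_NS=toystore
export KUADRANT_AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
export KUADRANT_AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
export KUADRANT_AWS_DNS_PUBLIC_ZONE_ID=<your_hosted_zone_id>
export KUADRANT_ZONE_ROOT_DOMAIN=<your_root_domain>
export KUADRANT_CLUSTER_ISSUER_NAME=lets-encrypt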
These environment variables are described as follows:
- KUADRANT_GATEWAY_NS: Namespace for your example Gateway in OpenShift.
- KUADRANT_GATEWAY_NAME: Name of your example Gateway in OpenShift.
- KUADRANT_DEVELOPER_NS: Namespace for the example Toystore app in OpenShift.
- KUADRANT_AWS_ACCESS_KEY_ID: AWS key ID with access to manage your DNS zone.
- KUADRANT_AWS_SECRET_ACCESS_KEY: AWS secret access key with permissions to manage your DNS zone.
- KUADRANT_AWS_DNS_PUBLIC_ZONE_ID: AWS Route 53 zone ID for the Gateway. This is the ID of the hosted zone that is displayed in the AWS Route 53 console.
- KUADRANT_ZONE_ROOT_DOMAIN: Root domain in AWS Route 53 associated with your DNS zone ID.
- KUADRANT_CLUSTER_ISSUER_NAME: Name of the certificate authority or issuer for TLS certificates.

Note: This guide uses environment variables for convenience only. Alternatively, if you know the environment variable values, you can set up the required .yaml files to suit your environment.
Create the namespace for the Toystore app as follows:

kubectl create ns ${KUADRANT_DEVELOPER_NS}

Deploy the Toystore app to the developer namespace:

kubectl apply -f https://raw.githubusercontent.com/Kuadrant/Kuadrant-operator/main/examples/toystore/toystore.yaml -n ${KUADRANT_DEVELOPER_NS}
Chapter 4. Set up a DNS provider secret
Your DNS provider supplies credentials to access the DNS zones that Connectivity Link can use to set up your DNS configuration. You must ensure that these credentials have access to only the DNS zones that you want Connectivity Link to manage with your DNSPolicy.
You must apply the following Secret resource to each cluster. If you add another cluster later, apply the Secret to that new cluster as well.
Prerequisites
Procedure
Create the namespace that the Gateway will be deployed in as follows:
kubectl create ns ${KUADRANT_GATEWAY_NS}

Create the secret credentials in the same namespace as the Gateway as follows:

kubectl -n ${KUADRANT_GATEWAY_NS} create secret generic aws-credentials \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY

Before adding a TLS certificate issuer, create the secret credentials in the cert-manager namespace as follows:

kubectl -n cert-manager create secret generic aws-credentials \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY
Chapter 5. Add a TLS certificate issuer
To secure communication to your Gateways, you must define a certification authority as an issuer for TLS certificates.
This example uses the Let’s Encrypt TLS certificate issuer for simplicity, but you can use any certificate issuer supported by cert-manager. In multicluster environments, you must add your TLS issuer in each OpenShift cluster.
Prerequisites
Procedure
Enter the following command to define a TLS certificate issuer:
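For example, a minimal Let’s Encrypt ClusterIssuer with a Route 53 DNS-01 solver might look like the following. The email address is a placeholder, and the private key secret name and AWS region are assumptions that you should adapt to your environment:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${KUADRANT_CLUSTER_ISSUER_NAME}
spec:
  acme:
    # Replace with your contact email address
    email: <your_email_address>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: ${KUADRANT_CLUSTER_ISSUER_NAME}-key
    solvers:
      - dns01:
          route53:
            hostedZoneID: ${KUADRANT_AWS_DNS_PUBLIC_ZONE_ID}
            # Adjust to the region of your hosted zone
            region: us-east-1
            accessKeyIDSecretRef:
              name: aws-credentials
              key: AWS_ACCESS_KEY_ID
            secretAccessKeySecretRef:
              name: aws-credentials
              key: AWS_SECRET_ACCESS_KEY
EOF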
Wait for the ClusterIssuer to become ready as follows:

kubectl wait clusterissuer/${KUADRANT_CLUSTER_ISSUER_NAME} --for=condition=ready=true
Chapter 6. Create your Gateway instance
This section shows how you can deploy a Gateway in your OpenShift cluster. This task is typically performed by platform engineers when setting up the infrastructure to be used by application developers.
In a multicluster environment, for Connectivity Link to balance traffic by using DNS across clusters, you must define a Gateway with a shared hostname. You can define this by using an HTTPS listener with a wildcard hostname based on the root domain. As mentioned previously, you must apply these resources to all clusters.
Prerequisites
Procedure
Enter the following command to create the Gateway:
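For example, a Gateway with a wildcard HTTPS listener based on the root domain might look like the following. The istio gatewayClassName, the api listener name, and the certificate Secret name are assumptions to adapt to your environment:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ${KUADRANT_GATEWAY_NAME}
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  gatewayClassName: istio
  listeners:
    - name: api
      hostname: "*.${KUADRANT_ZONE_ROOT_DOMAIN}"
      port: 443
      protocol: HTTPS
      tls:
        mode: Terminate
        certificateRefs:
          - name: api-${KUADRANT_GATEWAY_NAME}-tls
            kind: Secret
      allowedRoutes:
        namespaces:
          from: All
EOF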
Check the status of your Gateway as follows:

kubectl get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Programmed")].message}'

Your Gateway should be Accepted and Programmed, which means that it is valid and has been assigned an external address.

Check the status of your HTTPS listener as follows:

kubectl get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'

You will see that the HTTPS listener is not yet programmed or ready to accept traffic because its TLS configuration is not yet in place. Connectivity Link can help with this by using a TLSPolicy, which is described in the next step.
Chapter 7. Configure your Gateway policies and HTTP route
While your Gateway is now deployed, it has no exposed endpoints and your HTTPS listener is not programmed. Next, you can define a TLSPolicy that leverages your ClusterIssuer to set up your HTTPS listener certificates, and define an HTTPRoute for your Gateway to communicate with your backend application API.
You will define an AuthPolicy to set up a default HTTP 403 response for any unprotected endpoints, and a RateLimitPolicy to set up a default artificially low global limit to further protect any endpoints exposed by the Gateway. You will also define a DNSPolicy with a load balancing strategy for your Gateway.
Prerequisites
- Your Gateway is deployed as described in Chapter 6, Create your Gateway instance.
In multicluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.
7.1. Set the TLS policy
Procedure
Set the TLSPolicy for your Gateway as follows:
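For example, a minimal TLSPolicy that targets the Gateway and references the ClusterIssuer from the previous chapter might look like the following. The policy name matches the status check below; the kuadrant.io/v1 API version is an assumption, so use the version that matches your installed release:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: TLSPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-tls
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: ${KUADRANT_CLUSTER_ISSUER_NAME}
EOF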
Check that your TLS policy has an Accepted and Enforced status as follows:

kubectl get tlspolicy ${KUADRANT_GATEWAY_NAME}-tls -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

This may take a few minutes depending on the TLS provider, for example, Let’s Encrypt.
7.2. Create an HTTP route for your application
Procedure
Create an HTTPRoute for the example Toystore application as follows:
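For example, an HTTPRoute attached to your Gateway might look like the following. The api hostname and the /cars path match the curl examples later in this guide; the /health path, backend service name, and port are assumptions based on the example Toystore app:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: toystore
  namespace: ${KUADRANT_DEVELOPER_NS}
  labels:
    app: toystore
spec:
  parentRefs:
    - name: ${KUADRANT_GATEWAY_NAME}
      namespace: ${KUADRANT_GATEWAY_NS}
  hostnames:
    - api.${KUADRANT_ZONE_ROOT_DOMAIN}
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /cars
        - path:
            type: PathPrefix
            value: /health
      backendRefs:
        - name: toystore
          port: 80
EOF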
Now that this HTTPRoute is attached, the Gateway has exposed endpoints. The next steps are defining an AuthPolicy to set up a default HTTP 403 response for any unprotected endpoints, and a RateLimitPolicy to set up a default, artificially low global limit to further protect any exposed endpoints.
7.3. Set the default AuthPolicy
Procedure
Set a default AuthPolicy with a deny-all setting for your Gateway as follows:
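For example, a deny-all AuthPolicy targeting the Gateway might look like the following. The rego rule and the unauthorized response body are illustrative, and the kuadrant.io/v1 API version is an assumption for your installed release:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-auth
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  defaults:
    rules:
      authorization:
        deny-all:
          opa:
            rego: "allow = false"
      response:
        unauthorized:
          headers:
            "content-type":
              value: application/json
          body:
            value: |
              {
                "error": "Forbidden",
                "message": "Access denied by default. Create a specific AuthPolicy for your route to allow access."
              }
EOF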
Check that your AuthPolicy has an Accepted and Enforced status as follows:

kubectl get authpolicy ${KUADRANT_GATEWAY_NAME}-auth -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
7.4. Set the default RateLimitPolicy
Procedure
Set the default RateLimitPolicy with a low-limit setting for your Gateway as follows:
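For example, a default RateLimitPolicy with an artificially low limit of 2 requests every 10 seconds might look like the following. The limit values and the kuadrant.io/v1 API version are assumptions:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-rlp
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  defaults:
    limits:
      "low-limit":
        rates:
          - limit: 2
            window: 10s
EOF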
Note: It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster. The limit in this example is artificially low to show it working easily.

Check that your RateLimitPolicy has an Accepted and Enforced status as follows:

kubectl get ratelimitpolicy ${KUADRANT_GATEWAY_NAME}-rlp -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
7.5. Set the DNS policy
Procedure
Set the DNSPolicy for your Gateway as follows:
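For example, a DNSPolicy that targets the Gateway listener, references the aws-credentials provider Secret, and enables load balancing and health checks might look like the following. The listener sectionName, weight, and health check values, and the kuadrant.io/v1 API version, are assumptions:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-dnspolicy
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
    sectionName: api
  providerRefs:
    - name: aws-credentials
  loadBalancing:
    weight: 120
    geo: GEO-NA
    defaultGeo: true
  healthCheck:
    path: /health
    failureThreshold: 3
    interval: 5m
EOF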
Note: The DNSPolicy will use the DNS provider Secret that you defined earlier. The geo in this example is GEO-NA, but you can change this to suit your requirements.

Check that your DNSPolicy has a status of Accepted and Enforced as follows:

kubectl get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

This might take a few minutes.
Check the status of the DNS health checks that are enabled on your DNSPolicy as follows:
kubectl get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -

These health checks flag a published endpoint as healthy or unhealthy based on the defined configuration. When unhealthy, an endpoint is not published if it has not already been published to the DNS provider. An endpoint is only unpublished if it is part of a multivalue A record, and in all cases the health state can be observed in the DNSPolicy status.
7.6. Test your default rate limit and auth policies
You can use a curl command to test the default low-limit and deny-all policies for your Gateway.
Procedure
Enter the following curl command:

while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

You should see HTTP 403 responses.
Chapter 8. Configure on-premises DNS with CoreDNS (Technology Preview)
Red Hat Connectivity Link integration with CoreDNS for on-premises DNS is currently available in Red Hat Connectivity Link as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat Connectivity Link uses a DNSPolicy to manage DNS records based on Gateway API resources. For on-premises DNS servers like CoreDNS, direct integration might require custom controllers or elevated permissions, which can be complex and pose security risks.
To address this challenge, Connectivity Link supports DNS delegation. Instead of directly managing records on the authoritative on-premises DNS server, you configure that server to delegate a specific subdomain (for example, kuadrant.example.local) to CoreDNS instances managed by Connectivity Link.
The DNSPolicy can then interact with the CoreDNS provider within the OpenShift Container Platform cluster. This CoreDNS instance becomes authoritative for the delegated subdomain and manages the necessary DNS records (A, CNAME, and so on) for Gateways within that subdomain.
The delegate field within the DNSPolicy configuration specifies which DNS provider (in this case, CoreDNS) handles the records for the targeted Gateways.
This guide describes how to set up CoreDNS as a DNS provider for Connectivity Link in a multi-cluster, on-premises environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.
Prerequisites
- Red Hat Connectivity Link is installed on two separate OpenShift Container Platform clusters (primary and secondary).
- The kubectl or oc command-line interface is installed and configured for access to both clusters.
- You have administrator privileges on both OpenShift Container Platform clusters.
- Your OpenShift Container Platform clusters support the LoadBalancer service type and allow UDP traffic on port 53, for example by using MetalLB. For more information, see Load balancing with MetalLB.
- You have access to configure your authoritative on-premises DNS server to delegate a subdomain.
- Kustomize is installed.
Procedure
Set up the primary cluster. Set the following environment variables for your primary cluster context:
export CTX_PRIMARY=<primary_cluster_context_name>   # e.g., kind-primary
export KUBECONFIG=~/.kube/config                     # Adjust path if necessary
export PRIMARY_CLUSTER_NAME=<primary_cluster_name>   # e.g., primary
export ONPREM_DOMAIN=<your_onprem_domain>            # e.g., example.local
export KUADRANT_SUBDOMAIN=kuadrant                   # Subdomain to delegate
kuadrantplugin. Apply the following configuration to the primary cluster, replacing <kuadrant-coredns-kustomize-url> with the actual URL for the Kuadrant CoreDNS kustomization.kustomize build --enable-helm github.com/kuadrant/dns-operator/config/coredns?ref=v0.15.0 | kubectl apply --context ${CTX_PRIMARY} -f -kustomize build --enable-helm github.com/kuadrant/dns-operator/config/coredns?ref=v0.15.0 | kubectl apply --context ${CTX_PRIMARY} -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe default CoreDNS Helm chart does not include the
kuadrantplugin. You must use the Connectivity Link-provided kustomization which bundles a customized CoreDNS build.Wait for the CoreDNS service to get an external IP address and store it:
export COREDNS_IP_PRIMARY=$(kubectl --context $CTX_PRIMARY -n kuadrant-system get service <coredns-service-name> -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "CoreDNS Primary IP: ${COREDNS_IP_PRIMARY}"

You need this IP address later to configure delegation on your authoritative on-premises DNS server.
Create a ConfigMap to define the authoritative zone for CoreDNS on the primary cluster. This minimal configuration enables the kuadrant plugin and GeoIP features.
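The ConfigMap name, namespace, and Corefile key below match the patch command in the next step. The Corefile directives themselves (the kuadrant and geoip plugin configuration and the database path) are illustrative assumptions that you should align with your Connectivity Link DNS operator release:

kubectl --context $CTX_PRIMARY apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-kuadrant-config
  namespace: kuadrant-system
data:
  Corefile: |
    ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}:53 {
        errors
        log
        geoip /geoip/GeoLite2-City-demo.mmdb
        kuadrant
        prometheus 0.0.0.0:9153
    }
    .:53 {
        errors
        health
        ready
        forward . /etc/resolv.conf
    }
EOF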
Note: The geoip plugin in this example uses the GeoLite2-City-demo.mmdb database, which is included for demonstration purposes. For production or accurate GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the Corefile.

Update the CoreDNS deployment to use the new configuration:
kubectl --context $CTX_PRIMARY -n kuadrant-system patch deployment <coredns-deployment-name> --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'

Wait for the deployment rollout to complete:
kubectl --context $CTX_PRIMARY -n kuadrant-system rollout status deployment/<coredns-deployment-name>

Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones that this provider instance is authoritative for.

kubectl create secret generic coredns-credentials \
  --namespace=kuadrant-system \
  --type=kuadrant.io/coredns \
  --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
  --context ${CTX_PRIMARY}

On your authoritative on-premises DNS server, configure delegation for the ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} subdomain to the external IP addresses of the CoreDNS services running on your primary and secondary clusters ($COREDNS_IP_PRIMARY and $COREDNS_IP_SECONDARY). The specific steps depend on your DNS server software (for example, BIND or Windows DNS Server). You typically need to add NS (name server) records pointing the subdomain to the CoreDNS IP addresses. For example:
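The following sketch uses BIND-style zone file syntax, with the reserved documentation addresses 192.0.2.10 and 192.0.2.20 standing in for your CoreDNS service IPs:

; In the zone file for example.local on the authoritative server
kuadrant.example.local.        IN  NS  ns1.kuadrant.example.local.
kuadrant.example.local.        IN  NS  ns2.kuadrant.example.local.
ns1.kuadrant.example.local.    IN  A   192.0.2.10   ; $COREDNS_IP_PRIMARY
ns2.kuadrant.example.local.    IN  A   192.0.2.20   ; $COREDNS_IP_SECONDARY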
Verification
After configuring delegation, you can test that DNS resolution for the delegated subdomain works correctly by querying your authoritative DNS server for a record within the kuadrant subdomain. The query should be referred to, and answered by, one of the CoreDNS instances.
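For example, assuming a record named api has been published under the delegated subdomain, a query such as the following should be answered by one of the CoreDNS instances. Replace <authoritative-dns-server> with the address of your on-premises DNS server:

dig @<authoritative-dns-server> api.${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} +short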
Next steps
Create DNSPolicy resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through the CoreDNS instances.
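For example, a minimal DNSPolicy that uses the CoreDNS provider might look like the following. Apart from the coredns-credentials Secret reference, the Gateway name placeholders and the kuadrant.io/v1 API version are assumptions:

apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: onprem-dnspolicy
  namespace: <your_gateway_namespace>
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: <your_gateway_name>
  providerRefs:
    - name: coredns-credentials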
Chapter 9. Configure token-based rate limiting with TokenRateLimitPolicy
Red Hat Connectivity Link provides the TokenRateLimitPolicy custom resource to enforce rate limits based on token consumption rather than the number of requests. This policy extends the Envoy Rate Limit Service (RLS) protocol with automatic token usage extraction. It is particularly useful for protecting Large Language Model (LLM) APIs, where the cost and resource usage correlate more closely with the number of tokens processed.
Unlike the standard RateLimitPolicy, which counts requests, TokenRateLimitPolicy counts tokens by extracting usage metrics from the response body of the AI inference API call, allowing finer-grained control over API usage based on actual workload.
9.1. How token rate limiting works
The TokenRateLimitPolicy tracks cumulative token usage per client. Before forwarding a request, it checks if the client has already exceeded their limit from previous usage. After the upstream responds, it extracts the actual token cost and updates the client’s counter.
The flow is as follows:
- On an incoming request, the gateway evaluates the matching rules and predicates from the TokenRateLimitPolicy resources.
- If the request matches, the gateway prepares the necessary rate limit descriptors and monitors the response.
- After receiving the response, the gateway extracts the usage.total_tokens field from the JSON response body.
- The gateway then sends a RateLimitRequest to Limitador, including the actual token count as a hits_addend.
- Limitador tracks the cumulative token usage and responds to the gateway with OK or OVER_LIMIT.
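For example, an OpenAI-style chat completion response with the usage block that the gateway reads might look like the following; all values are illustrative:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello!" },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 34,
    "total_tokens": 46
  }
}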
9.2. Key features and use cases
- Enforces limits based on token usage by extracting the usage.total_tokens field from an OpenAI-style inference JSON response body.
- Suitable for consumption-based APIs such as LLMs where the cost is tied to token counts.
- Allows defining different limits based on criteria such as user identity, API endpoints, or HTTP methods.
- Works with AuthPolicy to apply specific limits to authenticated users or groups.
- Inherits functionality from RateLimitPolicy, including defining multiple limits with different durations and using Redis for shared counters in multi-cluster environments.
9.3. Integrating with AuthPolicy
You can combine TokenRateLimitPolicy with AuthPolicy to apply token limits based on authenticated user identity. When an AuthPolicy successfully authenticates a request, it injects identity information which can then be used by the TokenRateLimitPolicy to select the appropriate limit.
For example, you can define different token limits for users belonging to 'free-tier' versus 'premium-tier' groups, identified using claims in a JWT validated by AuthPolicy.
9.4. Configure token-based rate limiting for LLM APIs
This guide shows how to configure TokenRateLimitPolicy to protect a hypothetical LLM API deployed on OpenShift Container Platform, integrated with AuthPolicy for user-specific limits.
Prerequisites
- Connectivity Link is installed on your OpenShift Container Platform cluster.
- A Gateway and an HTTPRoute are configured to expose your service.
- An AuthPolicy is configured for authentication (for example, using API keys or OIDC).
- Redis is configured for Limitador if running in a multi-cluster setup or requiring persistent counters.
- Your upstream service is configured to return an OpenAI-compatible JSON response containing a usage.total_tokens field in the response body.
Procedure
Create a TokenRateLimitPolicy resource. This example defines two limits: a limit of 10,000 tokens per day for free users, and a limit of 100,000 tokens per day for pro users.
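A sketch of such a policy, saved as your-tokenratelimitpolicy.yaml, might look like the following. It assumes an HTTPRoute named llm-api and group claims injected by the AuthPolicy; the kuadrant.io/v1alpha1 API version and the predicate and counter expressions are assumptions to adapt to your release:

apiVersion: kuadrant.io/v1alpha1
kind: TokenRateLimitPolicy
metadata:
  name: llm-protection
  namespace: my-api-namespace
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: llm-api
  limits:
    free:
      rates:
        - limit: 10000
          window: 24h
      when:
        - predicate: auth.identity.groups.split(",").exists(g, g == "free")
      counters:
        - expression: auth.identity.userid
    pro:
      rates:
        - limit: 100000
          window: 24h
      when:
        - predicate: auth.identity.groups.split(",").exists(g, g == "pro")
      counters:
        - expression: auth.identity.userid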
Apply the policy:

oc apply -f your-tokenratelimitpolicy.yaml -n my-api-namespace

Check the status of the policy to ensure that it has been accepted and enforced on the target HTTPRoute. Look for conditions with type: Accepted and type: Enforced with status: "True".

oc get tokenratelimitpolicy llm-protection -n my-api-namespace -o jsonpath='{.status.conditions}'

Send requests to your API endpoint, including the required authentication details:

curl -H "Authorization: <auth-details>" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}' \
  <your-api-endpoint>
Verification
- Ensure that your upstream service responds with an OpenAI-compatible JSON body containing the usage.total_tokens field.
- Requests made while the client is within their token limits should receive a 200 OK response or other success status, and the client's token counter is updated.
- Requests made after the client has exceeded their token limits should receive a 429 Too Many Requests response.
Chapter 10. Override your Gateway policies for auth and rate limiting
As an application developer, you can override your existing Gateway-level policies to configure your application-level auth and rate limiting requirements.
Prerequisites
- Your Connectivity Link policies are configured as described in Chapter 7, Configure your Gateway policies and HTTP route.
10.1. Override the Gateway’s deny-all AuthPolicy
You can allow authenticated access to the Toystore API by defining a new AuthPolicy that targets the HTTPRoute resource created in the previous section.
Any new HTTPRoutes will still be affected by the existing Gateway-level policy. Because you want users to now access this API, you must override that Gateway policy. For simplicity, you can use API keys to authenticate the requests, but other options such as OpenID Connect are also available.
Procedure
Ensure that your Connectivity Link system namespace is set correctly as follows:
export KUADRANT_SYSTEM_NS=$(kubectl get kuadrant -A -o jsonpath="{.items[0].metadata.namespace}")

Define API keys for the bob and alice users as follows:
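For example, API key Secrets labeled so that they can be discovered by Authorino might look like the following. The label and annotation keys are assumptions; the IAMBOB and IAMALICE key values match the curl examples later in this chapter:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bob-key
  namespace: ${KUADRANT_SYSTEM_NS}
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: bob
stringData:
  api_key: IAMBOB
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: alice-key
  namespace: ${KUADRANT_SYSTEM_NS}
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: alice
stringData:
  api_key: IAMALICE
type: Opaque
EOF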
Create a new AuthPolicy in a different namespace that overrides the deny-all policy created earlier and accepts the API keys as follows:
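A sketch of an override AuthPolicy that targets the toystore HTTPRoute and validates the API keys might look like the following. The api-key selector, the APIKEY credential prefix, the userid response filter (used by the rate limit override in the next section), and the kuadrant.io/v1 API version are assumptions:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: toystore-auth
  namespace: ${KUADRANT_DEVELOPER_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  rules:
    authentication:
      "api-key-users":
        apiKey:
          selector:
            matchLabels:
              app: toystore
          allNamespaces: true
        credentials:
          authorizationHeader:
            prefix: APIKEY
    response:
      success:
        filters:
          "identity":
            json:
              properties:
                "userid":
                  selector: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
EOF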
10.2. Override the Gateway’s low-limit RateLimitPolicy for specific users
The configured Gateway limits provide a good set of limits for the general case. However, as the developer of the Toystore API, you might want to only allow a certain number of requests for specific users, and a general limit for all other users.
Procedure
Create a new RateLimitPolicy in a different namespace to override the default low-limit policy created previously and set rate limits for specific users as follows:
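A sketch of such an override, giving the user bob 2 requests every 10 seconds and all other users 5 requests every 10 seconds (matching the test expectations in the next section), might look like the following. The predicate and counter expressions assume the userid filter injected by the AuthPolicy above, and the kuadrant.io/v1 API version is an assumption:

kubectl apply -f - <<EOF
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: toystore-rlp
  namespace: ${KUADRANT_DEVELOPER_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  limits:
    "general-user":
      rates:
        - limit: 5
          window: 10s
      counters:
        - expression: auth.identity.userid
      when:
        - predicate: "auth.identity.userid != 'bob'"
    "bob-limit":
      rates:
        - limit: 2
          window: 10s
      when:
        - predicate: "auth.identity.userid == 'bob'"
EOF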
Note: It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster.

Check that the RateLimitPolicy has a status of Accepted and Enforced as follows:

kubectl get ratelimitpolicy -n ${KUADRANT_DEVELOPER_NS} toystore-rlp -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

Check that the status of the HTTPRoute is now affected by the RateLimitPolicy in the same namespace:

kubectl get httproute toystore -n ${KUADRANT_DEVELOPER_NS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/RateLimitPolicyAffected")].message}'
10.3. Test the new rate limit and auth policies
Send requests as user alice as follows:
while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

You should see HTTP status 200 every second for 5 seconds, followed by HTTP status 429 every second for 5 seconds.

Send requests as user bob as follows:

while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

You should see HTTP status 200 every second for 2 seconds, followed by HTTP status 429 every second for 8 seconds.
Appendix A. Using your Red Hat subscription
Red Hat Connectivity Link is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Managing your subscriptions
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
- In the menu bar, click Subscriptions to view and manage your subscriptions.
Revised on 2025-11-05 15:08:43 UTC