Configuring and deploying gateway policies
Secure, protect, and connect APIs on OpenShift
Chapter 1. Configuring and deploying gateway policies
As a platform engineer or application developer, you can use Connectivity Link to secure, protect, and connect an API exposed by a gateway that uses Kubernetes Gateway API.
1.1. Secure, protect, and connect APIs on OpenShift Container Platform with Connectivity Link
This guide shows how you can use Connectivity Link on OpenShift Container Platform to secure, protect, and connect an API exposed by a Gateway that uses Kubernetes Gateway API. This guide applies to the platform engineer and application developer user roles in Connectivity Link.
In multi-cluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.
1.1.1. Connectivity Link capabilities in multicluster environments
You can use Connectivity Link capabilities in single or multiple OpenShift Container Platform clusters. The following features are designed to work across multiple clusters and in a single-cluster environment:
- Multicluster ingress: Connectivity Link provides multicluster ingress connectivity by using DNS to bring traffic to your gateways, based on a strategy defined in a DNSPolicy.
- Global rate limiting: Connectivity Link can enable global rate limiting use cases when configured to use a shared Redis-based store for counters, based on limits defined in a RateLimitPolicy.
- Global auth: You can configure a Connectivity Link AuthPolicy to use external auth providers so that different clusters exposing the same API authenticate and authorize requests in the same way.
- Automatic TLS certificate generation: You can configure a TLSPolicy to automatically provision TLS certificates based on Gateway listener hosts by using integration with cert-manager and ACME providers such as Let’s Encrypt.
- Integration with federated metrics stores: Connectivity Link provides example dashboards and metrics for visualizing your gateways and observing traffic hitting those gateways across multiple clusters.
1.1.2. Connectivity Link user role workflows
- Platform engineer: This guide shows how platform engineers can deploy gateways that provide secure communication and are protected and ready for use by application development teams to deploy APIs.
Platform engineers can use Connectivity Link in clusters in different geographic regions to bring specific traffic to geo-located gateways. This approach reduces latency, distributes load, and protects and secures with global rate limiting and auth policies.
- Application developer: This guide shows how application developers can override the Gateway-level global auth and rate limiting policies to configure application-level auth and rate limiting requirements for specific users.
1.1.3. Deployment management
The examples in this guide use kubectl commands for simplicity. However, working with multiple clusters is complex, and it is best to use a tool such as OpenShift GitOps, which is based on Argo CD, to manage the deployment of resources to multiple clusters.
1.2. Check your Connectivity Link installation and permissions
This guide assumes that you have successfully installed Connectivity Link on at least one OpenShift Container Platform cluster, and that you have the correct user permissions.
- You completed the Connectivity Link installation steps on one or more clusters, as described in Installing Connectivity Link on OpenShift.
- You have the kubectl or oc command installed.
- You have write access to the OpenShift Container Platform namespaces used in this guide.
- You have an AWS account with Amazon Route 53 and a DNS zone for the examples in this guide. Connectivity Link also supports Google Cloud DNS and Microsoft Azure DNS.
Optional:
- For rate limiting in a multicluster environment, you have installed Connectivity Link on more than one cluster and have a shared accessible Redis-based datastore. For more details, see Installing Connectivity Link on OpenShift.
- For observability, OpenShift Container Platform user workload monitoring is configured to remote write to a central storage system, such as Thanos, as described in the Connectivity Link Observability Guide.
1.3. Set up your environment
This section shows how you can set up your environment variables and deploy the example Toystore application on your OpenShift Container Platform cluster.
Prerequisites
Procedure
Optional: Set the following environment variables:
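The exported values below are hypothetical placeholders; substitute names, keys, and domains from your own cluster and AWS Route 53 zone:

```shell
# Hypothetical values; replace each with settings for your own environment.
export KUADRANT_GATEWAY_NS=api-gateway          # namespace for the example gateway
export KUADRANT_GATEWAY_NAME=external           # name of the example gateway
export KUADRANT_DEVELOPER_NS=toystore           # namespace for the example Toystore app
export KUADRANT_AWS_ACCESS_KEY_ID=xxxxxxx       # AWS access key ID for your DNS zone
export KUADRANT_AWS_SECRET_ACCESS_KEY=xxxxxxx   # AWS secret access key for your DNS zone
export KUADRANT_AWS_DNS_PUBLIC_ZONE_ID=Z01234567US   # Route 53 hosted zone ID
export KUADRANT_ZONE_ROOT_DOMAIN=example.com    # root domain of that hosted zone
export KUADRANT_CLUSTER_ISSUER_NAME=lets-encrypt     # TLS certificate issuer name
```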
These environment variables are described as follows:
- KUADRANT_GATEWAY_NS: Namespace for your example gateway in OpenShift Container Platform.
- KUADRANT_GATEWAY_NAME: Name of your example gateway in OpenShift Container Platform.
- KUADRANT_DEVELOPER_NS: Namespace for the example Toystore app in OpenShift Container Platform.
- KUADRANT_AWS_ACCESS_KEY_ID: AWS key ID with access to manage your DNS zone.
- KUADRANT_AWS_SECRET_ACCESS_KEY: AWS secret access key with permissions to manage your DNS zone.
- KUADRANT_AWS_DNS_PUBLIC_ZONE_ID: AWS Route 53 zone ID for the Gateway. This is the ID of the hosted zone that is displayed in the AWS Route 53 console.
- KUADRANT_ZONE_ROOT_DOMAIN: Root domain in AWS Route 53 associated with your DNS zone ID.
- KUADRANT_CLUSTER_ISSUER_NAME: Name of the certificate authority or issuer for TLS certificates.

Note: If you know your environment variable values, you can set up the required YAML files to suit your environment.
Create the namespace for the Toystore app as follows:

$ kubectl create ns ${KUADRANT_DEVELOPER_NS}

Deploy the Toystore app to the developer namespace:

$ kubectl apply -f https://raw.githubusercontent.com/Kuadrant/Kuadrant-operator/main/examples/toystore/toystore.yaml -n ${KUADRANT_DEVELOPER_NS}
1.4. Set up a DNS provider secret
Your DNS provider supplies credentials to access the DNS zones that Connectivity Link can use to set up your DNS configuration. You must ensure that these credentials have access to only the DNS zones that you want Connectivity Link to manage with your DNSPolicy.
You must apply the following Secret resource to each cluster. If you add a cluster later, you must also apply the Secret to that new cluster.
Prerequisites
Procedure
Create the namespace that the Gateway will be deployed in as follows:

$ kubectl create ns ${KUADRANT_GATEWAY_NS}

Create the secret credentials in the same namespace as the Gateway as follows:

$ kubectl -n ${KUADRANT_GATEWAY_NS} create secret generic aws-credentials \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY

Before adding a TLS certificate issuer, create the secret credentials in the cert-manager namespace as follows:

$ kubectl -n cert-manager create secret generic aws-credentials \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY
1.5. Add a TLS certificate issuer
To secure communication to your Gateways, you must define a certificate authority as an issuer for TLS certificates.
This example uses the Let’s Encrypt TLS certificate issuer for simplicity, but you can use any certificate issuer supported by cert-manager. In multicluster environments, you must add your TLS issuer in each OpenShift Container Platform cluster.
Prerequisites
Procedure
Enter the following command to define a TLS certificate issuer:
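A representative ClusterIssuer for Let’s Encrypt with a Route 53 DNS-01 solver might look like the following sketch. The contact email, account key Secret name, and region are placeholders, and it assumes the aws-credentials Secret created earlier in the cert-manager namespace. Apply it with a shell heredoc (kubectl apply -f - <<EOF … EOF) so that the ${…} variables expand:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${KUADRANT_CLUSTER_ISSUER_NAME}
spec:
  acme:
    email: <your-email>                          # placeholder: your contact address
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: le-production                        # Secret that stores the ACME account key
    solvers:
      - dns01:
          route53:
            hostedZoneID: ${KUADRANT_AWS_DNS_PUBLIC_ZONE_ID}
            region: us-east-1                    # placeholder: your Route 53 region
            accessKeyIDSecretRef:
              name: aws-credentials
              key: AWS_ACCESS_KEY_ID
            secretAccessKeySecretRef:
              name: aws-credentials
              key: AWS_SECRET_ACCESS_KEY
```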
Wait for the ClusterIssuer to become ready as follows:

$ kubectl wait clusterissuer/${KUADRANT_CLUSTER_ISSUER_NAME} --for=condition=ready=true
1.6. Create your Gateway instance
This section shows how you can deploy a Gateway in your OpenShift Container Platform cluster. This task is typically performed by platform engineers when setting up the infrastructure to be used by application developers.
In a multicluster environment, for Connectivity Link to balance traffic by using DNS across clusters, you must define a Gateway with a shared hostname. You can define this by using an HTTPS listener with a wildcard hostname based on the root domain. As mentioned previously, you must apply these resources to all clusters.
Prerequisites
Procedure
Enter the following command to create the Gateway:
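The Gateway manifest for this step might look like the following sketch. The gatewayClassName (istio) and the TLS Secret name are assumptions; adjust them to your environment. The wildcard hostname provides the shared host needed for multicluster DNS load balancing:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ${KUADRANT_GATEWAY_NAME}
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  gatewayClassName: istio                        # assumption: Istio-based Gateway API implementation
  listeners:
    - name: api
      protocol: HTTPS
      port: 443
      hostname: "*.${KUADRANT_ZONE_ROOT_DOMAIN}" # shared wildcard host across clusters
      tls:
        mode: Terminate
        certificateRefs:
          - name: api-gateway-tls                # assumption: Secret later populated via TLSPolicy
            kind: Secret
      allowedRoutes:
        namespaces:
          from: All
```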
Check the status of your Gateway as follows:

$ kubectl get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Programmed")].message}'

Your Gateway should be Accepted and Programmed, which means that it is valid and has been assigned an external address.

Check the status of your HTTPS listener as follows:

$ kubectl get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'

You will see that the HTTPS listener is not yet programmed or ready to accept traffic because its TLS configuration is not yet valid. Connectivity Link resolves this by using a TLSPolicy, which is described in the next section.
1.7. Configure your Gateway policies and HTTP route
While your Gateway is now deployed, it has no exposed endpoints and your HTTPS listener is not programmed. Next, take the following steps:
- Define a TLSPolicy that leverages your ClusterIssuer to set up your HTTPS listener certificates.
- Define an HTTPRoute for your Gateway to communicate with your backend application API.
- Define an AuthPolicy to set up a default HTTP 403 response for any unprotected endpoints.
- Define a RateLimitPolicy to set up a default, artificially low global limit to further protect any endpoints exposed by the Gateway.
- Define a DNSPolicy with a load balancing strategy for your Gateway.
In multicluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.
Prerequisites
- Your Gateway is deployed as described in Section 1.6, “Create your Gateway instance”.
Procedure
Set the TLSPolicy for your Gateway as follows:

Check that your TLS policy has an Accepted and Enforced status as follows:

$ kubectl get tlspolicy ${KUADRANT_GATEWAY_NAME}-tls -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

This may take a few minutes depending on the TLS provider, for example, Let’s Encrypt.
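For reference, the TLSPolicy in the step above might look like the following sketch; it assumes the ClusterIssuer created earlier, and the kuadrant.io API version may differ in your release:

```yaml
apiVersion: kuadrant.io/v1
kind: TLSPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-tls
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:                                   # attach the policy to the Gateway
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  issuerRef:                                   # use the ClusterIssuer created earlier
    group: cert-manager.io
    kind: ClusterIssuer
    name: ${KUADRANT_CLUSTER_ISSUER_NAME}
```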
1.7.1. Create an HTTP route for your application
Procedure
Create an HTTPRoute for the example Toystore application as follows:
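A representative HTTPRoute for the Toystore app might look like the following sketch. The /cars path matches the curl tests later in this guide; the route name, additional paths, and backend port are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: toystore
  namespace: ${KUADRANT_DEVELOPER_NS}
spec:
  parentRefs:                                  # attach the route to the Gateway created earlier
    - name: ${KUADRANT_GATEWAY_NAME}
      namespace: ${KUADRANT_GATEWAY_NS}
  hostnames:
    - api.${KUADRANT_ZONE_ROOT_DOMAIN}
  rules:
    - matches:
        - method: GET
          path:
            type: PathPrefix
            value: /cars
      backendRefs:
        - name: toystore                       # assumption: Service name from the example app
          port: 80                             # assumption: Service port from the example app
```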
1.7.2. Set the default AuthPolicy
Procedure
Set a default AuthPolicy with a deny-all setting for your Gateway as follows:

Check that your AuthPolicy has Accepted and Enforced status as follows:

$ kubectl get authpolicy ${KUADRANT_GATEWAY_NAME}-auth -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
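For reference, a deny-all AuthPolicy for the step above might look like the following sketch; the custom 403 response body is optional, and field names follow recent Kuadrant releases:

```yaml
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-auth
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  defaults:
    rules:
      authorization:
        deny-all:
          opa:
            rego: "allow = false"              # deny every request by default
      response:
        unauthorized:                          # optional custom 403 response
          headers:
            "content-type":
              value: application/json
          body:
            value: |
              {
                "error": "Forbidden",
                "message": "Access denied by default. Create a specific auth policy for the route."
              }
```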
1.7.3. Set the default RateLimitPolicy
Procedure
Set the default RateLimitPolicy with a low-limit setting for your Gateway as follows:

It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster. The limit in this example is artificially low to make it easy to see the policy working.

Check that your RateLimitPolicy has Accepted and Enforced status as follows:

$ kubectl get ratelimitpolicy ${KUADRANT_GATEWAY_NAME}-rlp -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
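For reference, a low-limit RateLimitPolicy for the step above might look like the following sketch; the specific rate (2 requests per 10 seconds) is an assumption chosen only to make the limit easy to trigger:

```yaml
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-rlp
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  defaults:
    limits:
      low-limit:
        rates:
          - limit: 2                           # artificially low to demonstrate 429 responses
            window: 10s
```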
1.7.4. Set the DNS policy
Procedure
Set the DNSPolicy for your Gateway as follows:

The DNSPolicy uses the DNS provider Secret that you defined earlier. The geo in this example is GEO-NA, but you can change this to suit your requirements.

Check that your DNSPolicy has a status of Accepted and Enforced as follows:

$ kubectl get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

This might take a few minutes.

Check the status of the DNS health checks that are enabled on your DNSPolicy as follows:

$ kubectl get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -

These health checks flag a published endpoint as healthy or unhealthy based on the defined configuration. When unhealthy, an endpoint is not published if it has not already been published to the DNS provider. An endpoint is unpublished only if it is part of a multivalue A record. In all cases, you can observe the health check results in the DNSPolicy status.
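For reference, the DNSPolicy applied above might look like the following sketch; the load balancing weight and the health check probe path are assumptions to adjust for your setup:

```yaml
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-dnspolicy
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  providerRefs:
    - name: aws-credentials                    # DNS provider Secret created earlier
  loadBalancing:
    weight: 120                                # assumption: default record weight
    geo: GEO-NA
    defaultGeo: true
  healthCheck:
    path: /health                              # assumption: probe path exposed by your app
    failureThreshold: 3
    interval: 5m
```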
1.7.5. Test your default rate limit and auth policies
You can use a curl command to test the default low-limit and deny-all policies for your Gateway.
Procedure
Enter the following curl command:

$ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

You should see HTTP 403 responses.
1.8. About token-based rate limiting with TokenRateLimitPolicy
Red Hat Connectivity Link provides the TokenRateLimitPolicy custom resource to enforce rate limits based on token consumption rather than the number of requests. This policy extends the Envoy Rate Limit Service (RLS) protocol with automatic token usage extraction. It is particularly useful for protecting Large Language Model (LLM) APIs, where the cost and resource usage correlate more closely with the number of tokens processed.
Unlike the standard RateLimitPolicy, which counts requests, TokenRateLimitPolicy counts tokens by extracting usage metrics from the body of the AI inference API response, allowing finer-grained control over API usage based on actual workload.
1.8.1. How token rate limiting works
The TokenRateLimitPolicy tracks cumulative token usage per client. Before forwarding a request, it checks if the client has already exceeded their limit from previous usage. After the upstream responds, it extracts the actual token cost and updates the client’s counter.
The flow is as follows:
- On an incoming request, the gateway evaluates the matching rules and predicates from the TokenRateLimitPolicy resources.
- If the request matches, the gateway prepares the necessary rate limit descriptors and monitors the response.
- After receiving the response, the gateway extracts the usage.total_tokens field from the JSON response body.
- The gateway then sends a RateLimitRequest to Limitador, including the actual token count as a hits_addend.
- Limitador tracks the cumulative token usage and responds to the gateway with OK or OVER_LIMIT.
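For example, an OpenAI-style response body from which the gateway would extract usage.total_tokens might look like this (all values are illustrative):

```json
{
  "model": "gpt-4",
  "choices": [
    { "index": 0, "message": { "role": "assistant", "content": "Hello!" } }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 30,
    "total_tokens": 42
  }
}
```

In this case, the gateway would report 42 as the hits_addend for the request.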
1.8.2. Key features and use cases
- Enforces limits based on token usage by extracting the usage.total_tokens field from an OpenAI-style inference JSON response body.
- Suitable for consumption-based APIs, such as LLMs, where the cost is tied to token counts.
- Allows defining different limits based on criteria such as user identity, API endpoints, or HTTP methods.
- Works with AuthPolicy to apply specific limits to authenticated users or groups.
- Inherits functionality from RateLimitPolicy, including defining multiple limits with different durations and using Redis for shared counters in multicluster environments.
1.8.3. Integrating with AuthPolicy
You can combine TokenRateLimitPolicy with AuthPolicy to apply token limits based on authenticated user identity. When an AuthPolicy successfully authenticates a request, it injects identity information that is used by the TokenRateLimitPolicy to select the appropriate limit.
For example, you can define different token limits for users belonging to 'free-tier' compared to 'premium-tier' groups, identified using claims in a JWT validated by AuthPolicy.
1.9. Configure token-based rate limiting with TokenRateLimitPolicy
1.9.1. Configure token-based rate limiting for LLM APIs
This section shows how you can configure TokenRateLimitPolicy to protect a hypothetical LLM API deployed on OpenShift Container Platform, integrated with AuthPolicy for user-specific limits.
Prerequisites
- Connectivity Link is installed on your OpenShift Container Platform cluster.
- A Gateway and an HTTPRoute are configured to expose your service.
- An AuthPolicy is configured for authentication (for example, by using API keys or OIDC).
- Redis is configured for Limitador if you are running in a multicluster setup or require persistent counters.
- Your upstream service is configured to return an OpenAI-compatible JSON response containing a usage.total_tokens field in the response body.
Procedure
Create a TokenRateLimitPolicy resource. This example defines two limits: one for free users with a limit of 10,000 tokens per day, and one for pro users with a limit of 100,000 tokens per day.

Apply the policy:

$ oc apply -f your-tokenratelimitpolicy.yaml -n my-api-namespace

Check the status of the policy to ensure that it has been accepted and enforced on the target HTTPRoute. Look for conditions with type: Accepted and type: Enforced with status: "True".

$ oc get tokenratelimitpolicy llm-protection -n my-api-namespace -o jsonpath='{.status.conditions}'

Send requests to your API endpoint, including the required authentication details:

$ curl -H "Authorization: <auth-details>" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}' \
  <your-api-endpoint>
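For reference, the TokenRateLimitPolicy created in the first step might look like the following sketch. The HTTPRoute name (llm-api), the tier attribute injected by AuthPolicy, and the counter expression are assumptions, and the API version may vary by release:

```yaml
apiVersion: kuadrant.io/v1alpha1
kind: TokenRateLimitPolicy
metadata:
  name: llm-protection
  namespace: my-api-namespace
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: llm-api                              # assumption: the route exposing your LLM service
  limits:
    free:
      rates:
        - limit: 10000                         # 10,000 tokens per day for free users
          window: 24h
      when:
        - predicate: auth.identity.tier == "free"   # assumption: tier injected by AuthPolicy
      counters:
        - expression: auth.identity.userid     # track token usage per authenticated user
    pro:
      rates:
        - limit: 100000                        # 100,000 tokens per day for pro users
          window: 24h
      when:
        - predicate: auth.identity.tier == "pro"
      counters:
        - expression: auth.identity.userid
```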
Verification
- Ensure that your upstream service responds with an OpenAI-compatible JSON body containing the usage.total_tokens field.
- Requests made while the client is within their token limits receive a 200 OK response or other success status, and the client's token counter is updated.
- Requests made after the client has exceeded their token limits receive a 429 Too Many Requests response.
1.10. Override your Gateway policies for auth and rate limiting
As an application developer, you can override your existing Gateway-level policies to configure your application-level auth and rate limiting requirements.
You can allow authenticated access to the Toystore API by defining a new AuthPolicy that targets the HTTPRoute resource created in the previous section.
Any new HTTPRoutes are affected by the existing Gateway-level policy. Because you now want users to access this API, you must override that Gateway policy. For simplicity, this example uses API keys to authenticate the requests, but other options, such as OpenID Connect, are also available.
Prerequisites
- Your Connectivity Link policies are configured as described in Section 1.7, “Configure your Gateway policies and HTTP route”.
Procedure
Ensure that your Connectivity Link system namespace is set correctly as follows:

$ export KUADRANT_SYSTEM_NS=$(kubectl get kuadrant -A -o jsonpath="{.items[0].metadata.namespace}")

Define API keys for the bob and alice users as follows:
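Representative API key Secrets for bob and alice might look like the following sketch. The IAMBOB and IAMALICE key values match the curl tests later in this guide; the label and annotation names follow common Kuadrant conventions and are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bob-key
  namespace: ${KUADRANT_SYSTEM_NS}
  labels:
    authorino.kuadrant.io/managed-by: authorino   # assumption: lets Authorino discover the key
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: bob               # assumption: identity attached to the key
stringData:
  api_key: IAMBOB
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: alice-key
  namespace: ${KUADRANT_SYSTEM_NS}
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: alice
stringData:
  api_key: IAMALICE
type: Opaque
```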
Create a new AuthPolicy in a different namespace that overrides the deny-all policy created earlier and accepts the API keys as follows:
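A representative route-level AuthPolicy might look like the following sketch. It accepts API keys with the APIKEY authorization-header prefix and exposes the user id for later rate limiting; the identity filter field names follow recent Kuadrant examples and may vary by version:

```yaml
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: toystore-auth
  namespace: ${KUADRANT_DEVELOPER_NS}
spec:
  targetRef:                                   # overrides the Gateway deny-all policy for this route
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  rules:
    authentication:
      api-key-users:
        apiKey:
          selector:
            matchLabels:
              app: toystore                    # match the API key Secrets created earlier
          allNamespaces: true
        credentials:
          authorizationHeader:
            prefix: APIKEY                     # clients send: Authorization: APIKEY <key>
    response:
      success:
        filters:
          identity:
            json:
              properties:
                userid:                        # expose the user id to rate limit policies
                  selector: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
```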
1.11. Overriding the low-limit RateLimitPolicy for specific users
The configured Gateway limits provide a good set of limits for the general case. However, as the developer of the Toystore API, you might want to allow only a certain number of requests for specific users, and a general limit for all other users.
Prerequisites
- Your Connectivity Link policies are configured as described in Section 1.7, “Configure your Gateway policies and HTTP route”.
Procedure
Create a new RateLimitPolicy in a different namespace to override the default low-limit policy created previously and set rate limits for specific users as follows:

It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster.

Check that the RateLimitPolicy has a status of Accepted and Enforced as follows:

$ kubectl get ratelimitpolicy -n ${KUADRANT_DEVELOPER_NS} toystore-rlp -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

Check that the status of the HTTPRoute is now affected by the RateLimitPolicy in the same namespace:

$ kubectl get httproute toystore -n ${KUADRANT_DEVELOPER_NS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/RateLimitPolicyAffected")].message}'
Verification
Send requests as user alice as follows:

$ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

You should see HTTP status 200 every second for 5 seconds, followed by HTTP status 429 every second for 5 seconds.

Send requests as user bob as follows:

$ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

You should see HTTP status 200 every second for 2 seconds, followed by HTTP status 429 every second for 8 seconds.
Chapter 2. Using on-premise DNS with CoreDNS
You can use Connectivity Link to secure, protect, and connect an API exposed by a gateway that uses Kubernetes Gateway API.
2.1. About using on-premise DNS with CoreDNS
You can self-manage your on-premise DNS by integrating CoreDNS with your DNS infrastructure through access control and zone delegation. Connectivity Link combines the DNS Operator with CoreDNS to simplify your management and security for on-premise DNS servers. You can use CoreDNS in both single-cluster and multi-cluster scenarios.
CoreDNS is best used in environments that change often, where using a DNS-as-code approach makes sense. The following situations are example use cases for integrating with CoreDNS:
- You need to avoid dependency on external cloud DNS services.
- You have regulatory or compliance requirements mandating self-hosted infrastructure.
- You need to keep full control over DNS records.
- You want to delegate specific DNS zones from existing DNS servers to Kubernetes-managed CoreDNS.
- You require consistent DNS management across hybrid or multicloud environments.
- You need to reduce DNS operational costs by eliminating per-query charges.
- You do not want to directly manage DNS records on the on-premise DNS server.
- You need to keep authoritative control on edge DNS servers.
For example:
- Configure your authoritative on-premise DNS server to delegate a specific subdomain, such as deployment.example.local, to CoreDNS instances managed by Connectivity Link.
- Any DNSPolicy custom resource (CR) can then interact with the CoreDNS provider within the OpenShift Container Platform cluster. You specify the DNS provider that handles the records for the targeted gateways in the delegate field of the DNS policy.
- The CoreDNS instance becomes authoritative for the delegated subdomain and manages the necessary DNS records for gateways within that subdomain.
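In the parent zone, the delegation from the first step is a standard NS record pointing at the CoreDNS instances. The host name and address below are hypothetical:

```
; Parent zone example.local: delegate deployment.example.local to CoreDNS.
; The CoreDNS host name and address are hypothetical placeholders.
deployment.example.local.   IN NS  coredns-1.example.local.
coredns-1.example.local.    IN A   192.0.2.10
```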
2.2. CoreDNS integration architecture
CoreDNS is a DNS server built from plugins. Its default plugins perform several tasks, for example:
- Automatically detects when you add new services to your cluster and adds them to directories.
- Caches recent addresses to avoid the latency of repeated lookups.
- Runs health checks and skips over services that are down.
- Provides dynamic redirects by rewriting queries as they come in.
You can add plugins for observability and other services that you require by updating CoreDNS with the DNS Operator.
With the DNS Operator, DNS is the first layer of traffic management. You can deploy the DNS Operator to multiple clusters and coordinate them all on a given zone. This means that you can use a shared domain name across clusters to balance traffic based on your requirements.
2.2.1. Technical workflow
To integrate with CoreDNS, Connectivity Link extends the DNS Operator with the kuadrant CoreDNS plugin, which sources records from the kuadrant.io/v1alpha1/DNSRecord custom resource (CR) and applies location-based and weighted response capabilities.
You can create DNS records that point to the CoreDNS secret in one of the following three ways:

- Create the record manually.
- Use a non-delegating DNS policy at a gateway with routes attached. The kuadrant-operator creates DNSRecord CRs with the secret.
- Use a delegating DNS policy at a gateway. The delegating policy results in the creation of a delegating DNSRecord CR without a secret reference. All delegating DNS records are combined into a single authoritative DNS record. The authoritative DNSRecord uses a default provider secret.
The DNS Operator reconciles authoritative records that have the CoreDNS secret referenced and applies labels only to those CRs. CoreDNS watches those records and matches the labels with zones configured in the Corefile. If there is a match, the authoritative DNSRecord CR is used to serve a DNS response.
There are no changes to the DNSPolicy API and no required changes to the policy controllers. This integration is isolated to the DNS Operator and the CoreDNS plugin.
The CoreDNS integration supports both single-cluster and multi-cluster deployments.
- Single cluster: Organizations that want to self-host their DNS infrastructure without the complexity of multi-cluster coordination can use single-cluster CoreDNS integration. Using delegation is not required. A single cluster runs both the DNS Operator and CoreDNS with the plugin. CoreDNS serves only DNSRecord CRs that point to a CoreDNS provider secret. The CoreDNS plugin watches for DNS records labeled with the appropriate zone name and serves them directly. Any authoritative DNSRecord CR has endpoints from the single cluster.
- Multi-cluster delegation: Multiple clusters can participate in serving a single DNS zone through Kubernetes role-based delegation, which enables geographic distribution of DNS services and high availability. This implementation enables workloads across multiple clusters to contribute DNS endpoints to a unified zone, with primary clusters maintaining the authoritative view. The role of a cluster is determined by the DNS Operator. Multi-cluster delegation uses kubeconfig-based interconnection secrets that grant read access to DNSRecord resources across clusters. This approach reuses Kubernetes role-based access control (RBAC).
  - Primary clusters: Run both the DNS Operator and CoreDNS, and serve the DNS records that are local. The DNS Operator running on primary clusters reconciles delegating DNSRecord CRs by reading and merging them. Primary clusters then serve these authoritative DNSRecord CRs. Each CoreDNS instance serves the relevant authoritative DNSRecord for the configured zone, so each primary cluster can independently serve the complete record set.
  - Secondary clusters: Run only the DNS Operator. These clusters create delegating DNSRecord CRs but do not interact with DNS providers directly. If the secret and subdomain are properly configured, these DNS records are automatically reconciled in the primary cluster.
- Zone labeling: CoreDNS integration uses a label-based filtering mechanism. The DNS Operator applies a zone-specific label to DNSRecord CRs when the CRs are reconciled. The CoreDNS plugin watches only for DNSRecord CRs with labels that match configured zones. This method reduces resource use and provides clear zone boundaries.
- GEO and weighted routing: GEO and weighted routing use the same algorithmic approach as cloud providers, so CoreDNS gives you parity with cloud DNS provider capabilities while you maintain full control over your DNS infrastructure.
  - GEO routing: The CoreDNS geoip plugin uses geographical-database integration to return region-specific endpoints.
  - Weighted routing: Applies probabilistic selection based on endpoint weights.
  - Combined routing: First applies GEO filtering, then weighted selection within the matched region.
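As an illustration of the records the plugin serves, a DNSRecord CR might look like the following sketch. The endpoint layout follows the external-dns endpoint style; the host name, target address, and zone value are illustrative assumptions, not a verbatim schema reference:

```
# Illustrative sketch of a DNSRecord CR - values and endpoint fields are assumptions.
apiVersion: kuadrant.io/v1alpha1
kind: DNSRecord
metadata:
  name: gateway-api-record
  namespace: kuadrant-system
  labels:
    kuadrant.io/coredns-zone-name: kuadrant.example.local   # zone label applied by the DNS Operator
spec:
  rootHost: api.kuadrant.example.local
  endpoints:
  - dnsName: api.kuadrant.example.local
    recordType: A
    recordTTL: 60
    targets:
    - 10.0.0.20   # gateway address in this cluster
```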
2.3. CoreDNS DNS records security considerations
As an infrastructure engineer or business lead, you can implement several security best practices when using CoreDNS with Connectivity Link.
Zone configuration DNSRecord custom resources (CRs) have full control over a zone's name server (NS) records. Anyone who can create or change a DNSRecord that targets the root of the main domain name with NS records can decide where all zone traffic goes. Consider this as you plan your access controls.
For example, use the following access-control best practices:
- Separate namespaces: Keep zone configuration DNSRecord CRs in a dedicated, restricted namespace.
- Use least-privilege policies:
  - Strict RBAC: Only grant DNSRecord creation permissions to trusted infrastructure engineers and cluster administrators.
  - Namespace isolation: Grant application developers DNSRecord permissions only in their own namespaces.
- Audit logging: Enable Kubernetes audit logging to track all DNSRecord changes. CoreDNS audit logging is enabled by default for network troubleshooting and traffic pattern observability.
- Version control: Use a DNS-as-code approach. Store zone configuration DNSRecord CRs in Git and use standardized review processes.
You can use the following RBAC configuration example to get you started with defining access:
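The following is a minimal sketch of such an RBAC configuration; the role, group, and namespace names are illustrative assumptions:

```
# Illustrative RBAC sketch - role, group, and namespace names are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dnsrecord-admin
rules:
- apiGroups: ["kuadrant.io"]
  resources: ["dnsrecords"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dnsrecord-admin-binding
  namespace: dns-zone-config        # dedicated, restricted namespace for zone configuration
subjects:
- kind: Group
  name: infrastructure-engineers    # trusted group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: dnsrecord-admin
  apiGroup: rbac.authorization.k8s.io
```

Binding the cluster-scoped role in a single namespace keeps DNSRecord permissions limited to the restricted zone-configuration namespace.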
2.4. Using CoreDNS with a single cluster
You can use CoreDNS as a DNS provider for Connectivity Link in a single-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.
In a single-cluster setup, ensure that the endpoint IP address value that you use is reachable from the kuadrant-system namespace. The default IP address, 10.96.0.10, is the internal cluster-wide DNS address.
Prerequisites
- Connectivity Link is installed on the OpenShift Container Platform cluster.
- The OpenShift CLI (oc) is installed.
- You have administrator privileges on the OpenShift Container Platform cluster.
- You are logged in to the cluster you want to configure.
- Your OpenShift Container Platform clusters support the LoadBalancer service type that allows UDP and TCP traffic on port 53, for example, by using MetalLB.
- You have access to configure your authoritative on-premise DNS server.
- Podman is installed.
Procedure
Set up your cluster. Set the following environment variables for your cluster context:

$ export CTX_PRIMARY=$(kubectl config current-context)
$ export KUBECONFIG=~/.kube/config
$ export PRIMARY_CLUSTER_NAME=local-cluster
$ export ONPREM_DOMAIN=<onprem-domain>
$ export KUADRANT_SUBDOMAIN=""

For the ONPREM_DOMAIN variable value, use your actual root domain. For the KUADRANT_SUBDOMAIN variable value, valid values are empty or kuadrant.

Extract the CoreDNS manifests from the dns-operator bundle by running the following commands:

$ podman create --name bundle registry.redhat.io/rhcl-1/dns-operator-bundle:rhcl-1.3.0
$ podman cp bundle:/coredns/manifests.yaml ./coredns-manifests.yaml
$ podman rm bundle

Apply the manifests to the cluster by running the following command:

$ oc apply -f ./coredns-manifests.yaml

Create a ConfigMap to define the authoritative zone for CoreDNS. This minimal configuration enables the kuadrant plugin and GeoIP features.

Note: For production or exact GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the data.Corefile.geoip parameter.

Update the CoreDNS deployment to use the new configuration by running the following command:

$ oc --context $CTX_PRIMARY -n kuadrant-system patch deployment kuadrant-coredns --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'

Wait for the deployment rollout to complete by running the following command:

$ oc --context $CTX_PRIMARY -n kuadrant-system rollout status deployment/kuadrant-coredns

Example output

kuadrant-coredns successfully rolled out

Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones that this provider instance is authoritative for.

$ oc create secret generic coredns-credentials \
  --namespace=kuadrant-system \
  --type=kuadrant.io/coredns \
  --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
  --context ${CTX_PRIMARY}
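As a sketch of the ConfigMap this procedure refers to, the following is a minimal example. The zone name and GeoIP database path are illustrative assumptions; the ConfigMap name matches the coredns-kuadrant-config name used in the patch command:

```
# Illustrative sketch - zone name and geoip database path are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-kuadrant-config
  namespace: kuadrant-system
data:
  Corefile: |
    kuadrant.example.local:53 {
      geoip /geoip/GeoLite2-City.mmdb
      metadata
      kuadrant
      errors
      log
    }
    . {
      health
      ready
    }
```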
Verification
Check the status of the DNSRecord CR by running the following commands:

$ oc get dnsrecord <name> -n <namespace> -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'

$ NS1=$(oc get svc kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ ROOT_HOST=$(oc get dnsrecord <name> -n <namespace> -o jsonpath='{.spec.rootHost}')
$ dig @${NS1} ${ROOT_HOST}

Expect the Ready condition to be True.
Troubleshooting
If the cause of a problem is unclear, view the logs for all CoreDNS pods by running the following command:
$ oc logs -n kuadrant-coredns deployment/kuadrant-coredns

If the DNSRecord is not appearing in the zone, verify that the record has the zone label by running the following command:

$ oc get dnsrecords.kuadrant.io -n dnstest -o jsonpath='{.items[*].metadata.labels}' | grep kuadrant.io/coredns-zone-name

The output should include the zone name, for example kuadrant.io/coredns-zone-name: k.example.com.

If the output does not show the zone name, check that the DNS Operator is running by running the following command:
$ oc get pods -n dns-operator-system

You can also check the DNS Operator logs by running the following command:

$ oc logs -n dns-operator-system deployment/dns-operator-controller-manager
Two common issues are missing RBAC permissions and a missing GeoIP database:

- RBAC permissions missing: Check your ClusterRole and ClusterRoleBinding configurations.
- GeoIP database file not found: Ensure that your database is accessible.
Next steps
- Create DNSPolicy custom resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through CoreDNS.
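For example, a DNSPolicy that targets a gateway and references the secret might look like the following sketch; the gateway name and policy name are illustrative assumptions:

```
# Illustrative sketch - gateway and policy names are assumptions.
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: onprem-dnspolicy
  namespace: kuadrant-system
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external-gateway          # the gateway whose listener hosts get DNS records
  providerRefs:
  - name: coredns-credentials       # the secret created in this procedure
```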
2.5. Using CoreDNS with primary and secondary clusters
You can use CoreDNS as a DNS provider for Connectivity Link in an existing multi-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.
Prerequisites
- Connectivity Link is installed on two separate OpenShift Container Platform clusters (primary and secondary).
- The OpenShift CLI (oc) is installed and configured for access to both clusters.
- You have administrator privileges on both OpenShift Container Platform clusters.
- Your OpenShift Container Platform clusters support the LoadBalancer service type that allows UDP and TCP traffic on port 53, for example, by using MetalLB.
- You have access to configure your authoritative on-premise DNS server to delegate a subdomain.
- Podman is installed.
- jq is installed.
Procedure
Set up the primary cluster. Set the following environment variables for your primary cluster context:

$ export CTX_PRIMARY=<primary_cluster_context_name>   # For example, primary
$ export KUBECONFIG=~/.kube/config                    # Adjust the path if necessary
$ export PRIMARY_CLUSTER_NAME=<primary_cluster_name>  # For example, primary
$ export ONPREM_DOMAIN=<onprem-domain>                # For example, example.local
$ export KUADRANT_SUBDOMAIN=kuadrant                  # Subdomain to delegate

Replace <primary_cluster_context_name> with the name of the cluster that you are specifying as primary. For the ONPREM_DOMAIN variable value, use your actual root domain.

Extract the CoreDNS manifests from the dns-operator bundle by running the following commands:

$ podman create --name bundle registry.redhat.io/rhcl-1/dns-operator-bundle:rhcl-1.3.0
$ podman cp bundle:/coredns/manifests.yaml ./coredns-manifests.yaml
$ podman rm bundle

Apply the manifests to the cluster by running the following command:

$ oc apply -f ./coredns-manifests.yaml

Wait for the CoreDNS service to get an external IP address. You need the IP address to configure delegation on your authoritative on-premise DNS server. Retrieve and store the IP address by running the following commands:

$ export COREDNS_IP_PRIMARY=$(oc --context $CTX_PRIMARY -n kuadrant-system get service kuadrant-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "CoreDNS Primary IP: ${COREDNS_IP_PRIMARY}"

Create a ConfigMap to define the authoritative zone for CoreDNS on the primary cluster. This minimal configuration enables the kuadrant plugin and GeoIP features.

Note: For production or exact GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the data.Corefile.geoip parameter.

Update the CoreDNS deployment to use the new configuration by running the following command:

$ oc --context $CTX_PRIMARY -n kuadrant-system patch deployment kuadrant-coredns --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'

Wait for the deployment rollout to complete by running the following command:

$ oc --context $CTX_PRIMARY -n kuadrant-system rollout status deployment/kuadrant-coredns

Example output

kuadrant-coredns successfully rolled out

Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones that this provider instance is authoritative for.

$ oc create secret generic coredns-credentials \
  --namespace=kuadrant-system \
  --type=kuadrant.io/coredns \
  --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
  --context ${CTX_PRIMARY}

On your authoritative on-premise DNS server, configure delegation for the ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} subdomain to the external IP addresses of the CoreDNS services running on your primary and secondary clusters, $COREDNS_IP_PRIMARY and $COREDNS_IP_SECONDARY. The specific steps depend on your DNS server software, for example, BIND or Windows DNS Server. You typically need to add name server (NS) records pointing the subdomain to the CoreDNS IP addresses.

Example delegation
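For example, in a BIND-style zone file for example.local, the delegation might look like the following sketch; the name server host names and IP addresses are illustrative assumptions:

```
; Illustrative delegation records - names and IP addresses are assumptions.
kuadrant.example.local.      IN NS  ns1.kuadrant.example.local.
kuadrant.example.local.      IN NS  ns2.kuadrant.example.local.
ns1.kuadrant.example.local.  IN A   192.0.2.10   ; $COREDNS_IP_PRIMARY
ns2.kuadrant.example.local.  IN A   192.0.2.11   ; $COREDNS_IP_SECONDARY
```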
Restart CoreDNS by running the following command:

$ oc -n kuadrant-coredns rollout restart deployment kuadrant-coredns

Note: After configuring delegation, you can test that DNS resolution for the delegated subdomain works correctly by querying your authoritative DNS server for a record within the kuadrant subdomain. One of the CoreDNS instances is expected to answer the query.
Verification
Launch a temporary pod for testing by running the following command:

$ oc debug node/<node-name>

Replace <node-name> with the node you are testing on.

Add transfer to your Corefile by running the following command:

$ oc patch cm kuadrant-coredns -n kuadrant-coredns --type merge \
  -p "$(kubectl get cm kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.data.Corefile}' | \
  sed 's/kuadrant/transfer {\n to *\n }\n kuadrant/' | \
  jq -Rs '{data: {Corefile: .}}')"

Verify zone delegation by running the following command:

$ dig @${EDGE_NS} -k config/bind9/ddns.key -t AXFR example.com

In the example output, k.example is the delegated zone and ns1.k.example is the primary name server.

Optional: Remove the transfer from your Corefile by running the following command:

$ oc patch cm kuadrant-coredns -n kuadrant-coredns --type merge \
  -p "$(kubectl get cm kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.data.Corefile}' | \
  sed '/transfer {/,/}/d' | \
  jq -Rs '{data: {Corefile: .}}')"

Verify the start of authority (SOA) record for the delegated zone by running the following command:

$ dig @${EDGE_NS} soa k.example.com

Example output

;; ANSWER SECTION:
k.example.com. 60 IN SOA ns1.k.example.com. hostmaster.k.example.com. 12345 7200 1800 86400 60

The SOA record is expected to show the primary name server (NS) as confirmation that CoreDNS is responding authoritatively. In this example, the primary NS is ns1.k.example.com.
Next steps
- Create DNSPolicy resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through CoreDNS.
2.6. CoreDNS Corefile configuration reference
A Corefile is organized into server blocks that define how DNS queries are handled based on the port and zone. Plugin execution order is determined at build time, not by Corefile order, so you can list plugins in any order. When making configurations by using the DNS Operator, you can check the ConfigMap for the resulting server block.
Connectivity Link includes a minimal Corefile that you can update for your uses:
Minimal Corefile
Corefile: |
  . {
    health
    ready
  }
For a Corefile with configurations, see the following example:
Example configured Corefile
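The following is a sketch of what a configured Corefile can look like; the zone name, GeoIP database path, and port values are illustrative assumptions rather than the verbatim Connectivity Link default:

```
Corefile: |
  kuadrant.example.local:53 {
    geoip /geoip/GeoLite2-City.mmdb
    metadata
    kuadrant
    errors
    log
    prometheus 0.0.0.0:9153
    cache 30
  }
  . {
    health
    ready
  }
```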
- Zone coordination: Each zone in the Corefile must match a zone listed in your CoreDNS provider secret's ZONES field.
- Required plugins: The geoip and metadata plugins are included by default with the Connectivity Link implementation of the CoreDNS Corefile.
- Corefile updates:
After you update your Corefile, you must restart the pods for the CoreDNS deployment. You can use the following command:

$ oc rollout restart deployment/coredns -n kuadrant-system

You can check the status of the rollout by running the following command:

$ oc rollout status deployment/coredns -n kuadrant-system --watch
2.6.1. Default enabled plugins in CoreDNS
The following plugins are enabled by default in the Connectivity Link CoreDNS plugin. You must ensure CoreDNS compatibility and enable any other plugins that you want to add.
| Plugin | Function |
|---|---|
| acl | Enforces access control policies on source IP addresses and prevents unauthorized access to DNS servers. |
| cache | Enables a front-end cache. |
| cancel | Cancels a request’s context after 5001 milliseconds. |
| debug | Disables the automatic recovery when a crash happens so that a stack trace is generated. |
| errors | Enables error logging. |
| file | Enables serving zone data from an RFC 1035-style master file. |
| forward | Facilitates proxying DNS messages to upstream resolvers. |
| geoip | Looks up the client IP address in a MaxMind GeoIP2 database and adds the associated geolocation data to the request context. |
| header | Modifies the header for queries and responses. |
| health | Enables a health check endpoint. |
| hosts | Enables serving zone data from an /etc/hosts-style file. |
| kuadrant | Enables serving zone data from kuadrant DNSRecord custom resources. |
| local | Responds with a basic reply to local names, such as localhost. |
| log | Enables query logging to standard output. Logs are structured for aggregation by cluster logging solutions. |
| loop | Detects simple forwarding loops and halts the server. |
| metadata | Enables a metadata collector. |
| minimal | Minimizes size of the DNS response message whenever possible. |
| nsid | Adds an identifier of this server to each reply. |
| prometheus | Enables Prometheus metrics. The default listens on localhost:9153. |
| ready | Enables a readiness check HTTP endpoint. |
| reload | Allows automatic reload of a changed Corefile. |
| rewrite | Performs internal rewriting of queries as they come in. |
| root | Specifies the root directory where zone files are found. |
| secondary | Enables serving a zone retrieved from a primary server. |
| timeouts | Configures the server read, write, and idle timeouts for the TCP, TLS, DoH, and DoQ (idle only) servers. |
| tls | Configures the server certificates for the TLS, gRPC, and DoH servers. |
| transfer | Performs outgoing zone transfers for other plugins. |
| view | Defines the conditions that must be met for a DNS request to be routed to the server block. |
| whoami | Returns your resolver’s local IP address, port and transport. |
When using CoreDNS, if you do not need to keep all logs, you can configure the log plugin to report only errors and use the prometheus plugin to gather primary metrics instead. Prometheus metrics give you trends, for example, how many queries failed, without storing every single piece of traffic.
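For example, the following server-block fragment (the zone name is illustrative) logs only error-class responses while still exposing Prometheus metrics:

```
kuadrant.example.local:53 {
    kuadrant
    log . {
        class error
    }
    prometheus 0.0.0.0:9153
}
```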
2.7. Troubleshooting CoreDNS with the kuadrant plugin
You can troubleshoot your CoreDNS deployment by restarting CoreDNS and by checking the logs. Use the following commands as needed to investigate your specific errors:
Restart CoreDNS by using the following command:

$ oc -n kuadrant-coredns rollout restart deployment kuadrant-coredns

You can view CoreDNS logs by running the following command:

$ oc logs -f deployments/kuadrant-coredns -n kuadrant-coredns

You can get recent logs by running the following command:

$ oc logs --tail=100 deployments/kuadrant-coredns -n kuadrant-coredns
2.8. CoreDNS removal or migration
You can remove your CoreDNS integration by deleting the CoreDNS deployment and deleting your DNS policies. To migrate to a different provider, delete existing DNSPolicy CRs and re-create them with the new provider secret reference. No data is permanently locked into CoreDNS.