Configuring and deploying gateway policies


Red Hat Connectivity Link 1.3

Secure, protect, and connect APIs on OpenShift

Red Hat OpenShift Documentation Team

Abstract

This guide explains how to use Connectivity Link policies on OpenShift to secure, protect, and connect an application API exposed by a Gateway based on Kubernetes Gateway API. This includes Gateways deployed on a single OpenShift cluster or distributed across multiple clusters.

As a platform engineer or application developer, you can secure, protect, and connect an API exposed by a gateway that uses Gateway API by using Connectivity Link.

This guide shows how you can use Connectivity Link on OpenShift Container Platform to secure, protect, and connect an API exposed by a Gateway that uses Kubernetes Gateway API. This guide applies to the platform engineer and application developer user roles in Connectivity Link.

Important

In multi-cluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.

You can use Connectivity Link capabilities in single or multiple OpenShift Container Platform clusters. The following features are designed to work across multiple clusters and in a single-cluster environment:

  • Multicluster ingress: Connectivity Link provides multicluster ingress connectivity using DNS to bring traffic to your gateways by using a strategy defined in a DNSPolicy.
  • Global rate limiting: Connectivity Link can enable global rate limiting use cases when configured to use a shared Redis-based store for counters based on limits defined by a RateLimitPolicy.
  • Global auth: You can configure a Connectivity Link AuthPolicy to use external auth providers to ensure that different clusters exposing the same API can authenticate and allow in the same way.
  • Automatic TLS certificate generation: You can configure a TLSPolicy to automatically provision TLS certificates based on Gateway listener hosts by using integration with cert-manager and ACME providers such as Let’s Encrypt.
  • Integration with federated metrics stores: Connectivity Link has example dashboards and metrics for visualizing your gateways and observing traffic hitting those gateways across multiple clusters.

1.1.2. Connectivity Link user role workflows

  • Platform engineer: This guide shows how platform engineers can deploy gateways that provide secure communication and are protected and ready for use by application development teams to deploy APIs.

    Platform engineers can use Connectivity Link in clusters in different geographic regions to bring specific traffic to geo-located gateways. This approach reduces latency, distributes load, and protects and secures with global rate limiting and auth policies.

  • Application developer: This guide shows how application developers can override the Gateway-level global auth and rate limiting policies to configure application-level auth and rate limiting requirements for specific users.

1.1.3. Deployment management

The examples in this guide use kubectl commands for simplicity. However, working with multiple clusters is complex, and it is best to use a tool such as OpenShift Container Platform GitOps, based on Argo CD, to manage the deployment of resources to multiple clusters.

This guide expects that you have successfully installed Connectivity Link on at least one OpenShift Container Platform cluster, and that you have the correct user permissions.

  • You completed the Connectivity Link installation steps on one or more clusters, as described in Installing Connectivity Link on OpenShift.
  • You have the kubectl or oc command installed.
  • You have write access to the OpenShift Container Platform namespaces used in this guide.
  • You have an AWS account with Amazon Route 53 and a DNS zone for the examples in this guide. Connectivity Link also supports Google Cloud DNS and Microsoft Azure DNS.
  • Optional:

    • For rate limiting in a multicluster environment, you have installed Connectivity Link on more than one cluster and have a shared accessible Redis-based datastore. For more details, see Installing Connectivity Link on OpenShift.
    • For observability, OpenShift Container Platform user workload monitoring is configured to remote write to a central storage system such as Thanos, as described in the Connectivity Link Observability Guide.

1.3. Set up your environment

This section shows how you can set up your environment variables and deploy the example Toystore application on your OpenShift Container Platform cluster.

Procedure

  1. Optional: Set the following environment variables:

    export KUADRANT_GATEWAY_NS=api-gateway
    export KUADRANT_GATEWAY_NAME=external
    export KUADRANT_DEVELOPER_NS=toystore
    export KUADRANT_AWS_ACCESS_KEY_ID=xxxx
    export KUADRANT_AWS_SECRET_ACCESS_KEY=xxxx
    export KUADRANT_AWS_DNS_PUBLIC_ZONE_ID=xxxx
    export KUADRANT_ZONE_ROOT_DOMAIN=example.com
    export KUADRANT_CLUSTER_ISSUER_NAME=self-signed

    These environment variables are described as follows:

    • KUADRANT_GATEWAY_NS: Namespace for your example gateway in OpenShift Container Platform.
    • KUADRANT_GATEWAY_NAME: Name of your example gateway in OpenShift Container Platform.
    • KUADRANT_DEVELOPER_NS: Namespace for the example Toystore app in OpenShift Container Platform.
    • KUADRANT_AWS_ACCESS_KEY_ID: AWS key ID with access to manage your DNS zone.
    • KUADRANT_AWS_SECRET_ACCESS_KEY: AWS secret access key with permissions to manage your DNS zone.
    • KUADRANT_AWS_DNS_PUBLIC_ZONE_ID: AWS Route 53 zone ID for the Gateway. This is the ID of the hosted zone that is displayed in the AWS Route 53 console.
    • KUADRANT_ZONE_ROOT_DOMAIN: Root domain in AWS Route 53 associated with your DNS zone ID.
    • KUADRANT_CLUSTER_ISSUER_NAME: Name of the certificate authority or issuer for TLS certificates.

      Note

      If you prefer not to export environment variables, you can instead substitute your values directly in the YAML examples in this guide.

  2. Create the namespace for the Toystore app as follows:

    $ kubectl create ns ${KUADRANT_DEVELOPER_NS}
  3. Deploy the Toystore app to the developer namespace:

    $ kubectl apply -f https://raw.githubusercontent.com/Kuadrant/Kuadrant-operator/main/examples/toystore/toystore.yaml -n ${KUADRANT_DEVELOPER_NS}

1.4. Set up a DNS provider secret

Your DNS provider supplies credentials to access the DNS zones that Connectivity Link can use to set up your DNS configuration. You must ensure that these credentials have access to only the DNS zones that you want Connectivity Link to manage with your DNSPolicy.

Note

You must apply the following Secret resource to each cluster. If you later add a cluster, you must also apply the Secret to that new cluster.

Procedure

  1. Create the namespace that the Gateway will be deployed in as follows:

    $ kubectl create ns ${KUADRANT_GATEWAY_NS}
  2. Create the secret credentials in the same namespace as the Gateway as follows:

    $ kubectl -n ${KUADRANT_GATEWAY_NS} create secret generic aws-credentials \
      --type=kuadrant.io/aws \
      --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
      --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY
  3. Before adding a TLS certificate issuer, create the secret credentials in the cert-manager namespace as follows:

    $ kubectl -n cert-manager create secret generic aws-credentials \
      --type=kuadrant.io/aws \
      --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
      --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY

1.5. Add a TLS certificate issuer

To secure communication to your Gateways, you must define a certification authority as an issuer for TLS certificates.

Note

This example uses a self-signed ClusterIssuer for simplicity, but you can use any certificate issuer supported by cert-manager, such as Let’s Encrypt. In multicluster environments, you must add your TLS issuer in each OpenShift Container Platform cluster.

Procedure

  1. Enter the following command to define a TLS certificate issuer:

    $ kubectl apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: ${KUADRANT_CLUSTER_ISSUER_NAME}
    spec:
      selfSigned: {}
    EOF
  2. Wait for the ClusterIssuer to become ready as follows:

    $ kubectl wait clusterissuer/${KUADRANT_CLUSTER_ISSUER_NAME} --for=condition=ready=true

1.6. Create your Gateway instance

This section shows how you can deploy a Gateway in your OpenShift Container Platform cluster. This task is typically performed by platform engineers when setting up the infrastructure to be used by application developers.

Note

In a multicluster environment, for Connectivity Link to balance traffic by using DNS across clusters, you must define a Gateway with a shared hostname. You can define this by using an HTTPS listener with a wildcard hostname based on the root domain. As mentioned previously, you must apply these resources to all clusters.

Procedure

  1. Enter the following command to create the Gateway:

    $ kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}
      namespace: ${KUADRANT_GATEWAY_NS}
      labels:
        kuadrant.io/gateway: "true"
    spec:
      gatewayClassName: istio
      listeners:
      - allowedRoutes:
          namespaces:
            from: All
        hostname: "api.${KUADRANT_ZONE_ROOT_DOMAIN}"
        name: api
        port: 443
        protocol: HTTPS
        tls:
          certificateRefs:
          - group: ""
            kind: Secret
            name: api-${KUADRANT_GATEWAY_NAME}-tls
          mode: Terminate
    EOF
  2. Check the status of your Gateway as follows:

    $ kubectl get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Programmed")].message}'

    Your Gateway should be Accepted and Programmed, which means that it is valid and assigned an external address.

  3. Check the status of your HTTPS listener as follows:

    $ kubectl get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'

    You will see that the HTTPS listener is not yet programmed or ready to accept traffic because the TLS secret that the listener references does not yet exist. Connectivity Link resolves this by using a TLSPolicy, which is described in the next step.

While your Gateway is now deployed, it has no exposed endpoints and its HTTPS listener is not programmed. Next, take the following steps:

  • Define a TLSPolicy that leverages your CertificateIssuer to set up your HTTPS listener certificates.
  • Define an HTTPRoute for your Gateway to communicate with your backend application API.
  • Define an AuthPolicy to set up a default HTTP 403 response for any unprotected endpoints.
  • Define a RateLimitPolicy to set up a default, artificially low global limit to further protect any endpoints exposed by the Gateway.
  • Define a DNSPolicy with a load balancing strategy for your Gateway.

Important

In multicluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.

Procedure

  1. Set the TLSPolicy for your Gateway as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: TLSPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-tls
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        name: ${KUADRANT_GATEWAY_NAME}
        group: gateway.networking.k8s.io
        kind: Gateway
      issuerRef:
        group: cert-manager.io
        kind: ClusterIssuer
        name: ${KUADRANT_CLUSTER_ISSUER_NAME}
    EOF
  2. Check that your TLS policy has an Accepted and Enforced status as follows:

    $ kubectl get tlspolicy ${KUADRANT_GATEWAY_NAME}-tls -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

    This may take a few minutes depending on the TLS provider, for example, Let’s Encrypt.

1.7.1. Create an HTTP route for your application

Procedure

  1. Create an HTTPRoute for the example Toystore application as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: toystore
      namespace: ${KUADRANT_DEVELOPER_NS}
      labels:
        deployment: toystore
        service: toystore
    spec:
      parentRefs:
      - name: ${KUADRANT_GATEWAY_NAME}
        namespace: ${KUADRANT_GATEWAY_NS}
      hostnames:
      - "api.${KUADRANT_ZONE_ROOT_DOMAIN}"
      rules:
      - matches:
        - method: GET
          path:
            type: PathPrefix
            value: "/cars"
        - method: GET
          path:
            type: PathPrefix
            value: "/health"
        backendRefs:
        - name: toystore
          port: 80
    EOF

1.7.2. Set the default AuthPolicy

Procedure

  1. Set a default AuthPolicy with a deny-all setting for your Gateway as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: AuthPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-auth
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: ${KUADRANT_GATEWAY_NAME}
      defaults:
       when:
         - predicate: "request.path != '/health'"
       rules:
        authorization:
          deny-all:
            opa:
              rego: "allow = false"
        response:
          unauthorized:
            headers:
              "content-type":
                value: application/json
            body:
              value: |
                {
                  "error": "Forbidden",
                  "message": "Access denied by default by the gateway operator. If you are the administrator of the service, create a specific auth policy for the route."
                }
    EOF
  2. Check that your AuthPolicy has Accepted and Enforced status as follows:

    $ kubectl get authpolicy ${KUADRANT_GATEWAY_NAME}-auth -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

1.7.3. Set the default RateLimitPolicy

Procedure

  1. Set the default RateLimitPolicy with a low-limit setting for your Gateway as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: RateLimitPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-rlp
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: ${KUADRANT_GATEWAY_NAME}
      defaults:
        limits:
          "low-limit":
            rates:
            - limit: 1
              window: 10s
    EOF

    It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster. The limit in this example is artificially low so that you can see it working easily.

  2. Check that your RateLimitPolicy has Accepted and Enforced status as follows:

    $ kubectl get ratelimitpolicy ${KUADRANT_GATEWAY_NAME}-rlp -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
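
The low-limit semantics above (1 request per 10-second window) can be sketched as a fixed-window counter, which is conceptually what the rate limit service evaluates for each counter. This is an illustration only, with hypothetical variable names, not Limitador's actual implementation:

```shell
# Fixed-window counter sketch: 1 request allowed per 10-second window.
limit=1; window=10; count=0; window_start=0

check() {  # $1 = request time in seconds; sets $verdict to OK or OVER_LIMIT
  if [ $(( $1 - window_start )) -ge "$window" ]; then
    window_start=$1; count=0         # a new window starts: reset the counter
  fi
  if [ "$count" -lt "$limit" ]; then
    count=$(( count + 1 )); verdict=OK
  else
    verdict=OVER_LIMIT
  fi
}

check 0;  echo "t=0:  $verdict"   # first request in the window: OK
check 5;  echo "t=5:  $verdict"   # same window, limit already used: OVER_LIMIT
check 12; echo "t=12: $verdict"   # new window: OK
```

This is why the curl loop later in this guide alternates between success and HTTP 429 responses as each window resets.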

1.7.4. Set the DNS policy

Procedure

  1. Set the DNSPolicy for your Gateway as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: DNSPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-dnspolicy
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      healthCheck:
        failureThreshold: 3
        interval: 1m
        path: /health
      loadBalancing:
        defaultGeo: true
        geo: GEO-NA
        weight: 120
      targetRef:
        name: ${KUADRANT_GATEWAY_NAME}
        group: gateway.networking.k8s.io
        kind: Gateway
      providerRefs:
      - name: aws-credentials # Secret created earlier
    EOF

    The DNSPolicy uses the DNS Provider Secret that you defined earlier. The geo in this example is GEO-NA, but you can change this to suit your requirements.

  2. Check that your DNSPolicy has status of Accepted and Enforced as follows:

    $ kubectl get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

    This might take a few minutes.

  3. Check the status of the DNS health checks that are enabled on your DNSPolicy as follows:

    $ kubectl get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -o yaml

    These health checks flag a published endpoint as healthy or unhealthy based on the configuration that you define. When an endpoint is unhealthy, it is not published unless it has already been published to the DNS provider. An endpoint is unpublished only if it is part of a multi-value A record, and in all cases you can observe the health state in the DNSPolicy status.

You can use a curl command to test the default deny-all and low-limit policies for your Gateway.

Procedure

  • Enter the following curl command:

    $ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null  "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(403)\b|$"; sleep 1; done

    You should see HTTP 403 responses.

Red Hat Connectivity Link provides the TokenRateLimitPolicy custom resource to enforce rate limits based on token consumption rather than the number of requests. This policy extends the Envoy Rate Limit Service (RLS) protocol with automatic token usage extraction. It is particularly useful for protecting Large Language Model (LLM) APIs, where the cost and resource usage correlate more closely with the number of tokens processed.

Unlike the standard RateLimitPolicy, which counts requests, TokenRateLimitPolicy counts tokens by extracting usage metrics from the body of the AI inference API response, allowing for finer-grained control over API usage based on actual workload.

1.8.1. How token rate limiting works

The TokenRateLimitPolicy tracks cumulative token usage per client. Before forwarding a request, it checks if the client has already exceeded their limit from previous usage. After the upstream responds, it extracts the actual token cost and updates the client’s counter.

The flow is as follows:

  1. On an incoming request, the gateway evaluates the matching rules and predicates from the TokenRateLimitPolicy resources.
  2. If the request matches, the gateway prepares the necessary rate limit descriptors and monitors the response.
  3. After receiving the response, the gateway extracts the usage.total_tokens field from the JSON response body.
  4. The gateway then sends a RateLimitRequest to Limitador, including the actual token count as a hits_addend.
  5. Limitador tracks the cumulative token usage and responds to the gateway with OK or OVER_LIMIT.
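
Step 3 of this flow can be illustrated with a small shell sketch. The response body below is a hypothetical OpenAI-style payload, and the sed extraction is an illustration only, not how the gateway actually parses the body:

```shell
# Hypothetical OpenAI-style response body from the upstream LLM API.
response='{"id":"chatcmpl-1","usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}'

# Extract usage.total_tokens: this is the value reported to Limitador as the hits_addend.
total_tokens=$(printf '%s' "$response" | sed -n 's/.*"total_tokens":[[:space:]]*\([0-9]*\).*/\1/p')
echo "$total_tokens"   # 21
```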

1.8.2. Key features and use cases

  • Enforces limits based on token usage by extracting the usage.total_tokens field from an OpenAI-style inference JSON response body.
  • Suitable for consumption-based APIs such as LLMs where the cost is tied to token counts.
  • Allows defining different limits based on criteria such as user identity, API endpoints, or HTTP methods.
  • Works with AuthPolicy to apply specific limits to authenticated users or groups.
  • Inherits functionalities from RateLimitPolicy, including defining multiple limits with different durations and using Redis for shared counters in multi-cluster environments.

1.8.3. Integrating with AuthPolicy

You can combine TokenRateLimitPolicy with AuthPolicy to apply token limits based on authenticated user identity. When an AuthPolicy successfully authenticates a request, it injects identity information that is used by the TokenRateLimitPolicy to select the appropriate limit.

For example, you can define different token limits for users belonging to 'free-tier' compared to 'premium-tier' groups, identified using claims in a JWT validated by AuthPolicy.
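
Assuming the JWT carries the groups as a comma-separated claim (a hypothetical format), a CEL-style check such as groups.split(",").exists(g, g == "free-tier") behaves like the following shell sketch:

```shell
# Hypothetical groups claim taken from a JWT validated by AuthPolicy.
groups="free-tier,beta-testers"

# Match one group in the comma-separated list, as the CEL exists() predicate would.
case ",$groups," in
  *,free-tier,*) tier="free-tier limits apply" ;;
  *)             tier="no free-tier match" ;;
esac
echo "$tier"   # free-tier limits apply
```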


This guide shows how to configure TokenRateLimitPolicy to protect a hypothetical LLM API deployed on OpenShift Container Platform, integrated with AuthPolicy for user-specific limits.

Prerequisites

  • Connectivity Link is installed on your OpenShift Container Platform cluster.
  • A Gateway and an HTTPRoute are configured to expose your service.
  • An AuthPolicy is configured for authentication (for example, using API keys or OIDC).
  • Redis is configured for Limitador if running in a multi-cluster setup or requiring persistent counters.
  • Your upstream service is configured to return an OpenAI-compatible JSON response containing a usage.total_tokens field in the response body.

Procedure

  1. Create a TokenRateLimitPolicy resource. This example defines two limits: a limit of 10,000 tokens per day for free users, and a limit of 100,000 tokens per day for pro users.

    apiVersion: kuadrant.io/v1alpha1
    kind: TokenRateLimitPolicy
    metadata:
      name: llm-protection
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: ai-gateway
      limits:
        free-users:
          rates:
            - limit: 10000 # 10k tokens per day for free tier
              window: 24h
          when:
            - predicate: request.path == "/v1/chat/completions" # Inference traffic only
            - predicate: |
                auth.identity.groups.split(",").exists(g, g == "free")
          counters:
            - expression: auth.identity.userid
        pro-users:
          rates:
            - limit: 100000 # 100k tokens per day for pro tier
              window: 24h
          when:
            - predicate: request.path == "/v1/chat/completions" # Inference traffic only
            - predicate: |
                auth.identity.groups.split(",").exists(g, g == "pro")
          counters:
            - expression: auth.identity.userid
  2. Apply the policy:

    $ oc apply -f your-tokenratelimitpolicy.yaml -n my-api-namespace
  3. Check the status of the policy to ensure that it has been accepted and enforced on the target Gateway. Look for conditions with type: Accepted and type: Enforced with status: "True".

    $ oc get tokenratelimitpolicy llm-protection -n my-api-namespace -o jsonpath='{.status.conditions}'
  4. Send requests to your API endpoint, including the required authentication details.

    $ curl -H "Authorization: <auth-details>" \
         -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}' \
         <your-api-endpoint>

Verification

  • Ensure that your upstream service responds with an OpenAI-compatible JSON body containing the usage.total_tokens field.
  • Requests made while the client is within their token limit should receive a 200 OK response or another success status, and the client’s token counter is updated.
  • Requests made after the client has exceeded their token limit should receive a 429 Too Many Requests response.
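
This verification behavior follows from the token accounting described earlier: the gateway checks the cumulative counter before forwarding a request and charges the actual token cost after the response. A minimal sketch, using the hypothetical free-tier budget from the example policy (not the actual gateway code):

```shell
limit=10000; used=0   # e.g. the free-users daily token budget

consume() {  # $1 = usage.total_tokens reported by the upstream; sets $status
  if [ "$used" -ge "$limit" ]; then
    status=429                      # budget already exhausted: reject the request
  else
    status=200
    used=$(( used + $1 ))           # charge the actual token cost after the response
  fi
}

consume 6000; echo "$status"   # 200 (used: 6000)
consume 5000; echo "$status"   # 200 (used: 11000, budget now exceeded)
consume 1;    echo "$status"   # 429
```

Note that a single large request can push the counter past the limit; only subsequent requests are rejected.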

As an application developer, you can override your existing Gateway-level policies to configure your application-level auth and rate limiting requirements.

You can allow authenticated access to the Toystore API by defining a new AuthPolicy that targets the HTTPRoute resource created in the previous section.

Important

Any new HTTPRoutes are affected by the existing Gateway-level policy. Because you now want users to access this API, you must override that Gateway policy. For simplicity, you can use API keys to authenticate the requests, but other options such as OpenID Connect are also available.

Procedure

  1. Ensure that your Connectivity Link system namespace is set correctly as follows:

    $ export KUADRANT_SYSTEM_NS=$(kubectl get kuadrant -A -o jsonpath="{.items[0].metadata.namespace}")
  2. Define API keys for bob and alice users as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: bob-key
      namespace: ${KUADRANT_SYSTEM_NS}
      labels:
        authorino.kuadrant.io/managed-by: authorino
        app: toystore
      annotations:
        secret.kuadrant.io/user-id: bob
    stringData:
      api_key: IAMBOB
    type: Opaque
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: alice-key
      namespace: ${KUADRANT_SYSTEM_NS}
      labels:
        authorino.kuadrant.io/managed-by: authorino
        app: toystore
      annotations:
        secret.kuadrant.io/user-id: alice
    stringData:
      api_key: IAMALICE
    type: Opaque
    EOF
  3. Create a new AuthPolicy in a different namespace that overrides the deny-all policy created earlier and accepts the API keys as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: AuthPolicy
    metadata:
      name: toystore-auth
      namespace: ${KUADRANT_DEVELOPER_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: HTTPRoute
        name: toystore
      defaults:
       when:
         - predicate: "request.path != '/health'"
       rules:
        authentication:
          "api-key-users":
            apiKey:
              selector:
                matchLabels:
                  app: toystore
            credentials:
              authorizationHeader:
                prefix: APIKEY
        response:
          success:
            filters:
              "identity":
                json:
                  properties:
                    "userid":
                      selector: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
    EOF

The configured Gateway limits provide a good set of limits for the general case. However, as the developer of the Toystore API, you might want to only allow a certain number of requests for specific users, and a general limit for all other users.

Important

Any new HTTPRoutes are affected by the existing Gateway-level rate limiting policy. To apply application-specific limits for your users, you must override that Gateway-level policy.

Procedure

  1. Create a new RateLimitPolicy in a different namespace to override the default low-limit policy created previously and set rate limits for specific users as follows:

    $ kubectl apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: RateLimitPolicy
    metadata:
      name: toystore-rlp
      namespace: ${KUADRANT_DEVELOPER_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: HTTPRoute
        name: toystore
      limits:
        "general-user":
          rates:
          - limit: 5
            window: 10s
          counters:
          - expression: auth.identity.userid
          when:
          - predicate: "auth.identity.userid != 'bob'"
        "bob-limit":
          rates:
          - limit: 2
            window: 10s
          when:
          - predicate: "auth.identity.userid == 'bob'"
    EOF

    It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster.

  2. Check that the RateLimitPolicy has a status of Accepted and Enforced as follows:

    $ kubectl get ratelimitpolicy -n ${KUADRANT_DEVELOPER_NS} toystore-rlp -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
  3. Check that the status of the HTTPRoute is now affected by the RateLimitPolicy in the same namespace:

    $ kubectl get httproute toystore -n ${KUADRANT_DEVELOPER_NS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/RateLimitPolicyAffected")].message}'

Verification

  1. Send requests as user alice as follows:

    $ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

    You should see HTTP status 200 every second for 5 seconds, followed by HTTP status 429 every second for 5 seconds.

  2. Send requests as user bob as follows:

    $ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

    You should see HTTP status 200 every second for 2 seconds, followed by HTTP status 429 every second for 8 seconds.

Chapter 2. Using on-premise DNS with CoreDNS

You can secure, protect, and connect an API exposed by a gateway that uses Gateway API by using Connectivity Link.

2.1. About using on-premise DNS with CoreDNS

You can self-manage your on-premise DNS by integrating CoreDNS with your DNS infrastructure through access control and zone delegation. Connectivity Link combines the DNS Operator with CoreDNS to simplify your management and security for on-premise DNS servers. You can use CoreDNS in both single-cluster and multi-cluster scenarios.

CoreDNS is best used in environments that change often, where using a DNS-as-code approach makes sense. The following situations are example use cases for integrating with CoreDNS:

  • You need to avoid dependency on external cloud DNS services.
  • You have regulatory or compliance requirements mandating self-hosted infrastructure.
  • You need to keep full control over DNS records.
  • You want to delegate specific DNS zones from existing DNS servers to Kubernetes-managed CoreDNS.
  • You require consistent DNS management across hybrid or multicloud environments.
  • You need to reduce DNS operational costs by eliminating per-query charges.
  • You do not want to directly manage DNS records on the on-premise DNS server.
  • You need to keep authoritative control on edge DNS servers.

For example:

  • Configure your authoritative on-premise DNS server to delegate a specific subdomain, such as deployment.example.local, to CoreDNS instances managed by Connectivity Link.
  • A DNSPolicy CR can then interact with the CoreDNS provider within the OpenShift Container Platform cluster. You can specify the DNS provider that handles the records for the targeted gateways in the delegate field of the DNSPolicy.
  • The CoreDNS instance becomes authoritative for the delegated subdomain and manages the necessary DNS records for gateways within that subdomain.
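
As an illustrative sketch of this flow, a delegating DNS policy might look like the following; the Gateway name external, the namespace ingress-gateway, and the policy name are assumptions for this example:

```yaml
# Hypothetical delegating DNS policy. The Gateway "external" and the
# namespace "ingress-gateway" are example assumptions.
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: external-dnspolicy
  namespace: ingress-gateway
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external
  # Delegation: the resulting DNSRecord CR carries no provider secret
  # reference and is merged into the authoritative record.
  delegate: true
```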

2.2. CoreDNS integration architecture

CoreDNS is a DNS server composed of plugins. Its default plugins perform several tasks, for example:

  • Automatically detect when you add new services to your cluster and add DNS records for them.
  • Cache recent addresses to avoid the latency of repeated lookups.
  • Run health checks and skip over services that are down.
  • Provide dynamic redirects by rewriting queries as they come in.

You can add plugins for observability and other services that you require by updating CoreDNS with the DNS Operator.

With the DNS Operator, DNS is the first layer of traffic management. You can deploy the DNS Operator to multiple clusters and coordinate them all on a given zone. This means that you can use a shared domain name across clusters to balance traffic based on your requirements.

2.2.1. Technical workflow

To integrate with CoreDNS, Connectivity Link extends the DNS Operator with the kuadrant CoreDNS plugin, which sources records from the kuadrant.io/v1alpha1/DNSRecord custom resource (CR) and applies location-based and weighted response capabilities.

You can create DNSRecord CRs that reference the CoreDNS provider secret in one of the following three ways:

  • Create the DNSRecord CR manually.
  • Use a non-delegating DNS policy on a gateway with routes attached. The Kuadrant Operator creates DNSRecord CRs that reference the secret.
  • Use a delegating DNS policy on a gateway. The delegating policy results in the creation of a delegating DNSRecord CR without a secret reference. All delegating DNSRecords are combined into a single authoritative DNSRecord, which uses a default provider secret.

The DNS Operator reconciles authoritative records that have the CoreDNS secret referenced and applies labels only to those CRs. CoreDNS watches those records and matches the labels with zones configured in the Corefile. If there is a match, the authoritative DNSRecord CR is used to serve a DNS response.
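
For illustration, an authoritative DNSRecord CR that CoreDNS serves might look like the following sketch; the record name, host, and IP address are example assumptions:

```yaml
# Hypothetical authoritative DNSRecord. The DNS Operator applies the
# zone label during reconciliation; CoreDNS serves records whose label
# matches a zone configured in its Corefile.
apiVersion: kuadrant.io/v1alpha1
kind: DNSRecord
metadata:
  name: gateway-api-record
  namespace: kuadrant-system
  labels:
    kuadrant.io/coredns-zone-name: k.example.com
spec:
  rootHost: api.k.example.com
  providerRef:
    name: coredns-credentials  # the CoreDNS provider secret
  endpoints:
  - dnsName: api.k.example.com
    recordType: A
    recordTTL: 60
    targets:
    - 172.18.0.16
```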

There are no changes to the dnsPolicy API and no required changes to the policy controllers. This integration is isolated to the DNS Operator and the CoreDNS plugin.

The CoreDNS integration supports both single-cluster and multi-cluster deployments.

Single cluster

Organizations that want to self-host their DNS infrastructure without the complexity of multi-cluster coordination can use single-cluster CoreDNS integration. Using delegation is not required.

A single cluster runs both DNS Operator and CoreDNS with the plugin. CoreDNS only serves DNSRecord CRs that point to a CoreDNS provider secret. The CoreDNS plugin watches for DNS records labeled with the appropriate zone name and serves them directly. Any authoritative DNSRecord CR has endpoints from the single cluster.

Multi-cluster delegation

Multiple clusters can participate in serving a single DNS zone through Kubernetes role-based delegation that enables geographic distribution of DNS services and high availability. This implementation enables workloads across multiple clusters to contribute DNS endpoints to a unified zone, with primary clusters maintaining the authoritative view. The role of a cluster is determined by the DNS Operator.

Multi-cluster delegation uses kubeconfig-based interconnection secrets that grant read access to DNSRecord resources across clusters. This approach reuses Kubernetes role-based access (RBAC).

  • Primary clusters: Run both the DNS Operator and CoreDNS and serve the DNS records that are local. The DNS Operator running on primary clusters that delegate reconciles DNSRecord CRs by reading and merging them. Primary clusters then serve these authoritative DNSRecord CRs. Each CoreDNS instance serves the relevant authoritative DNSRecord for the configured zone. Each primary cluster can independently serve the complete record set.
  • Secondary clusters: Only run the DNS Operator. These clusters create delegating DNSRecord CRs but do not interact with DNS providers directly. If the secret and subdomain are properly configured, these DNS records are automatically reconciled in the primary cluster.
Zone labeling

CoreDNS integration uses a label-based filtering mechanism. The DNS Operator applies a zone-specific label to DNSRecord CRs when the CRs are reconciled. The CoreDNS plugin only watches for DNSRecord CRs with labels that match configured zones. This method reduces resource use and provides clear zone boundaries.
GEO and weighted routing

GEO and weighted routing use the same algorithmic approach as cloud providers. By using CoreDNS, you can have parity with cloud DNS provider capabilities and maintain full control over your DNS infrastructure.

  • GEO routing: The CoreDNS geoip plugin uses geographical-database integration to return region-specific endpoints.
  • Weighted routing: Applies probabilistic selection based on endpoint weights.
  • Combined routing: First applies GEO filtering, then weighted selection within the matched region.
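
As a sketch of how these options appear on DNSRecord endpoints, geo and weight attributes can be expressed as provider-specific properties; the values below are example assumptions:

```yaml
# Hypothetical endpoint entry combining GEO filtering with weighted
# selection. GEO filtering is applied first, then weighted selection
# within the matched region.
endpoints:
- dnsName: api.k.example.com
  recordType: A
  targets:
  - 172.18.0.16
  providerSpecific:
  - name: geo-code   # region for GEO routing
    value: "GEO-EU"
  - name: weight     # relative weight for probabilistic selection
    value: "120"
```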

2.3. CoreDNS DNS records security considerations

As an infrastructure engineer or business lead, you can implement several security best practices when using CoreDNS with Connectivity Link.

Zone configuration DNSRecord custom resources (CRs) have full control over a zone’s name server (NS) records. Anyone who can create or change a DNSRecord that targets the root of the main domain with NS records can decide where all zone traffic goes. Consider this as you plan your access controls.

For example, use the following access-control best practices:

  • Separate namespaces: Keep zone configuration DNSRecord CRs in a dedicated, restricted namespace.
  • Use least-privilege policies:

    • Strict RBAC: Only grant DNSRecord creation permissions to trusted infrastructure engineers and cluster administrators.
    • Namespace isolation: Grant application developers DNSRecord permissions only in their own namespaces.
  • Audit logging: Enable Kubernetes audit logging to track all DNSRecord changes. CoreDNS audit logging is enabled by default for network troubleshooting and traffic pattern observability.
  • Version control: Use a DNS-as-code approach. Store zone configuration DNSRecord CRs in Git and use standardized review processes.

You can use the following RBAC configuration example to get you started with defining access:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dns-zone-config-admin
  namespace: kuadrant-coredns
rules:
- apiGroups: ["kuadrant.io"]
  resources: ["dnsrecords"]
  verbs: ["create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dns-zone-config-admin-binding
  namespace: kuadrant-coredns
subjects:
- kind: User
  name: dns-admin@example.com  # Only trusted administrators
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dns-zone-config-admin
  apiGroup: rbac.authorization.k8s.io

2.4. Using CoreDNS with a single cluster

You can use CoreDNS as a DNS provider for Connectivity Link in a single-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.

Important

In a single-cluster setup, ensure that the endpoint IP address that you use is reachable from the kuadrant-system namespace. The default IP address, 10.96.0.10, is the internal cluster-wide DNS address.

Prerequisites

  • Connectivity Link is installed on the OpenShift Container Platform cluster.
  • The OpenShift CLI (oc) is installed.
  • You have administrator privileges on the OpenShift Container Platform cluster.
  • You are logged in to the cluster you want to configure.
  • Your OpenShift Container Platform clusters have support for the loadbalanced service type that allows UDP and TCP traffic on port 53, such as MetalLB.
  • You have access to configure your authoritative on-premise DNS server.
  • Podman is installed.

Procedure

  1. Set up your cluster. Set the following environment variables for your cluster context:

    $ export CTX_PRIMARY=$(kubectl config current-context)
    $ export KUBECONFIG=~/.kube/config
    $ export PRIMARY_CLUSTER_NAME=local-cluster
    $ export ONPREM_DOMAIN=<onprem-domain>
    $ export KUADRANT_SUBDOMAIN=""

    For the ONPREM_DOMAIN variable value, use your actual root domain. For the KUADRANT_SUBDOMAIN variable value, valid values are empty or kuadrant.

  2. Extract the CoreDNS manifests from the dns-operator bundle by running the following commands:

    $ podman create --name bundle registry.redhat.io/rhcl-1/dns-operator-bundle:rhcl-1.3.0
    $ podman cp bundle:/coredns/manifests.yaml ./coredns-manifests.yaml
    $ podman rm bundle

  3. Apply the manifests to the cluster by running the following command:

    $ oc apply -f ./coredns-manifests.yaml
  4. Create a ConfigMap to define the authoritative zone for CoreDNS. This minimal configuration enables the kuadrant plugin and GeoIP features.

    $ oc --context $CTX_PRIMARY apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-kuadrant-config
      namespace: kuadrant-coredns
    data:
      Corefile: |
        ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}:53 {
            debug
            errors
            health {
                lameduck 5s
            }
            ready
            log
            geoip <GeoIP-database-name>.mmdb {
                edns-subnet
            }
            metadata
            kuadrant
        }
    EOF
    Note

    For production or exact GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the data.Corefile.geoip parameter.
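
As a hypothetical sketch, you could mount the database from a Secret into the CoreDNS deployment with a patch similar to the following; the Secret name geoip-db, the container name kuadrant-coredns, and the mount path are assumptions:

```yaml
# Hypothetical deployment patch fragment: mounts a MaxMind database
# from a Secret named "geoip-db" at /geoip in the CoreDNS container.
spec:
  template:
    spec:
      volumes:
      - name: geoip
        secret:
          secretName: geoip-db
      containers:
      - name: kuadrant-coredns  # assumed container name
        volumeMounts:
        - name: geoip
          mountPath: /geoip
          readOnly: true
```

With such a mount in place, the geoip directive in the Corefile would reference the database at /geoip/<GeoIP-database-name>.mmdb.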

  5. Update the CoreDNS deployment to use the new configuration by running the following command:

    $ oc --context $CTX_PRIMARY -n kuadrant-coredns patch deployment kuadrant-coredns --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'
  6. Watch the deployment rollout until it completes by running the following command:

    $ oc --context $CTX_PRIMARY -n kuadrant-coredns rollout status deployment/kuadrant-coredns

    Example output

    kuadrant-coredns successfully rolled out

  7. Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones this provider instance is authoritative for.

    $ oc create secret generic coredns-credentials \
      --namespace=kuadrant-system \
      --type=kuadrant.io/coredns \
      --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
      --context ${CTX_PRIMARY}

Verification

  • Check the status of the DNSRecord CR by running the following commands:

    $ oc get dnsrecord <name> -n <namespace> -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
    $ NS1=$(oc get svc kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ ROOT_HOST=$(oc get dnsrecord <name> -n <namespace> -o jsonpath='{.spec.rootHost}')
    $ dig @${NS1} ${ROOT_HOST}

    Expect the Ready condition to be True.

Troubleshooting

  • If the cause of a problem is unclear, view the logs for all CoreDNS pods by running the following command:

    $ oc logs -n kuadrant-coredns deployment/kuadrant-coredns
  • If the DNSRecord is not appearing in the zone, verify that the record has the zone label by running the following command:

    $ oc get dnsrecords.kuadrant.io -n dnstest -o jsonpath='{.items[*].metadata.labels}' | grep kuadrant.io/coredns-zone-name

    The output should include the zone name, for example kuadrant.io/coredns-zone-name: k.example.com.

    • If the output does not show the zone name, check that the DNS Operator is running by using the following command:

      $ oc get pods -n dns-operator-system
    • You can also check the DNS Operator logs by running the following command:

      $ oc logs -n dns-operator-system deployment/dns-operator-controller-manager
  • Two common issues are missing RBAC permissions and a missing GeoIP database:

    • Missing RBAC permissions: Check your ClusterRole and ClusterRoleBinding configurations.
    • Missing GeoIP database file: Ensure that the database file is accessible to the CoreDNS pod.

Next steps

  • Create DNSPolicy custom resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through CoreDNS.
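
A minimal, non-delegating DNS policy that references the CoreDNS provider secret might look like the following sketch; the Gateway name, the namespace, and the policy name are assumptions, and the example assumes the coredns-credentials secret is available to the policy:

```yaml
# Hypothetical DNSPolicy referencing the CoreDNS provider secret.
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: onprem-dnspolicy
  namespace: ingress-gateway   # assumed Gateway namespace
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external             # assumed Gateway name
  providerRefs:
  - name: coredns-credentials  # secret created in this procedure
```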

2.5. Using CoreDNS with multiple clusters

You can use CoreDNS as a DNS provider for Connectivity Link in an existing multi-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.

Prerequisites

  • Connectivity Link is installed on two separate OpenShift Container Platform clusters (primary and secondary).
  • OpenShift CLI (oc) is installed and configured for access to both clusters.
  • You have administrator privileges on both OpenShift Container Platform clusters.
  • Your OpenShift Container Platform clusters have support for the loadbalanced service type that allows UDP and TCP traffic on port 53, such as MetalLB.
  • You have access to configure your authoritative on-premise DNS server to delegate a subdomain.
  • Podman is installed.
  • jq is installed.

Procedure

  1. Set up the primary cluster. Set the following environment variables for your primary cluster context:

    $ export CTX_PRIMARY=<primary_cluster_context_name>
    $ export KUBECONFIG=~/.kube/config
    $ export PRIMARY_CLUSTER_NAME=<primary_cluster_name>
    $ export ONPREM_DOMAIN=<onprem-domain>
    $ export KUADRANT_SUBDOMAIN=kuadrant

    Replace <primary_cluster_context_name> with the context of the cluster that you are specifying as primary, for example primary, and <primary_cluster_name> with the name of that cluster. Adjust the KUBECONFIG path if necessary. For the ONPREM_DOMAIN variable value, use your actual root domain, for example example.local. The KUADRANT_SUBDOMAIN variable sets the subdomain to delegate.

  2. Extract the CoreDNS manifests from the dns-operator bundle by running the following commands:

    $ podman create --name bundle registry.redhat.io/rhcl-1/dns-operator-bundle:rhcl-1.3.0
    $ podman cp bundle:/coredns/manifests.yaml ./coredns-manifests.yaml
    $ podman rm bundle

  3. Apply the manifests to the cluster by running the following command:

    $ oc apply -f ./coredns-manifests.yaml
  4. Wait for the CoreDNS service to get an external IP address. You need the IP address to configure delegation on your authoritative on-premise DNS server. Retrieve and store the IP address by running the following command:

    $ export COREDNS_IP_PRIMARY=$(oc --context $CTX_PRIMARY -n kuadrant-coredns get service kuadrant-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo "CoreDNS Primary IP: ${COREDNS_IP_PRIMARY}"
  5. Create a ConfigMap to define the authoritative zone for CoreDNS on the primary cluster. This minimal configuration enables the kuadrant plugin and GeoIP features.

    $ oc --context $CTX_PRIMARY apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-kuadrant-config
      namespace: kuadrant-coredns
    data:
      Corefile: |
        ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}:53 {
            debug
            errors
            health {
                lameduck 5s
            }
            ready
            log
            geoip <GeoIP-database-name>.mmdb {
                edns-subnet
            }
            metadata
            kuadrant
        }
    EOF
    Note

    For production or exact GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the data.Corefile.geoip parameter.

  6. Update the CoreDNS deployment to use the new configuration:

    $ oc --context $CTX_PRIMARY -n kuadrant-coredns patch deployment kuadrant-coredns --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'
  7. Watch the deployment rollout until it completes by running the following command:

    $ oc --context $CTX_PRIMARY -n kuadrant-coredns rollout status deployment/kuadrant-coredns

    Example output

    kuadrant-coredns successfully rolled out

  8. Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones this provider instance is authoritative for.

    $ oc create secret generic coredns-credentials \
      --namespace=kuadrant-system \
      --type=kuadrant.io/coredns \
      --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
      --context ${CTX_PRIMARY}
  9. On your authoritative on-premise DNS server, configure delegation for the ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} subdomain to the external IP addresses of the CoreDNS services running on your primary and secondary clusters, $COREDNS_IP_PRIMARY and $COREDNS_IP_SECONDARY. The specific steps depend on your DNS server software, such as BIND or Windows DNS Server. You typically need to add name server (NS) records pointing the subdomain to the CoreDNS IP addresses.

    Example delegation

    ; Delegate kuadrant.example.local to CoreDNS instances
    $ORIGIN ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}.
    @       IN      SOA     ns1.${ONPREM_DOMAIN}. hostmaster.${ONPREM_DOMAIN}. (
                            2023102601 ; serial
                            7200       ; refresh (2 hours)
                            3600       ; retry (1 hour)
                            1209600    ; expire (2 weeks)
                            3600       ; minimum (1 hour)
                            )
            IN      NS      coredns-primary.${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}.
    
    coredns-primary   IN A ${COREDNS_IP_PRIMARY}

  10. Restart CoreDNS by running the following command:

    $ oc -n kuadrant-coredns rollout restart deployment kuadrant-coredns
    Note

    After configuring delegation, you can test that DNS resolution for the delegated subdomain works correctly by querying your authoritative DNS server for a record within the kuadrant subdomain. One of the CoreDNS instances is expected to answer the query.

Verification

  1. Launch a temporary pod for testing by running the following command:

    $ oc debug node/<node-name>

    Replace <node-name> with the node you are testing on.

  2. Add the transfer plugin to your Corefile by running the following command:

    $ oc patch cm kuadrant-coredns -n kuadrant-coredns --type merge \
       -p "$(kubectl get cm kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.data.Corefile}' | \
       sed 's/kuadrant/transfer {\n        to *\n    }\n    kuadrant/' | \
       jq -Rs '{data: {Corefile: .}}')"
  3. Verify zone delegation by running the following command:

    $ dig @${EDGE_NS} -k config/bind9/ddns.key -t AXFR example.com

    Example output

    example.com.            30      IN      SOA     example.com. root.example.com. 17 30 30 30 30
    example.com.            30      IN      NS      ns.example.com.
    k.example.com.          300     IN      NS      ns1.k.example.com.
    ns1.k.example.com.      300     IN      A       172.18.0.16
    ns.example.com.         30      IN      A       127.0.0.1
    example.com.            30      IN      SOA     example.com. root.example.com. 17 30 30 30 30

    In this example, k.example.com is the delegated zone and ns1.k.example.com is the name server for the delegated zone.

  4. Optional: Remove the transfer plugin from your Corefile by running the following command:

    $ oc patch cm kuadrant-coredns -n kuadrant-coredns --type merge \
       -p "$(kubectl get cm kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.data.Corefile}' | \
       sed '/transfer {/,/}/d' | \
       jq -Rs '{data: {Corefile: .}}')"
  5. Verify the start of authority (SOA) record for the delegated zone by running the following command:

    $ dig @${EDGE_NS} soa k.example.com

    Example output

    ;; ANSWER SECTION:
    k.example.com.          60      IN      SOA     ns1.k.example.com. hostmaster.k.example.com. 12345 7200 1800 86400 60

    The SOA record is expected to show the primary name server (NS) as confirmation that CoreDNS is responding authoritatively. In this example the primary NS is ns1.k.example.com.

Next steps

  • Create DNSPolicy resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through CoreDNS.

2.6. CoreDNS Corefile configuration reference

A Corefile is organized into server blocks that define how DNS queries are handled based on the port and zone. Plugin execution order is determined at build time, not by Corefile order, so you can list plugins in any order. When making configurations by using the DNS Operator, you can check the ConfigMap for the resulting server block.

Connectivity Link includes a minimal Corefile that you can adapt for your environment:

Minimal Corefile

Corefile: |
    . {
        health
        ready
    }

For a Corefile with configurations, see the following example:

Example configured Corefile

k.example.com {
    debug
    errors
    log
    health {
        lameduck 5s
    }
    ready
    geoip GeoLite2-City-demo.mmdb {
        edns-subnet
    }
    metadata
    transfer {
        to *
    }
    kuadrant
    prometheus 0.0.0.0:9153
}

Zone coordination

Each zone in the Corefile must match a zone listed in your CoreDNS provider secret’s ZONES field.

Required plugins

The geoip and metadata plugins are included by default with the Connectivity Link implementation of the CoreDNS Corefile.

Corefile updates

After you update your Corefile, you must restart the pods in the CoreDNS deployment. You can use the following command:

$ oc rollout restart deployment/kuadrant-coredns -n kuadrant-coredns

You can check the status of the rollout by running the following command:

$ oc rollout status deployment/kuadrant-coredns -n kuadrant-coredns --watch

2.6.1. Default enabled plugins in CoreDNS

The following plugins are enabled by default in the Connectivity Link CoreDNS implementation. If you want to add other plugins, you must ensure that they are compatible with your CoreDNS version and enable them yourself.

  • acl: Enforces access control policies on source IP addresses and prevents unauthorized access to DNS servers.
  • cache: Enables a front-end cache.
  • cancel: Cancels a request’s context after 5001 milliseconds.
  • debug: Disables automatic recovery when a crash happens so that a stack trace is generated.
  • errors: Enables error logging.
  • file: Serves zone data from an RFC 1035-style master file.
  • forward: Proxies DNS messages to upstream resolvers.
  • geoip: Looks up the client IP in a .mmdb (MaxMind database format) database and adds the associated GeoIP data to the request context.
  • header: Modifies the header of queries and responses.
  • health: Enables a health check endpoint.
  • hosts: Serves zone data from an /etc/hosts style file.
  • kuadrant: Serves zone data from Kuadrant DNSRecord custom resources. Uses logic from the CoreDNS file plugin to create a functioning DNS server.
  • local: Responds with a basic reply to local names in the zones localhost, 0.in-addr.arpa, 127.in-addr.arpa, and 255.in-addr.arpa, and to any query for localhost.<domain>.
  • log: Enables query logging to standard output. Logs are structured for aggregation by cluster logging solutions.
  • loop: Detects simple forwarding loops and halts the server.
  • metadata: Enables a metadata collector.
  • minimal: Minimizes the size of the DNS response message whenever possible.
  • nsid: Adds an identifier of this server to each reply.
  • prometheus: Enables Prometheus metrics. The default listens on localhost:9153. The metrics path is /metrics.
  • ready: Enables a readiness check HTTP endpoint.
  • reload: Allows automatic reload of a changed Corefile.
  • rewrite: Performs internal rewriting of queries and responses.
  • root: Specifies the root directory in which to find zone files.
  • secondary: Serves a zone retrieved from a primary server.
  • timeouts: Configures the server read, write, and idle timeouts for the TCP, TLS, DoH, and DoQ (idle only) servers.
  • tls: Configures the server certificates for the TLS, gRPC, and DoH servers.
  • transfer: Performs outgoing zone transfers for other plugins.
  • view: Defines the conditions that must be met for a DNS request to be routed to the server block.
  • whoami: Returns your resolver’s local IP address, port, and transport.

Tip

When using CoreDNS, if you do not need to keep all logs, you can configure the log plugin to report only errors and use the prometheus plugin to gather primary metrics instead. Prometheus metrics give you trends, for example how many queries failed, without storing every piece of traffic.
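
For example, an error-only logging server block might look like the following Corefile fragment; the zone name is illustrative:

```
k.example.com {
    log {
        class error
    }
    prometheus 0.0.0.0:9153
    metadata
    kuadrant
}
```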

2.7. Troubleshooting your CoreDNS deployment

You can troubleshoot your CoreDNS deployment by restarting CoreDNS and by checking the logs. Use the following commands as needed to investigate your specific errors:

Restart CoreDNS by using the following command:

$ oc -n kuadrant-coredns rollout restart deployment kuadrant-coredns

You can view CoreDNS logs by running the following command:

$ oc logs -f deployments/kuadrant-coredns -n kuadrant-coredns

You can get recent logs by running the following command:

$ oc logs --tail=100 deployments/kuadrant-coredns -n kuadrant-coredns

2.8. CoreDNS removal or migration

You can remove the CoreDNS integration by deleting the CoreDNS deployment and your DNS policies. To migrate to a different provider, delete the existing DNSPolicy CRs and re-create them with a reference to the new provider secret. No data is permanently locked into CoreDNS.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.