Chapter 5. External and Ingress routing
5.1. Routing overview
Knative leverages OpenShift Container Platform TLS termination to provide routing for Knative services. When a Knative service is created, an OpenShift Container Platform route is automatically created for the service. This route is managed by the OpenShift Serverless Operator. The OpenShift Container Platform route exposes the Knative service through the same domain as the OpenShift Container Platform cluster.
You can disable Operator control of OpenShift Container Platform routing so that you can configure a Knative route to directly use your TLS certificates instead.
Knative routes can also be used alongside the OpenShift Container Platform route to provide additional fine-grained routing capabilities, such as traffic splitting.
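For example, after you create a Knative service, you can list the route that the OpenShift Serverless Operator created for it in the knative-serving-ingress namespace. This is a minimal check; the route name is generated automatically:

$ oc get routes.route.openshift.io -n knative-serving-ingress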
5.2. Customizing labels and annotations
OpenShift Container Platform routes support the use of custom labels and annotations, which you can configure by modifying the metadata spec of a Knative service. Custom labels and annotations are propagated from the service to the Knative route, then to the Knative ingress, and finally to the OpenShift Container Platform route.
5.2.1. Customizing labels and annotations for OpenShift Container Platform routes
Prerequisites
- You must have the OpenShift Serverless Operator and Knative Serving installed on your OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
Procedure
Create a Knative service that contains the label or annotation that you want to propagate to the OpenShift Container Platform route:
To create a service by using YAML:
Example service created by using YAML
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: <service_name>
  labels:
    <label_name>: <label_value>
  annotations:
    <annotation_name>: <annotation_value>
...
To create a service by using the Knative (kn) CLI, enter:
Example service created by using a kn command
$ kn service create <service_name> \
  --image=<image> \
  --annotation <annotation_name>=<annotation_value> \
  --label <label_name>=<label_value>
Verify that the OpenShift Container Platform route has been created with the annotation or label that you added by inspecting the output from the following command:
Example command for verification
$ oc get routes.route.openshift.io \
  -l serving.knative.openshift.io/ingressName=<service_name> \
  -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \
  -n knative-serving-ingress -o yaml \
  | grep -e "<label_name>: \"<label_value>\"" -e "<annotation_name>: <annotation_value>"
5.3. Configuring routes for Knative services
If you want to configure a Knative service to use your TLS certificate on OpenShift Container Platform, you must disable the automatic creation of a route for the service by the OpenShift Serverless Operator and instead manually create a route for the service.
When you complete the following procedure, the default OpenShift Container Platform route in the knative-serving-ingress namespace is not created. However, the Knative route for the application is still created in this namespace.
5.3.1. Configuring OpenShift Container Platform routes for Knative services
Prerequisites
- The OpenShift Serverless Operator and Knative Serving component must be installed on your OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
Procedure
Create a Knative service that includes the serving.knative.openshift.io/disableRoute=true annotation:

Important
The serving.knative.openshift.io/disableRoute=true annotation instructs OpenShift Serverless to not automatically create a route for you. However, the service still shows a URL and reaches a status of Ready. This URL does not work externally until you create your own route with the same hostname as the hostname in the URL.

Create a Knative Service resource:
Example resource
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: <service_name>
  annotations:
    serving.knative.openshift.io/disableRoute: "true"
spec:
  template:
    spec:
      containers:
      - image: <image>
...

Apply the Service resource:
$ oc apply -f <filename>

Optional. Create a Knative service by using the kn service create command:
Example kn command
$ kn service create <service_name> \
  --image=gcr.io/knative-samples/helloworld-go \
  --annotation serving.knative.openshift.io/disableRoute=true
Verify that no OpenShift Container Platform route has been created for the service:
Example command
$ oc get routes.route.openshift.io \
  -l serving.knative.openshift.io/ingressName=$KSERVICE_NAME \
  -l serving.knative.openshift.io/ingressNamespace=$KSERVICE_NAMESPACE \
  -n knative-serving-ingress
You will see the following output:
No resources found in knative-serving-ingress namespace.
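Before you create the route, you can look up the hostname that the service reports in its URL and reuse it as the spec.host value of the route. The following command is a minimal sketch; the jsonpath expression prints the full URL, including the scheme, which you omit when you set the host:

$ oc get ksvc <service_name> -o jsonpath='{.status.url}'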
Create a Route resource in the knative-serving-ingress namespace:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 600s 1
  name: <route_name> 2
  namespace: knative-serving-ingress 3
spec:
  host: <service_host> 4
  port:
    targetPort: http2
  to:
    kind: Service
    name: kourier
    weight: 100
  tls:
    insecureEdgeTerminationPolicy: Allow
    termination: edge 5
    key: |-
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
    caCertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
  wildcardPolicy: None
1. The timeout value for the OpenShift Container Platform route. You must set the same value as the max-revision-timeout-seconds setting (600s by default). To check the current value, see the example command after this list.
2. The name of the OpenShift Container Platform route.
3. The namespace for the OpenShift Container Platform route. This must be knative-serving-ingress.
4. The hostname for external access. You can set this to <service_name>-<service_namespace>.<domain>.
5. The certificates that you want to use. Currently, only edge termination is supported.
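You can check the current max-revision-timeout-seconds value by inspecting the config-defaults config map in the knative-serving namespace. This is a minimal sketch; the key is present only if the default has been overridden, otherwise the 600 second default applies:

$ oc get configmap config-defaults -n knative-serving -o jsonpath='{.data.max-revision-timeout-seconds}'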
Apply the Route resource:
$ oc apply -f <filename>
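After the route is admitted, you can verify external access with a plain HTTPS request. This is a minimal check that assumes DNS for <service_host> resolves to your cluster router and that the CA certificate you used for the route is available locally as ca.crt:

$ curl --cacert ca.crt https://<service_host>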
5.4. Global HTTPS redirection
HTTPS redirection redirects incoming HTTP requests to HTTPS, so that these requests are served over an encrypted connection. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR).
5.4.1. HTTPS redirection global settings
Example KnativeServing CR that enables HTTPS redirection
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    network:
      httpProtocol: "redirected"
...
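To verify the redirection, you can send a plain HTTP request to a service URL and check the response for a 3xx status code and a Location header that points to the HTTPS URL. This is a minimal check; the exact hostname depends on your cluster domain:

$ curl -sI http://<service_name>-<namespace>.<domain>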
5.5. URL scheme for external routes
The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec.
5.5.1. Setting the URL scheme for external routes
Default spec
...
spec:
  config:
    network:
      default-external-scheme: "https"
...
You can override the default spec to use HTTP by modifying the default-external-scheme key:
HTTP override spec
...
spec:
  config:
    network:
      default-external-scheme: "http"
...
5.6. HTTPS redirection per service
You can enable or disable HTTPS redirection for a service by configuring the networking.knative.dev/http-protocol annotation.
5.6.1. Redirecting HTTPS for a service
The following example shows how you can use this annotation in a Knative Service YAML object:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example
  namespace: default
  annotations:
    networking.knative.dev/http-protocol: "redirected"
spec:
...
5.7. Cluster local availability
By default, Knative services are published to a public IP address. Being published to a public IP address means that Knative services are public applications, and have a publicly accessible URL.
Publicly accessible URLs are accessible from outside of the cluster. However, developers may need to build back-end services that are only accessible from inside the cluster, known as private services. Developers can label individual services in the cluster with the networking.knative.dev/visibility=cluster-local label to make them private.
For OpenShift Serverless 1.15.0 and newer versions, the serving.knative.dev/visibility label is no longer available. You must update existing services to use the networking.knative.dev/visibility label instead.
5.7.1. Setting cluster availability to cluster local
Prerequisites
- The OpenShift Serverless Operator and Knative Serving are installed on the cluster.
- You have created a Knative service.
Procedure
Set the visibility for your service by adding the networking.knative.dev/visibility=cluster-local label:

$ oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local
Verification
Check that the URL for your service is now in the format http://<service_name>.<namespace>.svc.cluster.local by entering the following command and reviewing the output:

$ oc get ksvc
Example output
NAME    URL                                       LATESTCREATED   LATESTREADY   READY   REASON
hello   http://hello.default.svc.cluster.local   hello-tx2g7     hello-tx2g7   True
5.7.2. Enabling TLS authentication for cluster local services
Cluster local services use the Kourier local gateway, kourier-internal. If you want to serve TLS traffic through the Kourier local gateway, you must configure your own server certificates in the local gateway.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have administrator permissions.
- You have installed the OpenShift (oc) CLI.
Procedure
Deploy server certificates in the knative-serving-ingress namespace:

$ export san="knative"

Note
Subject Alternative Name (SAN) validation is required so that these certificates can serve the request to <app_name>.<namespace>.svc.cluster.local.
.Generate a root key and certificate:
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -subj '/O=Example/CN=Example' \
    -keyout ca.key \
    -out ca.crt
Generate a server key that uses SAN validation:
$ openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key \
    -subj "/CN=Example/O=Example" \
    -addext "subjectAltName = DNS:$san"
Create server certificates:
$ openssl x509 -req -extfile <(printf "subjectAltName=DNS:$san") \
    -days 365 -in tls.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt
Configure a secret for the Kourier local gateway:

Deploy a secret in the knative-serving-ingress namespace from the certificates created by the previous steps:

$ oc create -n knative-serving-ingress secret tls server-certs \
    --key=tls.key \
    --cert=tls.crt --dry-run=client -o yaml | oc apply -f -
Update the KnativeServing custom resource (CR) spec so that the Kourier gateway uses the secret that you created in the previous step:
Example KnativeServing CR
...
spec:
  config:
    kourier:
      cluster-cert-secret: server-certs
...
The Kourier controller sets the certificate without restarting the service, so that you do not need to restart the pod.
You can access the Kourier internal service with TLS through port 443 by mounting and using the ca.crt from the client.
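For example, from a pod inside the cluster that has the ca.crt file available, a request to a cluster local service might look like the following. This is a minimal sketch; the hostname that you request must be covered by the SAN in the server certificate that you generated:

$ curl --cacert ca.crt https://<app_name>.<namespace>.svc.cluster.local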
5.8. Kourier Gateway service type
The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR).
Default spec
...
spec:
  ingress:
    kourier:
      service-type: ClusterIP
...
5.8.1. Setting the Kourier Gateway service type
You can override the default service type to use a load balancer service type instead by modifying the service-type spec:
LoadBalancer override spec
...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...
5.9. Using HTTP2 and gRPC
OpenShift Serverless supports only insecure or edge-terminated routes. Insecure or edge-terminated routes do not support HTTP2 on OpenShift Container Platform. These routes also do not support gRPC because gRPC is transported by HTTP2. If you use these protocols in your application, you must call the application by using the ingress gateway directly. To do this, you must find the ingress gateway's public address and the application's specific host.
5.9.1. Interacting with a serverless application using HTTP2 and gRPC
This method applies to OpenShift Container Platform 4.10 and later. For older versions, see the following section.
Prerequisites
- Install the OpenShift Serverless Operator and Knative Serving on your cluster.
- Install the OpenShift CLI (oc).
- Create a Knative service.
- Upgrade to OpenShift Container Platform 4.10 or later.
- Enable HTTP/2 on the OpenShift Ingress Controller.
Procedure
Add the serverless.openshift.io/default-enable-http2=true annotation to the KnativeServing custom resource:

$ oc annotate knativeserving <your_knative_CR> -n knative-serving serverless.openshift.io/default-enable-http2=true

After the annotation is added, you can verify that the appProtocol value of the Kourier service is h2c:

$ oc get svc -n knative-serving-ingress kourier -o jsonpath="{.spec.ports[0].appProtocol}"
Example output
h2c
Now you can use the gRPC framework over the HTTP/2 protocol for external traffic, for example:
import "google.golang.org/grpc" grpc.Dial( YOUR_URL, 1 grpc.WithTransportCredentials(insecure.NewCredentials())), 2 )
5.9.2. Interacting with a serverless application using HTTP2 and gRPC in OpenShift Container Platform 4.9 and older
This method requires exposing the Kourier Gateway by using the LoadBalancer service type. You can configure this by adding the following YAML to your KnativeServing custom resource (CR):
...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...
Prerequisites
- Install the OpenShift Serverless Operator and Knative Serving on your cluster.
- Install the OpenShift CLI (oc).
- Create a Knative service.
Procedure
- Find the application host. See the instructions in Verifying your serverless application deployment.
Find the ingress gateway’s public address:
$ oc -n knative-serving-ingress get svc kourier
Example output
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)                      AGE
kourier   LoadBalancer   172.30.51.103   a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com   80:31380/TCP,443:31390/TCP   67m
The public address is surfaced in the EXTERNAL-IP field, and in this case is a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com.

Manually set the host header of your HTTP request to the application's host, but direct the request itself against the public address of the ingress gateway:

$ curl -H "Host: hello-default.example.com" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com
Example output
Hello Serverless!
You can also make a direct gRPC request against the ingress gateway:
import "google.golang.org/grpc" grpc.Dial( "a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80", grpc.WithAuthority("hello-default.example.com:80"), grpc.WithInsecure(), )
Note
Ensure that you append the respective port, 80 by default, to both hosts as shown in the previous example.
5.10. Using Serving with OpenShift ingress sharding
You can use Knative Serving with OpenShift ingress sharding to split ingress traffic based on domains. This allows you to manage and route network traffic to different parts of a cluster more efficiently.
Even with OpenShift ingress sharding in place, OpenShift Serverless traffic is still routed through a single Knative Ingress Gateway and the activator component in the knative-serving project.
For more information about isolating the network traffic, see Using Service Mesh to isolate network traffic with OpenShift Serverless.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
5.10.1. Configuring OpenShift ingress shards
Before configuring Knative Serving, you must configure OpenShift ingress shards.
Procedure
Use a label selector in the IngressController CR to configure OpenShift Serverless to match specific ingress shards with different domains:
Example IngressController CR
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: ingress-dev
  namespace: openshift-ingress-operator
spec:
  routeSelector:
    matchLabels:
      router: dev
  domain: "dev.serverless.cluster.example.com"
# ...
---
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: ingress-prod
  namespace: openshift-ingress-operator
spec:
  routeSelector:
    matchLabels:
      router: prod
  domain: "prod.serverless.cluster.example.com"
# ...
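You can confirm that both ingress shards exist before you configure Knative Serving. This is a minimal check:

$ oc get ingresscontroller -n openshift-ingress-operator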
5.10.2. Configuring custom domains in the KnativeServing CR
After configuring OpenShift ingress shards, you must configure Knative Serving to match them.
Procedure
In the KnativeServing CR, configure Serving to use the same domains and labels as your ingress shards by adding the spec.config.domain field:
Example KnativeServing CR
spec:
  config:
    domain: 1
      dev.serverless.cluster.example.com: |
        selector:
          router: dev
      prod.serverless.cluster.example.com: |
        selector:
          router: prod
# ...
1. These values need to match the values in the ingress shard configuration.
5.10.3. Targeting a specific ingress shard in the Knative Service
After configuring ingress sharding and Knative Serving, you can target a specific ingress shard in your Knative Service resources using a label.
Procedure
In your Service CR, add the label that matches the routeSelector of a specific shard:
Example Service CR
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-dev
  labels:
    router: dev
spec:
  template:
    spec:
      containers:
        - image: docker.io/openshift/hello-openshift
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-prod
  labels:
    router: prod
spec:
  template:
    spec:
      containers:
        - image: docker.io/openshift/hello-openshift
# ...
5.10.4. Verifying Serving with OpenShift ingress sharding configuration
After configuring ingress sharding, Knative Serving, and your service, you can verify that your service uses the correct route and the selected ingress shard.
Procedure
Print information about the services in the cluster by running the following command:
$ oc get ksvc
Example output
NAME         URL                                                               LATESTCREATED      LATESTREADY        READY   REASON
hello-dev    https://hello-dev-default.dev.serverless.cluster.example.com     hello-dev-00001    hello-dev-00001    True
hello-prod   https://hello-prod-default.prod.serverless.cluster.example.com   hello-prod-00001   hello-prod-00001   True
Verify that your service uses the correct route and the selected ingress shard by running the following command:
$ oc get route -n knative-serving-ingress -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.host}{" "}{@.status.ingress[*].routerName}{"\n"}{end}'
Example output
route-19e6628b-77af-4da0-9b4c-1224934b2250-323461616533 hello-prod-default.prod.serverless.cluster.example.com ingress-prod
route-cb5085d9-b7da-4741-9a56-96c88c6adaaa-373065343266 hello-dev-default.dev.serverless.cluster.example.com ingress-dev