Chapter 4. Configuring OpenShift Serverless applications
4.1. Multi-container support for Serving
You can deploy a multi-container pod by using a single Knative service. This method is useful for separating application responsibilities into smaller, specialized parts.
4.1.1. Configuring a multi-container service
Multi-container support is enabled by default. You can create a multi-container pod by specifying multiple containers in the service.
Procedure
Modify your service to include additional containers. Only one container can handle requests, so specify ports for exactly one container. Here is an example configuration with two containers:

Multiple containers configuration

apiVersion: serving.knative.dev/v1
kind: Service
...
spec:
  template:
    spec:
      containers:
        - name: first-container
          image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
        - name: second-container
          image: gcr.io/knative-samples/helloworld-java
4.1.2. Probing a multi-container service
You can specify readiness and liveness probes for multiple containers. This feature is not enabled by default and you must configure it using the KnativeServing custom resource (CR).
Procedure
Configure multi-container probing for your service by enabling the multi-container-probing feature in the KnativeServing CR.

Multi-container probing configuration

...
spec:
  config:
    features:
      "multi-container-probing": enabled
...

The enabled value turns on the multi-container-probing feature.
Apply the updated KnativeServing CR:

$ oc apply -f <filename>

Modify your multi-container service to include the specified probes.
Multi-container probing
apiVersion: serving.knative.dev/v1
kind: Service
...
spec:
  template:
    spec:
      containers:
        - name: first-container
          image: ghcr.io/knative/helloworld-go:latest
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              port: 8080
        - name: second-container
          image: gcr.io/knative-samples/helloworld-java
          readinessProbe:
            httpGet:
              port: 8090
4.1.2.1. Additional resources
4.2. EmptyDir volumes
emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted.
4.2.1. Configuring the EmptyDir extension
The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML:
Example KnativeServing CR
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    features:
      kubernetes.podspec-volumes-emptydir: enabled
...
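With the extension enabled, a Knative service can request scratch space by mounting an emptyDir volume in its revision template. The following is a minimal sketch; the service name, volume name, and mount path are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service        # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
          volumeMounts:
            - name: scratch          # hypothetical volume name
              mountPath: /tmp/scratch
      volumes:
        - name: scratch
          emptyDir: {}
```

Remember that the data in the volume is removed together with the pod, so emptyDir is suitable only for temporary working files.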
4.3. Persistent Volume Claims for Serving
Some serverless applications require permanent data storage. By configuring different volume types, you can provide data storage for Knative services. Knative Serving supports mounting volume types such as secret, configMap, projected, and emptyDir.

You can configure persistent volume claims (PVCs) for your Knative services. Persistent volume types are implemented as plugins. To determine whether any persistent volume types are available, check the available or installed storage classes in your cluster. Persistent volumes are supported, but require a feature flag to be enabled.
The mounting of large volumes can lead to a considerable delay in the start time of the application.
4.3.1. Enabling PVC support
Procedure
To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML:

Enabling PVCs with write access

...
spec:
  config:
    features:
      "kubernetes.podspec-persistent-volume-claim": enabled
      "kubernetes.podspec-persistent-volume-write": enabled
...

- The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving.
- The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with write access.
To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration:
Note: Use a storage class that supports the access mode you are requesting. For example, you can use the ocs-storagecluster-cephfs storage class for the ReadWriteMany access mode. The ocs-storagecluster-cephfs storage class is supported and comes from Red Hat OpenShift Data Foundation.

PersistentVolumeClaim configuration

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pv-claim
  namespace: my-ns
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ocs-storagecluster-cephfs
  resources:
    requests:
      storage: 1Gi

In this case, to claim a PV with write access, modify your service as follows:
Knative service PVC configuration
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  namespace: my-ns
...
spec:
  template:
    spec:
      containers:
        ...
          volumeMounts:
            - mountPath: /data
              name: mydata
              readOnly: false
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: example-pv-claim
            readOnly: false

Note: To successfully use persistent storage in Knative services, you need additional configuration, such as the user permissions for the Knative container user.
4.4. Init containers
Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR).
Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently.
4.4.1. Enabling init containers
Prerequisites
- You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
Procedure
Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR:

Example KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    features:
      kubernetes.podspec-init-containers: enabled
...
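With the flag enabled, a service can declare init containers in its revision template. The following is a minimal sketch; the init container name, image, and command are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service        # hypothetical name
spec:
  template:
    spec:
      initContainers:
        - name: setup                # hypothetical init container
          image: registry.example.com/setup:latest
          command: ["sh", "-c", "echo preparing configuration"]
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
```

The init container runs to completion before the application container starts, so keep its work short to limit the impact on cold starts.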
4.5. Startup probes
Startup probes verify whether a service has started successfully, helping to reduce cold start times for containers with slow startup processes. Startup probes run only during the container’s initialization phase and do not execute periodically. If a startup probe fails, the container adheres to the defined restartPolicy.
4.5.1. Progress deadline
By default, services have a progress deadline that defines the time limit for a service to complete its initial startup. When using startup probes, ensure that the progress deadline is set to exceed the maximum time required by the startup probes. If the progress deadline is set too low, the startup probes might not finish before the deadline is reached, which can prevent the service from starting.
Consider increasing the progress deadline if you encounter any of these conditions in your deployment:
- The service image takes a long time to pull due to its size.
- The service takes a long time to become READY because of initial cache priming.
- The cluster relies on autoscaling to allocate resources for new pods.
4.5.2. Configuring startup probing
For OpenShift Serverless Serving, startup probes are not defined by default. You can define startup probes for your containers in your deployment configuration.
Procedure
Define startup probes for your service by modifying your deployment configuration. The following example shows a configuration with two containers:
Example of defined startup probes
apiVersion: serving.knative.dev/v1
kind: Service
# ...
spec:
  template:
    spec:
      containers:
        - name: first-container
          image: <image>
          ports:
            - containerPort: 8080
          # ...
          startupProbe:
            httpGet:
              port: 8080
              path: "/"
        - name: second-container
          image: <image>
          # ...
          startupProbe:
            httpGet:
              port: 8081
              path: "/"
4.5.3. Configuring the progress deadline
You can configure progress deadline settings to specify the maximum time allowed for your deployment to progress before the system reports a failure for the Knative Revision. This time limit can be specified in seconds or minutes.
To configure the progress deadline effectively, consider the following parameters:
- initialDelaySeconds
- failureThreshold
- periodSeconds
- timeoutSeconds
If the initial scale is not achieved within the specified time limit, the Knative Autoscaler component scales the revision to 0, and the Knative service enters a terminal Failed state.
By default, the progress deadline is set to 600 seconds. This value is specified as a Go time.Duration string and must be rounded to the nearest second.
Procedure
To configure the progress deadline setting, use an annotation in your deployment configuration.
Example of progress deadline set to 60 seconds
apiVersion: serving.knative.dev/v1
kind: Service
...
spec:
  template:
    metadata:
      annotations:
        serving.knative.dev/progress-deadline: "60s"
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
4.6. Resolving image tags to digests
If the Knative Serving controller has access to the container registry, Knative Serving resolves image tags to a digest when you create a revision of a service. This is known as tag-to-digest resolution, and helps to provide consistency for deployments.
4.6.1. Tag-to-digest resolution
To give the controller access to the container registry on OpenShift Container Platform, you must create a secret and then configure controller custom certificates. You can configure controller custom certificates by modifying the controller-custom-certs spec in the KnativeServing custom resource (CR). The secret must reside in the same namespace as the KnativeServing CR.
If a secret is not included in the KnativeServing CR, this setting defaults to using public key infrastructure (PKI). When using PKI, the cluster-wide certificates are automatically injected into the Knative Serving controller by using the config-service-sa config map. The OpenShift Serverless Operator populates the config-service-sa config map with cluster-wide certificates and mounts the config map as a volume to the controller.
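If you want to check which certificates were injected, you can inspect the config map directly. This is a read-only sketch that assumes the default knative-serving namespace:

```shell
$ oc -n knative-serving get configmap config-service-sa -o yaml
```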
4.6.1.1. Configuring tag-to-digest resolution by using a secret
If the controller-custom-certs spec uses the Secret type, the secret is mounted as a secret volume. Knative components consume the secret directly, assuming that the secret has the required certificates.
Prerequisites
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.
Procedure
Create a secret:
Example command
$ oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>

Configure the controller-custom-certs spec in the KnativeServing custom resource (CR) to use the Secret type:

Example KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  controller-custom-certs:
    name: custom-secret
    type: Secret
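As a sketch, you can verify that tag-to-digest resolution succeeded for a revision by reading the resolved digest from the revision status; the jsonpath below assumes the containerStatuses field that current Knative Serving versions populate:

```shell
$ oc get revision <revision_name> \
  -o jsonpath='{.status.containerStatuses[0].imageDigest}'
```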
4.7. Configuring deployment resources
In Knative Serving, the config-deployment config map contains settings that determine how Kubernetes Deployment resources are configured for Knative services. In OpenShift Serverless Serving, you can configure these settings in the deployment section of your KnativeServing custom resource (CR).
You can use the deployment section to configure the following:
- Tag resolution
- Runtime environments
- Progress deadlines
4.7.1. Skipping tag resolution
Skipping tag resolution in OpenShift Serverless Serving can speed up deployments by avoiding unnecessary queries to the container registry, reducing latency and dependency on registry availability.
You can configure Serving to skip tag resolution by modifying the registriesSkippingTagResolving setting in your KnativeServing custom resource (CR).
Procedure
In your KnativeServing CR, modify the registriesSkippingTagResolving setting with the list of registries for which tag resolution is skipped:

Example of configured tag resolution skipping

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    deployment:
      registriesSkippingTagResolving: "registry.example.com, another.registry.com"
4.7.2. Configuring selectable RuntimeClassName
You can configure OpenShift Serverless Serving to set a specific RuntimeClassName resource for Deployments by updating the runtime-class-name setting in your KnativeServing custom resource (CR).
This setting interacts with service labels, applying either the default RuntimeClassName or the one that matches the most labels associated with the service.
Procedure
In your KnativeServing CR, configure the runtime-class-name setting:

Example of configured runtime-class-name setting

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    deployment:
      runtime-class-name: |
        kata: {}
        gvisor:
          selector:
            my-label: selector
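Under this configuration, the runtime class whose selector matches the most service labels is applied. As a sketch, a service carrying the my-label: selector label from the example above would be scheduled with the gvisor runtime class; the service name is hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service        # hypothetical name
spec:
  template:
    metadata:
      labels:
        my-label: selector     # matches the gvisor selector above
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
```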
4.7.3. Progress deadline
By default, services have a progress deadline that defines the time limit for a service to complete its initial startup.
Consider increasing the progress deadline if you encounter any of these conditions in your deployment:
- The service image takes a long time to pull due to its size.
- The service takes a long time to become READY because of initial cache priming.
- The cluster relies on autoscaling to allocate resources for new pods.
If the initial scale is not achieved within the specified time limit, the Knative Autoscaler component scales the revision to 0, and the service enters a terminal Failed state.
4.7.3.1. Configuring the progress deadline
Configure progress deadline settings to set the maximum time allowed in seconds or minutes for deployment progress before the system reports a Knative Revision failure.
By default, the progress deadline is set to 600 seconds. This value is specified as a Go time.Duration string and must be rounded to the nearest second.
Procedure
Configure progress deadline by modifying your KnativeServing custom resource (CR).
In your KnativeServing CR, set the value of progressDeadline:

Example of progress deadline set to 60 seconds

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    deployment:
      progressDeadline: "60s"
4.8. Configuring Kourier
Kourier is a lightweight Kubernetes-native Ingress for Knative Serving. Kourier acts as a gateway for Knative, routing HTTP traffic to Knative services.
4.8.1. Accessing the current Envoy bootstrap configuration
The Envoy proxy component in Kourier handles inbound and outbound HTTP traffic for the Knative services. By default, Kourier contains an Envoy bootstrap configuration in the kourier-bootstrap configuration map in the knative-serving-ingress namespace.
Procedure
To get the current Envoy bootstrap configuration, run the following command:
Example command
$ oc describe cm kourier-bootstrap -n knative-serving-ingress

For example, with the default configuration, this command produces output that contains the following excerpts:
Example output
Name:         kourier-bootstrap
Namespace:    knative-serving-ingress
Labels:       app.kubernetes.io/component=net-kourier
              app.kubernetes.io/name=knative-serving
              app.kubernetes.io/version=release-v1.10
              networking.knative.dev/ingress-provider=kourier
              serving.knative.openshift.io/ownerName=knative-serving
              serving.knative.openshift.io/ownerNamespace=knative-serving
Annotations:  manifestival: new

Example Data output

dynamic_resources:
  ads_config:
    transport_api_version: V3
    api_type: GRPC
    rate_limit_settings: {}
    grpc_services:
      - envoy_grpc: {cluster_name: xds_cluster}
  cds_config:
    resource_api_version: V3
    ads: {}
  lds_config:
    resource_api_version: V3
    ads: {}
node:
  cluster: kourier-knative
  id: 3scale-kourier-gateway
static_resources:
  listeners:
    - name: stats_listener
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: stats_server
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  virtual_hosts:
                    - name: admin_interface
                      domains:
                        - "*"
                      routes:
                        - match:
                            safe_regex:
                              regex: '/(certs|stats(/prometheus)?|server_info|clusters|listeners|ready)?'
                            headers:
                              - name: ':method'
                                string_match:
                                  exact: GET
                          route:
                            cluster: service_stats
  clusters:
    - name: service_stats
      connect_timeout: 0.250s
      type: static
      load_assignment:
        cluster_name: service_stats
        endpoints:
          lb_endpoints:
            endpoint:
              address:
                pipe:
                  path: /tmp/envoy.admin
    - name: xds_cluster
      # This keepalive is recommended by envoy docs.
      # https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options:
              connection_keepalive:
                interval: 30s
                timeout: 5s
      connect_timeout: 1s
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          lb_endpoints:
            endpoint:
              address:
                socket_address:
                  address: "net-kourier-controller.knative-serving-ingress.svc.cluster.local."
                  port_value: 18000
      type: STRICT_DNS
admin:
  access_log:
    - name: envoy.access_loggers.stdout
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
  address:
    pipe:
      path: /tmp/envoy.admin
layered_runtime:
  layers:
    - name: static-layer
      static_layer:
        envoy.reloadable_features.override_request_timeout_by_gateway_timeout: false

Example BinaryData output

Events: <none>
4.8.2. Customizing kourier-bootstrap for Kourier gateways
The Envoy proxy component in Kourier handles inbound and outbound HTTP traffic for the Knative services. By default, Kourier contains an Envoy bootstrap configuration in the kourier-bootstrap configuration map in the knative-serving-ingress namespace. You can change this configuration map to a custom one.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
Procedure
Specify a custom bootstrapping configuration map by changing the spec.ingress.kourier.bootstrap-configmap field in the KnativeServing custom resource (CR):

Example KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    network:
      ingress-class: kourier.ingress.networking.knative.dev
  ingress:
    kourier:
      bootstrap-configmap: my-configmap
      enabled: true
# ...
4.8.3. Enabling administrator interface access
You can change the Envoy bootstrap configuration to enable access to the administrator interface.

This procedure assumes sufficient knowledge of Knative, as changing the Envoy bootstrap configuration might result in Knative failure. Red Hat does not support custom configurations that are not tested or shipped with the product.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
Procedure
To enable administrator interface access, locate this configuration in your bootstrapping configuration map:

pipe:
  path: /tmp/envoy.admin

Substitute it with the following configuration:

socket_address:
  address: 127.0.0.1
  port_value: 9901

This configuration enables access to the Envoy admin interface on the loopback address (127.0.0.1) and port 9901.

Apply the socket_address configuration in the service_stats cluster configuration and in the admin configuration.

The first is in the service_stats cluster configuration:

clusters:
  - name: service_stats
    connect_timeout: 0.250s
    type: static
    load_assignment:
      cluster_name: service_stats
      endpoints:
        lb_endpoints:
          endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 9901

The second is in the admin configuration:

admin:
  access_log:
    - name: envoy.access_loggers.stdout
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 9901
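To check that the interface is reachable after the gateway pods restart with the new bootstrap, you can port-forward to the admin port and query it. This is a sketch; the deployment name 3scale-kourier-gateway matches the node ID in the default bootstrap but might differ in your installation:

```shell
$ oc -n knative-serving-ingress port-forward deployment/3scale-kourier-gateway 9901:9901
$ curl http://127.0.0.1:9901/server_info
```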
4.9. Restrictive network policies
4.9.1. Clusters with restrictive network policies
If you are using a cluster that multiple users have access to, your cluster might use network policies to control which pods, services, and namespaces can communicate with each other over the network. If your cluster uses restrictive network policies, it is possible that Knative system pods are not able to access your Knative application. For example, if your namespace has the following network policy, which denies all requests, Knative system pods cannot access your Knative application:
Example NetworkPolicy object that denies all requests to the namespace
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: example-namespace
spec:
  podSelector:
  ingress: []
4.9.2. Enabling communication with Knative applications on a cluster with restrictive network policies
To allow access to your applications from Knative system pods, you must add a label to each of the Knative system namespaces, and then create a NetworkPolicy object in your application namespace that allows access to the namespace for other namespaces that have this label.
A network policy that denies requests to non-Knative services on your cluster still prevents access to these services. However, by allowing access from Knative system namespaces to your Knative application, you are allowing access to your Knative application from all namespaces in the cluster.
If you do not want to allow access to your Knative application from all namespaces on the cluster, you might want to use JSON Web Token authentication for Knative services instead. JSON Web Token authentication for Knative services requires Service Mesh.
Prerequisites
- Install the OpenShift CLI (oc).
- OpenShift Serverless Operator and Knative Serving are installed on your cluster.
Procedure
Add the knative.openshift.io/system-namespace=true label to each Knative system namespace that requires access to your application:

Label the knative-serving namespace:

$ oc label namespace knative-serving knative.openshift.io/system-namespace=true

Label the knative-serving-ingress namespace:

$ oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true

Label the knative-eventing namespace:

$ oc label namespace knative-eventing knative.openshift.io/system-namespace=true

Label the knative-kafka namespace:

$ oc label namespace knative-kafka knative.openshift.io/system-namespace=true
Create a NetworkPolicy object in your application namespace to allow access from namespaces with the knative.openshift.io/system-namespace label:

Example NetworkPolicy object

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <network_policy_name>
  namespace: <namespace>
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              knative.openshift.io/system-namespace: "true"
          podSelector: {}
  policyTypes:
    - Ingress
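Before applying the policy, you can confirm that the system namespaces carry the label. As a sketch:

```shell
$ oc get namespaces -l knative.openshift.io/system-namespace=true
```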
4.10. Configuring revision timeouts
You can configure timeout durations for revisions globally or individually to control the time spent on requests.
4.10.1. Configuring revision timeout
You can configure the default number of seconds for the revision timeout based on the request.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have cluster administrator permissions on OpenShift Container Platform, or cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
Procedure
Choose the appropriate method to configure the revision timeout:
To configure the revision timeout globally, set the revision-timeout-seconds field in the KnativeServing custom resource (CR):

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    defaults:
      revision-timeout-seconds: "300"

To configure the timeout per revision, set the timeoutSeconds field in your service definition:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  namespace: my-ns
spec:
  template:
    spec:
      timeoutSeconds: 300
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
4.10.2. Configuring maximum revision timeout
By setting the maximum revision timeout, you can ensure that no revision can exceed a specific limit.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have cluster administrator permissions on OpenShift Container Platform, or cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
Procedure
To configure the maximum revision timeout, set the max-revision-timeout-seconds field in the KnativeServing custom resource (CR):

Note: If this value is increased, the activator terminationGracePeriodSeconds should also be increased to prevent in-flight requests from being disrupted.

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    defaults:
      max-revision-timeout-seconds: "600"