Chapter 10. Tuning eventing configuration
10.1. Overriding Knative Eventing system deployment configurations
You can override the default configurations for specific deployments by modifying the workloads spec in the KnativeEventing custom resource (CR).
10.1.1. Overriding deployment configurations
Currently, overriding default configuration settings is supported for the eventing-controller, eventing-webhook, and imc-controller fields, and for the readiness and liveness fields for probes.
The replicas spec cannot override the number of replicas for deployments that use the Horizontal Pod Autoscaler (HPA), and does not work for the eventing-webhook deployment.
You can only override probes that are defined in the deployment by default.
In the following example, a KnativeEventing CR overrides the eventing-controller deployment so that:
- The eventing-controller readiness probe timeout is 10 seconds.
- The deployment specifies CPU and memory resource limits.
- The deployment runs with 3 replicas.
- The deployment includes the example-label: label label.
- The deployment includes the example-annotation: annotation annotation.
- The nodeSelector field selects nodes with the disktype: hdd label.
The following example shows the KnativeEventing CR:
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  workloads:
  - name: eventing-controller
    readinessProbes:
    - container: controller
      timeoutSeconds: 10
    resources:
    - container: eventing-controller
      requests:
        cpu: 300m
        memory: 100Mi
      limits:
        cpu: 1000m
        memory: 250Mi
    replicas: 3
    labels:
      example-label: label
    annotations:
      example-annotation: annotation
    nodeSelector:
      disktype: hdd
readinessProbes: You can use the readiness and liveness probe overrides to override all fields of a probe in a container of a deployment, as specified in the Kubernetes API, except for the fields related to the probe handler: exec, grpc, httpGet, and tcpSocket.
The KnativeEventing CR label and annotation settings override the deployment’s labels and annotations for both the deployment itself and the resulting pods.
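For example, a livenessProbes override follows the same shape as the readinessProbes override shown above. The following sketch assumes the default liveness probe is defined on the controller container of the eventing-controller deployment:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  workloads:
  - name: eventing-controller
    livenessProbes:
    - container: controller   # assumed container name, matching the readiness example
      timeoutSeconds: 10
```

Remember that you can only override probes that are already defined in the deployment by default.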
10.1.2. Modifying consumer group IDs and topic names
You can change templates for generating consumer group IDs and topic names used by your triggers, brokers, and channels.
Prerequisites
- You have cluster or dedicated administrator permissions on OpenShift Container Platform.
- You have installed the OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) on your OpenShift Container Platform cluster.
- You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
- To change templates for generating consumer group IDs and topic names used by your triggers, brokers, and channels, change the KnativeKafka resource:

  apiVersion: operator.serverless.openshift.io/v1alpha1
  kind: KnativeKafka
  metadata:
    name: knative-kafka
    namespace: knative-eventing
  # ...
  spec:
    config:
      config-kafka-features:
        triggers.consumergroup.template: <template>
        brokers.topic.template: <template>
        channels.topic.template: <template>
- triggers.consumergroup.template: The template for generating the consumer group ID used by your triggers. Use a valid Go text/template value. Defaults to "knative-trigger-{{ .Namespace }}-{{ .Name }}".
- brokers.topic.template: The template for generating Kafka topic names used by your brokers. Use a valid Go text/template value. Defaults to "knative-broker-{{ .Namespace }}-{{ .Name }}".
- channels.topic.template: The template for generating Kafka topic names used by your channels. Use a valid Go text/template value. Defaults to "messaging-kafka.{{ .Namespace }}.{{ .Name }}".
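To illustrate how these templates render, consider a hypothetical trigger; the name and namespace below are illustrative, not from this document:

```yaml
# Hypothetical Trigger used only to illustrate template rendering.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  namespace: my-namespace
# With the default template "knative-trigger-{{ .Namespace }}-{{ .Name }}",
# the consumer group ID for this trigger renders as:
#   knative-trigger-my-namespace-my-trigger
```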
The following example shows a template configuration:
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
# ...
spec:
  config:
    config-kafka-features:
      triggers.consumergroup.template: "knative-trigger-{{ .Namespace }}-{{ .Name }}-{{ .annotations.my-annotation }}"
      brokers.topic.template: "knative-broker-{{ .Namespace }}-{{ .Name }}-{{ .annotations.my-annotation }}"
      channels.topic.template: "messaging-kafka.{{ .Namespace }}.{{ .Name }}-{{ .annotations.my-annotation }}"
- Apply the KnativeKafka YAML file by running the following command:

  $ oc apply -f <knative_kafka_filename>
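Optionally, you can confirm that the change was applied by inspecting the live resource; this is standard oc usage rather than a step from this procedure:

```shell
$ oc get knativekafka knative-kafka -n knative-eventing -o yaml
```

The spec.config.config-kafka-features section of the output should contain your template values.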
10.2. High availability
High availability (HA) keeps Kubernetes APIs operational during disruptions. If an active controller fails, another controller takes over and continues processing requests.
OpenShift Serverless uses leader election for HA. After installation, the system runs multiple controller instances that compete for a shared leader election lock. The controller that holds the lock acts as the leader.
10.2.1. Configuring high availability replicas for Knative Eventing
By default, Knative Eventing runs the eventing-controller, eventing-webhook, imc-controller, imc-dispatcher, and mt-broker-controller components with high availability (HA). Each component runs with two replicas. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeEventing custom resource (CR).
For Knative Eventing, the mt-broker-filter and mt-broker-ingress deployments are not scaled by HA. If your environment requires more deployments, scale these components manually.
Prerequisites
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.
Procedure
- In the OpenShift Container Platform web console, navigate to OperatorHub → Installed Operators.
- Select the knative-eventing namespace.
- Click Knative Eventing in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Eventing tab.
- Click knative-eventing, then go to the YAML tab in the knative-eventing page.
- Change the number of replicas in the KnativeEventing CR, as shown in the following example:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeEventing
  metadata:
    name: knative-eventing
    namespace: knative-eventing
  spec:
    high-availability:
      replicas: 3

  You can also specify the number of replicas for a specific workload.
Note: Workload-specific configuration overrides the global setting for Knative Eventing.
The following example sets the number of replicas for the mt-broker-filter workload:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  high-availability:
    replicas: 3
  workloads:
  - name: mt-broker-filter
    replicas: 3

- Verify that the deployment respects the high availability limits by running the following command:

  $ oc get hpa -n knative-eventing

  You get an output similar to the following example:

  NAME                 REFERENCE                      TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
  broker-filter-hpa    Deployment/mt-broker-filter    1%/70%    3         12        3          112s
  broker-ingress-hpa   Deployment/mt-broker-ingress   1%/70%    3         12        3          112s
  eventing-webhook     Deployment/eventing-webhook    4%/100%   3         7         3          115s
10.2.2. Configuring high availability replicas for the Knative broker implementation for Apache Kafka
By default, the Knative broker implementation for Apache Kafka runs the kafka-controller and kafka-webhook-eventing components with high availability (HA). Each component runs with two replicas. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeKafka custom resource (CR).
Prerequisites
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You have installed the OpenShift Serverless Operator and Knative broker for Apache Kafka on your cluster.
Procedure
- In the OpenShift Container Platform web console, navigate to OperatorHub → Installed Operators.
- Select the knative-eventing namespace.
- Click Knative Kafka in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Kafka tab.
- Click knative-kafka, then go to the YAML tab in the knative-kafka page.
- Change the number of replicas in the KnativeKafka CR, as shown in the following example:

  apiVersion: operator.serverless.openshift.io/v1alpha1
  kind: KnativeKafka
  metadata:
    name: knative-kafka
    namespace: knative-eventing
  spec:
    high-availability:
      replicas: 3
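You can optionally verify the new replica counts with a standard oc command; this verification step is not part of this document's procedure:

```shell
$ oc get deployments kafka-controller kafka-webhook-eventing -n knative-eventing
```

After the Operator reconciles the change, the READY column should report 3/3 for both deployments.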
10.2.3. Overriding disruption budgets
A Pod Disruption Budget (PDB) is a standard feature of Kubernetes APIs that helps limit the disruption to an application when its pods need to be rescheduled for maintenance reasons.
Procedure
- Override the default PDB for a specific resource by modifying the minAvailable configuration value in the KnativeEventing custom resource (CR). The following example shows a PDB with a minAvailable setting of 70%:

  apiVersion: operator.knative.dev/v1beta1
  kind: KnativeEventing
  metadata:
    name: knative-eventing
    namespace: knative-eventing
  spec:
    podDisruptionBudgets:
    - name: eventing-webhook
      minAvailable: 70%

  Note: If you disable high availability, for example, by changing the high-availability.replicas value to 1, make sure you also update the corresponding PDB minAvailable value to 0. Otherwise, the pod disruption budget prevents automatic cluster or Operator updates.
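Following that note, a minimal sketch of a consistent single-replica configuration for the eventing-webhook might look like this:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  high-availability:
    replicas: 1             # high availability disabled
  podDisruptionBudgets:
  - name: eventing-webhook
    minAvailable: 0         # matching PDB value so updates are not blocked
```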