Chapter 13. Using AMQ Streams Operators
Use the AMQ Streams operators to manage your Kafka cluster, and Kafka topics and users.
13.1. Watching namespaces with AMQ Streams operators
Operators watch and manage AMQ Streams resources in namespaces. The Cluster Operator can watch a single namespace, multiple namespaces, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator can watch a single namespace.
- The Cluster Operator watches for `Kafka` resources
- The Topic Operator watches for `KafkaTopic` resources
- The User Operator watches for `KafkaUser` resources
The Topic Operator and the User Operator can each watch only a single Kafka cluster in a namespace, and can be connected to only a single Kafka cluster.
If multiple Topic Operators watch the same namespace, name collisions and topic deletion can occur. This is because each Kafka cluster uses Kafka topics that have the same name (such as __consumer_offsets). Make sure that only one Topic Operator watches a given namespace.
When using multiple User Operators with a single namespace, a user with a given username can exist in more than one Kafka cluster.
If you deploy the Topic Operator and User Operator using the Cluster Operator, they watch the Kafka cluster deployed by the Cluster Operator by default. You can also specify a namespace using watchedNamespace in the operator configuration.
For a standalone deployment of each operator, you specify a namespace and connection to the Kafka cluster to watch in the configuration.
13.2. Using the Cluster Operator
Use the Cluster Operator to deploy a Kafka cluster and other Kafka components.
13.2.1. Role-Based Access Control (RBAC) resources
The Cluster Operator creates and manages RBAC resources for AMQ Streams components that need access to OpenShift resources.
For the Cluster Operator to function, it needs permission within the OpenShift cluster to interact with Kafka resources, such as Kafka and KafkaConnect, as well as managed resources like ConfigMap, Pod, Deployment, StatefulSet, and Service.
Permission is specified through OpenShift role-based access control (RBAC) resources:
- `ServiceAccount`
- `Role` and `ClusterRole`
- `RoleBinding` and `ClusterRoleBinding`
13.2.1.1. Delegating privileges to AMQ Streams components
The Cluster Operator runs under a service account called strimzi-cluster-operator. It is assigned cluster roles that give it permission to create the RBAC resources for AMQ Streams components. Role bindings associate the cluster roles with the service account.
OpenShift prevents components operating under one ServiceAccount from granting another ServiceAccount privileges that the granting ServiceAccount does not have. Because the Cluster Operator creates the RoleBinding and ClusterRoleBinding RBAC resources needed by the resources it manages, it requires a role that gives it the same privileges.
The following tables describe the RBAC resources created by the Cluster Operator.
`ServiceAccount` resources

| Name | Used by |
|---|---|
| `<cluster_name>-kafka` | Kafka broker pods |
| `<cluster_name>-zookeeper` | ZooKeeper pods |
| `<cluster_name>-cluster-connect` | Kafka Connect pods |
| `<cluster_name>-mirror-maker` | MirrorMaker pods |
| `<cluster_name>-mirrormaker2` | MirrorMaker 2 pods |
| `<cluster_name>-bridge` | Kafka Bridge pods |
| `<cluster_name>-entity-operator` | Entity Operator |
`ClusterRole` resources

| Name | Used by |
|---|---|
| `strimzi-cluster-operator-namespaced` | Cluster Operator |
| `strimzi-cluster-operator-global` | Cluster Operator |
| `strimzi-cluster-operator-leader-election` | Cluster Operator |
| `strimzi-kafka-broker` | Cluster Operator, rack feature (when used) |
| `strimzi-entity-operator` | Cluster Operator, Topic Operator, User Operator |
| `strimzi-kafka-client` | Cluster Operator, Kafka clients for rack awareness |
`ClusterRoleBinding` resources

| Name | Used by |
|---|---|
| `strimzi-cluster-operator` | Cluster Operator |
| `strimzi-cluster-operator-kafka-broker-delegation` | Cluster Operator, Kafka brokers for rack awareness |
| `strimzi-cluster-operator-kafka-client-delegation` | Cluster Operator, Kafka clients for rack awareness |
`RoleBinding` resources

| Name | Used by |
|---|---|
| `strimzi-cluster-operator` | Cluster Operator |
| `strimzi-cluster-operator-kafka-broker-delegation` | Cluster Operator, Kafka brokers for rack awareness |
13.2.1.2. Running the Cluster Operator using a ServiceAccount
The Cluster Operator is best run using a ServiceAccount:
Example ServiceAccount for the Cluster Operator
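The example itself was not reproduced above. A minimal sketch consistent with the standard Strimzi installation files (the `app: strimzi` label is the usual convention) looks like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
```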
The Deployment of the operator then needs to specify this in its spec.template.spec.serviceAccountName:
Partial example of Deployment for the Cluster Operator
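The partial example was not reproduced above. A sketch of the relevant part of the `Deployment`, with details elided, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
spec:
  replicas: 1
  selector:
    matchLabels:
      name: strimzi-cluster-operator
  template:
    metadata:
      labels:
        name: strimzi-cluster-operator
    spec:
      # the service account created in the previous example
      serviceAccountName: strimzi-cluster-operator
      # ...
```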
Note that strimzi-cluster-operator is specified as the serviceAccountName.
13.2.1.3. ClusterRole resources
The Cluster Operator uses ClusterRole resources to provide the necessary access to resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the cluster roles.
Cluster administrator rights are only needed to create the ClusterRole resources. The Cluster Operator itself does not run under a cluster admin account.
ClusterRole resources follow the principle of least privilege and contain only the privileges needed by the Cluster Operator to operate the clusters of Kafka components. The first set of assigned privileges allows the Cluster Operator to manage OpenShift resources such as StatefulSet, Deployment, Pod, and ConfigMap.
All cluster roles are required by the Cluster Operator in order to delegate privileges.
The Cluster Operator uses the strimzi-cluster-operator-namespaced and strimzi-cluster-operator-global cluster roles to grant permissions at the level of namespace-scoped and cluster-scoped resources, respectively.
ClusterRole with namespaced resources for the Cluster Operator
ClusterRole with cluster-scoped resources for the Cluster Operator
The strimzi-cluster-operator-leader-election cluster role represents the permissions needed for the leader election.
ClusterRole with leader election permissions
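The example itself was not reproduced above. A sketch based on the standard Strimzi installation files, where the lease name defaults to `strimzi-cluster-operator`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
rules:
  # the operator must be able to create the Lease initially
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
  # afterwards it only needs access to its own Lease
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    resourceNames:
      - strimzi-cluster-operator
    verbs:
      - get
      - list
      - watch
      - delete
      - patch
      - update
```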
The strimzi-kafka-broker cluster role represents the access needed by the init container in Kafka pods that use rack awareness.
A role binding named strimzi-<cluster_name>-kafka-init grants the <cluster_name>-kafka service account access to nodes within a cluster using the strimzi-kafka-broker role. If the rack feature is not used and the cluster is not exposed through nodeport, no binding is created.
ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka broker pods
The strimzi-entity-operator cluster role represents the access needed by the Topic Operator and User Operator.
The Topic Operator produces OpenShift events with status information. The <cluster_name>-entity-operator service account is bound to the strimzi-entity-operator cluster role through the strimzi-entity-operator role binding, which grants this access.
ClusterRole for the Cluster Operator allowing it to delegate access to events to the Topic and User Operators
The strimzi-kafka-client cluster role represents the access needed by Kafka clients that use rack awareness.
ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka client-based pods
13.2.1.4. ClusterRoleBinding resources
The Cluster Operator uses ClusterRoleBinding and RoleBinding resources to associate its ClusterRole with its ServiceAccount. Cluster role bindings are required for cluster roles containing cluster-scoped resources.
Example ClusterRoleBinding for the Cluster Operator
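The example itself was not reproduced above. A sketch of such a binding, assuming the operator is deployed in a namespace named `myproject` (a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
roleRef:
  kind: ClusterRole
  name: strimzi-cluster-operator-global
  apiGroup: rbac.authorization.k8s.io
```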
Cluster role bindings are also needed for the cluster roles used in delegating privileges:
Example ClusterRoleBinding for the Cluster Operator and Kafka broker rack awareness
Example ClusterRoleBinding for the Cluster Operator and Kafka client rack awareness
Cluster roles containing only namespaced resources are bound using role bindings only.
Example RoleBinding for the Cluster Operator
Example RoleBinding for the Cluster Operator and Kafka broker rack awareness
13.2.2. ConfigMap for Cluster Operator logging
Cluster Operator logging is configured through a ConfigMap named strimzi-cluster-operator.
A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml. You configure Cluster Operator logging by changing the data field log4j2.properties in this ConfigMap.
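As a sketch, the `log4j2.properties` data field typically looks something like the following; the exact appenders and pattern vary by release, so treat this as illustrative only:

```properties
name = COConfig
# reload interval for picking up logging changes, in seconds
monitorInterval = 30

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

rootLogger.level = INFO
rootLogger.appenderRef.console.ref = STDOUT
```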
To update the logging configuration, edit the 050-ConfigMap-strimzi-cluster-operator.yaml file and then apply the changes:

```shell
oc apply -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
```
Alternatively, edit the ConfigMap directly:
```shell
oc edit configmap strimzi-cluster-operator
```
To change the frequency of the reload interval, set a time in seconds in the monitorInterval option in the created ConfigMap.
If the ConfigMap is missing when the Cluster Operator is deployed, the default logging values are used.
If the ConfigMap is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap to load a new logging configuration.
Do not remove the monitorInterval option from the ConfigMap.
13.2.3. Configuring the Cluster Operator with environment variables
You can configure the Cluster Operator using environment variables. The supported environment variables are listed here.
The environment variables are specified for the Cluster Operator container in its Deployment configuration file (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml).
`STRIMZI_NAMESPACE`
A comma-separated list of namespaces that the operator operates in. When not set, set to an empty string, or set to `*`, the Cluster Operator operates in all namespaces. The Cluster Operator deployment might use the downward API to set this automatically to the namespace the Cluster Operator is deployed in.

Example configuration for Cluster Operator namespaces

```yaml
env:
  - name: STRIMZI_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```

`STRIMZI_FULL_RECONCILIATION_INTERVAL_MS`
Optional, default 120000 ms. The interval between periodic reconciliations, in milliseconds.

`STRIMZI_OPERATION_TIMEOUT_MS`
Optional, default 300000 ms. The timeout for internal operations, in milliseconds. Increase this value when using AMQ Streams on clusters where regular OpenShift operations take longer than usual (because of slow downloading of Docker images, for example).

`STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS`
Optional, default 10000 ms. The session timeout for the Cluster Operator's ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues. There is a maximum allowed session time set on the ZooKeeper server side via the `maxSessionTimeout` config. By default, the maximum session timeout value is 20 times the default `tickTime` of 2000 ms, that is, 40000 ms. If you require a higher timeout, change the `maxSessionTimeout` ZooKeeper server configuration value.

`STRIMZI_OPERATIONS_THREAD_POOL_SIZE`
Optional, default 10. The worker thread pool size, which is used for various asynchronous and blocking operations that are run by the Cluster Operator.

`STRIMZI_OPERATOR_NAME`
Optional, defaults to the pod's hostname. The operator name identifies the AMQ Streams instance when emitting OpenShift events.

`STRIMZI_OPERATOR_NAMESPACE`
The name of the namespace where the Cluster Operator is running. Do not configure this variable manually. Use the downward API.

```yaml
env:
  - name: STRIMZI_OPERATOR_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```

`STRIMZI_OPERATOR_NAMESPACE_LABELS`
Optional. The labels of the namespace where the AMQ Streams Cluster Operator is running. Use namespace labels to configure the namespace selector in network policies. Network policies allow the AMQ Streams Cluster Operator access only to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the Cluster Operator from any namespace in the OpenShift cluster.

```yaml
env:
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
```

`STRIMZI_LABELS_EXCLUSION_PATTERN`
Optional, default regex pattern is `^app.kubernetes.io/(?!part-of).*`. The regex exclusion pattern used to filter label propagation from the main custom resource to its subresources. The labels exclusion filter is not applied to labels in template sections such as `spec.kafka.template.pod.metadata.labels`.

```yaml
env:
  - name: STRIMZI_LABELS_EXCLUSION_PATTERN
    value: "^key1.*"
```

`STRIMZI_CUSTOM_{COMPONENT_NAME}_LABELS`
Optional. One or more custom labels to apply to all the pods created by the `{COMPONENT_NAME}` custom resource. The Cluster Operator labels the pods when the custom resource is created or is next reconciled.

Labels can be applied to the following components:
- `KAFKA`
- `KAFKA_CONNECT`
- `KAFKA_CONNECT_BUILD`
- `ZOOKEEPER`
- `ENTITY_OPERATOR`
- `KAFKA_MIRROR_MAKER2`
- `KAFKA_MIRROR_MAKER`
- `CRUISE_CONTROL`
- `KAFKA_BRIDGE`
- `KAFKA_EXPORTER`
`STRIMZI_CUSTOM_RESOURCE_SELECTOR`
Optional. The label selector to filter the custom resources handled by the Cluster Operator. The operator will operate only on those custom resources that have the specified labels set. Resources without these labels are not seen by the operator. The label selector applies to `Kafka`, `KafkaConnect`, `KafkaBridge`, `KafkaMirrorMaker`, and `KafkaMirrorMaker2` resources. `KafkaRebalance` and `KafkaConnector` resources are operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels.

```yaml
env:
  - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR
    value: label1=value1,label2=value2
```

`STRIMZI_KAFKA_IMAGES`
Required. The mapping from the Kafka version to the corresponding Docker image containing a Kafka broker for that version. The required syntax is whitespace or comma-separated `<version>=<image>` pairs. For example, `3.3.1=registry.redhat.io/amq-streams/kafka-33-rhel8:2.4.0, 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0`. This is used when a `Kafka.spec.kafka.version` property is specified but not the `Kafka.spec.kafka.image` in the `Kafka` resource.

`STRIMZI_DEFAULT_KAFKA_INIT_IMAGE`
Optional, default `registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.4.0`. The image name to use as the default for the init container if no image is specified as the `kafka-init-image` in the `Kafka` resource. The init container is started before the broker for initial configuration work, such as rack support.

`STRIMZI_KAFKA_CONNECT_IMAGES`
Required. The mapping from the Kafka version to the corresponding Docker image of Kafka Connect for that version. The required syntax is whitespace or comma-separated `<version>=<image>` pairs. For example, `3.3.1=registry.redhat.io/amq-streams/kafka-33-rhel8:2.4.0, 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0`. This is used when a `KafkaConnect.spec.version` property is specified but not the `KafkaConnect.spec.image`.

`STRIMZI_KAFKA_MIRROR_MAKER_IMAGES`
Required. The mapping from the Kafka version to the corresponding Docker image of MirrorMaker for that version. The required syntax is whitespace or comma-separated `<version>=<image>` pairs. For example, `3.3.1=registry.redhat.io/amq-streams/kafka-33-rhel8:2.4.0, 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0`. This is used when a `KafkaMirrorMaker.spec.version` property is specified but not the `KafkaMirrorMaker.spec.image`.

`STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE`
Optional, default `registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.4.0`. The image name to use as the default when deploying the Topic Operator if no image is specified as the `Kafka.spec.entityOperator.topicOperator.image` in the `Kafka` resource.

`STRIMZI_DEFAULT_USER_OPERATOR_IMAGE`
Optional, default `registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.4.0`. The image name to use as the default when deploying the User Operator if no image is specified as the `Kafka.spec.entityOperator.userOperator.image` in the `Kafka` resource.

`STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE`
Optional, default `registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0`. The image name to use as the default when deploying the sidecar container for the Entity Operator if no image is specified as the `Kafka.spec.entityOperator.tlsSidecar.image` in the `Kafka` resource. The sidecar provides TLS support.

`STRIMZI_IMAGE_PULL_POLICY`
Optional. The `ImagePullPolicy` that is applied to containers in all pods managed by the Cluster Operator. The valid values are `Always`, `IfNotPresent`, and `Never`. If not specified, the OpenShift defaults are used. Changing the policy results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

`STRIMZI_IMAGE_PULL_SECRETS`
Optional. A comma-separated list of `Secret` names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are specified in the `imagePullSecrets` property for all pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

`STRIMZI_KUBERNETES_VERSION`
Optional. Overrides the OpenShift version information detected from the API server.
Example configuration for OpenShift version override
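The example itself was not reproduced above. A minimal sketch of such an override (the version values are illustrative):

```yaml
env:
  - name: STRIMZI_KUBERNETES_VERSION
    value: |
      major=1
      minor=16
```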
`KUBERNETES_SERVICE_DNS_DOMAIN`
Optional. Overrides the default OpenShift DNS domain name suffix.
By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix `cluster.local`.

For example, for broker kafka-0:

```
<cluster-name>-kafka-0.<cluster-name>-kafka-brokers.<namespace>.svc.cluster.local
```

The DNS domain name is added to the Kafka broker certificates used for hostname verification.

If you are using a different DNS domain name suffix in your cluster, change the `KUBERNETES_SERVICE_DNS_DOMAIN` environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers.

`STRIMZI_CONNECT_BUILD_TIMEOUT_MS`
Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectors, in milliseconds. Consider increasing this value when using AMQ Streams to build container images containing many connectors or using a slow container registry.
`STRIMZI_NETWORK_POLICY_GENERATION`
Optional, default `true`. Specifies whether network policies are generated for resources. Network policies allow connections between Kafka components.

Set this environment variable to `false` to disable network policy generation. You might do this, for example, if you want to use custom network policies. Custom network policies allow more control over maintaining the connections between components.

`STRIMZI_DNS_CACHE_TTL`
Optional, default `30`. Number of seconds to cache successful name lookups in the local DNS resolver. Any negative value means cache forever. Zero means do not cache, which can be useful for avoiding connection errors due to long caching policies being applied.

`STRIMZI_POD_SET_RECONCILIATION_ONLY`
Optional, default `false`. When set to `true`, the Cluster Operator reconciles only the `StrimziPodSet` resources and any changes to the other custom resources (`Kafka`, `KafkaConnect`, and so on) are ignored. This mode is useful for ensuring that your pods are recreated if needed, but no other changes happen to the clusters.

`STRIMZI_FEATURE_GATES`
Optional. Enables or disables the features and functionality controlled by feature gates.

`STRIMZI_POD_SECURITY_PROVIDER_CLASS`
Optional. Configuration for the pluggable `PodSecurityProvider` class, which can be used to provide the security context configuration for Pods and containers.
13.2.3.1. Leader election environment variables
Use leader election environment variables when running additional Cluster Operator replicas. You might run additional replicas to safeguard against disruption caused by major failure.
`STRIMZI_LEADER_ELECTION_ENABLED`
Optional, disabled (`false`) by default. Enables or disables leader election, which allows additional Cluster Operator replicas to run on standby.

Leader election is disabled by default. It is only enabled when applying this environment variable on installation.
`STRIMZI_LEADER_ELECTION_LEASE_NAME`
Required when leader election is enabled. The name of the OpenShift `Lease` resource that is used for the leader election.

`STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE`
Required when leader election is enabled. The namespace where the OpenShift `Lease` resource used for leader election is created. You can use the downward API to configure it to the namespace where the Cluster Operator is deployed.

```yaml
env:
  - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```

`STRIMZI_LEADER_ELECTION_IDENTITY`
Required when leader election is enabled. Configures the identity of a given Cluster Operator instance used during the leader election. The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed.

```yaml
env:
  - name: STRIMZI_LEADER_ELECTION_IDENTITY
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```

`STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS`
Optional, default 15000 ms. Specifies the duration the acquired lease is valid.
`STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS`
Optional, default 10000 ms. Specifies the period the leader should try to maintain leadership.

`STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS`
Optional, default 2000 ms. Specifies the frequency of updates to the lease lock by the leader.
13.2.3.2. Restricting Cluster Operator access with network policy
Use the STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels.
The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required and allowed by AMQ Streams.
If the Cluster Operator is running in a separate namespace from the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. By adding namespace labels, access to the Cluster Operator is restricted to the specified namespaces.
Network policy configured for the Cluster Operator deployment
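The example itself was not reproduced above. A sketch of the relevant Deployment env entry (the label values are placeholders):

```yaml
# ...
env:
  # restrict network policy access to namespaces with these labels
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
# ...
```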
13.2.3.3. Setting the time interval for periodic reconciliation
Use the STRIMZI_FULL_RECONCILIATION_INTERVAL_MS variable to set the time interval for periodic reconciliations.
The Cluster Operator reacts to all notifications about applicable cluster resources received from the OpenShift cluster. If the operator is not running, or if a notification is not received for any reason, resources get out of sync with the state of the running OpenShift cluster. To handle failovers properly, the Cluster Operator executes a periodic reconciliation process to compare the state of the resources with the current cluster deployments and ensure a consistent state across all of them.
13.2.4. Configuring the Cluster Operator with default proxy settings
If you are running a Kafka cluster behind an HTTP proxy, you can still pass data in and out of the cluster. For example, you can run Kafka Connect with connectors that push and pull data from outside the proxy. Or you can use a proxy to connect with an authorization server.
Configure the Cluster Operator deployment to specify the proxy environment variables. The Cluster Operator accepts standard proxy configuration (HTTP_PROXY, HTTPS_PROXY and NO_PROXY) as environment variables. The proxy settings are applied to all AMQ Streams containers.
The format for a proxy address is http://IP-ADDRESS:PORT-NUMBER. To set up a proxy with a name and password, the format is http://USERNAME:PASSWORD@IP-ADDRESS:PORT-NUMBER.
Prerequisites
- You need an account with permission to create and manage `CustomResourceDefinition` and RBAC (`ClusterRole` and `RoleBinding`) resources.
Procedure
To add proxy environment variables to the Cluster Operator, update its `Deployment` configuration (`install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml`).

Example proxy configuration for the Cluster Operator
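The example itself was not reproduced above. A sketch of what such a configuration might look like, using the standard proxy variables named earlier (the proxy addresses are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            # standard proxy settings; addresses are placeholders
            - name: "HTTP_PROXY"
              value: "http://proxy.com"
            - name: "HTTPS_PROXY"
              value: "https://proxy.com"
            - name: "NO_PROXY"
              value: "internal.com"
```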
Alternatively, edit the `Deployment` directly:

```shell
oc edit deployment strimzi-cluster-operator
```

If you updated the YAML file instead of editing the `Deployment` directly, apply the changes:

```shell
oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
```
13.2.5. Running multiple Cluster Operator replicas with leader election
The default Cluster Operator configuration enables leader election. Use leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. When the leader stops or fails, one of the standby replicas is elected as the new leader and starts operating the deployed resources.
By default, AMQ Streams runs with a single Cluster Operator replica that is always the leader replica. When a single Cluster Operator replica stops or fails, OpenShift starts a new replica.
Running the Cluster Operator with multiple replicas is not essential. But it’s useful to have replicas on standby in case of large-scale disruptions. For example, suppose multiple worker nodes or an entire availability zone fails. This failure might cause the Cluster Operator pod and many Kafka pods to go down at the same time. If subsequent pod scheduling causes congestion through lack of resources, this can delay operations when running a single Cluster Operator.
13.2.5.1. Configuring Cluster Operator replicas
To run additional Cluster Operator replicas in standby mode, you will need to increase the number of replicas and enable leader election. To configure leader election, use the leader election environment variables.
To make the required changes, configure the following Cluster Operator installation files located in install/cluster-operator/:
- 060-Deployment-strimzi-cluster-operator.yaml
- 022-ClusterRole-strimzi-cluster-operator-role.yaml
- 022-RoleBinding-strimzi-cluster-operator.yaml
Leader election has its own ClusterRole and RoleBinding RBAC resources that target the namespace where the Cluster Operator is running, rather than the namespace it is watching.
The default deployment configuration creates a Lease resource called strimzi-cluster-operator in the same namespace as the Cluster Operator. The Cluster Operator uses leases to manage leader election. The RBAC resources provide the permissions to use the Lease resource. If you use a different Lease name or namespace, update the ClusterRole and RoleBinding files accordingly.
Prerequisites
- You need an account with permission to create and manage `CustomResourceDefinition` and RBAC (`ClusterRole` and `RoleBinding`) resources.
Procedure
Edit the Deployment resource that is used to deploy the Cluster Operator, which is defined in the 060-Deployment-strimzi-cluster-operator.yaml file.
Change the `replicas` property from the default (1) to a value that matches the required number of replicas.

Increasing the number of Cluster Operator replicas
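The example itself was not reproduced above. A minimal sketch, increasing the replica count to 3:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  replicas: 3
  # ...
```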
Check that the leader election `env` properties are set. If they are not set, configure them.

To enable leader election, `STRIMZI_LEADER_ELECTION_ENABLED` must be set to `true` (default).

In this example, the name of the lease is changed to `my-strimzi-cluster-operator`.

Configuring leader election environment variables for the Cluster Operator
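The example itself was not reproduced above. A sketch of the leader election `env` properties, using the `my-strimzi-cluster-operator` lease name from the surrounding text and the downward API for the namespace and identity:

```yaml
env:
  - name: STRIMZI_LEADER_ELECTION_ENABLED
    value: "true"
  - name: STRIMZI_LEADER_ELECTION_LEASE_NAME
    value: "my-strimzi-cluster-operator"
  - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: STRIMZI_LEADER_ELECTION_IDENTITY
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```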
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For a description of the available environment variables, see Section 13.2.3.1, “Leader election environment variables”.
If you specified a different name or namespace for the `Lease` resource used in leader election, update the RBAC resources.

(optional) Edit the `ClusterRole` resource in the `022-ClusterRole-strimzi-cluster-operator-role.yaml` file. Update `resourceNames` with the name of the `Lease` resource.

Updating the ClusterRole references to the lease
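The example itself was not reproduced above. A sketch of the updated rule, using the `my-strimzi-cluster-operator` lease name from the surrounding text:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-leader-election
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    resourceNames:
      # updated to match the custom Lease name
      - my-strimzi-cluster-operator
    # ...
```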
(optional) Edit the `RoleBinding` resource in the `022-RoleBinding-strimzi-cluster-operator.yaml` file. Update `subjects.name` and `subjects.namespace` with the name of the `Lease` resource and the namespace where it was created.

Updating the RoleBinding references to the lease
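The example itself was not reproduced above. A sketch of the updated binding, using the `my-strimzi-cluster-operator` lease name from the surrounding text and `myproject` as a placeholder namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator
subjects:
  - kind: ServiceAccount
    # updated to match the custom Lease name and namespace
    name: my-strimzi-cluster-operator
    namespace: myproject
# ...
```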
Deploy the Cluster Operator:

```shell
oc create -f install/cluster-operator -n myproject
```

Check the status of the deployment:

```shell
oc get deployments -n myproject
```

Output shows the deployment name and readiness:

```
NAME                       READY   UP-TO-DATE   AVAILABLE
strimzi-cluster-operator   3/3     3            3
```

`READY` shows the number of replicas that are ready/expected. The deployment is successful when the `AVAILABLE` output shows the correct number of replicas.
13.2.6. FIPS support
Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. When running AMQ Streams on a FIPS-enabled OpenShift cluster, the OpenJDK used in AMQ Streams container images automatically switches to FIPS mode. From version 2.4, AMQ Streams can run on FIPS-enabled OpenShift clusters without any changes or special configuration. It uses only the FIPS-compliant security libraries from the OpenJDK.
Minimum password length
When running in the FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. From AMQ Streams 2.4, the default password length in AMQ Streams User Operator is set to 32 characters as well. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. You can do that, for example, by deleting the user secret and waiting for the User Operator to create a new password with the appropriate length.
If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid any issues, we suggest increasing the memory request to at least 512Mi.
13.2.6.1. Disabling FIPS mode
AMQ Streams automatically switches to FIPS mode when running on a FIPS-enabled OpenShift cluster. Disable FIPS mode by setting the FIPS_MODE environment variable to disabled in the deployment configuration for the Cluster Operator. With FIPS mode disabled, AMQ Streams automatically disables FIPS in the OpenJDK for all components. With FIPS mode disabled, AMQ Streams is not FIPS compliant. The AMQ Streams operators, as well as all operands, run in the same way as if they were running on an OpenShift cluster without FIPS enabled.
Procedure
To disable FIPS mode in the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml) and add the FIPS_MODE environment variable set to disabled.
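The environment variable addition might look like the following fragment. This is a sketch: the surrounding Deployment fields are abbreviated, and only the FIPS_MODE entry is the relevant change.

```yaml
# Fragment of install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            # Disables FIPS mode in the OpenJDK for all AMQ Streams components
            - name: FIPS_MODE
              value: "disabled"
```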
Alternatively, edit the Deployment directly:

oc edit deployment strimzi-cluster-operator

If you updated the YAML file instead of editing the Deployment directly, apply the changes:

oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
13.3. Using the Topic Operator
When you create, modify or delete a topic using the KafkaTopic resource, the Topic Operator ensures those changes are reflected in the Kafka cluster.
For more information on the KafkaTopic resource, see the KafkaTopic schema reference.
Deploying the Topic Operator
You can deploy the Topic Operator using the Cluster Operator or as a standalone operator. You would use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator.
For deployment instructions, see the following:
To deploy the standalone Topic Operator, you need to set environment variables to connect to a Kafka cluster. These environment variables do not need to be set if you are deploying the Topic Operator using the Cluster Operator as they will be set by the Cluster Operator.
13.3.1. Kafka topic resource
The KafkaTopic resource is used to configure topics, including the number of partitions and replicas.
The full schema for KafkaTopic is described in KafkaTopic schema reference.
13.3.1.1. Identifying a Kafka cluster for topic handling
A KafkaTopic resource includes a label that specifies the name of the Kafka cluster (derived from the name of the Kafka resource) to which it belongs.
For example:
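For instance, a KafkaTopic belonging to a cluster created from a Kafka resource named my-cluster carries the label as follows. The resource and cluster names here are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic-name-1
  labels:
    # Must match the metadata.name of the Kafka resource
    strimzi.io/cluster: my-cluster
```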
The label is used by the Topic Operator to identify the KafkaTopic resource and create a new topic, and also in subsequent handling of the topic.
If the label does not match the Kafka cluster, the Topic Operator cannot identify the KafkaTopic and the topic is not created.
13.3.1.2. Kafka topic usage recommendations
When working with topics, be consistent. Always operate on either KafkaTopic resources or directly on topics in Kafka. Avoid routinely switching between both methods for a given topic.
Use topic names that reflect the nature of the topic, and remember that names cannot be changed later.
If creating a topic in Kafka, use a name that is a valid OpenShift resource name, otherwise the Topic Operator will need to create the corresponding KafkaTopic with a name that conforms to the OpenShift rules.
For information on the requirements for identifiers and names in OpenShift, refer to Object Names and IDs.
13.3.1.3. Kafka topic naming conventions
Kafka and OpenShift impose their own validation rules for the naming of topics in Kafka and KafkaTopic.metadata.name respectively. There are valid names for each which are invalid in the other.
Using the spec.topicName property, it is possible to create a valid topic in Kafka with a name that would be invalid for the Kafka topic in OpenShift.
The spec.topicName property inherits Kafka naming validation rules:
- The name must not be longer than 249 characters.
- Valid characters for Kafka topics are ASCII alphanumerics, ., _, and -.
- The name cannot be . or .., though . can be used in a name, such as exampleTopic. or .exampleTopic.
spec.topicName must not be changed.
For example, a KafkaTopic with the OpenShift name topic-name-1 can set spec.topicName to topicName-1, because upper case is valid in Kafka but invalid in OpenShift. Once set, the spec.topicName cannot be changed to a different topic name.
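A sketch of such a resource, with illustrative names:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic-name-1
spec:
  # Valid in Kafka, but upper case is invalid in OpenShift,
  # so the Kafka name is set here rather than in metadata.name
  topicName: topicName-1
```

Once created, this spec.topicName cannot be changed, for example to a different value such as name-2.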
Some Kafka client applications, such as Kafka Streams, can create topics in Kafka programmatically. If those topics have names that are invalid OpenShift resource names, the Topic Operator gives them a valid metadata.name based on the Kafka name. Invalid characters are replaced and a hash is appended to the name.
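Conceptually, the mapping works like the following sketch. This is not the Topic Operator's actual algorithm; the replacement rules and hash format here are assumptions for illustration only.

```python
import hashlib
import re

def to_openshift_name(kafka_topic_name: str) -> str:
    """Illustrative only (NOT the Topic Operator's real implementation):
    lower-case the Kafka name, replace characters that are invalid in
    OpenShift resource names, and append a hash of the original name so
    that distinct Kafka names cannot collide after sanitization."""
    sanitized = re.sub(r"[^a-z0-9.-]", "-", kafka_topic_name.lower())
    digest = hashlib.sha1(kafka_topic_name.encode("utf-8")).hexdigest()
    return f"{sanitized}---{digest}"

# A Kafka name like "myTopic!" maps to a valid lower-case name with a hash suffix
print(to_openshift_name("myTopic!"))
```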
13.3.2. Topic Operator topic store
The Topic Operator uses Kafka to store topic metadata describing topic configuration as key-value pairs. The topic store is based on the Kafka Streams key-value mechanism, which uses Kafka topics to persist the state.
Topic metadata is cached in-memory and accessed locally within the Topic Operator. Updates from operations applied to the local in-memory cache are persisted to a backup topic store on disk. The topic store is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic custom resources. Operations are handled rapidly with the topic store set up this way, but should the in-memory cache crash it is automatically repopulated from the persistent storage.
13.3.2.1. Internal topic store topics
Internal topics support the handling of topic metadata in the topic store.
__strimzi_store_topic: Input topic for storing the topic metadata
__strimzi-topic-operator-kstreams-topic-store-changelog: Retains a log of compacted topic store values
Do not delete these topics, as they are essential to the running of the Topic Operator.
13.3.2.2. Migrating topic metadata from ZooKeeper
In previous releases of AMQ Streams, topic metadata was stored in ZooKeeper. The new process removes this requirement, bringing the metadata into the Kafka cluster, and under the control of the Topic Operator.
When upgrading to AMQ Streams 2.4, the transition to Topic Operator control of the topic store is seamless. Metadata is found and migrated from ZooKeeper, and the old store is deleted.
13.3.2.3. Downgrading to an AMQ Streams version that uses ZooKeeper to store topic metadata
If you are reverting to a version of AMQ Streams earlier than 1.7, which uses ZooKeeper for the storage of topic metadata, you must still downgrade your Cluster Operator to the previous version, then downgrade the Kafka brokers and client applications to the previous Kafka version as standard.
However, you must also delete the topics that were created for the topic store using a kafka-admin command, specifying the bootstrap address of the Kafka cluster. For example:
oc run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0 --rm=true --restart=Never -- \
  ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && \
  ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete
The command must correspond to the type of listener and authentication used to access the Kafka cluster.
The Topic Operator will reconstruct the ZooKeeper topic metadata from the state of the topics in Kafka.
13.3.2.4. Topic Operator topic replication and scaling
The recommended configuration for topics managed by the Topic Operator is a topic replication factor of 3, and a minimum of 2 in-sync replicas.
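These recommended settings can be expressed in a KafkaTopic resource like the following sketch (the resource and cluster names are illustrative); the numbered notes that follow refer to the partitions, replicas, and min.insync.replicas values:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10            # 1
  replicas: 3               # 2
  config:
    min.insync.replicas: 2  # 3
```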
1. The number of partitions for the topic.
2. The number of replica topic partitions. Currently, this cannot be changed in the KafkaTopic resource, but it can be changed using the kafka-reassign-partitions.sh tool.
3. The minimum number of replica partitions that a message must be successfully written to, or an exception is raised.
In-sync replicas are used in conjunction with the acks configuration for producer applications. The acks configuration determines the number of follower partitions a message must be replicated to before the message is acknowledged as successfully received. The Topic Operator runs with acks=all, whereby messages must be acknowledged by all in-sync replicas.
When scaling Kafka clusters by adding or removing brokers, replication factor configuration is not changed and replicas are not reassigned automatically. However, you can use the kafka-reassign-partitions.sh tool to change the replication factor, and manually reassign replicas to brokers.
Alternatively, though the integration of Cruise Control for AMQ Streams cannot change the replication factor for topics, the optimization proposals it generates for rebalancing Kafka include commands that transfer partition replicas and change partition leadership.
13.3.2.5. Handling changes to topics
A fundamental problem that the Topic Operator needs to solve is that there is no single source of truth: both the KafkaTopic resource and the Kafka topic can be modified independently of the Topic Operator. Complicating this, the Topic Operator might not always be able to observe changes at each end in real time, for example, when the Topic Operator is down.
To resolve this, the Topic Operator maintains information about each topic in the topic store. When a change happens in the Kafka cluster or OpenShift, it looks at both the state of the other system and the topic store in order to determine what needs to change to keep everything in sync. The same thing happens whenever the Topic Operator starts, and periodically while it is running.
For example, suppose the Topic Operator is not running, and a KafkaTopic called my-topic is created. When the Topic Operator starts, the topic store does not contain information on my-topic, so it can infer that the KafkaTopic was created after it was last running. The Topic Operator creates the topic corresponding to my-topic, and also stores metadata for my-topic in the topic store.
If you update Kafka topic configuration or apply a change through the KafkaTopic custom resource, the topic store is updated after the Kafka cluster is reconciled.
The topic store also allows the Topic Operator to manage scenarios where the topic configuration is changed in Kafka topics and updated through OpenShift KafkaTopic custom resources, as long as the changes are not incompatible. For example, it is possible to make changes to the same topic config key, but to different values. For incompatible changes, the Kafka configuration takes priority, and the KafkaTopic is updated accordingly.
You can also use the KafkaTopic resource to delete topics using an oc delete -f KAFKA-TOPIC-CONFIG-FILE command. To be able to do this, delete.topic.enable must be set to true (default) in the spec.kafka.config of the Kafka resource.
13.3.3. Configuring Kafka topics
Use the properties of the KafkaTopic resource to configure Kafka topics.
You can use oc apply to create or modify topics, and oc delete to delete existing topics.
For example:
- oc apply -f <topic_config_file>
- oc delete KafkaTopic <topic_name>
This procedure shows how to create a topic with 10 partitions and 2 replicas.
Before you start
It is important that you consider the following before making your changes:
- Kafka does not support decreasing the number of partitions.
- Increasing spec.partitions for topics with keys will change how records are partitioned, which can be particularly problematic when the topic uses semantic partitioning.
- AMQ Streams does not support making the following changes through the KafkaTopic resource:
  - Using spec.replicas to change the number of replicas that were initially specified
  - Changing topic names using spec.topicName
Prerequisites
- A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption.
- A running Topic Operator (typically deployed with the Entity Operator).
- For deleting a topic, delete.topic.enable=true (default) in the spec.kafka.config of the Kafka resource.
Procedure
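As a starting point, the topic described in this procedure (10 partitions, 2 replicas) might be defined as follows. The resource and cluster names are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 2
```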
Configure the KafkaTopic resource.

Tip: When modifying a topic, you can get the current version of the resource using oc get kafkatopic orders -o yaml.

Create the KafkaTopic resource in OpenShift:

oc apply -f <topic_config_file>

Wait for the ready status of the topic to change to True:

oc get kafkatopics -o wide -w -n <namespace>

Kafka topic status:

NAME         CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
my-topic-1   my-cluster   10           3                    True
my-topic-2   my-cluster   10           3
my-topic-3   my-cluster   10           3                    True

Topic creation is successful when the READY output shows True.

If the READY column stays blank, get more details on the status from the resource YAML or from the Topic Operator logs. Messages provide details on the reason for the current status:

oc get kafkatopics my-topic-2 -o yaml

In this example, the topic is not ready because the original number of partitions was reduced in the KafkaTopic configuration. Kafka does not support this.

After resetting the topic configuration, the status shows the topic is ready:

oc get kafkatopics my-topic-2 -o wide -w -n <namespace>

Status update of the topic:

NAME         CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
my-topic-2   my-cluster   10           3                    True

Fetching the details shows no messages:

oc get kafkatopics my-topic-2 -o yaml
13.3.4. Configuring the Topic Operator with resource requests and limits
You can allocate resources, such as CPU and memory, to the Topic Operator and set a limit on the amount of resources it can consume.
Prerequisites
- The Cluster Operator is running.
Procedure
Update the Kafka cluster configuration in an editor, as required:

oc edit kafka MY-CLUSTER

In the spec.entityOperator.topicOperator.resources property in the Kafka resource, set the resource requests and limits for the Topic Operator.
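For example, the property might be set as follows. The request and limit values shown are illustrative, not recommendations:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    topicOperator:
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
```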
Apply the new configuration to create or update the resource:

oc apply -f <kafka_configuration_file>
13.4. Using the User Operator
When you create, modify or delete a user using the KafkaUser resource, the User Operator ensures those changes are reflected in the Kafka cluster.
For more information on the KafkaUser resource, see the KafkaUser schema reference.
Deploying the User Operator
You can deploy the User Operator using the Cluster Operator or as a standalone operator. You would use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator.
For deployment instructions, see the following:
To deploy the standalone User Operator, you need to set environment variables to connect to a Kafka cluster. These environment variables do not need to be set if you are deploying the User Operator using the Cluster Operator as they will be set by the Cluster Operator.
13.4.1. Configuring Kafka users
Use the properties of the KafkaUser resource to configure Kafka users.
You can use oc apply to create or modify users, and oc delete to delete existing users.
For example:
- oc apply -f <user_config_file>
- oc delete KafkaUser <user_name>
Users represent Kafka clients. When you configure Kafka users, you enable the user authentication and authorization mechanisms required by clients to access Kafka. The mechanism used must match the equivalent Kafka configuration. For more information on using Kafka and KafkaUser resources to secure access to Kafka brokers, see Securing access to Kafka brokers.
Prerequisites
- A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption.
- A running User Operator (typically deployed with the Entity Operator).
Procedure
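The procedure assumes a KafkaUser resource along the following lines, specifying mTLS authentication and simple authorization using ACLs. The user, topic, group names, and ACL rules are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Allow the user to consume from my-topic
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      # Allow the user to use the my-group consumer group
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read
```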
Configure the KafkaUser resource. This example specifies mTLS authentication and simple authorization using ACLs.

Create the KafkaUser resource in OpenShift:

oc apply -f <user_config_file>

Wait for the ready status of the user to change to True:

oc get kafkausers -o wide -w -n <namespace>

Kafka user status:

NAME        CLUSTER      AUTHENTICATION   AUTHORIZATION   READY
my-user-1   my-cluster   tls              simple          True
my-user-2   my-cluster   tls              simple
my-user-3   my-cluster   tls              simple          True

User creation is successful when the READY output shows True.

If the READY column stays blank, get more details on the status from the resource YAML or the User Operator logs. Messages provide details on the reason for the current status:

oc get kafkausers my-user-2 -o yaml

In this example, the user is not ready because simple authorization is not enabled in the Kafka configuration.
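Simple authorization is enabled in the Kafka resource by setting the authorization type, as in this sketch (the cluster name is illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    authorization:
      type: simple
```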
After updating the Kafka configuration, the status shows the user is ready:

oc get kafkausers my-user-2 -o wide -w -n <namespace>

Status update of the user:

NAME        CLUSTER      AUTHENTICATION   AUTHORIZATION   READY
my-user-2   my-cluster   tls              simple          True

Fetching the details shows no messages:

oc get kafkausers my-user-2 -o yaml
13.4.2. Configuring the User Operator with resource requests and limits
You can allocate resources, such as CPU and memory, to the User Operator and set a limit on the amount of resources it can consume.
Prerequisites
- The Cluster Operator is running.
Procedure
Update the Kafka cluster configuration in an editor, as required:

oc edit kafka MY-CLUSTER

In the spec.entityOperator.userOperator.resources property in the Kafka resource, set the resource requests and limits for the User Operator.

Save the file and exit the editor. The Cluster Operator applies the changes automatically.
13.5. Configuring feature gates
AMQ Streams operators support feature gates to enable or disable certain features and functionality. Enabling a feature gate changes the behavior of the relevant operator and introduces the feature to your AMQ Streams deployment.
Feature gates have a default state of either enabled or disabled.
To modify a feature gate’s default state, use the STRIMZI_FEATURE_GATES environment variable in the operator’s configuration. You can modify multiple feature gates using this single environment variable. Specify a comma-separated list of feature gate names and prefixes. A + prefix enables the feature gate and a - prefix disables it.
Example feature gate configuration that enables FeatureGate1 and disables FeatureGate2
env:
- name: STRIMZI_FEATURE_GATES
value: +FeatureGate1,-FeatureGate2
13.5.1. ControlPlaneListener feature gate
The ControlPlaneListener feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. With ControlPlaneListener enabled, the connections between the Kafka controller and brokers use an internal control plane listener on port 9090. Replication of data between brokers, as well as internal connections from AMQ Streams operators, Cruise Control, or the Kafka Exporter use the replication listener on port 9091.
With the ControlPlaneListener feature gate permanently enabled, it is no longer possible to upgrade or downgrade directly between AMQ Streams 1.7 and earlier and AMQ Streams 2.3 and newer. You have to first upgrade or downgrade through one of the AMQ Streams versions in-between, disable the ControlPlaneListener feature gate, and then downgrade or upgrade (with the feature gate enabled) to the target version.
13.5.2. ServiceAccountPatching feature gate
The ServiceAccountPatching feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. With ServiceAccountPatching enabled, the Cluster Operator always reconciles service accounts and updates them when needed. For example, when you change service account labels or annotations using the template property of a custom resource, the operator automatically updates them on the existing service account resources.
13.5.3. UseStrimziPodSets feature gate
The UseStrimziPodSets feature gate has a default state of enabled.
The UseStrimziPodSets feature gate introduces a resource for managing pods called StrimziPodSet. When the feature gate is enabled, this resource is used instead of the StatefulSets. AMQ Streams handles the creation and management of pods instead of OpenShift. Using StrimziPodSets instead of StatefulSets provides more control over the functionality.
When this feature gate is disabled, AMQ Streams relies on StatefulSets to create and manage pods for the ZooKeeper and Kafka clusters. AMQ Streams creates the StatefulSet and OpenShift creates the pods according to the StatefulSet definition. When a pod is deleted, OpenShift is responsible for recreating it. The use of StatefulSets has the following limitations:
- Pods are always created or removed based on their index numbers
- All pods in the StatefulSet need to have a similar configuration
- Changing storage configuration for the Pods in the StatefulSet is complicated
Disabling the UseStrimziPodSets feature gate
To disable the UseStrimziPodSets feature gate, specify -UseStrimziPodSets in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
The UseStrimziPodSets feature gate must be disabled when downgrading to AMQ Streams 2.0 and earlier versions.
13.5.4. (Preview) UseKRaft feature gate
The UseKRaft feature gate has a default state of disabled.
The UseKRaft feature gate deploys the Kafka cluster in the KRaft (Kafka Raft metadata) mode without ZooKeeper. This feature gate is currently intended only for development and testing.
The KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
When the UseKRaft feature gate is enabled, the Kafka cluster is deployed without ZooKeeper. The .spec.zookeeper properties in the Kafka custom resource will be ignored, but still need to be present. The UseKRaft feature gate provides an API that configures Kafka cluster nodes and their roles. The API is still in development and is expected to change before the KRaft mode is production-ready.
Currently, the KRaft mode in AMQ Streams has the following major limitations:
- Moving from Kafka clusters with ZooKeeper to KRaft clusters or the other way around is not supported.
- Upgrades and downgrades of Apache Kafka versions or the AMQ Streams operator are not supported. Users might need to delete the cluster, upgrade the operator and deploy a new Kafka cluster.
- The Topic Operator is not supported. The spec.entityOperator.topicOperator property must be removed from the Kafka custom resource.
- SCRAM-SHA-512 authentication is not supported.
- JBOD storage is not supported. The type: jbod storage can be used, but the JBOD array can contain only one disk.
- All Kafka nodes have both the controller and broker KRaft roles. Kafka clusters with separate controller and broker nodes are not supported.
Enabling the UseKRaft feature gate
To enable the UseKRaft feature gate, specify +UseKRaft in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
The UseKRaft feature gate depends on the UseStrimziPodSets feature gate. When enabling the UseKRaft feature gate, make sure that the UseStrimziPodSets feature gate is enabled as well.
13.5.5. (Preview) StableConnectIdentities feature gate
The StableConnectIdentities feature gate has a default state of disabled.
The StableConnectIdentities feature gate uses StrimziPodSet resources to manage Kafka Connect and Kafka MirrorMaker 2 pods instead of using OpenShift Deployment resources. StrimziPodSets give the pods stable names and stable addresses, which do not change during rolling upgrades. This helps to minimize the number of rebalances of connector tasks.
Enabling the StableConnectIdentities feature gate
To enable the StableConnectIdentities feature gate, specify +StableConnectIdentities in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
The StableConnectIdentities feature gate must be disabled when downgrading to AMQ Streams 2.3 and earlier versions.
13.5.6. Feature gate releases
Feature gates have three stages of maturity:
- Alpha — typically disabled by default
- Beta — typically enabled by default
- General Availability (GA) — typically always enabled
Alpha stage features might be experimental or unstable, subject to change, or not sufficiently tested for production use. Beta stage features are well tested and their functionality is not likely to change. GA stage features are stable and should not change in the future. Alpha and beta stage features are removed if they do not prove to be useful.
-
The
ControlPlaneListenerfeature gate moved to GA stage in AMQ Streams 2.3. It is now permanently enabled and cannot be disabled. -
The
ServiceAccountPatchingfeature gate moved to GA stage in AMQ Streams 2.3. It is now permanently enabled and cannot be disabled. -
The
UseStrimziPodSetsfeature gate moved to beta stage in AMQ Streams 2.3. It moves to GA in a future release of AMQ Streams when the support for StatefulSets is completely removed. -
The
UseKRaftfeature gate is available for development only and does not currently have a planned release for moving to the beta phase. -
The
StableConnectIdentitiesfeature gate is in alpha stage and is disabled by default.
Feature gates might be removed when they reach GA. This means that the feature was incorporated into the AMQ Streams core features and can no longer be disabled.
| Feature gate | Alpha | Beta | GA |
|---|---|---|---|
| ControlPlaneListener | 1.8 | 2.0 | 2.3 |
| ServiceAccountPatching | 1.8 | 2.0 | 2.3 |
| UseStrimziPodSets | 2.1 | 2.3 | future release (planned) |
| UseKRaft | 2.2 | - | - |
| StableConnectIdentities | 2.4 | future release (planned) | - |
If a feature gate is enabled, you may need to disable it before upgrading or downgrading from a specific AMQ Streams version. The following table shows which feature gates you need to disable when upgrading or downgrading AMQ Streams versions.
| Disable Feature gate | Upgrading from AMQ Streams version | Downgrading to AMQ Streams version |
|---|---|---|
| ControlPlaneListener | 1.7 and earlier | 1.7 and earlier |
| UseStrimziPodSets | - | 2.0 and earlier |
| StableConnectIdentities | - | 2.3 and earlier |
13.6. Monitoring operators using Prometheus metrics
AMQ Streams operators expose Prometheus metrics. The metrics are automatically enabled and contain information about the following:
- Number of reconciliations
- Number of Custom Resources the operator is processing
- Duration of reconciliations
- JVM metrics from the operators
Additionally, AMQ Streams provides an example Grafana dashboard for the operator.