
Chapter 1. Deployment overview


Streams for Apache Kafka simplifies the process of running Apache Kafka within an OpenShift cluster.

This guide provides instructions for deploying and managing Streams for Apache Kafka. Deployment options and steps are covered using the example installation files included with Streams for Apache Kafka. While the guide highlights important configuration considerations, it does not cover all available options. For a deeper understanding of the Kafka component configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference.

In addition to deployment instructions, the guide offers pre- and post-deployment guidance. It covers setting up and securing client access to your Kafka cluster. Furthermore, it explores additional deployment options such as metrics integration, distributed tracing, and cluster management tools like Cruise Control and the Streams for Apache Kafka Drain Cleaner. You’ll also find recommendations on managing Streams for Apache Kafka and fine-tuning Kafka configuration for optimal performance.

Upgrade instructions are provided for both Streams for Apache Kafka and Kafka, to help keep your deployment up to date.

Streams for Apache Kafka is designed to be compatible with all types of OpenShift clusters, irrespective of their distribution. Whether your deployment involves public or private clouds, or if you are setting up a local development environment, the instructions in this guide are applicable in all cases.

1.1. Streams for Apache Kafka custom resources

The deployment of Kafka components onto an OpenShift cluster using Streams for Apache Kafka is highly configurable through the use of custom resources. These resources are created as instances of APIs introduced by Custom Resource Definitions (CRDs), which extend OpenShift resources.

CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with Streams for Apache Kafka for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the Streams for Apache Kafka distribution.

CRDs also allow Streams for Apache Kafka resources to benefit from native OpenShift features like CLI accessibility and configuration validation.

1.1.1. Streams for Apache Kafka custom resource example

CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage Streams for Apache Kafka-specific resources.

After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification.

Depending on the cluster setup, installation typically requires cluster admin privileges.

Note

Access to manage custom resources is limited to Streams for Apache Kafka administrators. For more information, see Section 6.5, “Designating Streams for Apache Kafka administrators”.

A CRD defines a new kind of resource, such as kind: Kafka, within an OpenShift cluster.

The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster.

Each Streams for Apache Kafka-specific custom resource conforms to the schema defined by the CRD for the resource’s kind. The custom resources for Streams for Apache Kafka components have common configuration properties, which are defined under spec.

To understand the relationship between a CRD and a custom resource, let’s look at a sample of the CRD for a Kafka topic.

Kafka topic CRD

apiVersion: kafka.strimzi.io/v1beta2
kind: CustomResourceDefinition
metadata: # (1)
  name: kafkatopics.kafka.strimzi.io
  labels:
    app: strimzi
spec: # (2)
  group: kafka.strimzi.io
  versions:
    v1beta2
  scope: Namespaced
  names:
    # ...
    singular: kafkatopic
    plural: kafkatopics
    shortNames:
    - kt # (3)
  additionalPrinterColumns: # (4)
    # ...
  subresources:
    status: {} # (5)
  validation: # (6)
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            partitions:
              type: integer
              minimum: 1
            replicas:
              type: integer
              minimum: 1
              maximum: 32767
      # ...

1. The metadata for the topic CRD, its name and a label to identify the CRD.
2. The specification for this CRD, including the group (domain) name, the plural name, and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics.
3. The short name can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic.
4. The information presented when using a get command on the custom resource.
5. The current status of the CRD as described in the schema reference for the resource.
6. openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica.
Note

You can identify the CRD YAML files supplied with the Streams for Apache Kafka installation files, because the file names contain an index number followed by ‘Crd’.
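As a quick illustration of that naming pattern, the sketch below filters a list of file names for the 'Crd' marker. The file names shown are invented examples, not the actual distribution listing:

```shell
# The CRD installation files contain an index number followed by 'Crd'.
# These file names are illustrative, not the real distribution contents.
files='040-Crd-kafka.yaml
041-Crd-kafkaconnect.yaml
045-Deployment-strimzi-cluster-operator.yaml'

# Keep only the CRD files.
crds=$(printf '%s\n' "$files" | grep 'Crd')
printf '%s\n' "$crds"
```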

Here is a corresponding example of a KafkaTopic custom resource.

Kafka topic custom resource

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic # (1)
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster # (2)
spec: # (3)
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
status:
  conditions: # (4)
    - lastTransitionTime: "2019-08-20T11:37:00.706Z"
      status: "True"
      type: Ready
  observedGeneration: 1
  # ...

1. The kind and apiVersion identify the CRD of which the custom resource is an instance.
2. A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is the same as the name of the Kafka resource) to which a topic or user belongs.
3. The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified.
4. Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime.

Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API.

After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in Streams for Apache Kafka.
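For example, you might save the KafkaTopic shown above to a file and apply it with oc. The file name is illustrative, and applying it requires a running cluster with the CRDs installed:

```shell
# Write the KafkaTopic custom resource to a file (values as in the example above).
cat > my-topic.yaml <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
EOF

# Apply it to the cluster; the Topic Operator then creates the Kafka topic:
# oc apply -f my-topic.yaml
```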

1.1.2. Performing oc operations on custom resources

You can use oc commands to retrieve information and perform other operations on Streams for Apache Kafka custom resources. Use oc commands, such as get, describe, edit, or delete, to perform operations on resource types. For example, oc get kafkatopics retrieves a list of all Kafka topics and oc get kafkas retrieves all deployed Kafka clusters.

When referencing resource types, you can use both singular and plural names: oc get kafkas gets the same results as oc get kafka.

You can also use the short name of the resource. Learning short names can save you time when managing Streams for Apache Kafka. The short name for Kafka is k, so you can also run oc get k to list all Kafka clusters.

Listing Kafka clusters

oc get k

NAME         READY   METADATA STATE   WARNINGS
my-cluster   True    KRaft

Table 1.1. Long and short names for each Streams for Apache Kafka resource

Streams for Apache Kafka resource | Long name         | Short name
Kafka                             | kafka             | k
Kafka Node Pool                   | kafkanodepool     | knp
Kafka Topic                       | kafkatopic        | kt
Kafka User                        | kafkauser         | ku
Kafka Connect                     | kafkaconnect      | kc
Kafka Connector                   | kafkaconnector    | kctr
Kafka MirrorMaker 2               | kafkamirrormaker2 | kmm2
Kafka Bridge                      | kafkabridge       | kb
Kafka Rebalance                   | kafkarebalance    | kr
Streams for Apache Kafka Pod Set  | strimzipodset     | sps

1.1.2.1. Resource categories

Categories of custom resources can also be used in oc commands.

All Streams for Apache Kafka custom resources belong to the category strimzi, so you can use strimzi to get all the Streams for Apache Kafka resources with one command.

For example, running oc get strimzi lists all Streams for Apache Kafka custom resources in a given namespace.

Listing all custom resources

oc get strimzi

NAME                                                   PODS   READY PODS   CURRENT PODS   AGE
strimzipodset.core.strimzi.io/my-cluster-brokers       3      3            3              6h11m
strimzipodset.core.strimzi.io/my-cluster-controllers   3      3            3              6h11m

NAME                                         DESIRED REPLICAS   ROLES            NODEIDS
kafkanodepool.kafka.strimzi.io/brokers       3                  ["broker"]       [3,4,5]
kafkanodepool.kafka.strimzi.io/controllers   3                  ["controller"]   [0,1,2]

NAME                                READY   METADATA STATE   WARNINGS
kafka.kafka.strimzi.io/my-cluster   True    KRaft

NAME                                   PARTITIONS REPLICATION FACTOR
kafkatopic.kafka.strimzi.io/kafka-apps 3          3

NAME                                   AUTHENTICATION AUTHORIZATION
kafkauser.kafka.strimzi.io/my-user     tls            simple

The oc get strimzi -o name command returns all resource types and resource names. The -o name option fetches the output in the type/name format.

Listing all resource types and names

oc get strimzi -o name

strimzipodset.core.strimzi.io/my-cluster-brokers
strimzipodset.core.strimzi.io/my-cluster-controllers
kafkanodepool.kafka.strimzi.io/brokers
kafkanodepool.kafka.strimzi.io/controllers
kafka.kafka.strimzi.io/my-cluster
kafkatopic.kafka.strimzi.io/kafka-apps
kafkauser.kafka.strimzi.io/my-user
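Because every line of this output uses the type/name format, standard shell tools can split it. A small sketch, using sample lines copied from the listing above in place of a live cluster:

```shell
# Sample lines copied from the listing above.
names='strimzipodset.core.strimzi.io/my-cluster-brokers
kafkanodepool.kafka.strimzi.io/brokers
kafka.kafka.strimzi.io/my-cluster'

# The part before the slash is the resource type; the part after is the name.
types=$(printf '%s\n' "$names" | cut -d/ -f1)
printf '%s\n' "$types"
```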

You can combine this strimzi command with other commands. For example, you can pass it into an oc delete command to delete all resources in a single command.

Deleting all custom resources

oc delete $(oc get strimzi -o name)

strimzipodset.core.strimzi.io "my-cluster-brokers" deleted
strimzipodset.core.strimzi.io "my-cluster-controllers" deleted
kafkanodepool.kafka.strimzi.io "brokers" deleted
kafkanodepool.kafka.strimzi.io "controllers" deleted
kafka.kafka.strimzi.io "my-cluster" deleted
kafkatopic.kafka.strimzi.io "kafka-apps" deleted
kafkauser.kafka.strimzi.io "my-user" deleted

Deleting all resources in a single operation might be useful, for example, when you are testing new Streams for Apache Kafka features.

1.1.2.2. Querying the status of sub-resources

There are other values you can pass to the -o option. For example, by using -o yaml you get the output in YAML format. Using -o json will return it as JSON.

You can see all the options in oc get --help.

One of the most useful options is the JSONPath support, which allows you to pass JSONPath expressions to query the Kubernetes API. A JSONPath expression can extract or navigate specific parts of any resource.

For example, you can use the JSONPath expression {.status.listeners[?(@.name=="tls")].bootstrapServers} to get the bootstrap address from the status of the Kafka custom resource and use it in your Kafka clients.

Here, the command retrieves the bootstrapServers value of the listener named tls:

Retrieving the bootstrap address

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="tls")].bootstrapServers}{"\n"}'

my-cluster-kafka-bootstrap.myproject.svc:9093

By changing the name condition you can also get the address of the other Kafka listeners.

You can use jsonpath to extract any other property or group of properties from any custom resource.

1.1.3. Streams for Apache Kafka custom resource status information

Status properties provide status information for certain custom resources.

The following table lists the custom resources that provide status information (when deployed) and the schemas that define the status properties.

For more information on the schemas, see the Streams for Apache Kafka Custom Resource API Reference.

Table 1.2. Custom resources that provide status information

Streams for Apache Kafka resource | Schema reference | Publishes status information on…​
Kafka | KafkaStatus, ListenerStatus, UsedNodePoolStatus, KafkaAutoRebalanceStatus | The Kafka cluster, its listeners, node pools, and any auto-rebalances on scaling
KafkaNodePool | KafkaNodePoolStatus | The nodes in the node pool, their roles, and the associated Kafka cluster
KafkaTopic | KafkaTopicStatus | Kafka topics in the Kafka cluster
KafkaUser | KafkaUserStatus | Kafka users in the Kafka cluster
KafkaConnect | KafkaConnectStatus | The Kafka Connect cluster and connector plugins
KafkaConnector | KafkaConnectorStatus | KafkaConnector resources
KafkaMirrorMaker2 | KafkaMirrorMaker2Status | The Kafka MirrorMaker 2 cluster and internal connectors
KafkaBridge | KafkaBridgeStatus | The Kafka Bridge
KafkaRebalance | KafkaRebalanceStatus | The status and results of a rebalance
StrimziPodSet | StrimziPodSetStatus | The number of pods: being managed, using the current version, and in a ready state

The status property of a resource provides information on the state of the resource. The status.conditions and status.observedGeneration properties are common to all resources.

status.conditions
Status conditions describe the current state of a resource. Status condition properties are useful for tracking progress related to the resource achieving its desired state, as defined by the configuration specified in its spec. Status condition properties provide the time and reason the state of the resource changed, and details of events preventing or delaying the operator from realizing the desired state.
status.observedGeneration
Last observed generation denotes the latest reconciliation of the resource by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation (the current version of the deployment), the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource.
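The comparison described above can be scripted. In this hedged sketch, the oc calls are shown commented out (they need a running cluster and a Kafka resource named my-cluster), with simulated values so the logic can run anywhere:

```shell
# On a live cluster you would fetch the two generation values like this:
# GEN=$(oc get kafka my-cluster -o jsonpath='{.metadata.generation}')
# OBS=$(oc get kafka my-cluster -o jsonpath='{.status.observedGeneration}')

# Simulated values for illustration:
GEN=3
OBS=3

if [ "$GEN" -eq "$OBS" ]; then
  result="status reflects the most recent changes"
else
  result="operator has not yet processed the latest update"
fi
echo "$result"
```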

The status properties also provide resource-specific information. For example, KafkaStatus provides information on listener addresses, and the ID of the Kafka cluster.

KafkaStatus also provides information on the Kafka and Streams for Apache Kafka versions being used. You can check the values of operatorLastSuccessfulVersion and kafkaVersion to determine whether an upgrade of Streams for Apache Kafka or Kafka has completed.

Streams for Apache Kafka creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit, for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster.

Here we see the status properties for a Kafka custom resource.

Kafka custom resource status

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
spec:
  # ...
status:
  clusterId: XP9FP2P-RByvEy0W4cOEUA # (1)
  conditions: # (2)
    - lastTransitionTime: '2023-01-20T17:56:29.396588Z'
      status: 'True'
      type: Ready # (3)
  kafkaMetadataState: KRaft # (4)
  kafkaVersion: 4.1.0 # (5)
  kafkaNodePools: # (6)
    - name: broker
    - name: controller
  listeners: # (7)
    - addresses:
        - host: my-cluster-kafka-bootstrap.prm-project.svc
          port: 9092
      bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092'
      name: plain
    - addresses:
        - host: my-cluster-kafka-bootstrap.prm-project.svc
          port: 9093
      bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093'
      certificates:
        - |
          -----BEGIN CERTIFICATE-----

          -----END CERTIFICATE-----
      name: tls
    - addresses:
        - host: >-
            2054284155.us-east-2.elb.amazonaws.com
          port: 9095
      bootstrapServers: >-
        2054284155.us-east-2.elb.amazonaws.com:9095
      certificates:
        - |
          -----BEGIN CERTIFICATE-----

          -----END CERTIFICATE-----
      name: external3
    - addresses:
        - host: ip-10-0-172-202.us-east-2.compute.internal
          port: 31644
      bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644'
      certificates:
        - |
          -----BEGIN CERTIFICATE-----

          -----END CERTIFICATE-----
      name: external4
  observedGeneration: 3 # (8)
  operatorLastSuccessfulVersion: 3.1 # (9)

1. The Kafka cluster ID.
2. Status conditions describe the current state of the Kafka cluster.
3. The Ready condition indicates that the Cluster Operator considers the Kafka cluster able to handle traffic.
4. Kafka metadata state showing that KRaft is managing Kafka metadata and coordinating operations.
5. The version of Kafka being used by the Kafka cluster.
6. The node pools belonging to the Kafka cluster.
7. The listeners describe Kafka bootstrap addresses by type.
8. The observedGeneration value indicates the last reconciliation of the Kafka custom resource by the Cluster Operator.
9. The version of the operator that successfully completed the last reconciliation.
Note

The Kafka bootstrap addresses listed in the status do not signify that those endpoints, or the Kafka cluster itself, are in a Ready state.

1.1.4. Finding the status of a custom resource

Use oc with the status subresource of a custom resource to retrieve information about the resource.

Prerequisites

  • An OpenShift cluster.
  • The Cluster Operator is running.

Procedure

  • Specify the custom resource and use the -o jsonpath option to apply a standard JSONPath expression to select the status property:

    oc get kafka <kafka_resource_name> -o jsonpath='{.status}' | jq

    This expression returns all the status information for the specified custom resource. You can use dot notation, such as status.listeners or status.observedGeneration, to fine-tune the status information you wish to see.

    Using the jq command line JSON parser tool makes it easier to read the output.

1.2. Streams for Apache Kafka operators

Streams for Apache Kafka uses operators to deploy and manage Kafka components. The operators continuously monitor Streams for Apache Kafka custom resources (like Kafka, KafkaTopic, and KafkaUser) and reconcile the state of Kafka components to match their configuration.

This reconciliation process involves three main operations:

Creation
When you create a Streams for Apache Kafka custom resource, the responsible operator detects it and takes the necessary actions to create the component. This might involve creating OpenShift resources, such as Deployment, Pod, Service, and ConfigMap, or configuring items inside the Kafka cluster itself, such as topics and users.
Update
Each time you update a custom resource, the operator detects the change and applies a corresponding update if the changes are valid. This could trigger a rolling update of pods or reconfigure a resource within Kafka. Rolling updates maintain the availability of the Kafka cluster, but can lead to service disruption in the Kafka clients.
Deletion
When you delete a custom resource, the operator detects the deletion and acts to remove the component. Most dependent resources are deleted automatically by OpenShift garbage collection. The exact behavior depends on the resource type. For a Kafka cluster, PVCs are retained by default to prevent data loss. For a Kafka topic, the topic is fully deleted from the Apache Kafka cluster.

Streams for Apache Kafka provides the following operators, each responsible for different aspects of a Kafka deployment:

Cluster Operator (required)

The Cluster Operator is the core operator and must be deployed first. It handles the deployment and lifecycle of Apache Kafka clusters on OpenShift, automating the setup of Kafka nodes and related resources.

Additionally, Streams for Apache Kafka provides Drain Cleaner, which is deployed separately. Drain Cleaner supports the Cluster Operator in managing pod evictions for Kafka clusters.

Entity Operator (recommended)

The Entity Operator can be deployed by the Cluster Operator. It runs in a single pod and includes one or both of the following operators:

  • Topic Operator to manage Kafka topics.
  • User Operator to manage Kafka users.

Each operator runs in a separate container within the Entity Operator pod.

Note

The Topic Operator and User Operator can also be deployed standalone (without the Entity Operator) to manage topics and users for a Kafka cluster that is not managed by Streams for Apache Kafka.

1.2.1. Operator-watched Kafka resources

Operators watch and manage Kafka resources within defined OpenShift namespaces. The namespace scope within which each operator can watch these resources differs.

You can choose the namespace scope for the Cluster Operator. The Topic Operator and the User Operator can each watch only a single namespace, and each can be connected to only one Kafka cluster.

Warning

While the operator can be configured to watch multiple namespaces, each watched namespace should contain only one instance of a specific component type, such as one Kafka cluster, to avoid conflicts.
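The watched namespaces are configured through the STRIMZI_NAMESPACE environment variable on the Cluster Operator Deployment. An illustrative fragment (the namespace names are invented):

```yaml
# Illustrative Cluster Operator Deployment fragment; namespace names are examples.
env:
  - name: STRIMZI_NAMESPACE
    value: my-namespace-1,my-namespace-2
```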

Table 1.3. Operator resource and scope

Operator | Watched resources | Namespace scope
Cluster Operator | Kafka, KafkaNodePool, KafkaConnect, KafkaConnector, KafkaMirrorMaker2, KafkaBridge, KafkaRebalance | Single, multiple, or all
Topic Operator | KafkaTopic | Single namespace (one Kafka cluster only)
User Operator | KafkaUser | Single namespace (one Kafka cluster only)

Note

For a standalone deployment of the Topic Operator or User Operator, you specify a namespace and connection to the Kafka cluster to watch in the configuration.
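As an illustration of that configuration, a standalone Topic Operator Deployment sets the watched namespace and the Kafka connection through environment variables. The values below are invented examples:

```yaml
# Illustrative fragment of a standalone Topic Operator Deployment;
# the namespace and bootstrap address are examples.
env:
  - name: STRIMZI_NAMESPACE
    value: my-namespace
  - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
    value: my-cluster-kafka-bootstrap:9092
```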

1.2.2. Managing RBAC resources

The Cluster Operator creates and manages role-based access control (RBAC) resources for Streams for Apache Kafka components that need access to OpenShift resources.

For the Cluster Operator to function, it needs permission within the OpenShift cluster to interact with Kafka resources, such as Kafka and KafkaConnect, as well as managed resources like ConfigMap, Pod, Deployment, and Service.

Permission is specified through the following OpenShift RBAC resources:

  • ServiceAccount
  • Role and ClusterRole
  • RoleBinding and ClusterRoleBinding

1.2.2.1. Delegating privileges to Streams for Apache Kafka components

The Cluster Operator runs under a service account called strimzi-cluster-operator, which is assigned cluster roles that give it permission to create the necessary RBAC resources for Streams for Apache Kafka components. Role bindings associate the cluster roles with the service account.

OpenShift enforces privilege escalation prevention, meaning the Cluster Operator cannot grant privileges it does not possess, nor can it grant such privileges in a namespace it cannot access. Consequently, the Cluster Operator must have the necessary privileges for all the components it orchestrates.

The Cluster Operator must be able to do the following:

  • Enable the Topic Operator to manage KafkaTopic resources by creating Role and RoleBinding resources in the relevant namespace.
  • Enable the User Operator to manage KafkaUser resources by creating Role and RoleBinding resources in the relevant namespace.
  • Allow Streams for Apache Kafka to discover the failure domain of a Node by creating a ClusterRoleBinding.

When using rack-aware partition assignment, broker pods need to access information about the Node they are running on, such as the Availability Zone in Amazon AWS. Similarly, when using NodePort type listeners, broker pods need to advertise the address of the Node they are running on. Since a Node is a cluster-scoped resource, this access must be granted through a ClusterRoleBinding, not a namespace-scoped RoleBinding.
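The shape of such a binding looks roughly like this. The binding and namespace names are illustrative; in practice the Cluster Operator creates these resources for you:

```yaml
# Illustrative ClusterRoleBinding granting broker pods access to Node information.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-my-cluster-kafka-init   # illustrative name
subjects:
  - kind: ServiceAccount
    name: my-cluster-kafka
    namespace: my-project
roleRef:
  kind: ClusterRole
  name: strimzi-kafka-broker
  apiGroup: rbac.authorization.k8s.io
```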

The following sections describe the RBAC resources required by the Cluster Operator.

1.2.2.2. ClusterRole resources

The Cluster Operator uses ClusterRole resources to provide the necessary access to resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the cluster roles.

Note

Cluster administrator rights are only needed for the creation of ClusterRole resources. The Cluster Operator will not run under a cluster admin account.

The RBAC resources follow the principle of least privilege and contain only the privileges the Cluster Operator needs to operate the clusters of the Kafka components.

All cluster roles are required by the Cluster Operator in order to delegate privileges.

Table 1.4. ClusterRole resources

Name | Description
strimzi-cluster-operator-namespaced | Access rights for namespace-scoped resources used by the Cluster Operator to deploy and manage the operands.
strimzi-cluster-operator-global | Access rights for cluster-scoped resources used by the Cluster Operator to deploy and manage the operands.
strimzi-cluster-operator-leader-election | Access rights used by the Cluster Operator for leader election.
strimzi-cluster-operator-watched | Access rights used by the Cluster Operator to watch and manage the Streams for Apache Kafka custom resources.
strimzi-kafka-broker | Access rights to allow Kafka brokers to get the topology labels from OpenShift worker nodes when rack-awareness is used.
strimzi-entity-operator | Access rights used by the Topic and User Operators to manage Kafka users and topics.
strimzi-kafka-client | Access rights to allow Kafka Connect, MirrorMaker (1 and 2), and Kafka Bridge to get the topology labels from OpenShift worker nodes when rack-awareness is used.

1.2.2.3. ClusterRoleBinding resources

The Cluster Operator uses ClusterRoleBinding and RoleBinding resources to associate its ClusterRole with its ServiceAccount. Cluster role bindings are required for cluster roles that contain cluster-scoped resources.

Table 1.5. ClusterRoleBinding resources

Name | Description
strimzi-cluster-operator | Grants the Cluster Operator the rights from the strimzi-cluster-operator-global cluster role.
strimzi-cluster-operator-kafka-broker-delegation | Grants the Cluster Operator the rights from the strimzi-kafka-broker cluster role.
strimzi-cluster-operator-kafka-client-delegation | Grants the Cluster Operator the rights from the strimzi-kafka-client cluster role.

Table 1.6. RoleBinding resources

Name | Description
strimzi-cluster-operator | Grants the Cluster Operator the rights from the strimzi-cluster-operator-namespaced cluster role.
strimzi-cluster-operator-leader-election | Grants the Cluster Operator the rights from the strimzi-cluster-operator-leader-election cluster role.
strimzi-cluster-operator-watched | Grants the Cluster Operator the rights from the strimzi-cluster-operator-watched cluster role.
strimzi-cluster-operator-entity-operator-delegation | Grants the Cluster Operator the rights from the strimzi-entity-operator cluster role.

1.2.2.4. ServiceAccount resources

The Cluster Operator runs using the strimzi-cluster-operator ServiceAccount. This service account grants it the privileges it requires to manage the operands. The Cluster Operator creates additional ClusterRoleBinding and RoleBinding resources to delegate some of these RBAC rights to the operands.

Each of the operands uses its own service account created by the Cluster Operator. This allows the Cluster Operator to follow the principle of least privilege and give the operands only the access rights that they really need.

Table 1.7. ServiceAccount resources

Name | Used by
<cluster_name>-kafka | Kafka broker pods
<cluster_name>-entity-operator | Entity Operator
<cluster_name>-cruise-control | Cruise Control pods
<cluster_name>-kafka-exporter | Kafka Exporter pods
<cluster_name>-connect | Kafka Connect pods
<cluster_name>-mirror-maker | MirrorMaker pods
<cluster_name>-mirrormaker2 | MirrorMaker 2 pods
<cluster_name>-bridge | Kafka Bridge pods

1.2.3. Managing pod resources

The StrimziPodSet custom resource is used by Streams for Apache Kafka to create and manage Kafka, Kafka Connect, and MirrorMaker 2 pods.

You must not create, update, or delete StrimziPodSet resources. The StrimziPodSet custom resource is used internally and resources are managed solely by the Cluster Operator. As a consequence, the Cluster Operator must be running properly to avoid the possibility of pods not starting and Kafka clusters not being available.

Note

OpenShift Deployment resources are used for creating and managing the pods of other components.

1.2.4. Lock acquisition warnings for cluster operations

The Cluster Operator ensures that only one operation runs at a time for each cluster by using locks. If another operation attempts to start while a lock is held, it waits until the current operation completes.

Operations such as cluster creation, rolling updates, scaling down, and scaling up are managed by the Cluster Operator.

If acquiring a lock takes longer than the configured timeout (STRIMZI_OPERATION_TIMEOUT_MS), a DEBUG message is logged:

Example DEBUG message for lock acquisition

DEBUG AbstractOperator:406 - Reconciliation #55(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.

Timed-out operations are retried during the next periodic reconciliation, at intervals defined by STRIMZI_FULL_RECONCILIATION_INTERVAL_MS (120 seconds by default).
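Both values are environment variables on the Cluster Operator Deployment. An illustrative fragment (the values shown are examples, not recommendations):

```yaml
# Illustrative Cluster Operator Deployment fragment.
env:
  - name: STRIMZI_OPERATION_TIMEOUT_MS
    value: "300000"   # operation/lock timeout, in milliseconds
  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
    value: "120000"   # periodic reconciliation interval (default 120 seconds)
```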

If an INFO message continues to appear with the same reconciliation number, it might indicate a lock release error:

Example INFO message for reconciliation

INFO  AbstractOperator:399 - Reconciliation #1(watch) Kafka(myproject/my-cluster): Reconciliation is in progress

Restarting the Cluster Operator can resolve such issues.

1.3. Using the Kafka Bridge to connect with a Kafka cluster

You can use the Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.

When you set up the Kafka Bridge, you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface.
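As a hedged sketch of producing a message over HTTP through the Kafka Bridge REST API (the bridge service address and topic name are invented; the request itself requires a deployed bridge, so it is shown commented out):

```shell
# Hypothetical example: produce a JSON message to topic my-topic through the
# Kafka Bridge REST API (bridge address is an example, not from this guide).
payload='{"records":[{"value":"hello"}]}'
# curl -X POST http://my-bridge-bridge-service:8080/topics/my-topic \
#   -H 'Content-Type: application/vnd.kafka.json.v2+json' \
#   -d "$payload"
echo "$payload"
```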

1.4. Seamless FIPS support

Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. When running Streams for Apache Kafka on a FIPS-enabled OpenShift cluster, the OpenJDK used in Streams for Apache Kafka container images automatically switches to FIPS mode. From version 2.3, Streams for Apache Kafka can run on FIPS-enabled OpenShift clusters without any changes or special configuration. It uses only the FIPS-compliant security libraries from the OpenJDK.

Important

If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid any issues, we suggest increasing the memory request to at least 512Mi.
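A memory request can be raised in the resources section of the affected component's configuration. A minimal sketch; which component template to set it on depends on your deployment:

```yaml
# Illustrative fragment: raising the container memory request
# on a FIPS-enabled cluster (the value follows the suggestion above).
resources:
  requests:
    memory: 512Mi
```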

For more information about the NIST validation program and validated modules, see Cryptographic Module Validation Program on the NIST website.

Note

Compatibility with Streams for Apache Kafka Proxy and Streams for Apache Kafka Console has not been tested regarding FIPS support. While they are expected to function properly, we cannot guarantee full support at this time.

1.4.1. Minimum password length

When running in FIPS mode, SCRAM-SHA-512 passwords must be at least 32 characters long. From Streams for Apache Kafka 2.3, the default password length in the Streams for Apache Kafka User Operator is also set to 32 characters. If you have a Kafka cluster with custom configuration that uses a password length of less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. You can do that, for example, by deleting the user secret and waiting for the User Operator to create a new password with the appropriate length.

1.5. Document conventions

User-replaced values, also known as replaceables, are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used.

For example, the following code shows that <my_namespace> must be replaced by the correct namespace name:

sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml