
Glossary


Red Hat Streams for Apache Kafka 3.1

Discover the features and functions of Streams for Apache Kafka 3.1 on OpenShift Container Platform

Abstract

This glossary explains terminology unique to Streams for Apache Kafka on OpenShift and its components. Terms from Kafka and OpenShift are outside its scope.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.

Procedure

  1. Click Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave all other fields at their default values.
  4. In the Reporter field, enter your name.
  5. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

Chapter 1. A

1.1. Access Operator

An optional operator that simplifies the sharing of Kafka connection information and credentials between namespaces. Connection details are stored centrally in a Secret resource.

1.2. Authentication

Defines how clients prove their identity to the Kafka cluster. Streams for Apache Kafka manages authentication as a client-server relationship:

  • Server-Side: The Kafka cluster’s listeners are configured to require a specific authentication type.
  • Client-Side: A client (a KafkaUser or a client-based Kafka component managed by Streams for Apache Kafka) must be configured to provide matching credentials.

    Listener authentication (Server-Side)
    Listener authentication is configured per listener in the spec.kafka.listeners array of the Kafka custom resource. Supported types include tls, scram-sha-512, and custom.
    Client authentication (Kafka user)
    For Kafka users, authentication is managed using the KafkaUser custom resource. Supported types are tls, tls-external (using an external CA), and scram-sha-512. Streams for Apache Kafka automatically creates the necessary Secret resources for the user.
    Client authentication (Kafka components)
    For Streams for Apache Kafka-managed components, authentication is managed in the custom resource of the component, such as KafkaConnect. Supported types include tls, scram-sha-256, scram-sha-512, plain, and custom.
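As a minimal sketch, a Kafka custom resource might require mTLS authentication on an internal listener (the cluster name and most other required fields are illustrative or omitted):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # illustrative name
spec:
  kafka:
    # ... other broker configuration omitted ...
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls         # listener requires mTLS client certificates
```

A client then authenticates with credentials of the matching type, for example a KafkaUser with `spec.authentication.type: tls`.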


1.3. Authorization (cluster-wide)

Defines which actions an authenticated client is permitted to perform on Kafka resources, such as writing to or reading from a topic. Configuration involves setting a cluster-wide mechanism and then, if required, defining user-specific rules.

Cluster-wide authorization
This defines the overall mechanism used by the Kafka cluster to control client actions. It’s configured in the spec.kafka.authorization section of the Kafka custom resource. Supported types include simple (using Kafka’s built-in authorizer) and custom (using custom authorizers).
User authorization (ACLs)
This defines specific Access Control Lists (ACLs) for a user, granting permissions to perform actions on Kafka resources. The ACLs are defined in the spec.authorization section of the KafkaUser custom resource. If using a custom authorization mechanism, user permissions are typically managed within the external authorization system and not through the KafkaUser resource.
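For illustration, a KafkaUser with simple authorization might grant read access to a single topic (user, cluster, and topic names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                  # illustrative name
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
        operations:              # ACL grants these operations on the topic
          - Read
          - Describe
```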


Chapter 2. C

2.1. Clients CA

A Certificate Authority managed by the Streams for Apache Kafka Cluster Operator that issues TLS certificates for Kafka clients. These certificates are used for mutual TLS (mTLS) authentication between external clients and Kafka brokers.

2.2. Cluster CA

A Certificate Authority managed by the Streams for Apache Kafka Cluster Operator that issues TLS certificates to secure communication between Kafka brokers, internal components, and Kafka clients. These certificates enable encrypted and authenticated communication over TLS.

2.3. Cluster Operator

The central operator responsible for deploying and managing Kafka clusters, Kafka Connect, Kafka MirrorMaker, and related components.


2.4. Cruise Control

A component that provides automated Kafka cluster rebalancing and optimization. Cruise Control is configured through the Kafka custom resource, while rebalancing operations are managed using the KafkaRebalance custom resource.
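As a sketch, Cruise Control is deployed alongside the cluster by adding a cruiseControl section to the Kafka custom resource; an empty object enables it with default settings:

```yaml
# Excerpt from a Kafka custom resource
spec:
  # ... kafka and other sections omitted ...
  cruiseControl: {}   # deploy Cruise Control with default configuration
```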


Chapter 3. D

3.1. Drain Cleaner

A utility installed as a separate component that ensures safe pod evictions during rolling updates to prevent data loss or downtime.

For more information, see Evicting pods with the Streams for Apache Kafka Drain Cleaner.

Chapter 4. E

4.1. Encryption

Streams for Apache Kafka supports Transport Layer Security (TLS) to encrypt communication between Kafka and its clients. TLS is enabled per listener in the Kafka custom resource, and communication between internal components is always encrypted.

4.2. Entity Operator

The Entity Operator runs the Topic Operator and User Operator in separate containers within its pod, allowing them to handle topic and user management.

Chapter 5. F

5.1. Feature gate

Used to enable or disable specific features and functions managed by Streams for Apache Kafka operators. New features may be introduced initially through feature gates.

For more information, see Feature gates.

Chapter 6. K

6.1. Kafka (custom resource)

A custom resource for deploying and configuring a Kafka cluster, including settings for nodes, listeners, storage, security, and internal components like Cruise Control and the Entity Operator.

For more information, see the Kafka schema reference.

6.2. Kafka Bridge

Provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster.

For more information, see Using the Kafka Bridge.

6.3. KafkaBridge (custom resource)

A custom resource used to deploy and configure a Kafka Bridge instance, specifying replicas, authentication, and connection details.

For more information, see the KafkaBridge schema reference.

6.4. KafkaConnect (custom resource)

A custom resource used to deploy and configure a Kafka Connect cluster for integrating external systems with Kafka.

For more information, see the KafkaConnect schema reference.

6.5. KafkaConnector (custom resource)

A custom resource for managing individual Kafka connectors in a Kafka Connect cluster declaratively and independently of the KafkaConnect deployment.

For more information, see the KafkaConnector schema reference.

6.6. KafkaExporter

The Kafka Exporter exposes Kafka metrics for Prometheus. It is configured as part of the Kafka custom resource.

For more information, see the KafkaExporterSpec schema reference.

6.7. KafkaMirrorMaker2 (custom resource)

A custom resource for deploying a Kafka MirrorMaker 2 instance to replicate data between Kafka clusters.

For more information, see the KafkaMirrorMaker2 schema reference.

6.8. KafkaNodePool (custom resource)

A custom resource used to configure distinct groups of nodes within a Kafka cluster. Nodes in a node pool can be configured to operate as Kafka brokers, controllers, or both.
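A minimal sketch of a broker-only node pool (pool name, cluster label, and sizes are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers                  # illustrative pool name
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker                     # could also list controller, or both
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
```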

For more information, see the KafkaNodePool schema reference.

6.9. KafkaRebalance (custom resource)

A custom resource that triggers and manages cluster rebalancing through Cruise Control by setting optimization goals.

Rebalance modes:

full
Load rebalanced across all brokers
add-brokers
Replicas moved to newly added brokers
remove-brokers
Replicas moved off brokers being removed
remove-disks
Data moved between storage volumes within the same broker
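For example, a rebalance that moves replicas off a broker being removed might look like this sketch (names and broker IDs are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-brokers
  brokers:
    - 3            # ID of the broker being removed
```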

For more information, see the KafkaRebalance schema reference.

6.10. KafkaTopic (custom resource)

A custom resource for managing Kafka topics (creation, configuration, deletion) through the Topic Operator.
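A minimal sketch of a KafkaTopic resource (topic name, cluster label, and settings are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                 # illustrative topic name
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000     # retain records for 7 days
```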

For more information, see the KafkaTopic schema reference.

6.11. KafkaUser (custom resource)

A custom resource for managing Kafka users (creation, configuration, deletion) through the User Operator, including their authentication credentials and access permissions.

For more information, see the KafkaUser schema reference.

Chapter 7. L

7.1. Listener

Defines how clients connect to the Kafka cluster. Streams for Apache Kafka supports several listener types for exposing Kafka internally or externally.

Listener types:

internal
Kafka exposed only within the OpenShift cluster
route
Kafka exposed externally using OpenShift Routes
loadbalancer
Kafka exposed externally using a LoadBalancer service
nodeport
Kafka exposed externally using NodePort services
ingress
Kafka exposed externally using OpenShift NGINX Ingress with TLS passthrough
cluster-ip
Kafka exposed using a per-broker ClusterIP service
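For instance, an external listener of type route might be sketched as follows within the Kafka custom resource (the listener name is illustrative; route listeners always use TLS):

```yaml
# Excerpt from spec.kafka of a Kafka custom resource
listeners:
  - name: external
    port: 9094
    type: route     # exposed outside the cluster via OpenShift Routes
    tls: true       # TLS is required for route listeners
```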

7.2. Logging (configuration)

Logging for Kafka components and Streams for Apache Kafka operators is configured through their custom resources. The configuration uses Log4j2 and supports dynamic updates without restarting pods.

Configuration methods:

inline
Loggers and levels are defined directly in the custom resource. Used for simple changes to log levels.
external
Loggers and levels are defined in a ConfigMap referenced by the custom resource. Used for complex, reusable, or filtered configurations.
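As a sketch of the inline method, a logging section in a component's custom resource might raise or lower log levels directly; the exact Log4j2 logger keys depend on the component, so the key below is illustrative:

```yaml
# Excerpt from spec.kafka of a Kafka custom resource
logging:
  type: inline
  loggers:
    rootLogger.level: INFO    # illustrative Log4j2 logger key
```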

Chapter 8. M

8.1. Metrics

Streams for Apache Kafka components can expose Prometheus-formatted metrics for monitoring. Metrics for each component are enabled through its custom resource.

For more information, see Introducing metrics.

8.2. Metrics Reporter

A component that exposes metrics from Streams for Apache Kafka-managed components such as Kafka brokers, Kafka Connect, Kafka MirrorMaker 2, and Kafka Bridge in Prometheus format. The Metrics Reporter is enabled through the metricsConfig property in the corresponding custom resource.

Chapter 9. N

9.1. Network policy

Streams for Apache Kafka automatically creates a NetworkPolicy resource for each listener, allowing connections from all namespaces by default. You can configure the networkPolicyPeers property to restrict access to specific applications or namespaces.
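For example, a listener could be restricted to clients in a single namespace with a sketch like the following (the namespace name is illustrative):

```yaml
# Excerpt from spec.kafka of a Kafka custom resource
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    networkPolicyPeers:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: my-app-namespace  # illustrative
```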

Chapter 10. S

10.1. Storage (configuration)

Defines disk storage for Kafka nodes within the KafkaNodePool custom resource.

Supported storage types:

ephemeral
Temporary storage tied to the pod lifecycle
persistent-claim
Durable storage using PersistentVolumeClaims (PVCs)
jbod
Multiple disks or volumes (ephemeral or persistent)
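A jbod configuration combining two persistent volumes might be sketched as follows (sizes are illustrative):

```yaml
# Excerpt from the spec of a KafkaNodePool custom resource
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
```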

For more information, see Configuring Kafka storage.

10.2. Streams for Apache Kafka API schema

The formal specification that defines the structure, properties, and validation rules for Streams for Apache Kafka custom resources. Also referred to as the Streams for Apache Kafka custom resource schema.

10.3. Streams for Apache Kafka Operator

The primary deployment artifact for Streams for Apache Kafka. An operator that installs and configures components for running Kafka on OpenShift, including the Cluster Operator.

10.4. Streams for Apache Kafka operators

The suite of OpenShift operators (Cluster Operator, Topic Operator, User Operator) that automate Kafka cluster management.

10.5. StrimziPodSet (custom resource)

A custom resource used by the Streams for Apache Kafka Cluster Operator to manage the lifecycle of broker pods, replacing OpenShift StatefulSet resources to provide greater control over pod identity and updates.

10.6. Super user

A Kafka user principal with full administrative access that bypasses all ACL checks. Super users are configured via the superUsers property in the Kafka custom resource when simple authorization is enabled.
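As a sketch, a super user is declared alongside simple authorization in the Kafka custom resource (the principal name is illustrative; for mTLS users it is the certificate's distinguished name):

```yaml
# Excerpt from spec.kafka of a Kafka custom resource
authorization:
  type: simple
  superUsers:
    - CN=cluster-admin   # illustrative principal; bypasses all ACL checks
```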

Chapter 11. T

11.1. Tiered storage

A capability enabling Kafka brokers to store topic log segments across different storage tiers, such as local disk and remote object storage. It is configured through the Kafka custom resource.


11.2. Topic Operator

The operator responsible for managing Kafka topics through KafkaTopic custom resources.

Chapter 12. U

12.1. Upgrade

The process of updating the Cluster Operator and the Kafka cluster it manages. Upgrade typically involves upgrading the operator first, then the Kafka version, and finally the metadata version.

Upgrade paths:

incremental upgrade
Move between consecutive minor versions
multi-version upgrade
Skip one or more minor versions

For more information, see Upgrading Streams for Apache Kafka.

12.2. User Operator

The operator responsible for managing Kafka users and ACLs through KafkaUser custom resources.

Appendix A. Using your subscription

Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

A.1. Accessing Your Account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

A.2. Activating a Subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

A.3. Downloading Zip and Tar Files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
  3. Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
  4. Click the Download link for your component.

A.4. Installing packages with DNF

To install a package and all the package dependencies, use:

dnf install <package_name>

To install a previously-downloaded package from a local directory, use:

dnf install <path_to_download_package>

Revised on 2025-12-16 10:57:49 UTC

Legal Notice

Copyright © Red Hat.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.