Glossary
Discover the features and functions of Streams for Apache Kafka 3.1 on OpenShift Container Platform
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
  - The URL of the page where you found the issue.
  - A detailed description of the issue.
  You can leave the information in any other fields at their default values.
- Add a reporter name.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. A
1.1. Access Operator
An optional operator that simplifies the sharing of Kafka connection information and credentials between namespaces. Connection details are stored centrally in a Secret resource.
1.2. Authentication
Defines how clients prove their identity to the Kafka cluster. Streams for Apache Kafka manages authentication as a client-server relationship:
- Server-Side: The Kafka cluster’s listeners are configured to require a specific authentication type.
- Client-Side: A client (a KafkaUser or a client-based Kafka component managed by Streams for Apache Kafka) must be configured to provide matching credentials.
- Listener authentication (Server-Side)
- Listener authentication is configured per listener in the spec.kafka.listeners array of the Kafka custom resource. Supported types include tls, scram-sha-512, and custom.
- Client authentication (Kafka user)
- For Kafka users, authentication is managed using the KafkaUser custom resource. Supported types are tls, tls-external (using an external CA), and scram-sha-512. Streams for Apache Kafka automatically creates the necessary Secret resources for the user.
- Client authentication (Kafka components)
- For Streams for Apache Kafka-managed components, authentication is managed in the custom resource of the component, such as KafkaConnect. Supported types include tls, scram-sha-256, scram-sha-512, plain, and custom.
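As an illustrative sketch of this client-server relationship (resource names such as my-cluster and my-user are placeholders, and both resources are abbreviated to the relevant fields):

```yaml
# Server side: a listener that requires mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
---
# Client side: a user whose credentials match the listener's requirement
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster  # binds the user to the cluster
spec:
  authentication:
    type: tls
```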
1.3. Authorization (cluster-wide)
Defines which actions an authenticated client is permitted to perform on Kafka resources, such as writing to or reading from a topic. Configuration involves setting a cluster-wide mechanism and then, if required, defining user-specific rules.
- Cluster-wide authorization
- This defines the overall mechanism used by the Kafka cluster to control client actions. It is configured in the spec.kafka.authorization section of the Kafka custom resource. Supported types include simple (using Kafka’s built-in authorizer) and custom (using custom authorizers).
- User authorization (ACLs)
- This defines specific Access Control Lists (ACLs) for a user, granting permissions to perform actions on Kafka resources. The ACLs are defined in the spec.authorization section of the KafkaUser custom resource. If using a custom authorization mechanism, user permissions are typically managed within the external authorization system and not through the KafkaUser resource.
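The two levels can be sketched as follows (abbreviated resources; names such as my-cluster and my-topic are placeholders):

```yaml
# Cluster-wide mechanism, set once for the Kafka cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    authorization:
      type: simple
---
# User-specific rules, defined per KafkaUser
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
        operations:
          - Read
          - Describe
```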
Chapter 2. C
2.1. Clients CA
A Certificate Authority managed by the Streams for Apache Kafka Cluster Operator that issues TLS certificates for Kafka clients. These certificates are used for mutual TLS (mTLS) authentication between external clients and Kafka brokers.
2.2. Cluster CA
A Certificate Authority managed by the Streams for Apache Kafka Cluster Operator that issues TLS certificates to secure communication between Kafka brokers, internal components, and Kafka clients. These certificates enable encrypted and authenticated communication over TLS.
2.3. Cluster Operator
The central operator responsible for deploying and managing Kafka clusters, Kafka Connect, Kafka MirrorMaker, and related components.
2.4. Cruise Control
A component that provides automated Kafka cluster rebalancing and optimization. Cruise Control is configured through the Kafka custom resource, while rebalancing operations are managed using the KafkaRebalance custom resource.
Chapter 3. D
3.1. Drain Cleaner
A utility installed as a separate component that ensures safe pod evictions during rolling updates to prevent data loss or downtime.
For more information, see Evicting pods with the Streams for Apache Kafka Drain Cleaner.
Chapter 4. E
4.1. Encryption
Streams for Apache Kafka supports Transport Layer Security (TLS) to encrypt communication between Kafka and its clients. TLS is enabled per listener in the Kafka custom resource, and communication between internal components is always encrypted.
4.2. Entity Operator
The Entity Operator runs the Topic Operator and User Operator in separate containers within its pod, allowing them to handle topic and user management.
Chapter 5. F
5.1. Feature gate
Used to enable or disable specific features and functions managed by Streams for Apache Kafka operators. New features may be introduced initially through feature gates.
For more information, see Feature gates.
Chapter 6. K
6.1. Kafka (custom resource)
A custom resource for deploying and configuring a Kafka cluster, including settings for nodes, listeners, storage, security, and internal components like Cruise Control and the Entity Operator.
For more information, see the Kafka schema reference.
6.2. Kafka Bridge
Provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster.
For more information, see Using the Kafka Bridge.
6.3. KafkaBridge (custom resource)
A custom resource used to deploy and configure a Kafka Bridge instance, specifying replicas, authentication, and connection details.
For more information, see the KafkaBridge schema reference.
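An abbreviated sketch of the resource (the name and bootstrap address are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  # Address of the Kafka cluster the bridge connects to
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080  # port for the RESTful HTTP interface
```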
6.4. KafkaConnect (custom resource)
A custom resource used to deploy and configure a Kafka Connect cluster for integrating external systems with Kafka.
For more information, see the KafkaConnect schema reference.
6.5. KafkaConnector (custom resource)
A custom resource for managing individual Kafka connectors in a Kafka Connect cluster declaratively and independently of the KafkaConnect deployment.
For more information, see the KafkaConnector schema reference.
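As an illustrative sketch, a connector declared against a Kafka Connect cluster (resource names and the file path are placeholders; the connector class shown ships with Apache Kafka):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    # Links the connector to the KafkaConnect cluster that runs it
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  config:
    file: /tmp/source-data.txt
    topic: my-topic
```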
6.6. KafkaExporter
The Kafka Exporter exposes Kafka metrics for Prometheus. It is configured as part of the Kafka custom resource.
For more information, see the KafkaExporterSpec schema reference.
6.7. KafkaMirrorMaker2 (custom resource)
A custom resource for deploying a Kafka MirrorMaker 2 instance to replicate data between Kafka clusters.
For more information, see the KafkaMirrorMaker2 schema reference.
6.8. KafkaNodePool (custom resource)
A custom resource used to configure distinct groups of nodes within a Kafka cluster. Nodes in a node pool can be configured to operate as Kafka brokers, controllers, or both.
For more information, see the KafkaNodePool schema reference.
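For example, a pool of three broker-only nodes might look like this (an abbreviated sketch; names and sizes are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker-pool
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker   # a pool can also take the controller role, or both
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
```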
6.9. KafkaRebalance (custom resource)
A custom resource that triggers and manages cluster rebalancing through Cruise Control by setting optimization goals.
Rebalance modes:
- full
- Load rebalanced across all brokers
- add-brokers
- Replicas moved to newly added brokers
- remove-brokers
- Replicas moved off brokers being removed
- remove-disks
- Data moved between storage volumes within the same broker
For more information, see the KafkaRebalance schema reference.
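For example, a rebalance that moves replicas off brokers being scaled down might be declared as follows (an illustrative sketch; the broker IDs are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-brokers
  brokers:   # IDs of the brokers to move replicas off
    - 3
    - 4
```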
6.10. KafkaTopic (custom resource)
A custom resource for managing Kafka topics (creation, configuration, deletion) through the Topic Operator.
For more information, see the KafkaTopic schema reference.
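An abbreviated sketch of the resource (names and values are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000  # standard Kafka topic config: 7 days
```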
6.11. KafkaUser (custom resource)
A custom resource for managing Kafka users (creation, configuration, deletion) through the User Operator, including their authentication credentials and access permissions.
For more information, see the KafkaUser schema reference.
Chapter 7. L
7.1. Listener
Defines how clients connect to the Kafka cluster. Streams for Apache Kafka supports several listener types for exposing Kafka internally or externally.
Listener types:
- internal
- Kafka exposed only within the OpenShift cluster
- route
- Kafka exposed externally using OpenShift Routes
- loadbalancer
- Kafka exposed externally using a LoadBalancer service
- nodeport
- Kafka exposed externally using NodePort services
- ingress
- Kafka exposed externally using OpenShift NGINX Ingress with TLS passthrough
- cluster-ip
- Kafka exposed using a per-broker ClusterIP service
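For example, an internal and an external listener might be combined in the Kafka custom resource as follows (an abbreviated sketch; listener names are placeholders):

```yaml
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: external
        port: 9094
        type: route
        tls: true   # route listeners always use TLS
```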
7.2. Logging (configuration)
Logging for Kafka components and Streams for Apache Kafka operators is configured through their custom resources. The configuration uses Log4j2 and supports dynamic updates without restarting pods.
Configuration methods:
- inline
- Loggers and levels are defined directly in the custom resource. Used for simple changes to log levels.
- external
- Loggers and levels are defined in a ConfigMap referenced by the custom resource. Used for complex, reusable, or filtered configurations.
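The two methods can be sketched as follows (abbreviated fragments of a component's custom resource; the logger key assumes Log4j2 naming, and the ConfigMap name and key are placeholders):

```yaml
# inline: loggers defined directly in the custom resource
spec:
  kafka:
    logging:
      type: inline
      loggers:
        rootLogger.level: INFO
---
# external: loggers defined in a referenced ConfigMap
spec:
  kafka:
    logging:
      type: external
      valueFrom:
        configMapKeyRef:
          name: my-logging-config
          key: log4j2.properties
```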
Chapter 8. M
8.1. Metrics
Streams for Apache Kafka components can expose Prometheus-formatted metrics for monitoring. Metrics for a component are enabled through its custom resource.
For more information, see Introducing metrics.
8.2. Metrics Reporter
A component that exposes metrics from Streams for Apache Kafka-managed components such as Kafka brokers, Kafka Connect, Kafka MirrorMaker 2, and Kafka Bridge in Prometheus format. The Metrics Reporter is enabled through the metricsConfig property in the corresponding custom resource.
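A minimal sketch of the metricsConfig property, assuming the strimziMetricsReporter type with an allowList of metric-name patterns (the patterns shown are illustrative; check the schema reference for the exact fields):

```yaml
spec:
  kafka:
    metricsConfig:
      type: strimziMetricsReporter
      values:
        allowList:          # regex patterns selecting which metrics to expose
          - kafka_log.*
          - kafka_network.*
```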
Chapter 9. N
9.1. Network policy
Streams for Apache Kafka automatically creates a NetworkPolicy resource for each listener, allowing connections from all namespaces by default. You can configure the networkPolicyPeers property to restrict access to specific applications or namespaces.
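For example, access to a listener might be restricted as follows (an abbreviated fragment of the Kafka custom resource; the selector labels are placeholders):

```yaml
spec:
  kafka:
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        networkPolicyPeers:
          # only pods with this label may connect
          - podSelector:
              matchLabels:
                app: kafka-client
          # only namespaces with this label may connect
          - namespaceSelector:
              matchLabels:
                project: my-project
```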
Chapter 10. S
10.1. Storage (configuration)
Defines disk storage for Kafka nodes within the KafkaNodePool custom resource.
Supported storage types:
- ephemeral
- Temporary storage tied to the pod lifecycle
- persistent-claim
- Durable storage using PersistentVolumeClaims (PVCs)
- jbod
- Multiple disks or volumes (ephemeral or persistent)
For more information, see Configuring Kafka storage.
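For example, jbod storage combining two persistent volumes might be declared as follows (an abbreviated KafkaNodePool fragment; sizes are placeholders):

```yaml
spec:
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false   # keep the PVC if the cluster is deleted
      - id: 1
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```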
10.2. Streams for Apache Kafka API schema
The formal specification that defines the structure, properties, and validation rules for Streams for Apache Kafka custom resources. Also referred to as the Streams for Apache Kafka custom resource schema.
10.3. Streams for Apache Kafka Operator
The primary deployment artifact for Streams for Apache Kafka. An operator that installs and configures components for running Kafka on OpenShift, including the Cluster Operator.
10.4. Streams for Apache Kafka operators
The suite of OpenShift operators (Cluster Operator, Topic Operator, User Operator) that automate Kafka cluster management.
10.5. StrimziPodSet (custom resource)
A custom resource used by the Streams for Apache Kafka Cluster Operator to manage the lifecycle of broker pods, replacing OpenShift StatefulSet resources to provide greater control over pod identity and updates.
10.6. Super user
A Kafka user principal with full administrative access that bypasses all ACL checks. Super users are configured via the superUsers property in the Kafka custom resource when simple authorization is enabled.
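An abbreviated sketch, assuming simple authorization (the principal name is a placeholder; in the schema, superUsers sits under spec.kafka.authorization):

```yaml
spec:
  kafka:
    authorization:
      type: simple
      superUsers:
        - CN=my-admin-user   # bypasses all ACL checks
```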
Chapter 11. T
11.1. Tiered storage
A capability enabling Kafka brokers to store topic log segments across different storage tiers, such as local disk and remote object storage. It is configured through the Kafka custom resource.
11.2. Topic Operator
The operator responsible for managing Kafka topics through KafkaTopic custom resources.
Chapter 12. U
12.1. Upgrade
The process of updating the Cluster Operator and the Kafka cluster it manages. Upgrade typically involves upgrading the operator first, then the Kafka version, and finally the metadata version.
Upgrade paths:
- incremental upgrade
- Move between consecutive minor versions
- multi-version upgrade
- Skip one or more minor versions
For more information, see Upgrading Streams for Apache Kafka.
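The ordered steps can be sketched against the Kafka custom resource (an abbreviated fragment; the placeholder values follow the document's angle-bracket convention):

```yaml
# Upgrade the Cluster Operator first, then change these properties in order:
spec:
  kafka:
    version: <new_kafka_version>         # second: the Kafka version
    metadataVersion: <metadata_version>  # last: after brokers are running the new version
```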
12.2. User Operator
The operator responsible for managing Kafka users and ACLs through KafkaUser custom resources.
Appendix A. Using your subscription
Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
A.1. Accessing Your Account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
A.2. Activating a Subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
A.3. Downloading Zip and Tar Files
To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
- Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
- Click the Download link for your component.
A.4. Installing packages with DNF
To install a package and all the package dependencies, use:
dnf install <package_name>
To install a previously-downloaded package from a local directory, use:
dnf install <path_to_download_package>
Revised on 2025-12-16 10:57:49 UTC