Chapter 6. Preparing for your Streams for Apache Kafka deployment


Prepare for a deployment of Streams for Apache Kafka by completing any necessary pre-deployment tasks. Take the preparatory steps that apply to your specific requirements, such as planning your Cluster Operator deployment, pushing container images to your own registry, and designating Streams for Apache Kafka administrators.

Note

To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.

6.1. Deployment prerequisites

To deploy Streams for Apache Kafka, you will need the following:

  • An OpenShift 4.16–4.20 cluster.

    Streams for Apache Kafka is based on Strimzi 0.48.x.

  • The oc command-line tool is installed and configured to connect to the running cluster.

6.2. Planning your Cluster Operator deployment

To support a stable and reliable Streams for Apache Kafka deployment, follow the best practices in this section. Run a single Cluster Operator per OpenShift cluster, choose an appropriate watch strategy, and isolate components within watched namespaces to reduce the risk of conflicts and unexpected behavior.

6.2.1. Avoiding deployment conflicts

A single operator is capable of managing multiple Kafka clusters across different namespaces. Deploying multiple instances of the Cluster Operator, particularly with different versions, introduces the following risks:

Resource conflicts
Conflicts over cluster-scoped resources like Custom Resource Definitions (CRDs) and ClusterRoles, leading to unpredictable behavior. This conflict occurs even when the operators are deployed in separate namespaces.
Version incompatibility
Different operator versions can create compatibility issues with the Kafka clusters they manage. New Streams for Apache Kafka releases may introduce features, bug fixes, or other changes that are not backward-compatible.

Approach to avoid risks

To avoid these risks, the recommended approach to deploying the Cluster Operator is as follows:

Run a single Cluster Operator
Deploy only one Cluster Operator per OpenShift cluster.
Consider a dedicated namespace
Install the Cluster Operator in its own namespace, separate from the Kafka components it manages. This separation is most useful when the operator is configured to watch multiple namespaces, but it can also help prevent uncontrolled growth of resources in a single namespace.
Keep everything updated
Regularly update Streams for Apache Kafka and the version of Kafka it manages so that you have the latest features, bug fixes, and enhancements.

6.2.2. Choosing namespace watch options

You configure the Cluster Operator to watch for changes to Kafka resources in specific namespaces.

You can configure the operator to watch:

  • A single namespace
  • A specific list of multiple namespaces
  • All namespaces

Watching a specific list of multiple namespaces has the biggest impact on performance due to increased processing overhead. To optimize performance, watch either a single namespace for focused monitoring or all namespaces for a comprehensive view of the entire cluster.
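The watch scope is set with the STRIMZI_NAMESPACE environment variable in the Cluster Operator Deployment. The following sketch shows the three options; the namespace names are placeholders:

```yaml
# In install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
env:
  # Watch a single namespace (here, the operator's own namespace):
  - name: STRIMZI_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  # Or watch a specific list of namespaces (placeholder names):
  # - name: STRIMZI_NAMESPACE
  #   value: my-kafka-project-1,my-kafka-project-2
  # Or watch all namespaces:
  # - name: STRIMZI_NAMESPACE
  #   value: "*"
```

Watching multiple or all namespaces also requires the corresponding cluster-wide RBAC for the operator's service account.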

6.2.3. Isolating components in watched namespaces

After deploying the Cluster Operator, it begins watching specified namespaces for changes to Kafka resources. To reduce risks and maintain reliability, isolate component types within each watched namespace. Each namespace should contain only one instance of a given component type, such as one Kafka cluster, to avoid the following types of issues:

  • Conflicting resource names
  • Ambiguity in access management
  • Topic and user name collisions
  • Unpredictable behavior during upgrades or recovery
Note

As Streams for Apache Kafka is based on Strimzi, the same issues can also arise when combining Streams for Apache Kafka operators with Strimzi operators in an OpenShift cluster.

6.3. Pushing container images to your own registry

Container images for Streams for Apache Kafka are available in the Red Hat Ecosystem Catalog. The installation YAML files provided by Streams for Apache Kafka pull the images directly from the Red Hat Ecosystem Catalog.

If you do not have access to the Red Hat Ecosystem Catalog, or want to use your own container repository, do the following:

  1. Pull all container images listed in the following section.
  2. Push them into your own registry.
  3. Update the image names in the installation YAML files.
Note

Each Kafka version supported for the release has a separate image.
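A sketch of the three steps, assuming podman is available and `my-registry.example.com` is a placeholder for your own registry. The heredoc creates a trimmed stand-in for a real installation file so the rewrite in step 3 can be tried locally:

```shell
# Steps 1-2: pull each listed image and push it to your own registry, e.g.:
#   podman pull registry.redhat.io/amq-streams/strimzi-rhel9-operator:3.1.0
#   podman tag  registry.redhat.io/amq-streams/strimzi-rhel9-operator:3.1.0 \
#               my-registry.example.com/amq-streams/strimzi-rhel9-operator:3.1.0
#   podman push my-registry.example.com/amq-streams/strimzi-rhel9-operator:3.1.0

# Step 3: rewrite image references in the installation YAML files.
# A trimmed stand-in file is created here in place of the real install files:
mkdir -p /tmp/strimzi-install
cat > /tmp/strimzi-install/060-Deployment.yaml <<'EOF'
image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:3.1.0
EOF
sed -i 's|registry\.redhat\.io|my-registry.example.com|g' /tmp/strimzi-install/*.yaml
grep image: /tmp/strimzi-install/060-Deployment.yaml
```

In a real deployment, run the sed command against the `install/` directory shipped with the Streams for Apache Kafka deployment files instead of the stand-in path.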

6.3.1. Streams for Apache Kafka container images


Kafka

  • registry.redhat.io/amq-streams/kafka-41-rhel9:3.1.0
  • registry.redhat.io/amq-streams/kafka-40-rhel9:3.1.0

Images for running Kafka components, including:

  • Kafka Broker
  • Kafka Connect
  • Kafka MirrorMaker 2
  • Cruise Control

Operator

  • registry.redhat.io/amq-streams/strimzi-rhel9-operator:3.1.0
  • registry.redhat.io/amq-streams/strimzi-operator-bundle:3.1.0

Images for running the Streams for Apache Kafka operators:

  • Cluster Operator
  • Topic Operator
  • User Operator
  • Kafka Initializer

Kafka Bridge

  • registry.redhat.io/amq-streams/bridge-rhel9:3.1.0

Image for running the Streams for Apache Kafka Bridge

Streams for Apache Kafka Drain Cleaner

  • registry.redhat.io/amq-streams/drain-cleaner-rhel9:3.1.0

Image for running the Streams for Apache Kafka Drain Cleaner

Streams for Apache Kafka Proxy

  • registry.redhat.io/amq-streams/proxy-rhel9:3.1.0
  • registry.redhat.io/amq-streams/proxy-rhel9-operator:3.1.0
  • registry.redhat.io/amq-streams/proxy-operator-bundle:3.1.0

Images for running the Streams for Apache Kafka Proxy

Streams for Apache Kafka Console

  • registry.redhat.io/amq-streams/console-ui-rhel9:3.1.0
  • registry.redhat.io/amq-streams/console-api-rhel9:3.1.0
  • registry.redhat.io/amq-streams/console-rhel9-operator:3.1.0
  • registry.redhat.io/amq-streams/console-operator-bundle:3.1.0

Images for running the Streams for Apache Kafka Console

6.4. Authenticating to pull container images

The installation YAML files provided by Streams for Apache Kafka pull container images directly from the Red Hat Ecosystem Catalog. If a Streams for Apache Kafka deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML.

Note

Authentication is not usually required, but might be required on certain platforms.

Prerequisites

  • You need your Red Hat username and password or the login details from your Red Hat registry service account.
Note

You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal.

Procedure

  1. Create a pull secret containing your login details and the container registry where the Streams for Apache Kafka image is pulled from:

    oc create secret docker-registry <pull_secret_name> \
        --docker-server=registry.redhat.io \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>

    Add your user name and password. The email address is optional.

  2. Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml deployment file to specify the pull secret using the STRIMZI_IMAGE_PULL_SECRETS environment variable:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: strimzi-cluster-operator
    spec:
      # ...
      template:
        spec:
          serviceAccountName: strimzi-cluster-operator
          containers:
            - name: strimzi-cluster-operator
              # ...
              env:
                - name: STRIMZI_IMAGE_PULL_SECRETS
                  value: "<pull_secret_name>"
    # ...

    The secret applies to all pods created by the Cluster Operator.
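The oc create secret docker-registry command in step 1 produces a standard kubernetes.io/dockerconfigjson secret. A sketch of its shape, with the credential data abbreviated:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <pull_secret_name>
type: kubernetes.io/dockerconfigjson
data:
  # Base64-encoded registry credentials generated from the command-line options:
  .dockerconfigjson: <base64-encoded credentials>
```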

6.5. Designating Streams for Apache Kafka administrators

Streams for Apache Kafka provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. Streams for Apache Kafka provides two cluster roles that you can use to assign these rights to other users:

  • strimzi-view allows users to view and list Streams for Apache Kafka resources.
  • strimzi-admin allows users to also create, edit or delete Streams for Apache Kafka resources.

When you install these roles, their rights are automatically aggregated (added) into the default OpenShift cluster roles: strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of this aggregation, you might not need to assign these roles to users who already have similar rights.
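The aggregation uses standard Kubernetes ClusterRole aggregation labels. The following trimmed sketch shows how a role such as strimzi-view attaches itself to the built-in view role; the rules shown are illustrative, not the complete list:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-view
  labels:
    # Picked up by the aggregation controller and merged into the
    # default "view" cluster role:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups: ["kafka.strimzi.io"]
    resources: ["kafkas", "kafkatopics", "kafkausers"]
    verbs: ["get", "list", "watch"]
```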

The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage Streams for Apache Kafka resources.

A system administrator can designate Streams for Apache Kafka administrators after the Cluster Operator is deployed.

Prerequisites

  • The Streams for Apache Kafka admin deployment files, which are included in the Streams for Apache Kafka deployment files.
  • The Streams for Apache Kafka Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator.

Procedure

  1. Create the strimzi-view and strimzi-admin cluster roles in OpenShift.

    oc create -f install/strimzi-admin
  2. If needed, assign the roles that provide access rights to the users who require them.

    oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2