
Chapter 6. Preparing for your Streams for Apache Kafka deployment


Prepare for a deployment of Streams for Apache Kafka by completing any necessary pre-deployment tasks. The preparatory steps you need depend on your specific requirements, and may include pushing container images to your own registry and designating Streams for Apache Kafka administrators.

Note

To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.

6.1. Deployment prerequisites

To deploy Streams for Apache Kafka, you will need the following:

  • An OpenShift 4.14 or later cluster.

    Streams for Apache Kafka is based on Strimzi 0.45.x.

  • The oc command-line tool, installed and configured to connect to the running cluster.

6.2. Operator deployment best practices

Potential issues can arise from installing more than one Streams for Apache Kafka operator in the same OpenShift cluster, especially when using different versions. Each Streams for Apache Kafka operator manages a set of resources in an OpenShift cluster. When you install multiple Streams for Apache Kafka operators, they may attempt to manage the same resources concurrently. This can lead to conflicts and unpredictable behavior within your cluster. Conflicts can still occur even if you deploy Streams for Apache Kafka operators in different namespaces within the same OpenShift cluster. Although namespaces provide some degree of resource isolation, certain resources managed by the Streams for Apache Kafka operator, such as Custom Resource Definitions (CRDs) and roles, have a cluster-wide scope.

Additionally, installing multiple operators with different versions can result in compatibility issues between the operators and the Kafka clusters they manage. Different versions of Streams for Apache Kafka operators may introduce changes, bug fixes, or improvements that are not backward-compatible.

To avoid the issues associated with installing multiple Streams for Apache Kafka operators in an OpenShift cluster, the following guidelines are recommended:

  • Install the Streams for Apache Kafka operator in a separate namespace from the Kafka cluster and other Kafka components it manages, to ensure clear separation of resources and configurations.
  • Use a single Streams for Apache Kafka operator to manage all your Kafka instances within an OpenShift cluster.
  • Update the Streams for Apache Kafka operator and the supported Kafka version as often as possible to reflect the latest features and enhancements.

By following these best practices and ensuring consistent updates for a single Streams for Apache Kafka operator, you can enhance the stability of managing Kafka instances in an OpenShift cluster. This approach also enables you to make the most of Streams for Apache Kafka’s latest features and capabilities.

Note

As Streams for Apache Kafka is based on Strimzi, the same issues can also arise when combining Streams for Apache Kafka operators with Strimzi operators in an OpenShift cluster.

6.3. Pushing container images to your own registry

Container images for Streams for Apache Kafka are available in the Red Hat Ecosystem Catalog. The installation YAML files provided by Streams for Apache Kafka will pull the images directly from the Red Hat Ecosystem Catalog.

If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository, do the following:

  1. Pull all of the container images listed in Table 6.1.
  2. Push them to your own registry.
  3. Update the image names in the installation YAML files.

Note

Each Kafka version supported for the release has a separate image.
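The first two steps can be sketched as a short loop that derives a target image name and prints the podman commands to run. This is a dry run you can review before executing; my-registry.example.com/amq-streams is a hypothetical mirror registry, and the image list is abbreviated to two of the images from Table 6.1.

```shell
# Hypothetical mirror registry -- replace with your own.
TARGET_REGISTRY="my-registry.example.com/amq-streams"

# Abbreviated image list; extend with the remaining images from Table 6.1.
IMAGES="registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.3
registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.3"

for src in $IMAGES; do
  # Keep the final <repository>:<tag> segment and prepend the target registry.
  dst="${TARGET_REGISTRY}/${src##*/}"
  echo "podman pull $src"
  echo "podman tag $src $dst"
  echo "podman push $dst"
done
```

Remove the echo wrappers to run the commands directly once the output looks correct.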

Table 6.1. Streams for Apache Kafka container images
Container image | Namespace/Repository | Description

Kafka

  • registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.3
  • registry.redhat.io/amq-streams/kafka-38-rhel9:2.9.3

Images for running Kafka, including:

  • Kafka Broker
  • Kafka Connect
  • Kafka MirrorMaker
  • ZooKeeper
  • Cruise Control

Operator

  • registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.3

Image for running the operators:

  • Cluster Operator
  • Topic Operator
  • User Operator
  • Kafka Initializer

Kafka Bridge

  • registry.redhat.io/amq-streams/bridge-rhel9:2.9.3

Image for running the Streams for Apache Kafka Bridge

Streams for Apache Kafka Drain Cleaner

  • registry.redhat.io/amq-streams/drain-cleaner-rhel9:2.9.3

Image for running the Streams for Apache Kafka Drain Cleaner

Streams for Apache Kafka Proxy

  • registry.redhat.io/amq-streams/proxy-rhel9:2.9.3

Image for running the Streams for Apache Kafka Proxy

Streams for Apache Kafka Console

  • registry.redhat.io/amq-streams/console-ui-rhel9:2.9.3
  • registry.redhat.io/amq-streams/console-api-rhel9:2.9.3

Images for running the Streams for Apache Kafka Console

6.4. Creating a pull secret for authentication to the container image registry

The installation YAML files provided by Streams for Apache Kafka pull container images directly from the Red Hat Ecosystem Catalog. If a Streams for Apache Kafka deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML.

Note

Authentication is not usually required, but might be requested on certain platforms.

Prerequisites

  • You need your Red Hat username and password or the login details from your Red Hat registry service account.
Note

You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal.

Procedure

  1. Create a pull secret containing your login details and the container registry where the Streams for Apache Kafka image is pulled from:

    oc create secret docker-registry <pull_secret_name> \
        --docker-server=registry.redhat.io \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>

    Add your user name and password. The email address is optional.

  2. Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml deployment file to specify the pull secret using the STRIMZI_IMAGE_PULL_SECRETS environment variable:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: strimzi-cluster-operator
    spec:
      # ...
      template:
        spec:
          serviceAccountName: strimzi-cluster-operator
          containers:
            # ...
            env:
              - name: STRIMZI_IMAGE_PULL_SECRETS
                value: "<pull_secret_name>"
    # ...

    The secret applies to all pods created by the Cluster Operator.

6.5. Designating Streams for Apache Kafka administrators

Streams for Apache Kafka provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. Streams for Apache Kafka provides two cluster roles that you can use to assign these rights to other users:

  • strimzi-view allows users to view and list Streams for Apache Kafka resources.
  • strimzi-admin allows users to also create, edit or delete Streams for Apache Kafka resources.

When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights.
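The aggregation works through standard Kubernetes ClusterRole aggregation labels. As an illustrative fragment (not the complete role shipped with Streams for Apache Kafka, and with an abbreviated resource list), a viewing role aggregated to the default view role looks roughly like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-view
  labels:
    # This label tells OpenShift to merge these rules into the
    # default "view" cluster role.
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups: ["kafka.strimzi.io"]
    resources: ["kafkas", "kafkatopics", "kafkausers"]
    verbs: ["get", "list", "watch"]
```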

The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage Streams for Apache Kafka resources.

A system administrator can designate Streams for Apache Kafka administrators after the Cluster Operator is deployed.

Prerequisites

  • The Streams for Apache Kafka admin deployment files, which are included in the Streams for Apache Kafka deployment files.
  • The Streams for Apache Kafka Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator.

Procedure

  1. Create the strimzi-view and strimzi-admin cluster roles in OpenShift.

    oc create -f install/strimzi-admin
  2. If needed, assign the roles that provide access rights to the users who require them.

    oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2
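The oc create clusterrolebinding command above is equivalent to applying a ClusterRoleBinding manifest. A minimal sketch, with user1 and user2 as placeholder users:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: strimzi-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: user1
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: user2
```

Apply the manifest with oc apply -f if you prefer to manage the binding declaratively.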