Chapter 4. Preparing for your Streams for Apache Kafka deployment

Prepare for a deployment of Streams for Apache Kafka by completing any necessary pre-deployment tasks. Take the necessary preparatory steps according to your specific requirements, such as checking the deployment prerequisites, downloading the release artifacts, pushing container images to your own registry, creating a pull secret for registry authentication, and designating Streams for Apache Kafka administrators.

Note

To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.
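For example, you can confirm these rights before you start; oc auth can-i reports yes or no:

    # Check that the current user can manage CRDs and RBAC resources
    oc auth can-i create customresourcedefinitions
    oc auth can-i create clusterroles
    oc auth can-i create clusterrolebindings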

4.1. Deployment prerequisites

To deploy Streams for Apache Kafka, you will need the following:

  • An OpenShift 4.12 to 4.15 cluster.

    Streams for Apache Kafka is based on Strimzi 0.40.x.

  • The oc command-line tool installed and configured to connect to the running cluster.
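
    For example, you can confirm that oc is installed and connected to the cluster:

    oc version        # client and server versions
    oc whoami         # the user you are currently logged in as
    oc cluster-info   # the API endpoint of the connected cluster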

4.2. Operator deployment best practices

Potential issues can arise from installing more than one Streams for Apache Kafka operator in the same OpenShift cluster, especially when using different versions. Each Streams for Apache Kafka operator manages a set of resources in an OpenShift cluster. When you install multiple Streams for Apache Kafka operators, they may attempt to manage the same resources concurrently. This can lead to conflicts and unpredictable behavior within your cluster.

Conflicts can still occur even if you deploy Streams for Apache Kafka operators in different namespaces within the same OpenShift cluster. Although namespaces provide some degree of resource isolation, certain resources managed by the Streams for Apache Kafka operator, such as Custom Resource Definitions (CRDs) and roles, have a cluster-wide scope.

Additionally, installing multiple operators with different versions can result in compatibility issues between the operators and the Kafka clusters they manage. Different versions of Streams for Apache Kafka operators may introduce changes, bug fixes, or improvements that are not backward-compatible.

To avoid the issues associated with installing multiple Streams for Apache Kafka operators in an OpenShift cluster, the following guidelines are recommended:

  • Install the Streams for Apache Kafka operator in a separate namespace from the Kafka cluster and other Kafka components it manages, to ensure clear separation of resources and configurations.
  • Use a single Streams for Apache Kafka operator to manage all your Kafka instances within an OpenShift cluster.
  • Update the Streams for Apache Kafka operator and the supported Kafka version as often as possible to reflect the latest features and enhancements.

By following these best practices and ensuring consistent updates for a single Streams for Apache Kafka operator, you can enhance the stability of managing Kafka instances in an OpenShift cluster. This approach also enables you to make the most of Streams for Apache Kafka’s latest features and capabilities.
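Before you install, you can check whether an operator is already deployed somewhere in the cluster. This is a sketch that assumes the default deployment name strimzi-cluster-operator:

    # Look for existing Cluster Operator deployments in any namespace
    oc get deployments --all-namespaces | grep strimzi-cluster-operator

    # List the cluster-scoped Strimzi CRDs that an installed operator manages
    oc get crds | grep strimzi.io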

Note

As Streams for Apache Kafka is based on Strimzi, the same issues can also arise when combining Streams for Apache Kafka operators with Strimzi operators in an OpenShift cluster.

4.3. Downloading Streams for Apache Kafka release artifacts

To use deployment files to install Streams for Apache Kafka, download and extract the files from the Streams for Apache Kafka software downloads page.

Streams for Apache Kafka release artifacts include sample YAML files to help you deploy the components of Streams for Apache Kafka to OpenShift, perform common operations, and configure your Kafka cluster.

Use oc to deploy the Cluster Operator from the install/cluster-operator folder of the downloaded ZIP file. For more information about deploying and configuring the Cluster Operator, see Section 6.2, “Deploying the Cluster Operator”.

In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the Streams for Apache Kafka Cluster Operator, you can deploy them from the install/topic-operator and install/user-operator folders.
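For example, after downloading you might extract the archive and inspect the deployment files. The archive name shown here is illustrative; use the name of the file you downloaded:

    unzip amq-streams-2.7.0-ocp-install-examples.zip
    cd amq-streams-2.7.0-ocp-install-examples

    # The folders referenced in this guide
    ls install/cluster-operator install/topic-operator install/user-operator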

Note

Streams for Apache Kafka container images are also available through the Red Hat Ecosystem Catalog. However, we recommend that you use the YAML files provided to deploy Streams for Apache Kafka.

4.4. Pushing container images to your own registry

Container images for Streams for Apache Kafka are available in the Red Hat Ecosystem Catalog. The installation YAML files provided by Streams for Apache Kafka pull the images directly from the catalog.

If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository, do the following:

  1. Pull all container images listed in the table below.
  2. Push them to your own registry.
  3. Update the image names in the installation YAML files, as shown in the example that follows.
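
A minimal sketch of this workflow using podman, with my-registry.example.com/my-org as a placeholder for your own registry and organization:

    # Pull an image from the Red Hat Ecosystem Catalog, retag it for
    # your own registry (the target path is a placeholder), and push it
    podman pull registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0
    podman tag registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \
        my-registry.example.com/my-org/kafka-37-rhel9:2.7.0
    podman push my-registry.example.com/my-org/kafka-37-rhel9:2.7.0

    # Repeat for the other images in the table, then point the
    # installation YAML files at your registry, for example:
    sed -i 's#registry.redhat.io/amq-streams#my-registry.example.com/my-org#g' \
        install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
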
Note

Each Kafka version supported for the release has a separate image.

The following table lists each container image, the namespace/repository it is pulled from, and a description of what it contains.

Kafka

  • registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0
  • registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0

Streams for Apache Kafka image for running Kafka, including:

  • Kafka Broker
  • Kafka Connect
  • Kafka MirrorMaker
  • ZooKeeper
  • TLS Sidecars
  • Cruise Control

Operator

  • registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0

Streams for Apache Kafka image for running the operators:

  • Cluster Operator
  • Topic Operator
  • User Operator
  • Kafka Initializer

Kafka Bridge

  • registry.redhat.io/amq-streams/bridge-rhel9:2.7.0

Streams for Apache Kafka image for running the Streams for Apache Kafka Bridge

Streams for Apache Kafka Drain Cleaner

  • registry.redhat.io/amq-streams/drain-cleaner-rhel9:2.7.0

Streams for Apache Kafka image for running the Streams for Apache Kafka Drain Cleaner

Streams for Apache Kafka Proxy

  • registry.redhat.io/amq-streams/proxy-rhel9-operator:2.7.0

Streams for Apache Kafka image for running the Streams for Apache Kafka Proxy

Streams for Apache Kafka Console

  • registry.redhat.io/amq-streams/console-rhel9-operator:2.7.0

Streams for Apache Kafka image for running the Streams for Apache Kafka Console

4.5. Creating a pull secret for authentication to the container image registry

The installation YAML files provided by Streams for Apache Kafka pull container images directly from the Red Hat Ecosystem Catalog. If a Streams for Apache Kafka deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML.

Note

Authentication is not usually required, but might be requested on certain platforms.

Prerequisites

  • You need your Red Hat username and password or the login details from your Red Hat registry service account.
Note

You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal.

Procedure

  1. Create a pull secret containing your login details and the address of the container registry from which the Streams for Apache Kafka images are pulled:

    oc create secret docker-registry <pull_secret_name> \
        --docker-server=registry.redhat.io \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>

    Add your user name and password. The email address is optional.

  2. Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml deployment file to specify the pull secret using the STRIMZI_IMAGE_PULL_SECRETS environment variable:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: strimzi-cluster-operator
    spec:
      # ...
      template:
        spec:
          serviceAccountName: strimzi-cluster-operator
          containers:
            # ...
            env:
              - name: STRIMZI_IMAGE_PULL_SECRETS
                value: "<pull_secret_name>"
    # ...

    The secret applies to all pods created by the Cluster Operator.
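
    To confirm that the secret was created with the expected type, you can run, for example:

    oc get secret <pull_secret_name> -o jsonpath='{.type}'

    The expected output is kubernetes.io/dockerconfigjson.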

4.6. Designating Streams for Apache Kafka administrators

Streams for Apache Kafka provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. Streams for Apache Kafka provides two cluster roles that you can use to assign these rights to other users:

  • strimzi-view allows users to view and list Streams for Apache Kafka resources.
  • strimzi-admin allows users to also create, edit, or delete Streams for Apache Kafka resources.

When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights.
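
For illustration, the aggregation works through labels on the cluster roles. The following is a minimal sketch of the mechanism, not the exact role shipped with Streams for Apache Kafka; the rules shown are illustrative:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: strimzi-view
      labels:
        # This label causes the rules below to be aggregated into
        # the default OpenShift view cluster role
        rbac.authorization.k8s.io/aggregate-to-view: "true"
    rules:
      - apiGroups: ["kafka.strimzi.io"]
        resources: ["kafkas", "kafkatopics", "kafkausers"]
        verbs: ["get", "list", "watch"]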

The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage Streams for Apache Kafka resources.

A system administrator can designate Streams for Apache Kafka administrators after the Cluster Operator is deployed.

Prerequisites

  • The Streams for Apache Kafka Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator.

Procedure

  1. Create the strimzi-view and strimzi-admin cluster roles in OpenShift.

    oc create -f install/strimzi-admin
  2. If needed, assign the roles that provide access rights to users who require them.

    oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2
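
    You can verify the binding by impersonating one of the users, for example:

    oc auth can-i create kafkas.kafka.strimzi.io --as user1

    The expected output is yes.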