Chapter 6. Preparing for your Streams for Apache Kafka deployment
Prepare for a deployment of Streams for Apache Kafka by completing any necessary pre-deployment tasks. Take the necessary preparatory steps according to your specific requirements, such as the following:
- Ensuring you have the necessary prerequisites before deploying Streams for Apache Kafka
- Considering operator deployment best practices
- Pushing the Streams for Apache Kafka container images into your own registry (if required)
- Creating a pull secret for authentication to the container image registry
- Setting up admin roles to enable configuration of custom resources used in the deployment
To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.
6.1. Deployment prerequisites
To deploy Streams for Apache Kafka, you will need the following:
- An OpenShift 4.16–4.20 cluster.

  Streams for Apache Kafka is based on Strimzi 0.48.x.

- The `oc` command-line tool is installed and configured to connect to the running cluster.
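Before proceeding, you can confirm these prerequisites from the command line. The following is a sketch; it assumes `oc` is on your `PATH` and that you are already logged in to the cluster:

```shell
# Check the client and cluster versions (the cluster should be OpenShift 4.16-4.20)
oc version

# Confirm connectivity to the running cluster
oc cluster-info

# Confirm you have the RBAC and CRD rights required by this guide
oc auth can-i create customresourcedefinitions
oc auth can-i create clusterroles
```

If either `oc auth can-i` command reports `no`, ask a cluster administrator to grant the missing rights before continuing.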
6.2. Planning your Cluster Operator deployment
To support a stable and reliable Streams for Apache Kafka deployment, follow the best practices in this section. Run a single Cluster Operator per OpenShift cluster, choose an appropriate watch strategy, and isolate components within watched namespaces to reduce the risk of conflicts and unexpected behavior.
6.2.1. Avoiding deployment conflicts
A single operator is capable of managing multiple Kafka clusters across different namespaces. Deploying multiple instances of the Cluster Operator, particularly with different versions, introduces the following risks:
- Resource conflicts
- Conflicts over cluster-scoped resources like Custom Resource Definitions (CRDs) and ClusterRoles, leading to unpredictable behavior. This conflict occurs even when the operators are deployed in separate namespaces.
- Version incompatibility
- Different operator versions can create compatibility issues with the Kafka clusters they manage. New Streams for Apache Kafka releases may introduce features, bug fixes, or other changes that are not backward-compatible.
To avoid these risks, use the following approach when deploying the Cluster Operator:
- Run a single Cluster Operator
- Deploy only one Cluster Operator per OpenShift cluster.
- Consider a dedicated namespace
- Install the Cluster Operator in its own namespace, separate from the Kafka components it manages. This separation is most useful when the operator is configured to watch multiple namespaces, but it can also help prevent uncontrolled growth of resources in a single namespace.
- Keep everything updated
- Regularly update Streams for Apache Kafka and the version of Kafka it manages so that you have the latest features, bug fixes, and enhancements.
6.2.2. Choosing namespace watch options
You configure the Cluster Operator to watch for changes to Kafka resources in specific namespaces.
You can configure the operator to watch any of the following:

- A single namespace (typically the namespace containing the Cluster Operator)
- Multiple selected namespaces
- All namespaces in the OpenShift cluster

Watching a specific list of multiple namespaces has the biggest impact on performance due to increased processing overhead. To optimize performance, watch either a single namespace for focused monitoring or all namespaces for a comprehensive view of the entire cluster.
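In upstream Strimzi, the watch scope is controlled by the `STRIMZI_NAMESPACE` environment variable in the Cluster Operator `Deployment`. A minimal sketch, with illustrative namespace names:

```yaml
# Excerpt from the Cluster Operator Deployment
env:
  - name: STRIMZI_NAMESPACE
    # A single namespace, a comma-separated list of namespaces,
    # or "*" to watch all namespaces in the cluster
    value: my-kafka-project-1,my-kafka-project-2
```

When watching multiple or all namespaces, the operator also needs role bindings granting it access in each watched namespace; the installation files include the RBAC resources for this.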
6.2.3. Isolating components in watched namespaces
After deploying the Cluster Operator, it begins watching specified namespaces for changes to Kafka resources. To reduce risks and maintain reliability, isolate component types within each watched namespace. Each namespace should contain only one instance of a given component type, such as one Kafka cluster, to avoid the following types of issues:
- Conflicting resource names
- Ambiguity in access management
- Topic and user name collisions
- Unpredictable behavior during upgrades or recovery
As Streams for Apache Kafka is based on Strimzi, the same issues can also arise when combining Streams for Apache Kafka operators with Strimzi operators in an OpenShift cluster.
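One way to audit this isolation is to list the `Kafka` custom resources in each watched namespace and confirm that each holds at most one. A sketch, with hypothetical namespace names:

```shell
# Check each watched namespace for Kafka clusters
for ns in kafka-prod kafka-staging; do
  echo "Namespace: $ns"
  oc get kafka -n "$ns"   # expect at most one Kafka cluster listed
done
```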
6.3. Pushing container images to your own registry
Container images for Streams for Apache Kafka are available in the Red Hat Ecosystem Catalog. The installation YAML files provided by Streams for Apache Kafka pull the images directly from the Red Hat Ecosystem Catalog.
If you do not have access to the Red Hat Ecosystem Catalog, or want to use your own container repository, do the following:
- Pull all container images listed in Section 6.3.1, "Streams for Apache Kafka container images".
- Push them into your own registry.
- Update the image names in the installation YAML files.
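The steps above can be sketched with `podman` (or `docker`); the target registry name and image path here are placeholders:

```shell
# Pull an image from the Red Hat Ecosystem Catalog
podman pull registry.redhat.io/<namespace>/<image>:<tag>

# Re-tag the image for your own registry
podman tag registry.redhat.io/<namespace>/<image>:<tag> \
  registry.example.com/<namespace>/<image>:<tag>

# Push the image to your registry
podman push registry.example.com/<namespace>/<image>:<tag>
```

After pushing, update the image references in the installation YAML files to point at your registry instead of `registry.redhat.io`.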
Each Kafka version supported for the release has a separate image.
6.3.1. Streams for Apache Kafka container images
| Container image | Namespace/Repository | Description |
|---|---|---|
| Kafka | | Images for running Kafka components |
| Operator | | Images for running the Streams for Apache Kafka operators |
| Kafka Bridge | | Image for running the Streams for Apache Kafka Bridge |
| Streams for Apache Kafka Drain Cleaner | | Image for running the Streams for Apache Kafka Drain Cleaner |
| Streams for Apache Kafka Proxy | | Images for running the Streams for Apache Kafka Proxy |
| Streams for Apache Kafka Console | | Images for running the Streams for Apache Kafka Console |
6.4. Creating a pull secret for authentication to the container image registry

The installation YAML files provided by Streams for Apache Kafka pull container images directly from the Red Hat Ecosystem Catalog. If a Streams for Apache Kafka deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML.

Authentication is not usually required, but might be requested on certain platforms.
Prerequisites
- You need your Red Hat username and password or the login details from your Red Hat registry service account.
You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal.
Procedure
Create a pull secret containing your login details for the container registry from which the Streams for Apache Kafka images are pulled:

```shell
oc create secret docker-registry <pull_secret_name> \
  --docker-server=registry.redhat.io \
  --docker-username=<user_name> \
  --docker-password=<password> \
  --docker-email=<email>
```

Add your user name and password. The email address is optional.
Edit the `install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml` deployment file to specify the pull secret using the `STRIMZI_IMAGE_PULL_SECRETS` environment variable:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          - name: STRIMZI_IMAGE_PULL_SECRETS
            value: "<pull_secret_name>"
        # ...
```

The secret applies to all pods created by the Cluster Operator.
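As an alternative to editing the file by hand, the same environment variable can be set on an already deployed operator with `oc set env` (the deployment name matches the installation files):

```shell
# Set the pull secret on the running Cluster Operator deployment
oc set env deployment/strimzi-cluster-operator \
  STRIMZI_IMAGE_PULL_SECRETS=<pull_secret_name> \
  -n <operator_namespace>
```

Note that this change is lost if you later re-apply the unedited installation files.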
6.5. Designating Streams for Apache Kafka administrators
Streams for Apache Kafka provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. Streams for Apache Kafka provides two cluster roles that you can use to assign these rights to other users:
- `strimzi-view` allows users to view and list Streams for Apache Kafka resources.
- `strimzi-admin` allows users to also create, edit, or delete Streams for Apache Kafka resources.
When you install these roles, they automatically aggregate (add) these rights to the default OpenShift cluster roles: `strimzi-view` aggregates to the `view` role, and `strimzi-admin` aggregates to the `edit` and `admin` roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights.
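The aggregation relies on the standard Kubernetes aggregation labels carried by the cluster roles. A simplified sketch of the mechanism, with the rule list abbreviated:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-view
  labels:
    # The default "view" ClusterRole aggregates any ClusterRole carrying this label
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups: ["kafka.strimzi.io"]
    resources: ["kafkas"]   # abbreviated; the shipped role covers all Streams resources
    verbs: ["get", "list", "watch"]
```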
The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage Streams for Apache Kafka resources.
A system administrator can designate Streams for Apache Kafka administrators after the Cluster Operator is deployed.
Prerequisites
- The Streams for Apache Kafka admin deployment files, which are included in the Streams for Apache Kafka deployment files.
- The Streams for Apache Kafka Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator.
Procedure
Create the `strimzi-view` and `strimzi-admin` cluster roles in OpenShift:

```shell
oc create -f install/strimzi-admin
```

If needed, assign the roles that provide access rights to users that require them:

```shell
oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2
```
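You can verify that the binding took effect by impersonating one of the users; `kafkas.kafka.strimzi.io` is the API resource behind the `Kafka` custom resource:

```shell
# Check whether user1 can now create Kafka resources in a given namespace
oc auth can-i create kafkas.kafka.strimzi.io --as=user1 -n <namespace>
```

The command should print `yes` once the cluster role binding is in place.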