Chapter 3. Deploying Kafka components using the Streams for Apache Kafka operator
When installed on OpenShift, the Streams for Apache Kafka operator makes Kafka components available for installation from the user interface.
The following Kafka components are available for installation:
- Kafka
- Kafka Node Pool
- Kafka Connect
- Kafka MirrorMaker 2
- Kafka Topic
- Kafka User
- Kafka Bridge
- Kafka Connector
- Kafka Rebalance
You select the component and create an instance. At a minimum, you create a Kafka instance and a node pool instance. This procedure describes how to create a Kafka instance with separate node pools for brokers and controllers. You can configure the default installation specification before you perform the installation.
The process is the same for creating instances of other Kafka components.
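For illustration, a KafkaTopic instance follows the same pattern. This is a minimal sketch; the topic name and the partition and replica counts are example values, and the cluster label assumes the `my-cluster` name used later in this procedure:

```yaml
# Hypothetical example: a minimal KafkaTopic instance.
# The name and counts are example values; the strimzi.io/cluster
# label must match the name of an existing Kafka cluster.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
```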
Prerequisites
- The Streams for Apache Kafka operator is installed on the OpenShift cluster.
Procedure
Navigate in the web console to the Operators > Installed Operators page and click Streams for Apache Kafka to display the operator details.
From Provided APIs, you can create instances of Kafka components.
Click Create instance under Kafka to create a Kafka instance.
By default, you’ll create a Kafka cluster called my-cluster:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 4.0.0
    metadataVersion: 4.0
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Click Create to start the installation of Kafka.
The Kafka resource remains in a pending state until at least one node pool is created.

Click Create instance under KafkaNodePool to create a node pool instance.
Switch to YAML view and paste a minimal broker pool configuration with ephemeral storage:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: ephemeral
```

Click Create instance under KafkaNodePool to create a second node pool instance.
Switch to YAML view and paste a minimal controller pool configuration with ephemeral storage:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: ephemeral
        kraftMetadata: shared
```

Select the Kafka page to show the installed Kafka clusters. Wait until the status of the Kafka cluster changes to Ready.
These examples use ephemeral storage for evaluation only. For production deployments, configure persistent volumes.
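As a sketch of such a change, the storage section of a node pool could use persistent-claim volumes instead of ephemeral ones. The 100Gi size is an example value, and this assumes a default storage class is available in the cluster:

```yaml
# Sketch: persistent storage for a node pool (example values).
# deleteClaim: false retains the PersistentVolumeClaims if the
# cluster is deleted, so the data survives.
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```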