Chapter 2. Getting started with AMQ Streams


AMQ Streams works on all types of clusters, from public and private clouds to local deployments intended for development. This guide assumes that an OpenShift cluster is available and the oc command-line tool is installed and configured to connect to the running cluster.

AMQ Streams is based on Strimzi 0.14.x. This chapter describes the procedures to deploy AMQ Streams on OpenShift 3.11 and later.

Note

To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.

For more information about OpenShift and setting up an OpenShift cluster, see the OpenShift documentation.

2.1. Installing AMQ Streams and deploying components

To install AMQ Streams, download and extract the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site.

The folder contains several YAML files to help you deploy the components of AMQ Streams to OpenShift, perform common operations, and configure your Kafka cluster. The YAML files are referenced throughout this documentation.

The remainder of this chapter provides an overview of each component and instructions for deploying the components to OpenShift using the YAML files provided.

Note

Although container images for AMQ Streams are available in the Red Hat Container Catalog, we recommend that you use the YAML files provided instead.

2.2. Custom resources

Custom resource definitions (CRDs) extend the Kubernetes API, providing definitions for creating and modifying custom resources in an OpenShift cluster. Custom resources are created as instances of CRDs.

In AMQ Streams, CRDs introduce custom resources specific to AMQ Streams to an OpenShift cluster, such as Kafka, Kafka Connect, Kafka Mirror Maker, user, and topic custom resources. CRDs provide configuration instructions, defining the schemas used to instantiate and manage the AMQ Streams-specific resources. CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation.

CRDs require a one-time installation in a cluster. Depending on the cluster setup, installation typically requires cluster admin privileges.
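
For example, after installation you can confirm that a CRD is present. The kafkatopics.kafka.strimzi.io name matches the CRD example shown later in this chapter:

oc get crd kafkatopics.kafka.strimzi.io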

Note

Access to manage custom resources is limited to AMQ Streams administrators.

CRDs and custom resources are defined as YAML files.

A CRD defines a new kind of resource, such as kind:Kafka, within an OpenShift cluster.

The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster.

Warning

When CRDs are deleted, custom resources of that type are also deleted. Additionally, the resources created by the custom resource, such as pods and statefulsets, are also deleted.

2.2.1. AMQ Streams custom resource example

Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource’s kind.

To understand the relationship between a CRD and a custom resource, let’s look at a sample of the CRD for a Kafka topic.

Kafka topic CRD

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata: 1
  name: kafkatopics.kafka.strimzi.io
  labels:
    app: strimzi
spec: 2
  group: kafka.strimzi.io
  version: v1beta1
  scope: Namespaced
  names:
    # ...
    singular: kafkatopic
    plural: kafkatopics
    shortNames:
    - kt 3
  additionalPrinterColumns: 4
      # ...
  subresources:
    status: {} 5
  validation: 6
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            partitions:
              type: integer
              minimum: 1
            replicas:
              type: integer
              minimum: 1
              maximum: 32767
      # ...

1
The metadata for the topic CRD, its name and a label to identify the CRD.
2
The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics.
3
The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic.
4
The information presented when using a get command on the custom resource.
5
The current status of the custom resource, as described in the schema reference for the resource.
6
openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica.
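
For example, the following KafkaTopic (a deliberately invalid sketch; the names are assumptions) would be rejected by the API server on creation because partitions is below the minimum of 1:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: invalid-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 0  # fails openAPIV3Schema validation (minimum: 1)
  replicas: 1
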
Note

You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by ‘Crd’.

Here is a corresponding example of a KafkaTopic custom resource.

Kafka topic custom resource

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic 1
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster 2
spec: 3
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
status:
  conditions: 4
  - lastTransitionTime: "2019-08-20T11:37:00.706Z"
    status: "True"
    type: Ready
  observedGeneration: 1
  # ...

1
The kind and apiVersion identify the CRD of which the custom resource is an instance.
2
A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is the same as the name of the Kafka resource) to which a topic or user belongs.

The name is used by the Topic Operator and User Operator to identify the Kafka cluster when creating a topic or user.

3
The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified.
4
Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime.

Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API.

After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams.
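
For example, assuming the custom resource above is saved as my-topic.yaml (the file name is an assumption), you can create the topic and then inspect the reconciled resource:

oc apply -f my-topic.yaml
oc get kafkatopic my-topic -o yaml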

2.2.2. AMQ Streams custom resource status

The status property of an AMQ Streams custom resource publishes information about the resource to users and tools that need it.

Several resources have a status property, as described in the following table.

AMQ Streams resource | Schema reference | Publishes status information on…

Kafka | Section C.58, “KafkaStatus schema reference” | The Kafka cluster.
KafkaConnect | Section C.77, “KafkaConnectStatus schema reference” | The Kafka Connect cluster, if deployed.
KafkaConnectS2I | Section C.80, “KafkaConnectS2IStatus schema reference” | The Kafka Connect cluster with Source-to-Image support, if deployed.
KafkaMirrorMaker | Section C.101, “KafkaMirrorMakerStatus schema reference” | The Kafka Mirror Maker tool, if deployed.
KafkaTopic | Section C.83, “KafkaTopicStatus schema reference” | Kafka topics in your Kafka cluster.
KafkaUser | Section C.94, “KafkaUserStatus schema reference” | Kafka users in your Kafka cluster.
KafkaBridge | Section C.109, “KafkaBridgeStatus schema reference” | The AMQ Streams Kafka Bridge, if deployed.

The status property of a resource provides information on the resource’s:

  • Current state, in the status.conditions property
  • Last observed generation, in the status.observedGeneration property

The status property also provides resource-specific information. For example:

  • KafkaConnectStatus provides the REST API endpoint for Kafka Connect connectors.
  • KafkaUserStatus provides the user name of the Kafka user and the Secret in which their credentials are stored.
  • KafkaBridgeStatus provides the HTTP address at which external client applications can access the Bridge service.

A resource’s current state is useful for tracking progress related to the resource achieving its desired state, as defined by the spec property. The status conditions provide the time and reason the state of the resource changed and details of events preventing or delaying the operator from realizing the resource’s desired state.

The last observed generation is the generation of the resource that was last reconciled by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation, the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource.
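
For example, the following sketch prints both generation values for a Kafka resource named my-cluster (the resource name is an assumption):

oc get kafka my-cluster -o jsonpath='{.metadata.generation}{"\n"}{.status.observedGeneration}{"\n"}'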

AMQ Streams creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit, for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster.

Here we see the status property specified for a Kafka custom resource.

Kafka custom resource with status

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
spec:
  # ...
status:
  conditions: 1
  - lastTransitionTime: 2019-07-23T23:46:57+0000
    status: "True"
    type: Ready 2
  observedGeneration: 4 3
  listeners: 4
  - addresses:
    - host: my-cluster-kafka-bootstrap.myproject.svc
      port: 9092
    type: plain
  - addresses:
    - host: my-cluster-kafka-bootstrap.myproject.svc
      port: 9093
    type: tls
  - addresses:
    - host: 172.29.49.180
      port: 9094
    type: external
    # ...

1
Status conditions describe criteria related to the status that cannot be deduced from the existing resource information, or are specific to the instance of a resource.
2
The Ready condition indicates whether the Cluster Operator currently considers the Kafka cluster able to handle traffic.
3
The observedGeneration indicates the generation of the Kafka custom resource that was last reconciled by the Cluster Operator.
4
The listeners describe the current Kafka bootstrap addresses by type.
Important

The address in the custom resource status for external listeners with type nodeport is currently not supported.

Note

The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a ready state.

Accessing status information

You can access status information for a resource from the command line. For more information, see Chapter 16, Checking the status of a custom resource.
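
As a quick sketch, assuming a Kafka resource named my-cluster, the following command retrieves just the status conditions:

oc get kafka my-cluster -o jsonpath='{.status.conditions}'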

2.3. Cluster Operator

The Cluster Operator is responsible for deploying and managing Apache Kafka clusters within an OpenShift cluster.

2.3.1. Overview of the Cluster Operator component

AMQ Streams uses the Cluster Operator to deploy and manage clusters for:

  • Kafka (including Zookeeper, Entity Operator and Kafka Exporter)
  • Kafka Connect
  • Kafka Mirror Maker
  • Kafka Bridge

Custom resources are used to deploy the clusters.

For example, to deploy a Kafka cluster:

  • A Kafka resource with the cluster configuration is created within the OpenShift cluster.
  • The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource.

The Cluster Operator can also deploy (through Entity Operator configuration of the Kafka resource):

  • A Topic Operator to provide operator-style topic management through KafkaTopic custom resources
  • A User Operator to provide operator-style user management through KafkaUser custom resources

For more information on the configuration options supported by the Kafka resource, see Section 3.1, “Kafka cluster configuration”.

Note

On OpenShift, a Kafka Connect deployment can incorporate the Source-to-Image (S2I) feature, which provides a convenient way to include connectors.

Example architecture for the Cluster Operator

2.3.2. Watch options for a Cluster Operator deployment

When the Cluster Operator is running, it starts to watch for updates of Kafka resources.

Depending on the deployment, the Cluster Operator can watch Kafka resources from:

  • A single namespace (the same namespace containing the Cluster Operator)
  • Multiple selected namespaces
  • All namespaces

Note

AMQ Streams provides example YAML files to make the deployment process easier.

The Cluster Operator watches the following resources:

  • Kafka for the Kafka cluster.
  • KafkaConnect for the Kafka Connect cluster.
  • KafkaConnectS2I for the Kafka Connect cluster with Source-to-Image (S2I) support.
  • KafkaMirrorMaker for the Kafka Mirror Maker instance.
  • KafkaBridge for the Kafka Bridge instance.

When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as StatefulSets, Services and ConfigMaps.

Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.

Resources are either patched or deleted, and then recreated, in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update, which can lead to service disruption.

When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.

2.3.3. Deploying the Cluster Operator to watch a single namespace

Prerequisites

  • This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings. Use of Role-Based Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin.
  • Modify the installation files according to the namespace the Cluster Operator is going to be installed in.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

Procedure

  • Deploy the Cluster Operator:

    oc apply -f install/cluster-operator -n my-namespace
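
    To verify that the operator is running, you can check its Deployment (the strimzi-cluster-operator name comes from the installation files):

    oc get deployments -n my-namespace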

2.3.4. Deploying the Cluster Operator to watch multiple namespaces

Prerequisites

  • This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings. Use of Role-Based Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin.
  • Edit the installation files according to the namespace the Cluster Operator is going to be installed in.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

Procedure

  1. Edit the file install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml and in the environment variable STRIMZI_NAMESPACE list all the namespaces where Cluster Operator should watch for resources. For example:

    apiVersion: apps/v1
    kind: Deployment
    spec:
      # ...
      template:
        spec:
          serviceAccountName: strimzi-cluster-operator
          containers:
          - name: strimzi-cluster-operator
            image: registry.redhat.io/amq7/amq-streams-operator:1.3.0
            imagePullPolicy: IfNotPresent
            env:
            - name: STRIMZI_NAMESPACE
              value: watched-namespace-1,watched-namespace-2,watched-namespace-3
  2. For all namespaces which should be watched by the Cluster Operator (watched-namespace-1, watched-namespace-2, watched-namespace-3 in the above example), install the RoleBindings. Replace watched-namespace with the namespaces used in the previous step.

    This can be done using oc apply:

    oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n watched-namespace
    oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n watched-namespace
    oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n watched-namespace
  3. Deploy the Cluster Operator.

    This can be done using oc apply:

    oc apply -f install/cluster-operator -n my-namespace

2.3.5. Deploying the Cluster Operator to watch all namespaces

You can configure the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.

Prerequisites

  • This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings. Use of Role-Based Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin.
  • Your OpenShift cluster is running.

Procedure

  1. Configure the Cluster Operator to watch all namespaces:

    1. Edit the 050-Deployment-strimzi-cluster-operator.yaml file.
    2. Set the value of the STRIMZI_NAMESPACE environment variable to *.

      apiVersion: apps/v1
      kind: Deployment
      spec:
        # ...
        template:
          spec:
            # ...
            serviceAccountName: strimzi-cluster-operator
            containers:
            - name: strimzi-cluster-operator
              image: registry.redhat.io/amq7/amq-streams-operator:1.3.0
              imagePullPolicy: IfNotPresent
              env:
              - name: STRIMZI_NAMESPACE
                value: "*"
              # ...
  2. Create ClusterRoleBindings that grant cluster-wide access to all namespaces to the Cluster Operator.

    Use the oc create clusterrolebinding command:

    oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-namespace:strimzi-cluster-operator
    oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-namespace:strimzi-cluster-operator
    oc create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount my-namespace:strimzi-cluster-operator

    Replace my-namespace with the namespace in which you want to install the Cluster Operator.

  3. Deploy the Cluster Operator to your OpenShift cluster.

    Use the oc apply command:

    oc apply -f install/cluster-operator -n my-namespace

2.4. Kafka cluster

You can use AMQ Streams to deploy an ephemeral or persistent Kafka cluster to OpenShift. When installing Kafka, AMQ Streams also installs a Zookeeper cluster and adds the necessary configuration to connect Kafka with Zookeeper.

You can also use the same Kafka resource to deploy Kafka Exporter.

Ephemeral cluster
In general, an ephemeral (that is, temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for Zookeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down.
Persistent cluster
A persistent Kafka cluster uses PersistentVolumes to store Zookeeper and Kafka data. The PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume. For example, it can use Amazon EBS volumes in Amazon AWS deployments without any changes in the YAML files. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning.
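
For example, a minimal sketch of the storage configuration in a Kafka resource that requests dynamically provisioned PersistentVolumes (the size and StorageClass name are assumptions):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: persistent-claim
      size: 100Gi
      class: standard  # StorageClass name (assumption); triggers automatic volume provisioning
  zookeeper:
    # ...
    storage:
      type: persistent-claim
      size: 100Gi
      class: standard
  # ...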

AMQ Streams includes two templates for deploying a Kafka cluster:

  • kafka-ephemeral.yaml deploys an ephemeral cluster, named my-cluster by default.
  • kafka-persistent.yaml deploys a persistent cluster, named my-cluster by default.

The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the resource in the relevant YAML file.

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
# ...

2.4.1. Deploying the Kafka cluster

You can deploy an ephemeral or persistent Kafka cluster to OpenShift on the command line.

Prerequisites

  • The Cluster Operator is deployed.

Procedure

  1. If you plan to use the cluster for development or testing purposes, you can create and deploy an ephemeral cluster using oc apply.

    oc apply -f examples/kafka/kafka-ephemeral.yaml
  2. If you plan to use the cluster in production, create and deploy a persistent cluster using oc apply.

    oc apply -f examples/kafka/kafka-persistent.yaml
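
    In either case, you can wait for the cluster to report a Ready condition (a sketch; the timeout value is an assumption):

    oc wait kafka/my-cluster --for=condition=Ready --timeout=300s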

2.5. Kafka Connect

Kafka Connect is a tool for streaming data between Apache Kafka and external systems. It provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with external databases and storage and messaging systems.

You can use Kafka Connect to:

  • Build connector plug-ins (as JAR files) for your Kafka cluster
  • Run connectors

Kafka Connect includes the following built-in connectors for moving file-based data into and out of your Kafka cluster.

File Connector | Description

FileStreamSourceConnector | Transfers data to your Kafka cluster from a file (the source).
FileStreamSinkConnector | Transfers data from your Kafka cluster to a file (the sink).

In AMQ Streams, you can use the Cluster Operator to deploy a Kafka Connect cluster to your OpenShift cluster. AMQ Streams also supports Kafka Connect Source-to-Image (S2I) clusters on OpenShift.

A Kafka Connect cluster is implemented as a Deployment with a configurable number of workers. The Kafka Connect REST API is available on port 8083, as the <connect-cluster-name>-connect-api service.
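
For example, once the cluster is running you can query the REST API from any pod in the same namespace that has curl available (a sketch, assuming a KafkaConnect resource named my-connect-cluster):

curl http://my-connect-cluster-connect-api:8083/connectors

The /connectors endpoint lists the connectors deployed to the cluster; a new cluster returns an empty list.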

For more information on deploying a Kafka Connect S2I cluster on OpenShift, see Creating a container image using OpenShift builds and Source-to-Image.

2.5.1. Deploying Kafka Connect to your cluster

You can deploy a Kafka Connect cluster to your OpenShift cluster by using the Cluster Operator.

Procedure

  • Use the oc apply command to create a KafkaConnect resource based on the kafka-connect.yaml file:

    oc apply -f examples/kafka-connect/kafka-connect.yaml

2.5.2. Extending Kafka Connect with connector plug-ins

The AMQ Streams container images for Kafka Connect include the two built-in file connectors: FileStreamSourceConnector and FileStreamSinkConnector. You can add your own connectors by:

  • Creating a container image from the Kafka Connect base image (manually or using your CI (continuous integration), for example).
  • Creating a container image using OpenShift builds and Source-to-Image (S2I) - available only on OpenShift.

2.5.2.1. Creating a Docker image from the Kafka Connect base image

You can use the Kafka container image on Red Hat Container Catalog as a base image for creating your own custom image with additional connector plug-ins.

The following procedure explains how to create your custom image and add it to the /opt/kafka/plugins directory. At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plug-ins contained in the /opt/kafka/plugins directory.

Procedure

  1. Create a new Dockerfile using registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 as the base image:

    FROM registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0
    USER root:root
    COPY ./my-plugins/ /opt/kafka/plugins/
    USER jboss:jboss
  2. Build the container image.
  3. Push your custom image to your container registry. (A sketch of these two steps follows this procedure.)
  4. Point to the new container image.

    You can either:

    • Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource.

      If set, this property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator.

      apiVersion: kafka.strimzi.io/v1beta1
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
      spec:
        #...
        image: my-new-container-image

      or

    • In the install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable to point to the new container image and reinstall the Cluster Operator.
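
A sketch of the build and push in steps 2 and 3, using Docker (the registry and image names are assumptions):

docker build -t my-registry.example.com/my-org/my-connect:latest .
docker push my-registry.example.com/my-org/my-connect:latest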

2.5.2.2. Creating a container image using OpenShift builds and Source-to-Image

You can use OpenShift builds and the Source-to-Image (S2I) framework to create new container images. An OpenShift build takes a builder image with S2I support, together with source code and binaries provided by the user, and uses them to build a new container image. Once built, container images are stored in OpenShift’s local container image repository and are available for use in deployments.

A Kafka Connect builder image with S2I support is provided on the Red Hat Container Catalog as part of the registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 image. This S2I image takes your binaries (with plug-ins and connectors) and stores them in the /tmp/kafka-plugins/s2i directory. It creates a new Kafka Connect image from this directory, which can then be used with the Kafka Connect deployment. When started using the enhanced image, Kafka Connect loads any third-party plug-ins from the /tmp/kafka-plugins/s2i directory.

Procedure

  1. On the command line, use the oc apply command to create and deploy a Kafka Connect S2I cluster:

    oc apply -f examples/kafka-connect/kafka-connect-s2i.yaml
  2. Create a directory with Kafka Connect plug-ins:

    $ tree ./my-plugins/
    ./my-plugins/
    ├── debezium-connector-mongodb
    │   ├── bson-3.4.2.jar
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mongodb-0.7.1.jar
    │   ├── debezium-core-0.7.1.jar
    │   ├── LICENSE.txt
    │   ├── mongodb-driver-3.4.2.jar
    │   ├── mongodb-driver-core-3.4.2.jar
    │   └── README.md
    ├── debezium-connector-mysql
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mysql-0.7.1.jar
    │   ├── debezium-core-0.7.1.jar
    │   ├── LICENSE.txt
    │   ├── mysql-binlog-connector-java-0.13.0.jar
    │   ├── mysql-connector-java-5.1.40.jar
    │   ├── README.md
    │   └── wkb-1.0.2.jar
    └── debezium-connector-postgres
        ├── CHANGELOG.md
        ├── CONTRIBUTE.md
        ├── COPYRIGHT.txt
        ├── debezium-connector-postgres-0.7.1.jar
        ├── debezium-core-0.7.1.jar
        ├── LICENSE.txt
        ├── postgresql-42.0.0.jar
        ├── protobuf-java-2.6.1.jar
        └── README.md
  3. Use the oc start-build command to start a new build of the image using the prepared directory:

    oc start-build my-connect-cluster-connect --from-dir ./my-plugins/
    Note

    The name of the build is the same as the name of the deployed Kafka Connect cluster.

  4. Once the build has finished, the new image is used automatically by the Kafka Connect deployment.
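
To monitor the build, you can use the standard OpenShift build commands (a sketch; the build name, formed from the build configuration name plus a build number, is an assumption):

oc get builds
oc logs build/my-connect-cluster-connect-1 -f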

2.6. Kafka Mirror Maker

The Cluster Operator deploys one or more Kafka Mirror Maker replicas to replicate data between Kafka clusters. This process is called mirroring to avoid confusion with the Kafka partition replication concept. Mirror Maker consumes messages from the source cluster and republishes those messages to the target cluster.

For information about example resources and the format for deploying Kafka Mirror Maker, see Kafka Mirror Maker configuration.

2.6.1. Deploying Kafka Mirror Maker

Prerequisites

  • Before deploying Kafka Mirror Maker, the Cluster Operator must be deployed.

Procedure

  • Create a Kafka Mirror Maker cluster from the command-line:

    oc apply -f examples/kafka-mirror-maker/kafka-mirror-maker.yaml

2.7. Kafka Bridge

The Cluster Operator deploys one or more Kafka Bridge replicas to send data between Kafka clusters and clients via an HTTP API.

For information about example resources and the format for deploying Kafka Bridge, see Kafka Bridge configuration.

2.7.1. Deploying Kafka Bridge to your OpenShift cluster

You can deploy a Kafka Bridge cluster to your OpenShift cluster by using the Cluster Operator.

Procedure

  • Use the oc apply command to create a KafkaBridge resource based on the kafka-bridge.yaml file:

    oc apply -f examples/kafka-bridge/kafka-bridge.yaml
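
    Once the Bridge pod is running, you can send a record over HTTP from a pod inside the cluster (a sketch; the my-bridge-bridge-service name and port 8080 assume a KafkaBridge resource named my-bridge with the default HTTP configuration, and my-topic is an assumed topic):

    curl -X POST http://my-bridge-bridge-service:8080/topics/my-topic \
      -H 'Content-Type: application/vnd.kafka.json.v2+json' \
      -d '{"records":[{"value":"hello from the bridge"}]}'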

2.8. Deploying example clients

Prerequisites

  • An existing Kafka cluster for the client to connect to.

Procedure

  1. Deploy the producer.

    Use oc run:

    oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list cluster-name-kafka-bootstrap:9092 --topic my-topic
  2. Type your message into the console where the producer is running.
  3. Press Enter to send the message.
  4. Deploy the consumer.

    Use oc run:

    oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name-kafka-bootstrap:9092 --topic my-topic --from-beginning
  5. Confirm that you see the incoming messages in the consumer console.

2.9. Topic Operator

The Topic Operator is responsible for managing Kafka topics within a Kafka cluster running within an OpenShift cluster.

2.9.1. Overview of the Topic Operator component

The Topic Operator provides a way of managing topics in a Kafka cluster via OpenShift resources.

Example architecture for the Topic Operator

The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics.

Specifically, if a KafkaTopic is:

  • Created, the operator will create the topic it describes
  • Deleted, the operator will delete the topic it describes
  • Changed, the operator will update the topic it describes

And also, in the other direction, if a topic is:

  • Created within the Kafka cluster, the operator will create a KafkaTopic describing it
  • Deleted from the Kafka cluster, the operator will delete the KafkaTopic describing it
  • Changed in the Kafka cluster, the operator will update the KafkaTopic describing it

This allows you to declare a KafkaTopic as part of your application’s deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics.

If the topic is reconfigured or reassigned to different Kafka nodes, the KafkaTopic will always be up to date.

For more details about creating, modifying and deleting topics, see Chapter 5, Using the Topic Operator.

2.9.2. Deploying the Topic Operator using the Cluster Operator

This procedure describes how to deploy the Topic Operator using the Cluster Operator. If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component. For more information, see Section 4.2.6, “Deploying the standalone Topic Operator”.

Prerequisites

  • A running Cluster Operator
  • A Kafka resource to be created or updated

Procedure

  1. Ensure that the Kafka.spec.entityOperator object exists in the Kafka resource. This configures the Entity Operator.

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      #...
      entityOperator:
        topicOperator: {}
        userOperator: {}
  2. Configure the Topic Operator using the fields described in Section C.52, “EntityTopicOperatorSpec schema reference”.
  3. Create or update the Kafka resource in OpenShift.

    Use oc apply:

    oc apply -f your-file

2.10. User Operator

The User Operator is responsible for managing Kafka users within a Kafka cluster running within an OpenShift cluster.

2.10.1. Overview of the User Operator component

The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users and ensuring that they are configured properly in the Kafka cluster. For example:

  • if a KafkaUser is created, the User Operator will create the user it describes
  • if a KafkaUser is deleted, the User Operator will delete the user it describes
  • if a KafkaUser is changed, the User Operator will update the user it describes

Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Unlike Kafka topics, which applications might create directly in Kafka, users are not expected to be managed directly in the Kafka cluster in parallel with the User Operator, so this behavior is not needed.

The User Operator allows you to declare a KafkaUser as part of your application’s deployment. When the user is created, the user credentials will be created in a Secret. Your application needs to use the user and its credentials for authentication and to produce or consume messages.

In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user’s rights in the KafkaUser declaration.
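
For example, a minimal sketch of a KafkaUser combining TLS authentication with simple authorization rules (the user, cluster, and topic names are assumptions):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
    # hypothetical ACLs granting read access to my-topic
    - resource:
        type: topic
        name: my-topic
      operation: Read
    - resource:
        type: topic
        name: my-topic
      operation: Describe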

2.10.2. Deploying the User Operator using the Cluster Operator

Prerequisites

  • A running Cluster Operator
  • A Kafka resource to be created or updated.

Procedure

  1. Edit the Kafka resource to ensure that it has a Kafka.spec.entityOperator.userOperator object that configures the User Operator as required.
  2. Create or update the Kafka resource in OpenShift.

    This can be done using oc apply:

    oc apply -f your-file

2.11. Strimzi Administrators

AMQ Streams includes several custom resources. By default, permission to create, edit, and delete these resources is limited to OpenShift cluster administrators. If you want to allow non-cluster administrators to manage AMQ Streams resources, you must assign them the Strimzi Administrator role.

2.11.1. Designating Strimzi Administrators

Prerequisites

  • AMQ Streams CustomResourceDefinitions are installed.

Procedure

  1. Create the strimzi-admin cluster role in OpenShift.

    Use oc apply:

    oc apply -f install/strimzi-admin
  2. Assign the strimzi-admin ClusterRole to one or more existing users in the OpenShift cluster.

    Use oc create:

    oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2

2.12. Container images

Container images for AMQ Streams are available in the Red Hat Container Catalog. The installation YAML files provided by AMQ Streams will pull the images directly from the Red Hat Container Catalog.

If you do not have access to the Red Hat Container Catalog or want to use your own container repository:

  1. Pull all container images listed in the following table.
  2. Push them into your own registry.
  3. Update the image names in the installation YAML files.
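
For example, using Docker (my-registry.example.com is a placeholder for your own registry):

docker pull registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0
docker tag registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 my-registry.example.com/amq7/amq-streams-kafka-23:1.3.0
docker push my-registry.example.com/amq7/amq-streams-kafka-23:1.3.0
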
Note

Each Kafka version supported for the release has a separate image.

Container image | Namespace/Repository | Description

Kafka

  • registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0
  • registry.redhat.io/amq7/amq-streams-kafka-22:1.3.0

AMQ Streams image for running Kafka, including:

  • Kafka Broker
  • Kafka Connect / S2I
  • Kafka Mirror Maker
  • Zookeeper
  • TLS Sidecars

Operator

  • registry.redhat.io/amq7/amq-streams-operator:1.3.0

AMQ Streams image for running the operators:

  • Cluster Operator
  • Topic Operator
  • User Operator
  • Kafka Initializer

Kafka Bridge

  • registry.redhat.io/amq7/amq-streams-bridge:1.3.0

AMQ Streams image for running the AMQ Streams Kafka Bridge.
