
Chapter 2. Try AMQ Streams


Install AMQ Streams and start sending and receiving messages from a topic in minutes.

Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter.

2.1. Prerequisites

  • An OpenShift Container Platform cluster (version 3.11 or later) on which to deploy AMQ Streams

2.2. Downloading AMQ Streams

Download a zip file that contains the resources required for installation and examples for configuration.

Prerequisites

Procedure

  1. Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site.
  2. Unzip the file to any destination.

    • On Windows or Mac, extract the contents of the ZIP archive by double-clicking the ZIP file.
    • On Red Hat Enterprise Linux, open a terminal window on the target machine and navigate to where the ZIP file was downloaded.

      Extract the ZIP file by executing the following command:

      unzip amq-streams-x.y.z-ocp-install-examples.zip

2.3. Installing AMQ Streams

Install AMQ Streams with the CRDs required for deployment.

Prerequisites

  • Installation requires a user with cluster-admin role, such as system:admin

Procedure

  1. Log in to the OpenShift cluster with cluster-admin privileges.

    For example:

    oc login -u system:admin
  2. Modify the installation files to reference the kafka namespace where you will install the AMQ Streams Kafka Cluster Operator.

    Note

    By default, the files work in the myproject namespace.

    • On Linux, use:

      sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
    • On Mac, use:

      sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
  3. Deploy the Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs.

    oc new-project kafka
    oc apply -f install/cluster-operator/
  4. Create the project my-kafka-project where you will deploy your Kafka cluster.

    oc new-project my-kafka-project
  5. Give your non-admin user developer admin access to the project.

    oc adm policy add-role-to-user admin developer -n my-kafka-project
  6. Enable the Cluster Operator to watch the my-kafka-project namespace.

    oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka
    oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project
    oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project
    oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project
  7. Create the new cluster role strimzi-admin.

    oc apply -f install/strimzi-admin
  8. Add the role to the non-admin user developer.

    oc adm policy add-cluster-role-to-user strimzi-admin developer
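
The sed substitution in step 2 can be tried out locally before you touch the shipped files. A minimal sketch, using an illustrative RoleBinding snippet in place of a real *RoleBinding*.yaml file (the sample content and /tmp path are assumptions for demonstration only):

```shell
# Illustrative RoleBinding snippet standing in for a shipped *RoleBinding*.yaml file.
mkdir -p /tmp/strimzi-demo
cat > /tmp/strimzi-demo/020-RoleBinding-sample.yaml << 'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
EOF
# Same substitution as the install step, written without -i so the result is
# printed instead of edited in place (and is therefore portable across Linux and Mac):
sed 's/namespace: .*/namespace: kafka/' /tmp/strimzi-demo/020-RoleBinding-sample.yaml
```

The printed output shows the namespace field rewritten from myproject to kafka; once that looks right, the in-place variants with -i from step 2 apply the same change to the real files.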

2.4. Creating a cluster

Create a Kafka cluster, then a topic within the cluster.

When you create a cluster, the Cluster Operator you deployed listens for new Kafka resources.

Prerequisites

  • For the Kafka cluster, a Cluster Operator is deployed
  • For the topic, a running Kafka cluster

Procedure

  1. Log in as a user.

    For example:

    oc login -u developer
    oc project my-kafka-project
  2. Create a new Kafka cluster named my-cluster with 3 ZooKeeper nodes and 3 broker nodes.

    • Use ephemeral storage
    • Expose the Kafka cluster outside of the OpenShift cluster using an external listener configured to use a route.

      cat << EOF | oc create -f -
      apiVersion: kafka.strimzi.io/v1beta1
      kind: Kafka
      metadata:
        name: my-cluster
      spec:
        kafka:
          replicas: 3
          listeners:
            plain: {}
            tls: {}
            external:
              type: route
          storage:
            type: ephemeral
        zookeeper:
          replicas: 3
          storage:
            type: ephemeral
        entityOperator:
          topicOperator: {}
      EOF
  3. Now that your cluster is running, create a topic to publish to and subscribe from with your external client.

    Create the following my-topic custom resource with 3 partitions and 3 replicas in the my-cluster Kafka cluster:

    cat << EOF | oc create -f -
    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaTopic
    metadata:
      name: my-topic
      labels:
        strimzi.io/cluster: "my-cluster"
    spec:
      partitions: 3
      replicas: 3
    EOF
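
Ephemeral storage is emptied whenever a broker or ZooKeeper pod restarts, which is fine for trying AMQ Streams but not for keeping data. For a durable cluster, each storage section can instead request persistent volumes; a sketch (the size value is an arbitrary assumption, pick one your cluster can provision):

```yaml
# Drop-in replacement for each `storage:` section above; a PersistentVolumeClaim
# is created per node instead of ephemeral storage.
storage:
  type: persistent-claim
  size: 100Gi        # assumed size for illustration
  deleteClaim: false # keep the volumes if the Kafka resource is deleted
```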

2.5. Accessing the cluster

Because a route is used for external access to the cluster, a cluster CA certificate is required to enable TLS (Transport Layer Security) encryption between the broker and the client.

Prerequisites

  • A Kafka cluster running within the OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Find the address of the bootstrap route:

    oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

    Use the address together with port 443 in your Kafka client as the bootstrap address.

  2. Extract the public certificate of the broker certification authority:

    oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
  3. Import the trusted certificate to a truststore:

    keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt

    You are now ready to start sending and receiving messages.

2.6. Sending and receiving messages from a topic

Test your AMQ Streams installation by sending and receiving messages outside the cluster from my-topic.

In this procedure, you access AMQ Streams from a local client.

Prerequisites

  • AMQ Streams is installed on the OpenShift cluster
  • ZooKeeper and Kafka are running
  • Access to the latest version of the Red Hat AMQ Streams archive from the AMQ Streams download site.

Procedure

  1. Download the latest version of the AMQ Streams archive (amq-streams-x.y.z-bin.zip) from the AMQ Streams download site.

    Unzip the file to any destination.

  2. Start the Kafka console producer with the topic my-topic and the authentication properties for TLS:

    bin/kafka-console-producer.sh --broker-list <route-address>:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./client.truststore.jks --topic my-topic
  3. Type your message into the console where the producer is running.
  4. Press Enter to send the message.
  5. Press Ctrl+C to exit the Kafka console producer.
  6. Start the consumer to receive the messages:

    bin/kafka-console-consumer.sh --bootstrap-server <route-address>:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=./client.truststore.jks --topic my-topic --from-beginning
  7. Confirm that you see the incoming messages in the consumer console.
  8. Press Ctrl+C to exit the Kafka console consumer.
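
The repeated --producer-property flags in step 2 (and the matching --consumer-property flags in step 6) can instead be collected once in a properties file and passed with --producer.config or --consumer.config. A sketch, assuming the file is saved as client-ssl.properties next to the truststore:

```properties
# client-ssl.properties -- shared TLS settings for the console producer and consumer
security.protocol=SSL
ssl.truststore.location=./client.truststore.jks
ssl.truststore.password=password
```

The producer invocation then shortens to: bin/kafka-console-producer.sh --broker-list <route-address>:443 --producer.config client-ssl.properties --topic my-topic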