Chapter 2. Try AMQ Streams
Install AMQ Streams and start sending and receiving messages from a topic in minutes.
Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter.
2.1. Prerequisites
- An OpenShift Container Platform cluster (3.11 or later) on which to deploy AMQ Streams
2.2. Downloading AMQ Streams
Download a zip file that contains the resources required for installation and examples for configuration.
Prerequisites
- Access to the AMQ Streams download site.
Procedure
- Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site and unzip it to any destination.
- On Windows or Mac, you can extract the contents of the ZIP archive by double-clicking the ZIP file.
- On Red Hat Enterprise Linux, open a terminal window on the target machine, navigate to where the ZIP file was downloaded, and extract it with the following command:

  unzip amq-streams-x.y.z-ocp-install-examples.zip
2.3. Installing AMQ Streams
Install AMQ Streams with the CRDs required for deployment.
Prerequisites
- Installation requires a user with the cluster-admin role, such as system:admin
Procedure
- Log in to the OpenShift cluster with cluster admin privileges. For example:

  oc login -u system:admin

- Modify the installation files according to the kafka namespace where you will install the AMQ Streams Cluster Operator.

  Note: By default, the files work in the myproject namespace.

  On Linux, use:

  sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml

  On Mac, use:

  sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
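The sed commands rewrite the namespace of the ServiceAccount subject in each RoleBinding file. An illustrative fragment (field names follow the standard Kubernetes RBAC RoleBinding schema; the exact file contents depend on your AMQ Streams version):

```yaml
# Fragment of install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: kafka    # rewritten from the default "myproject" by the sed command
```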
- Deploy the Custom Resource Definitions (CRDs) and the role-based access control (RBAC) resources to manage the CRDs:

  oc new-project kafka
  oc apply -f install/cluster-operator/

- Create the project my-kafka-project where you will deploy your Kafka cluster:

  oc new-project my-kafka-project

- Give access to your non-admin user developer:

  oc adm policy add-role-to-user admin developer -n my-kafka-project

- Enable the Cluster Operator to watch that namespace:

  oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka
  oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project
  oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project
  oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project

- Create the new cluster role strimzi-admin:

  oc apply -f install/strimzi-admin

- Add the role to the non-admin user developer:

  oc adm policy add-cluster-role-to-user strimzi-admin developer
2.4. Creating a cluster
Create a Kafka cluster, then a topic within the cluster.
When you create a cluster, the Cluster Operator you deployed listens for new Kafka resources.
Prerequisites
- For the Kafka cluster, a Cluster Operator is deployed
- For the topic, a running Kafka cluster
Procedure
- Log in as a user. For example:

  oc login -u developer
  oc project my-kafka-project

- Create a new my-cluster Kafka cluster with 3 ZooKeeper and 3 broker nodes:
  - Use ephemeral storage.
  - Expose the Kafka cluster outside of the OpenShift cluster using an external listener configured to use route.
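The Kafka custom resource for this step was stripped from the page; a minimal sketch along the lines of the standard AMQ Streams (Strimzi) example follows. The exact apiVersion and listener syntax depend on your AMQ Streams version:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3                 # 3 broker nodes
    listeners:
      plain: {}
      tls: {}
      external:
        type: route             # expose the cluster outside OpenShift via a route
    storage:
      type: ephemeral           # ephemeral storage, as described above
  zookeeper:
    replicas: 3                 # 3 ZooKeeper nodes
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Save the resource as, for example, kafka.yaml and apply it with oc apply -f kafka.yaml -n my-kafka-project.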
Now that your cluster is running, create a topic that your external client can publish to and subscribe from.
- Create the following my-topic custom resource with 3 replicas and 3 partitions in the my-cluster Kafka cluster:
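The KafkaTopic resource for this step was also stripped; a sketch follows (apiVersion depends on your AMQ Streams version; the strimzi.io/cluster label is what ties the topic to my-cluster):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
```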
2.5. Accessing the cluster
Because a route is used for external access to the cluster, a cluster CA certificate is required to enable TLS (Transport Layer Security) encryption between the broker and the client.
Prerequisites
- A Kafka cluster running within the OpenShift cluster
- A running Cluster Operator
Procedure
- Find the address of the bootstrap route:

  oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

  Use the address together with port 443 in your Kafka client as the bootstrap address.

- Extract the public certificate of the broker certification authority:

  oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt

- Import the trusted certificate to a truststore:

  keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt

You are now ready to start sending and receiving messages.
2.6. Sending and receiving messages from a topic
Test your AMQ Streams installation by sending and receiving messages to and from my-topic from outside the cluster.
In this procedure, you access AMQ Streams from a local client.
Prerequisites
- AMQ Streams is installed on the OpenShift cluster
- ZooKeeper and Kafka are running
- Access to the latest version of the Red Hat AMQ Streams archive from the AMQ Streams download site.
Procedure
- Download the latest version of the AMQ Streams archive (amq-streams-x.y.z-bin.zip) from the AMQ Streams download site and unzip it to any destination.
- Start the Kafka console producer with the topic my-topic and the authentication properties for TLS:

  bin/kafka-console-producer.sh --broker-list <route-address>:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./client.truststore.jks --topic my-topic

- Type your message into the console where the producer is running.
- Press Enter to send the message.
- Press Ctrl+C to exit the Kafka console producer.
Start the consumer to receive the messages:
bin/kafka-console-consumer.sh --bootstrap-server <route-address>:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=./client.truststore.jks --topic my-topic --from-beginning
- Confirm that you see the incoming messages in the consumer console.
- Press Ctrl+C to exit the Kafka console consumer.
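The TLS options repeated on each command line above can also be collected into a client configuration file. A minimal sketch, assuming the truststore created in the previous section (the file name client-ssl.properties is chosen here for illustration):

```properties
# client-ssl.properties (hypothetical name) - TLS settings for the console clients
security.protocol=SSL
ssl.truststore.location=./client.truststore.jks
ssl.truststore.password=password
```

Pass the file with --producer.config client-ssl.properties to kafka-console-producer.sh, or --consumer.config client-ssl.properties to kafka-console-consumer.sh, instead of repeating the individual --producer-property and --consumer-property options.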