Chapter 2. Evaluate AMQ Streams
The procedures in this chapter provide a quick way to evaluate the functionality of AMQ Streams.
Follow the steps in the order provided to install AMQ Streams, and start sending and receiving messages from a topic:
- Ensure you have the required prerequisites
- Install AMQ Streams
- Create a Kafka cluster
- Enable authentication for secure access to the Kafka cluster
- Access the Kafka cluster to send and receive messages
2.1. Prerequisites
- An OpenShift Container Platform cluster (version 3.11 or later) on which to deploy AMQ Streams must be running.
- You need to be able to access the AMQ Streams download site.
2.2. Downloading AMQ Streams
AMQ Streams is distributed as a ZIP file that contains the resources required for installation, along with configuration examples.
Procedure
- Ensure your subscription has been activated and your system is registered. For more information about using the Customer Portal to activate your Red Hat subscription and register your system for packages, see Appendix A, Using Your Subscription.
- Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site.
- Unzip the file to any destination.
  - Windows or Mac: Extract the contents of the ZIP archive by double-clicking the ZIP file.
  - Red Hat Enterprise Linux: Open a terminal window on the target machine, navigate to where the ZIP file was downloaded, and extract it:

```shell
unzip amq-streams-x.y.z-ocp-install-examples.zip
```
2.3. Installing AMQ Streams
You install AMQ Streams with the Custom Resource Definitions (CRDs) required for deployment.
In this task you create namespaces in the cluster for your deployment. It is good practice to use namespaces to separate functions.
Prerequisites
- Installation requires a user with the cluster-admin role, such as system:admin.
Procedure
- Log in to the OpenShift cluster using an account that has cluster-admin privileges. For example:

```shell
oc login -u system:admin
```
- Create a new kafka namespace (project) for the AMQ Streams Kafka Cluster Operator:

```shell
oc new-project kafka
```
- Modify the installation files to reference the new kafka namespace where you will install the AMQ Streams Kafka Cluster Operator.

  Note: By default, the files work in the myproject namespace.

  On Linux, use:

```shell
sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
```

  On Mac, use:

```shell
sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
```
- Deploy the CRDs and role-based access control (RBAC) resources to manage the CRDs:

```shell
oc project kafka
oc apply -f install/cluster-operator/
```
- Create a new my-kafka-project namespace where you will deploy your Kafka cluster:

```shell
oc new-project my-kafka-project
```
- Give the non-admin user developer access to my-kafka-project. For example:

```shell
oc adm policy add-role-to-user admin developer -n my-kafka-project
```
- Set the value of the STRIMZI_NAMESPACE environment variable to give the Cluster Operator permission to watch the my-kafka-project namespace, then apply the role bindings in that namespace:

```shell
oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka
oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project
oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project
oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project
```

  The commands create role bindings that grant the Cluster Operator permission to access the Kafka cluster.
- Create a new cluster role strimzi-admin:

```shell
oc apply -f install/strimzi-admin
```
- Add the role to the non-admin user developer:

```shell
oc adm policy add-cluster-role-to-user strimzi-admin developer
```
2.4. Creating a cluster
With AMQ Streams installed, you create a Kafka cluster, then a topic within the cluster.
When you create a cluster, the Cluster Operator you deployed when installing AMQ Streams watches for the new Kafka resource and deploys the cluster from it.
Prerequisites
- For the Kafka cluster, ensure a Cluster Operator is deployed.
- For the topic, you must have a running Kafka cluster.
Procedure
- Log in to the my-kafka-project namespace as user developer. For example:

```shell
oc login -u developer
oc project my-kafka-project
```

  After new users log in to OpenShift Container Platform, an account is created for that user.
- Create a new my-cluster Kafka cluster with 3 ZooKeeper and 3 broker nodes:
  - Use ephemeral storage for both the ZooKeeper and Kafka broker nodes.
  - Expose the Kafka cluster outside of the OpenShift cluster using an external listener configured to use route.
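The Kafka custom resource for this step is not reproduced here; as a sketch only, a resource matching the description above (3 brokers, 3 ZooKeeper nodes, ephemeral storage, a route external listener) might look like the following. The apiVersion and listener syntax vary between AMQ Streams versions, so verify the fields against the examples directory in the installation ZIP.

```yaml
# Hypothetical Kafka resource sketch; check field names against the
# examples shipped in the AMQ Streams installation ZIP for your version.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3                # 3 broker nodes
    listeners:
      plain: {}
      tls: {}
      external:
        type: route            # expose brokers outside OpenShift via routes
    storage:
      type: ephemeral          # data is lost when the pods restart
  zookeeper:
    replicas: 3                # 3 ZooKeeper nodes
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}          # manages KafkaTopic resources, used in the next step
    userOperator: {}
```

Save the resource to a file and apply it in the my-kafka-project namespace, for example with `oc apply -f kafka.yaml -n my-kafka-project`.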
- Wait for the cluster to be deployed:

```shell
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project
```
- When your cluster is ready, create a topic to publish and subscribe to from your external client. Create a my-topic custom resource with 3 replicas and 3 partitions in the my-cluster Kafka cluster.
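The KafkaTopic manifest itself is not shown here; a sketch consistent with the description (3 replicas, 3 partitions, bound to my-cluster) might look like the following. Again, confirm the apiVersion against the examples bundled with your AMQ Streams version.

```yaml
# Hypothetical KafkaTopic sketch; verify against the bundled examples.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # binds the topic to the my-cluster Kafka cluster
spec:
  partitions: 3
  replicas: 3
```

The Topic Operator watches for KafkaTopic resources with this label and creates the corresponding topic in the Kafka cluster.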
2.5. Accessing the cluster
Because a route is used for external access to the cluster, a cluster CA certificate is required to enable TLS (Transport Layer Security) encryption between the broker and the client.
Prerequisites
- You need a Kafka cluster running within the OpenShift cluster.
- The Cluster Operator must also be running.
Procedure
- Find the address of the bootstrap route:

```shell
oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'
```

  Use the address together with port 443 in your Kafka client as the bootstrap address.
- Extract the public certificate of the broker certification authority:

```shell
oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
```
- Import the trusted certificate to a truststore:

```shell
keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt
```

  You are now ready to start sending and receiving messages.
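Taken together, the route address and truststore from the steps above translate into client configuration along these lines. This is a sketch: ROUTE-ADDRESS stands for the bootstrap route host found earlier, and the truststore password is whatever you chose when running keytool.

```properties
# Hypothetical external Kafka client configuration
bootstrap.servers=ROUTE-ADDRESS:443
security.protocol=SSL
ssl.truststore.location=./client.truststore.jks
ssl.truststore.password=password
```

These are the same properties passed on the command line to the console producer and consumer in the next section.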
2.6. Sending and receiving messages from a topic
You can test your AMQ Streams installation by sending and receiving messages outside the cluster from my-topic.
Use a terminal to run a Kafka producer and consumer on a local machine.
Prerequisites
- Ensure AMQ Streams is installed on the OpenShift cluster.
- ZooKeeper and Kafka must be running to be able to send and receive messages.
- You need a cluster CA certificate for access to the cluster.
- You must be able to access the latest version of the Red Hat AMQ Streams archive from the AMQ Streams download site.
Procedure
- Download the latest version of the AMQ Streams archive (amq-streams-x.y.z-bin.zip) from the AMQ Streams download site, and unzip it to any destination.
- Open a terminal, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS:

```shell
bin/kafka-console-producer.sh --broker-list ROUTE-ADDRESS:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./client.truststore.jks --topic my-topic
```
- Type your message into the console where the producer is running.
- Press Enter to send the message.
- Open a new terminal tab or window, and start the Kafka console consumer to receive the messages:

```shell
bin/kafka-console-consumer.sh --bootstrap-server ROUTE-ADDRESS:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=./client.truststore.jks --topic my-topic --from-beginning
```
- Confirm that you see the incoming messages in the consumer console.
- Press Ctrl+C to exit the Kafka console producer and consumer.