Chapter 2. Getting started

2.1. AMQ Streams distribution

AMQ Streams is distributed as a single ZIP file. This ZIP file contains the following AMQ Streams components:

  • Apache ZooKeeper
  • Apache Kafka
  • Apache Kafka Connect
  • Apache Kafka MirrorMaker
  • Kafka Exporter

The Kafka Bridge and Cruise Control components are provided as separate zipped archives.

2.2. Downloading an AMQ Streams archive

An archived distribution of AMQ Streams is available for download from the Red Hat website. You can download a copy of the distribution by following the steps below.

Procedure

  • Download the latest version of the Red Hat AMQ Streams archive from the Customer Portal.

2.3. Installing AMQ Streams

Follow this procedure to install the latest version of AMQ Streams on Red Hat Enterprise Linux.

For instructions on upgrading an existing cluster to AMQ Streams 1.7, see AMQ Streams and Kafka upgrades.

Procedure

  1. Add a new kafka user and group.

    sudo groupadd kafka
    sudo useradd -g kafka kafka
    sudo passwd kafka
  2. Create the /opt/kafka directory.

    sudo mkdir /opt/kafka
  3. Create a temporary directory and extract the contents of the AMQ Streams ZIP file.

    mkdir /tmp/kafka
    unzip amq-streams_y.y-x.x.x.zip -d /tmp/kafka
  4. Move the extracted contents into the /opt/kafka directory and delete the temporary directory.

    sudo mv /tmp/kafka/kafka_y.y-x.x.x/* /opt/kafka/
    rm -r /tmp/kafka
  5. Change the ownership of the /opt/kafka directory to the kafka user.

    sudo chown -R kafka:kafka /opt/kafka
  6. Create the /var/lib/zookeeper directory for storing ZooKeeper data and set its ownership to the kafka user.

    sudo mkdir /var/lib/zookeeper
    sudo chown -R kafka:kafka /var/lib/zookeeper
  7. Create the /var/lib/kafka directory for storing Kafka data and set its ownership to the kafka user.

    sudo mkdir /var/lib/kafka
    sudo chown -R kafka:kafka /var/lib/kafka
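
You can verify the result with a quick ownership check (a sketch, offered only as an optional sanity check):

    ls -ld /opt/kafka /var/lib/zookeeper /var/lib/kafka

Each directory should be reported as owned by the kafka user and group.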

2.4. Data storage considerations

An efficient data storage infrastructure is essential to the optimal performance of AMQ Streams.

AMQ Streams requires block storage and works well with cloud-based block storage solutions, such as Amazon Elastic Block Store (EBS). The use of file storage is not recommended.

Choose local storage when possible. If local storage is not available, you can use a Storage Area Network (SAN) accessed by a protocol such as Fibre Channel or iSCSI.

2.4.1. Apache Kafka and ZooKeeper storage support

Use separate disks for Apache Kafka and ZooKeeper.

Kafka supports JBOD (Just a Bunch of Disks) storage, a data storage configuration of multiple disks or volumes. JBOD provides increased data storage for Kafka brokers. It can also improve performance.
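
For example, JBOD is configured by listing each disk or volume in the broker's log.dirs property (a sketch; the mount points are assumptions for illustration):

    log.dirs=/mnt/kafka-disk1,/mnt/kafka-disk2,/mnt/kafka-disk3

Kafka then spreads partition data across the listed directories.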

Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access.

Note

You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication.
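
Replication is handled at the application level rather than by the storage layer. For example, the broker defaults that govern topic replication can be set in /opt/kafka/config/server.properties (illustrative values, not prescribed by this guide):

    default.replication.factor=3
    min.insync.replicas=2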

2.4.2. File systems

It is recommended that you configure your storage system to use the XFS file system. AMQ Streams is also compatible with the ext4 file system, but this might require additional configuration for best results.
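
For example, a dedicated data volume can be formatted with XFS and mounted at the Kafka data directory (a sketch; the device name /dev/vdb is an assumption for illustration):

    sudo mkfs.xfs /dev/vdb
    sudo mount /dev/vdb /var/lib/kafka
    sudo chown -R kafka:kafka /var/lib/kafka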

2.5. Running a single node AMQ Streams cluster

This procedure shows how to run a basic AMQ Streams cluster consisting of a single Apache ZooKeeper node and a single Apache Kafka node, both running on the same host. The default configuration files are used for ZooKeeper and Kafka.

Warning

A single node AMQ Streams cluster does not provide reliability or high availability and is suitable only for development purposes.

Prerequisites

  • AMQ Streams is installed on the host

Running the cluster

  1. Edit the ZooKeeper configuration file /opt/kafka/config/zookeeper.properties. Set the dataDir option to /var/lib/zookeeper/:

    dataDir=/var/lib/zookeeper/
  2. Edit the Kafka configuration file /opt/kafka/config/server.properties. Set the log.dirs option to /var/lib/kafka/:

    log.dirs=/var/lib/kafka/
  3. Switch to the kafka user:

    su - kafka
  4. Start ZooKeeper:

    /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
  5. Check that ZooKeeper is running:

    jcmd | grep zookeeper

    Returns:

    number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties
  6. Start Kafka:

    /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
  7. Check that Kafka is running:

    jcmd | grep kafka

    Returns:

    number kafka.Kafka /opt/kafka/config/server.properties

2.6. Using the cluster

This procedure describes how to start the Kafka console producer and consumer clients and use them to send and receive several messages.

A new topic is automatically created in step one. Topic auto-creation is controlled using the auto.create.topics.enable configuration property (set to true by default). Alternatively, you can configure and create topics before using the cluster. For more information, see Topics.
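
For example, a topic can be created up front with the kafka-topics.sh tool (a sketch; the topic name, partition count, and replication factor are example values):

    /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1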

Procedure

  1. Start the Kafka console producer and configure it to send messages to a new topic:

    /opt/kafka/bin/kafka-console-producer.sh --broker-list <bootstrap-address> --topic <topic-name>

    For example:

    /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic
  2. Enter several messages into the console. Press Enter to send each individual message to your new topic:

    >message 1
    >message 2
    >message 3
    >message 4

    When Kafka creates a new topic automatically, you might receive a warning that the topic does not exist:

    WARN Error while fetching metadata with correlation id 39 :
    {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

    The warning should not reappear after you send further messages.

  3. In a new terminal window, start the Kafka console consumer and configure it to read messages from the beginning of your new topic:

    /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-address> --topic <topic-name> --from-beginning

    For example:

    /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning

    The incoming messages display in the consumer console.

  4. Switch to the producer console and send additional messages. Check that they display in the consumer console.
  5. Stop the Kafka console producer and then the consumer by pressing Ctrl+C.

2.7. Stopping the AMQ Streams services

You can stop the Kafka and ZooKeeper services by running a script. All connections to the Kafka and ZooKeeper services will be terminated.

Prerequisites

  • AMQ Streams is installed on the host
  • ZooKeeper and Kafka are up and running

Procedure

  1. Stop the Kafka broker.

    su - kafka
    /opt/kafka/bin/kafka-server-stop.sh
  2. Confirm that the Kafka broker is stopped. The following command returns no output when the broker process has exited.

    jcmd | grep kafka
  3. Stop ZooKeeper.

    su - kafka
    /opt/kafka/bin/zookeeper-server-stop.sh

2.8. Configuring AMQ Streams

Prerequisites

  • AMQ Streams is downloaded and installed on the host

Procedure

  1. Open the ZooKeeper and Kafka broker configuration files in a text editor. The configuration files are located at:

    ZooKeeper
    /opt/kafka/config/zookeeper.properties
    Kafka
    /opt/kafka/config/server.properties
  2. Edit the configuration options. The configuration files are in the Java properties format. Every configuration option should be on a separate line in the following format:

    <option> = <value>

    Lines starting with # or ! are treated as comments and are ignored by AMQ Streams components.

    # This is a comment

    Values can be split across multiple lines by placing a backslash (\) directly before the line break.

    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="bob" \
        password="bobs-password";
  3. Save the changes.
  4. Restart the ZooKeeper or Kafka broker.
  5. Repeat this procedure on all the nodes of the cluster.
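
As an illustration of step 4, restarting a Kafka broker on one node can use the scripts shipped with AMQ Streams (run as the kafka user; a sketch built from the commands shown earlier in this chapter):

    su - kafka
    /opt/kafka/bin/kafka-server-stop.sh
    /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties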