Chapter 14. Adding and removing Kafka brokers and ZooKeeper nodes
In a Kafka cluster, managing the addition and removal of brokers and ZooKeeper nodes is critical to maintaining a stable and scalable system. When you increase the number of available brokers, you can adjust the default replication factor and minimum in-sync replicas for topics across the cluster. You can use dynamic reconfiguration to add and remove ZooKeeper nodes from an ensemble without disruption.
14.1. Scaling clusters by adding or removing brokers
Scaling Kafka clusters by adding brokers can increase the performance and reliability of the cluster. Adding more brokers increases available resources, allowing the cluster to handle larger workloads and process more messages; by redistributing partitions across all brokers in the cluster, the resource utilization of each broker is reduced, which can increase the overall throughput of the cluster. Adding brokers can also improve fault tolerance by providing more replicas and backups. Conversely, removing underutilized brokers can reduce resource consumption and improve efficiency. Scaling must be done carefully to avoid disruption or data loss.
To increase the throughput of a Kafka topic, you can increase the number of partitions for that topic. This allows the load of the topic to be shared between different brokers in the cluster. However, if every broker is constrained by a specific resource (such as I/O), adding more partitions will not increase the throughput. In this case, you need to add more brokers to the cluster.
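For illustration, you can increase the partition count of an existing topic with the kafka-topics.sh tool; in this sketch, the topic name and bootstrap address are placeholders for your own values:

# Increase the number of partitions for <my-topic> to 12 (partition counts can only be increased, not decreased)
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic <my-topic> --partitions 12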
When you run a multi-node Kafka cluster, adding brokers changes the number of brokers available to act as replicas. The actual replication factor for topics is determined by the default.replication.factor and min.insync.replicas settings and the number of available brokers. For example, a replication factor of 3 means that each partition of a topic is replicated across three brokers, ensuring fault tolerance in the event of a broker failure.
Example replica configuration
default.replication.factor = 3
min.insync.replicas = 2
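As an illustration, a topic created without an explicit replication factor inherits the broker default shown above; the topic name and bootstrap address in this sketch are placeholders:

# Create a topic without --replication-factor; it uses default.replication.factor (3) from the broker configuration
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic <my-topic> --partitions 6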
When you add or remove brokers, Kafka does not automatically reassign partitions. The best way to reassign them is to use Cruise Control. You can use Cruise Control’s add-brokers and remove-brokers modes when scaling a cluster up or down.
- Use the add-brokers mode after scaling up a Kafka cluster to move partition replicas from existing brokers to the newly added brokers.
- Use the remove-brokers mode before scaling down a Kafka cluster to move partition replicas off the brokers that are going to be removed.
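For illustration, if Cruise Control is deployed alongside the cluster, these modes map to its add_broker and remove_broker REST endpoints. The following sketch assumes Cruise Control is reachable on its default port 9090; the broker IDs are placeholders:

# Dry run: generate a proposal for moving partition replicas onto new brokers 3 and 4
curl -X POST 'http://localhost:9090/kafkacruisecontrol/add_broker?brokerid=3,4&dryrun=true'

# Move partition replicas off broker 4 before it is removed from the cluster
curl -X POST 'http://localhost:9090/kafkacruisecontrol/remove_broker?brokerid=4&dryrun=false'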
14.2. Adding nodes to a ZooKeeper cluster
Use dynamic reconfiguration to add nodes to a ZooKeeper cluster without stopping the entire cluster. Dynamic reconfiguration allows ZooKeeper to change the membership of the set of nodes that make up the ZooKeeper cluster without interruption.
Prerequisites
- Dynamic reconfiguration is enabled in the ZooKeeper configuration file (reconfigEnabled=true).
- ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism.
Procedure
Perform the following steps for each ZooKeeper server you are adding, one at a time:
- Add a server to the ZooKeeper cluster as described in Section 4.1, “Running a multi-node ZooKeeper cluster” and then start ZooKeeper.
- Note the IP address and configured access ports of the new server.
- Start a zookeeper-shell session for the server. Run the following command from a machine that has access to the cluster (this might be one of the ZooKeeper nodes or your local machine, if it has access):

  ./bin/zookeeper-shell.sh <ip-address>:<zk-port>

- In the shell session, with the ZooKeeper node running, enter the following line to add the new server to the quorum as a voting member:

  reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port>

  For example:

  reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181

  Where <positive-id> is the new server ID, 4.

  For the two ports, <port1> 2888 is for communication between ZooKeeper servers, and <port2> 3888 is for leader election.

  The new configuration propagates to the other servers in the ZooKeeper cluster; the new server is now a full member of the quorum.
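To verify the change, you can display the current quorum membership from the same zookeeper-shell session; this is a quick check rather than part of the documented procedure:

config

The new server should appear in the output alongside the existing ensemble members.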
14.3. Removing nodes from a ZooKeeper cluster
Use dynamic reconfiguration to remove nodes from a ZooKeeper cluster without stopping the entire cluster. Dynamic reconfiguration allows ZooKeeper to change the membership of the set of nodes that make up the ZooKeeper cluster without interruption.
Prerequisites
- Dynamic reconfiguration is enabled in the ZooKeeper configuration file (reconfigEnabled=true).
- ZooKeeper authentication is enabled and you can access the servers using the authentication mechanism.
Procedure
Perform the following steps, one at a time, for each ZooKeeper server you remove:
- Log in to the zookeeper-shell on one of the servers that will be retained after the scale down (for example, server 1).

  Note: Access the server using the authentication mechanism configured for the ZooKeeper cluster.

- Remove a server, for example server 5:

  reconfig -remove 5

- Deactivate the server that you removed, as illustrated below.
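As a sketch of this final step, assuming ZooKeeper on the removed node was started using the scripts shipped with the Kafka distribution, you could stop the process on that node as follows; adjust the path to match your installation:

# Run on the node that was removed from the quorum (server 5 in this example)
./bin/zookeeper-server-stop.sh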