Chapter 14. Using Kerberos (GSSAPI) authentication
AMQ Streams supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes.
Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Center (KDC).
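For example, on a host with a Kerberos client configured, a principal obtains a ticket from the KDC and inspects it with the standard MIT Kerberos tools (a minimal sketch; the keytab path and principal name are illustrative and match the examples created later in this chapter):

# Obtain a Ticket Granting Ticket (TGT) for a principal using its keytab
kinit -kt /opt/kafka/krb5/kafka-producer1.keytab producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM

# List the tickets held in the credential cache
klist

# Discard the tickets when finished
kdestroy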
14.1. Setting up AMQ Streams to use Kerberos (GSSAPI) authentication
This procedure shows how to configure AMQ Streams so that Kafka clients can access Kafka and ZooKeeper using Kerberos (GSSAPI) authentication.
The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host.
The procedure shows, with examples, how to configure:
- Service principals
- Kafka brokers to use a Kerberos login
- ZooKeeper to use a Kerberos login
- Producer and consumer clients to access Kafka using Kerberos authentication
The instructions describe Kerberos setup for a single ZooKeeper and Kafka installation on a single host, with additional configuration for a producer and consumer client.
Prerequisites
To be able to configure Kafka and ZooKeeper to authenticate and authorize Kerberos credentials, you will need:
- Access to a Kerberos server
- A Kerberos client on each Kafka broker host
For more information on the steps to set up a Kerberos server, and clients on broker hosts, see the example Kerberos on RHEL set up configuration.
How you deploy Kerberos depends on your operating system. Red Hat recommends using Identity Management (IdM) when setting up Kerberos on Red Hat Enterprise Linux. Users of an Oracle or IBM JDK must install a Java Cryptography Extension (JCE).
Add service principals for authentication
From your Kerberos server, create service principals (users) for ZooKeeper, Kafka brokers, and Kafka producer and consumer clients.
Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM.
Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC.
For example:

- zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM
- kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM
- producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM
- consumer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM

The ZooKeeper service principal must have the same hostname as the zookeeper.connect configuration in the Kafka config/server.properties file:

zookeeper.connect=node1.example.redhat.com:2181

If the hostname is not the same, localhost is used and authentication will fail. An example of how these principals and keytabs might be created on the KDC is sketched after this procedure.
Create a directory on the host and add the keytab files. For example:

/opt/kafka/krb5/zookeeper-node1.keytab
/opt/kafka/krb5/kafka-node1.keytab
/opt/kafka/krb5/kafka-producer1.keytab
/opt/kafka/krb5/kafka-consumer1.keytab

Ensure the kafka user can access the directory:

chown kafka:kafka -R /opt/kafka/krb5
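For reference, the principals and keytabs listed above might be created on an MIT Kerberos KDC with kadmin.local, roughly as in the following sketch (not part of the AMQ Streams procedure; with IdM the equivalent ipa and ipa-getkeytab tooling applies, and the temporary /tmp paths are assumptions):

# On the KDC host: create the service principals with random keys
kadmin.local -q "add_principal -randkey zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "add_principal -randkey kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "add_principal -randkey producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "add_principal -randkey consumer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM"

# Export the principal keys to keytab files, then copy them to the Kafka host
kadmin.local -q "ktadd -k /tmp/zookeeper-node1.keytab zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "ktadd -k /tmp/kafka-node1.keytab kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "ktadd -k /tmp/kafka-producer1.keytab producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "ktadd -k /tmp/kafka-consumer1.keytab consumer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM"

# On the Kafka host: confirm that each copied keytab contains the expected principal
klist -kt /opt/kafka/krb5/kafka-node1.keytab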
Configure ZooKeeper to use a Kerberos login
Configure ZooKeeper to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for zookeeper.
Create or modify the /opt/kafka/config/jaas.conf file to support ZooKeeper client and server operations:

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useKeyTab=true 1
    storeKey=true 2
    useTicketCache=false 3
    keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" 4
    principal="zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM"; 5
};

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/opt/kafka/krb5/zookeeper-node1.keytab"
    principal="zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
};

QuorumServer {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useKeyTab=true
    storeKey=true
    keyTab="/opt/kafka/krb5/zookeeper-node1.keytab"
    principal="zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
};

QuorumLearner {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useKeyTab=true
    storeKey=true
    keyTab="/opt/kafka/krb5/zookeeper-node1.keytab"
    principal="zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
};
1. Set to true to get the principal key from the keytab.
2. Set to true to store the principal key.
3. Controls whether the Ticket Granting Ticket (TGT) is obtained from the ticket cache. It is set to false here because the credentials are read from the keytab instead.
4. The keyTab property points to the location of the keytab file copied from the Kerberos KDC. The location and file must be readable by the kafka user.
5. The principal property is configured to match the fully-qualified principal name created on the KDC host, which follows the format SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM.
Edit /opt/kafka/config/zookeeper.properties to use the updated JAAS configuration:

# ...
requireClientAuthScheme=sasl
jaasLoginRenew=3600000 1
kerberos.removeHostFromPrincipal=false 2
kerberos.removeRealmFromPrincipal=false 3
quorum.auth.enableSasl=true 4
quorum.auth.learnerRequireSasl=true 5
quorum.auth.serverRequireSasl=true
quorum.auth.learner.loginContext=QuorumLearner 6
quorum.auth.server.loginContext=QuorumServer
quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7
quorum.cnxn.threads.size=20
1. Controls the frequency for login renewal in milliseconds, which can be adjusted to suit ticket renewal intervals. Default is one hour.
2. Dictates whether the hostname is used as part of the login principal name. If using a single keytab for all nodes in the cluster, this is set to true. However, it is recommended to generate a separate keytab and fully-qualified principal for each broker host for troubleshooting.
3. Controls whether the realm name is stripped from the principal name for Kerberos negotiations. It is recommended that this setting remains false.
4. Enables SASL authentication mechanisms for the ZooKeeper server and client.
5. The RequireSasl properties control whether SASL authentication is required for quorum events, such as master elections.
6. The loginContext properties identify the name of the login context in the JAAS configuration used for authentication configuration of the specified component. The loginContext names correspond to the names of the relevant sections in the /opt/kafka/config/jaas.conf file.
7. Controls the naming convention used to form the principal name used for identification. The placeholder _HOST is automatically resolved to the hostnames defined by the server.1 properties at runtime, as sketched below.
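The server.N quorum entries referenced in item 7 define the hostnames that the _HOST placeholder resolves to. A sketch for a multi-node ensemble is shown below; the node2 and node3 hostnames are illustrative, and a single-host installation like the one in this procedure does not require any quorum entries:

# Example quorum definition for a three-node ensemble in zookeeper.properties
server.1=node1.example.redhat.com:2888:3888
server.2=node2.example.redhat.com:2888:3888
server.3=node3.example.redhat.com:2888:3888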
Start ZooKeeper with JVM parameters to specify the Kerberos login configuration:
su - kafka
export EXTRA_ARGS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"
/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
If you are not using the default service name (zookeeper), add the name using the -Dzookeeper.sasl.client.username=NAME parameter.

Note: If you are using the /etc/krb5.conf location, you do not need to specify -Djava.security.krb5.conf=/etc/krb5.conf when starting ZooKeeper, Kafka, or the Kafka producer and consumer.
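As an optional check (a sketch, not part of the documented procedure), you can confirm that a Kerberos-authenticated connection to ZooKeeper succeeds by running the zookeeper-shell.sh tool shipped with Kafka against the same JAAS configuration; the Client section of jaas.conf is used for the connection:

su - kafka
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"
# List the root znode; a successful, authenticated connection prints its child nodes
/opt/kafka/bin/zookeeper-shell.sh node1.example.redhat.com:2181 ls /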
Configure the Kafka broker server to use a Kerberos login
Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka.
Modify the /opt/kafka/config/jaas.conf file with the following elements:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/opt/kafka/krb5/kafka-node1.keytab"
    principal="kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
};

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/opt/kafka/krb5/kafka-node1.keytab"
    principal="kafka/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
};
Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so that the listeners use the SASL/GSSAPI login.

Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols.

For example:

# ...
broker.id=0
# ...
listeners=SECURE://:9092,REPLICATION://:9094 1
inter.broker.listener.name=REPLICATION
# ...
listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2
# ...
sasl.enabled.mechanisms=GSSAPI 3
sasl.mechanism.inter.broker.protocol=GSSAPI 4
sasl.kerberos.service.name=kafka 5
# ...
1. Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications.
2. For TLS-enabled listeners, the protocol name is SASL_SSL. For non-TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties.
3. The SASL mechanism for Kerberos authentication is GSSAPI.
4. Kerberos authentication for inter-broker communication.
5. The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration.
Start the Kafka broker with JVM parameters to specify the Kerberos login configuration:
su - kafka
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
If the broker and ZooKeeper cluster were previously configured and working with a non-Kerberos-based authentication system, it is possible to start the ZooKeeper and broker cluster and check for configuration errors in the logs.
After starting the broker and ZooKeeper instances, the cluster is configured for Kerberos authentication.
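As a quick sanity check (a sketch; the /opt/kafka/logs/server.log location is an assumption based on this installation layout), you can confirm from the broker log that startup completed and that no JAAS login failures occurred:

# The broker logs this message once startup is complete
grep "started (kafka.server.KafkaServer)" /opt/kafka/logs/server.log

# JAAS or Kerberos misconfiguration typically surfaces as a LoginException
grep -i "LoginException" /opt/kafka/logs/server.log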
Configure Kafka producer and consumer clients to use Kerberos authentication
Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1.
Add the Kerberos configuration to the producer or consumer configuration file.

For example:

/opt/kafka/config/producer.properties

# ...
sasl.mechanism=GSSAPI
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    useTicketCache=false \
    storeKey=true \
    keyTab="/opt/kafka/krb5/kafka-producer1.keytab" \
    principal="producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
# ...
/opt/kafka/config/consumer.properties

# ...
sasl.mechanism=GSSAPI
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    useTicketCache=false \
    storeKey=true \
    keyTab="/opt/kafka/krb5/kafka-consumer1.keytab" \
    principal="consumer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM";
# ...
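If the topic used in the following examples does not already exist, you can create it first with kafka-topics.sh, reusing the producer configuration for SASL authentication (a sketch; whether this step is needed depends on whether automatic topic creation is enabled on the brokers):

export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf"
/opt/kafka/bin/kafka-topics.sh --create --topic topic1 --partitions 1 --replication-factor 1 --command-config /opt/kafka/config/producer.properties --bootstrap-server node1.example.redhat.com:9092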
Run the clients to verify that you can send and receive messages from the Kafka brokers.
Producer client:
export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"
/opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9092
Consumer client:
export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"
/opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9092
Additional resources
- Kerberos man pages: krb5.conf(5), kinit(1), klist(1), and kdestroy(1)
- Example Kerberos server on RHEL set up configuration
- Example client application to authenticate with a Kafka cluster using Kerberos tickets