Chapter 8. Securing Kafka
A secure deployment of AMQ Streams might encompass one or more of the following security measures:
- Encryption for data exchange
- Authentication to prove identity
- Authorization to allow or decline actions executed by users
- Running AMQ Streams on FIPS-enabled OpenShift clusters to ensure data security and system interoperability
8.1. Encryption
AMQ Streams supports Transport Layer Security (TLS), a protocol for encrypted communication.
Communication is always encrypted between:
- Kafka brokers
- ZooKeeper nodes
- Operators and Kafka brokers
- Operators and ZooKeeper nodes
- Kafka Exporter and Kafka brokers
You can also configure TLS encryption between Kafka brokers and clients. TLS is specified for external clients when configuring an external listener for the Kafka broker.
AMQ Streams components and Kafka clients use digital certificates for encryption. The Cluster Operator sets up certificates to enable encryption within the Kafka cluster. You can provide your own server certificates, referred to as Kafka listener certificates, for communication between Kafka clients and Kafka brokers, and inter-cluster communication.
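For example, the following excerpt from a Kafka custom resource is a minimal sketch of an external listener with TLS encryption; the cluster name, listener name, port, and Secret and file names are placeholders, and the brokerCertChainAndKey block is only needed if you provide your own Kafka listener certificate.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                          # placeholder cluster name
spec:
  kafka:
    # ...other broker configuration omitted...
    listeners:
      # External listener offering TLS-encrypted client connections
      - name: external
        port: 9094
        type: route
        tls: true
        configuration:
          # Optional: use your own Kafka listener certificate instead of
          # the certificate generated by the Cluster Operator
          brokerCertChainAndKey:
            secretName: my-listener-certificate   # placeholder Secret name
            certificate: tls.crt
            key: tls.key
```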
AMQ Streams uses Secrets to store the certificates and private keys required for mTLS in PEM and PKCS #12 format.
A TLS CA (certificate authority) issues certificates to authenticate the identity of a component. AMQ Streams verifies the certificates for the components against the CA certificate.
- AMQ Streams components are verified against the cluster CA
- Kafka clients are verified against the clients CA
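As an illustration, a client application running in the same OpenShift cluster can mount the cluster CA certificate Secret to build its truststore. The sketch below assumes a cluster named my-cluster, so the Secret follows the my-cluster-cluster-ca-cert naming convention; the Pod, container, and image names are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client                   # placeholder client Pod
spec:
  containers:
    - name: client
      image: registry.example.com/my-kafka-client:latest   # placeholder image
      volumeMounts:
        - name: cluster-ca
          mountPath: /etc/kafka/certs
          readOnly: true
  volumes:
    - name: cluster-ca
      secret:
        # Secret created by the Cluster Operator; contains ca.crt (PEM)
        # plus ca.p12 and ca.password (PKCS #12 truststore and its password)
        secretName: my-cluster-cluster-ca-cert
```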
8.2. Authentication
Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster.
Supported authentication mechanisms:
- mTLS authentication (on listeners with TLS-enabled encryption)
- SASL SCRAM-SHA-512
- OAuth 2.0 token-based authentication
- Custom authentication
The User Operator manages user credentials for mTLS and SCRAM authentication, but not OAuth 2.0. For example, through the User Operator you can create a user representing a client that requires access to the Kafka cluster, and specify tls as the authentication type.
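For example, a minimal KafkaUser resource requesting mTLS credentials might look like the following sketch; the user and cluster names are placeholders.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                        # placeholder user name
  labels:
    # Identifies the Kafka cluster this user belongs to
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    # The User Operator generates a client certificate and key for this user
    type: tls
```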
Using OAuth 2.0 token-based authentication, application clients can access Kafka brokers without exposing account credentials. An authorization server handles the granting of access and inquiries about access.
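A minimal sketch of enabling OAuth 2.0 authentication on a listener follows; the issuer and JWKS endpoint URIs are placeholders that point to your authorization server.

```yaml
# Excerpt from a Kafka custom resource (spec.kafka.listeners)
listeners:
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: oauth
      # Placeholder endpoints on the authorization server
      validIssuerUri: https://auth-server.example.com/realms/kafka
      jwksEndpointUri: https://auth-server.example.com/realms/kafka/protocol/openid-connect/certs
```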
Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity.
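As a hedged sketch, custom authentication is configured on a listener with the custom type; the callback handler class and listener property below are hypothetical examples of configuration you would supply yourself.

```yaml
# Excerpt from a Kafka custom resource (spec.kafka.listeners)
listeners:
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: custom
      sasl: true
      listenerConfig:
        # Hypothetical custom callback handler class provided by you
        oauthbearer.sasl.server.callback.handler.class: com.example.CustomCallbackHandler
```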
8.3. Authorization
Kafka clusters use authorization to control the operations that are permitted on Kafka brokers by specific clients or users. If applied to a Kafka cluster, authorization is enabled for all listeners used for client connection.
If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints implemented through authorization mechanisms.
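For example, super users are declared in the broker authorization configuration. The following sketch assumes simple authorization; the principal names are placeholders.

```yaml
# Excerpt from a Kafka custom resource (spec.kafka)
authorization:
  type: simple
  superUsers:
    # Principals listed here bypass all ACL checks
    - CN=my-admin-client
    - my-admin-user
```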
Supported authorization mechanisms:
- Simple authorization
- OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication)
- Open Policy Agent (OPA) authorization
- Custom authorization
Simple authorization uses AclAuthorizer, the default Kafka authorization plugin. AclAuthorizer uses Access Control Lists (ACLs) to define which users have access to which resources. For custom authorization, you configure your own Authorizer plugin to enforce ACL rules.
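With simple authorization, ACL rules can also be declared on a KafkaUser resource and applied by the User Operator; the following is a minimal sketch with placeholder user, cluster, and topic names.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                        # placeholder user name
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Allow this user to read from the example topic
      - resource:
          type: topic
          name: my-topic
        operation: Read
      - resource:
          type: topic
          name: my-topic
        operation: Describe
```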
OAuth 2.0 and OPA provide policy-based control from an authorization server. Security policies and permissions used to grant access to resources on Kafka brokers are defined in the authorization server.
URLs are used to connect to the authorization server and check whether an operation requested by a client or user is allowed or denied. Users and clients are matched against the policies created in the authorization server that permit access to perform specific actions on Kafka brokers.
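For example, Open Policy Agent authorization is configured with the URL of the policy endpoint on the OPA server used to authorize requests; the URL and super user principal below are placeholders.

```yaml
# Excerpt from a Kafka custom resource (spec.kafka)
authorization:
  type: opa
  # Placeholder URL of the OPA policy that authorizes Kafka requests
  url: http://opa:8181/v1/data/kafka/authz/allow
  superUsers:
    - CN=my-admin-client
```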
8.4. Federal Information Processing Standards (FIPS)
Federal Information Processing Standards (FIPS) are a set of security standards established by the US government to ensure the confidentiality, integrity, and availability of sensitive data and information that is processed or transmitted by information systems. The OpenJDK used in AMQ Streams container images automatically enables FIPS mode when running on a FIPS-enabled OpenShift cluster.
If you don’t want to use FIPS, you can disable it in the deployment configuration of the Cluster Operator using the FIPS_MODE environment variable.
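For example, the following Deployment excerpt is a sketch of disabling FIPS mode by setting the FIPS_MODE environment variable on the Cluster Operator container.

```yaml
# Excerpt from the Cluster Operator Deployment
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            # Disables FIPS mode even on a FIPS-enabled OpenShift cluster
            - name: FIPS_MODE
              value: "disabled"
```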