
Chapter 8. Securing Kafka


A secure deployment of Streams for Apache Kafka might encompass one or more of the following security measures:

  • Encryption for data exchange
  • Authentication to prove identity
  • Authorization to allow or decline actions executed by users
  • Running Streams for Apache Kafka on FIPS-enabled OpenShift clusters to ensure data security and system interoperability

8.1. Encryption

Streams for Apache Kafka supports Transport Layer Security (TLS), a protocol for encrypted communication.

Communication is always encrypted between:

  • Kafka brokers
  • ZooKeeper nodes
  • Kafka brokers and ZooKeeper nodes
  • Operators and Kafka brokers
  • Operators and ZooKeeper nodes
  • Kafka Exporter and Kafka brokers

You can also configure TLS encryption between Kafka brokers and clients. TLS is specified for external clients when configuring an external listener for the Kafka broker.
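For example, a listener configuration in the Kafka custom resource might look like the following sketch, where the cluster name, listener names, and the route listener type are illustrative:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster                 # example cluster name
    spec:
      kafka:
        listeners:
          # Internal listener with TLS encryption enabled
          - name: tls
            port: 9093
            type: internal
            tls: true
          # External listener exposed as an OpenShift route (always TLS-encrypted)
          - name: external
            port: 9094
            type: route
            tls: true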

Streams for Apache Kafka components and Kafka clients use digital certificates for encryption. The Cluster Operator sets up certificates to enable encryption within the Kafka cluster. You can provide your own server certificates, referred to as Kafka listener certificates, for communication between Kafka clients and Kafka brokers, and inter-cluster communication.

Streams for Apache Kafka uses Secrets to store the certificates and private keys required for mTLS in PEM and PKCS #12 format.
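For illustration, the Secret holding the cluster CA certificate for a cluster named my-cluster (an assumed name) typically provides both formats side by side:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-cluster-cluster-ca-cert    # created by the Cluster Operator
    data:
      ca.crt: <Base64-encoded PEM certificate>
      ca.p12: <Base64-encoded PKCS #12 truststore>
      ca.password: <Base64-encoded password for the PKCS #12 truststore>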

A TLS CA (certificate authority) issues certificates to authenticate the identity of a component. Streams for Apache Kafka verifies the certificates for the components against the CA certificate.

  • Streams for Apache Kafka components are verified against the cluster CA
  • Kafka clients are verified against the clients CA

8.2. Authentication

Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster.

Supported authentication mechanisms:

  • mTLS authentication (on listeners with TLS encryption enabled)
  • SASL SCRAM-SHA-512
  • OAuth 2.0 token-based authentication
  • Custom authentication

The User Operator manages user credentials for mTLS and SCRAM authentication, but not OAuth 2.0. For example, through the User Operator you can create a user representing a client that requires access to the Kafka cluster, and specify tls as the authentication type.
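A minimal sketch of such a user, assuming a cluster named my-cluster and a user named my-client:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaUser
    metadata:
      name: my-client                    # example user name
      labels:
        strimzi.io/cluster: my-cluster   # binds the user to a Kafka cluster
    spec:
      authentication:
        type: tls                        # the User Operator issues mTLS client credentials

When the authentication type is tls, the User Operator creates a Secret containing the client certificate and private key that the client presents to the brokers.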

Using OAuth 2.0 token-based authentication, application clients can access Kafka brokers without exposing account credentials. An authorization server grants access and handles inquiries about access.
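A sketch of a listener using OAuth 2.0 authentication; the issuer and JWKS endpoint URIs are placeholders for your own authorization server:

    # Under spec.kafka in the Kafka resource
    listeners:
      - name: external
        port: 9094
        type: route
        tls: true
        authentication:
          type: oauth
          # Placeholder URIs; substitute the values for your authorization server
          validIssuerUri: https://auth.example.com/realms/kafka
          jwksEndpointUri: https://auth.example.com/realms/kafka/protocol/openid-connect/certs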

Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity.
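As an illustration only, a custom authentication listener passes configuration directly through to the broker; the callback handler class and listener properties below are hypothetical placeholders:

    # Under spec.kafka in the Kafka resource
    listeners:
      - name: custom
        port: 9095
        type: internal
        tls: true
        authentication:
          type: custom
          sasl: true
          listenerConfig:
            sasl.enabled.mechanisms: oauthbearer
            # Hypothetical handler class; supply your own implementation
            oauthbearer.sasl.server.callback.handler.class: com.example.CustomCallbackHandler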

8.3. Authorization

Kafka clusters use authorization to control the operations that are permitted on Kafka brokers by specific clients or users. If applied to a Kafka cluster, authorization is enabled for all listeners used for client connection.

If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster, regardless of any constraints enforced by the authorization mechanism in use.
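For example, super users are declared under the authorization configuration of the Kafka resource; the principal name below is illustrative:

    spec:
      kafka:
        authorization:
          type: simple
          superUsers:
            - CN=my-admin-user   # bypasses all ACL checks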

Supported authorization mechanisms:

  • Simple authorization
  • OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication)
  • Open Policy Agent (OPA) authorization
  • Custom authorization

Simple authorization uses the AclAuthorizer and StandardAuthorizer Kafka plugins, which are responsible for managing Access Control Lists (ACLs) that specify user access to various resources. For custom authorization, you configure your own Authorizer plugin to enforce ACL rules.
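A sketch of simple authorization managed through the User Operator, granting an example client read access to one topic and its consumer group (all names are illustrative):

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaUser
    metadata:
      name: my-client
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: tls
      authorization:
        type: simple
        acls:
          # Allow reading from the topic
          - resource:
              type: topic
              name: my-topic
            operations:
              - Describe
              - Read
          # Allow the consumer group used by the client
          - resource:
              type: group
              name: my-group
            operations:
              - Read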

OAuth 2.0 and OPA provide policy-based control from an authorization server. Security policies and permissions used to grant access to resources on Kafka brokers are defined in the authorization server.

A URL is used to connect to the authorization server and check whether an operation requested by a client or user is allowed or denied. Users and clients are matched against the policies defined in the authorization server that permit access to perform specific actions on Kafka brokers.
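For example, OPA authorization is configured with the URL of the policy to query; the address below is an assumed placeholder:

    spec:
      kafka:
        authorization:
          type: opa
          # Placeholder address of the OPA policy used for authorization decisions
          url: http://opa:8181/v1/data/kafka/authz/allow
          superUsers:
            - CN=my-admin-user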

8.4. Federal Information Processing Standards (FIPS)

Federal Information Processing Standards (FIPS) are a set of security standards established by the US government to ensure the confidentiality, integrity, and availability of sensitive data and information that is processed or transmitted by information systems. The OpenJDK used in Streams for Apache Kafka container images automatically enables FIPS mode when running on a FIPS-enabled OpenShift cluster.

Note

If you do not want FIPS mode enabled in OpenJDK, you can disable it in the deployment configuration of the Cluster Operator using the FIPS_MODE environment variable.
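A minimal sketch of disabling FIPS mode, assuming the standard Cluster Operator Deployment from the installation files:

    # Excerpt from the Cluster Operator Deployment
    spec:
      template:
        spec:
          containers:
            - name: strimzi-cluster-operator
              env:
                - name: FIPS_MODE
                  value: "disabled"   # disables FIPS mode for the operator and its operands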

For more information about the NIST validation program and validated modules, see Cryptographic Module Validation Program on the NIST website.

Note

FIPS support has not been tested for the technology previews of Streams for Apache Kafka Proxy and Streams for Apache Kafka Console. While they are expected to function properly, full support cannot be guaranteed at this time.
