Chapter 12. Security
AMQ Streams supports encrypted communication between the Kafka and AMQ Streams components using the TLS protocol. Communication between Kafka brokers (interbroker communication), between Zookeeper nodes (internodal communication), and between these and the AMQ Streams operators is always encrypted. Communication between Kafka clients and Kafka brokers is encrypted according to how the cluster is configured. For the Kafka and AMQ Streams components, TLS certificates are also used for authentication.
The Cluster Operator automatically sets up TLS certificates to enable encryption and authentication within your cluster. It also sets up other TLS certificates if you want to enable encryption or TLS authentication between Kafka brokers and clients.
12.1. Certificate Authorities
To support encryption, each AMQ Streams component needs its own private keys and public key certificates. All component certificates are signed by a Certificate Authority (CA) called the cluster CA.
Similarly, each Kafka client application connecting using TLS client authentication needs private keys and certificates. The clients CA is used to sign the certificates for the Kafka clients.
12.1.1. CA certificates
Each CA has a self-signed public key certificate.
Kafka brokers are configured to trust certificates signed by either the clients CA or the cluster CA. Components to which clients do not need to connect, such as Zookeeper, only trust certificates signed by the cluster CA. Client applications that perform mutual TLS authentication have to trust the certificates signed by the cluster CA.
By default, AMQ Streams generates and renews CA certificates automatically. You can configure the management of CA certificates in the Kafka.spec.clusterCa and Kafka.spec.clientsCa objects.
12.2. Certificates and Secrets
AMQ Streams stores Certificate Authority (CA), component, and Kafka client private keys and certificates in Secrets. All keys are 2048 bits in size.
CA certificate validity periods are expressed as a number of days after certificate generation. You can configure the validity period of cluster CA certificates in Kafka.spec.clusterCa.validityDays and of clients CA certificates in Kafka.spec.clientsCa.validityDays.
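For example, the following Kafka resource fragment sets the validity period for both CAs. This is a sketch only; the values shown are illustrative rather than defaults:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  # ...
  clusterCa:
    validityDays: 730   # cluster CA certificate valid for two years
  clientsCa:
    validityDays: 365   # clients CA certificate valid for one year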
12.2.1. Cluster CA Secrets
Secret name | Field within Secret | Description |
---|---|---|
<cluster>-cluster-ca | ca.key | The current private key for the cluster CA. |
<cluster>-cluster-ca-cert | ca.crt | The current certificate for the cluster CA. |
<cluster>-kafka-brokers | <cluster>-kafka-<num>.crt | Certificate for Kafka broker pod <num>. Signed by a current or former cluster CA private key in <cluster>-cluster-ca. |
<cluster>-kafka-brokers | <cluster>-kafka-<num>.key | Private key for Kafka broker pod <num>. |
<cluster>-zookeeper-nodes | <cluster>-zookeeper-<num>.crt | Certificate for Zookeeper node <num>. Signed by a current or former cluster CA private key in <cluster>-cluster-ca. |
<cluster>-zookeeper-nodes | <cluster>-zookeeper-<num>.key | Private key for Zookeeper pod <num>. |
<cluster>-entity-operator-certs | entity-operator.crt | Certificate for TLS communication between the Entity Operator and Kafka or Zookeeper. Signed by a current or former cluster CA private key in <cluster>-cluster-ca. |
<cluster>-entity-operator-certs | entity-operator.key | Private key for TLS communication between the Entity Operator and Kafka or Zookeeper. |
The CA certificates in <cluster>-cluster-ca-cert must be trusted by Kafka client applications so that they validate the Kafka broker certificates when connecting to Kafka brokers over TLS.
Only <cluster>-cluster-ca-cert needs to be used by clients. All other Secrets in the table above only need to be accessed by the AMQ Streams components. You can enforce this using OpenShift role-based access controls if necessary.
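For example, read access to the public CA certificate can be granted while leaving the other Secrets restricted. The following Role is a minimal sketch, not something AMQ Streams creates for you, and it assumes a cluster named my-cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-cluster-ca-cert
rules:
  # Allow reading only the public cluster CA certificate Secret
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["my-cluster-cluster-ca-cert"]
    verbs: ["get"]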
12.2.2. Client CA Secrets
Secret name | Field within Secret | Description |
---|---|---|
<cluster>-clients-ca | ca.key | The current private key for the clients CA. |
<cluster>-clients-ca-cert | ca.crt | The current certificate for the clients CA. |
The certificates in <cluster>-clients-ca-cert are those which the Kafka brokers trust.
<cluster>-clients-ca is used to sign certificates of client applications. It needs to be accessible to the AMQ Streams components, and for administrative access if you are intending to issue application certificates without using the User Operator. You can enforce this using OpenShift role-based access controls if necessary.
12.2.3. User Secrets
Secret name | Field within Secret | Description |
---|---|---|
<user> | user.crt | Certificate for the user, signed by the clients CA |
<user> | user.key | Private key for the user |
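A user Secret of this form is typically created by the User Operator from a KafkaUser resource that requests TLS client authentication. The following fragment is a sketch; the names my-user and my-cluster are illustrative:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user                    # the resulting Secret is also named my-user
  labels:
    strimzi.io/cluster: my-cluster # the Kafka cluster the user belongs to
spec:
  authentication:
    type: tls                      # the User Operator issues user.crt and user.key signed by the clients CA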
12.3. Installing your own CA certificates
This procedure describes how to install your own CA certificates and private keys instead of using CA certificates and private keys generated by the Cluster Operator.
Prerequisites
- The Cluster Operator is running.
- A Kafka cluster is not yet deployed.
- Your own X.509 certificates and keys in PEM format for the cluster CA or clients CA.
If you want to use a cluster or clients CA which is not a Root CA, you have to include the whole chain in the certificate file. The chain should be in the following order:
- The cluster or clients CA
- One or more intermediate CAs
- The root CA
All CAs in the chain should be configured as a CA in the X509v3 Basic Constraints.
Procedure
Put your CA certificate in the corresponding Secret (<cluster>-cluster-ca-cert for the cluster CA, or <cluster>-clients-ca-cert for the clients CA). Run the following commands:
# Delete any existing secret (ignore "Not Exists" errors)
oc delete secret <ca-cert-secret>
# Create the new one
oc create secret generic <ca-cert-secret> --from-file=ca.crt=<ca-cert-file>
Put your CA key in the corresponding Secret (<cluster>-cluster-ca for the cluster CA, or <cluster>-clients-ca for the clients CA). Run the following commands:
# Delete the existing secret
oc delete secret <ca-key-secret>
# Create the new one
oc create secret generic <ca-key-secret> --from-file=ca.key=<ca-key-file>
Label both Secrets with the labels strimzi.io/kind=Kafka and strimzi.io/cluster=<my-cluster>. Run the following commands:
oc label secret <ca-cert-secret> strimzi.io/kind=Kafka strimzi.io/cluster=<my-cluster>
oc label secret <ca-key-secret> strimzi.io/kind=Kafka strimzi.io/cluster=<my-cluster>
Create the Kafka resource for your cluster, configuring either the Kafka.spec.clusterCa or the Kafka.spec.clientsCa object to not use generated CAs.
Example fragment of a Kafka resource configuring the cluster CA to use certificates you supply for yourself:
kind: Kafka
apiVersion: kafka.strimzi.io/v1beta1
spec:
  # ...
  clusterCa:
    generateCertificateAuthority: false
12.4. Certificate renewal
The cluster CA and clients CA certificates are only valid for a limited time period, known as the validity period. This is usually defined as a number of days since the certificate was generated. For auto-generated CA certificates, you can configure the validity period in Kafka.spec.clusterCa.validityDays and Kafka.spec.clientsCa.validityDays. The default validity period for both certificates is 365 days. Manually-installed CA certificates should have their own validity period defined.
When a CA certificate expires, components and clients which still trust that certificate will not accept TLS connections from peers whose certificates were signed by the CA private key. The components and clients need to trust the new CA certificate instead.
To allow the renewal of CA certificates without a loss of service, the Cluster Operator will initiate certificate renewal before the old CA certificates expire. You can configure the renewal period in Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays (both default to 30 days). The renewal period is measured backwards, from the expiry date of the current certificate.
Not Before                                     Not After
    |                                              |
    |<--------------- validityDays --------------->|
                          <--- renewalDays --->|
The behavior of the Cluster Operator during the renewal period depends on whether the relevant setting is enabled, in either Kafka.spec.clusterCa.generateCertificateAuthority or Kafka.spec.clientsCa.generateCertificateAuthority.
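For example, the following Kafka resource fragment keeps automatic CA generation enabled and states the default validity and renewal periods explicitly (a sketch; adjust the values to your own policy):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  # ...
  clusterCa:
    generateCertificateAuthority: true   # default: the Cluster Operator generates and renews this CA
    validityDays: 365                    # default validity period
    renewalDays: 30                      # default renewal period
  clientsCa:
    generateCertificateAuthority: true
    validityDays: 365
    renewalDays: 30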
12.4.1. Renewal process with generated CAs
The Cluster Operator performs the following process to renew CA certificates:
- Generate a new CA certificate, but retain the existing key. The new certificate replaces the old one with the name ca.crt within the corresponding Secret.
- Generate new client certificates (for Zookeeper nodes, Kafka brokers, and the Entity Operator). This is not strictly necessary because the signing key has not changed, but it keeps the validity period of the client certificates in sync with the CA certificate.
- Restart Zookeeper nodes so that they will trust the new CA certificate and use the new client certificates.
- Restart Kafka brokers so that they will trust the new CA certificate and use the new client certificates.
- Restart the Topic and User Operators so that they will trust the new CA certificate and use the new client certificates.
12.4.2. Client applications
The Cluster Operator is not aware of all the client applications using the Kafka cluster.
Depending on how your applications are configured, you might need to take action to ensure they continue working after certificate renewal.
Consider the following important points to ensure that client applications continue working.
- When they connect to the cluster, client applications must trust the cluster CA certificate published in <cluster>-cluster-ca-cert.
- When using the User Operator to provision client certificates, client applications must use the current user.crt and user.key published in their <user> Secret when they connect to the cluster. For workloads running inside the same OpenShift cluster, this can be achieved by mounting the secrets as a volume and having the client Pods construct their key- and truststores from the current state of the Secrets. For more details on this procedure, see Section 12.8, “Configuring internal clients to trust the cluster CA”.
- If you are provisioning client certificates and keys manually, you must generate new client certificates and ensure that they are used by clients within the renewal period. Failure to do this by the end of the renewal period could result in client applications being unable to connect.
12.5. Renewing CA certificates manually
Unless the Kafka.spec.clusterCa.generateCertificateAuthority and Kafka.spec.clientsCa.generateCertificateAuthority objects are set to false, the cluster and clients CA certificates will auto-renew at the start of their respective certificate renewal periods. You can manually renew one or both of these certificates before the certificate renewal period starts, if required for security reasons. A renewed certificate uses the same private key as the old certificate.
Prerequisites
- The Cluster Operator is running.
- A Kafka cluster in which CA certificates and private keys are installed.
Procedure
Apply the strimzi.io/force-renew annotation to the Secret that contains the CA certificate that you want to renew.

Certificate | Secret | Annotate command |
---|---|---|
Cluster CA | <cluster-name>-cluster-ca-cert | oc annotate secret <cluster-name>-cluster-ca-cert strimzi.io/force-renew=true |
Clients CA | <cluster-name>-clients-ca-cert | oc annotate secret <cluster-name>-clients-ca-cert strimzi.io/force-renew=true |
At the next reconciliation the Cluster Operator will generate a new CA certificate for the Secret that you annotated. If maintenance time windows are configured, the Cluster Operator will generate the new CA certificate at the first reconciliation within the next maintenance time window.
Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator.
12.6. Replacing private keys
You can replace the private keys used by the cluster CA and clients CA certificates. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key.
Prerequisites
- The Cluster Operator is running.
- A Kafka cluster in which CA certificates and private keys are installed.
Procedure
Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to replace.

Private key for | Secret | Annotate command |
---|---|---|
Cluster CA | <cluster-name>-cluster-ca | oc annotate secret <cluster-name>-cluster-ca strimzi.io/force-replace=true |
Clients CA | <cluster-name>-clients-ca | oc annotate secret <cluster-name>-clients-ca strimzi.io/force-replace=true |
At the next reconciliation the Cluster Operator will:
- Generate a new private key for the Secret that you annotated
- Generate a new CA certificate
If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the next maintenance time window.
Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator.
12.7. TLS connections
12.7.1. Zookeeper communication
Zookeeper does not support TLS itself. By deploying a TLS sidecar within every Zookeeper pod, the Cluster Operator is able to provide data encryption and authentication between Zookeeper nodes in a cluster. Zookeeper only communicates with the TLS sidecar over the loopback interface. The TLS sidecar then proxies all Zookeeper traffic, TLS decrypting data upon entry into a Zookeeper pod, and TLS encrypting data upon departure from a Zookeeper pod.
This TLS encrypting stunnel proxy is instantiated from the spec.zookeeper.stunnelImage specified in the Kafka resource.
12.7.2. Kafka interbroker communication
Communication between Kafka brokers is done through an internal listener on port 9091, which is encrypted by default and not accessible to Kafka clients.
Communication between Kafka brokers and Zookeeper nodes uses a TLS sidecar, as described above.
12.7.3. Topic and User Operators
Like the Cluster Operator, the Topic and User Operators each use a TLS sidecar when communicating with Zookeeper. The Topic Operator connects to Kafka brokers on port 9091.
12.7.4. Kafka Client connections
Encrypted communication between Kafka brokers and clients running within the same OpenShift cluster can be provided by configuring the spec.kafka.listeners.tls listener, which listens on port 9093.
Encrypted communication between Kafka brokers and clients running outside the same OpenShift cluster can be provided by configuring the spec.kafka.listeners.external listener (the port of the external listener depends on its type).
Unencrypted client communication with brokers can be configured by spec.kafka.listeners.plain, which listens on port 9092.
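Taken together, a listener configuration might look like the following sketch (the external listener type shown here is only one of the available options):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      plain: {}       # unencrypted client connections on port 9092
      tls: {}         # TLS client connections on port 9093
      external:
        type: route   # external access; the client-facing port depends on the listener type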
12.8. Configuring internal clients to trust the cluster CA
This procedure describes how to configure a Kafka client that resides inside the OpenShift cluster, connecting to the tls listener on port 9093, to trust the cluster CA certificate.
The easiest way to achieve this for an internal client is to use a volume mount to access the Secrets containing the necessary certificates and keys.
Prerequisites
- The Cluster Operator is running.
- A Kafka resource within the OpenShift cluster.
- A Kafka client application inside the OpenShift cluster which will connect using TLS and needs to trust the cluster CA certificate.
Procedure
When defining the client Pod, mount the <cluster>-cluster-ca-cert Secret as a volume so that the client can access the cluster CA certificate (ca.crt), as in the sketch below.
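A minimal sketch of such a Pod fragment follows; the image name and mount path are illustrative, and the cluster is assumed to be named my-cluster:
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client
spec:
  containers:
    - name: client
      image: my-kafka-client-image          # illustrative client image
      volumeMounts:
        - name: cluster-ca
          mountPath: /opt/kafka/cluster-ca  # ca.crt from the Secret appears under this path
          readOnly: true
  volumes:
    - name: cluster-ca
      secret:
        secretName: my-cluster-cluster-ca-cert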
The Kafka client has to be configured to trust certificates signed by this CA. For the Java-based Kafka Producer, Consumer, and Streams APIs, you can do this by importing the CA certificate into the JVM’s truststore using the following keytool command:
keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt
To configure the Kafka client, specify the following properties:
-
security.protocol: SSL
when using TLS for encryption (with or without TLS authentication), or security.protocol: SASL_SSL
when using SCRAM-SHA authentication over TLS. -
ssl.truststore.location
: the truststore location where the certificates were imported. -
ssl.truststore.password
: the password for accessing the truststore. This property can be omitted if it is not needed by the truststore.
-
Additional resources
- For the procedure for configuring external clients to trust the cluster CA, see Section 12.9, “Configuring external clients to trust the cluster CA”
12.9. Configuring external clients to trust the cluster CA
This procedure describes how to configure a Kafka client that resides outside the OpenShift cluster, connecting to the external listener on port 9094, to trust the cluster CA certificate.
You can use the same procedure to configure clients inside OpenShift, which connect to the tls listener on port 9093, but it is usually more convenient to access the Secrets using a volume mount in the client Pod.
Follow this procedure when setting up the client and during the renewal period, when the old clients CA certificate is replaced.
The <cluster-name>-cluster-ca-cert Secret will contain more than one CA certificate during CA certificate renewal. Clients must add all of them to their truststores.
Prerequisites
- The Cluster Operator is running.
- A Kafka resource within the OpenShift cluster.
- A Kafka client application outside the OpenShift cluster which will connect using TLS and needs to trust the cluster CA certificate.
Procedure
Extract the cluster CA certificate from the generated <cluster-name>-cluster-ca-cert Secret. Run the following command to extract the certificates:
oc get secret <cluster-name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
The Kafka client has to be configured to trust certificates signed by this CA. For the Java-based Kafka Producer, Consumer, and Streams APIs, you can do this by importing the CA certificates into the JVM’s truststore using the following keytool command:
keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt
To configure the Kafka client, specify the following properties:
-
security.protocol: SSL
when using TLS for encryption (with or without TLS authentication), or security.protocol: SASL_SSL
when using SCRAM-SHA authentication over TLS. -
ssl.truststore.location
: the truststore location where the certificates were imported. -
ssl.truststore.password
: the password for accessing the truststore. This property can be omitted if it is not needed by the truststore.
-
Additional resources
- For the procedure for configuring internal clients to trust the cluster CA, see Section 12.8, “Configuring internal clients to trust the cluster CA”