Chapter 11. Managing TLS certificates
AMQ Streams supports TLS for encrypted communication between Kafka and AMQ Streams components.
Communication is always encrypted between the following components:
- Communication between Kafka and ZooKeeper
- Interbroker communication between Kafka brokers
- Internodal communication between ZooKeeper nodes
- AMQ Streams operator communication with Kafka brokers and ZooKeeper nodes
Communication between Kafka clients and Kafka brokers is encrypted according to how the cluster is configured. For the Kafka and AMQ Streams components, TLS certificates are also used for authentication.
The Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. It also sets up other TLS certificates if you want to enable encryption or TLS authentication between Kafka brokers and clients.
Certificate Authority (CA) certificates are generated by the Cluster Operator to verify the identities of components and clients. If you don’t want to use the CAs generated by the Cluster Operator, you can install your own cluster and client CA certificates.
You can also provide Kafka listener certificates for TLS listeners or external listeners that have TLS encryption enabled. Use Kafka listener certificates to incorporate the security infrastructure you already have in place.
Any certificates you provide are not renewed by the Cluster Operator.
Figure 11.1. Example architecture of the communication secured by TLS
11.1. Certificate Authorities
To support encryption, each AMQ Streams component needs its own private keys and public key certificates. All component certificates are signed by an internal Certificate Authority (CA) called the cluster CA.
Similarly, each Kafka client application connecting to AMQ Streams using TLS client authentication needs to provide private keys and certificates. A second internal CA, named the clients CA, is used to sign certificates for the Kafka clients.
11.1.1. CA certificates
Both the cluster CA and clients CA have a self-signed public key certificate.
Kafka brokers are configured to trust certificates signed by either the cluster CA or clients CA. Components that clients do not need to connect to, such as ZooKeeper, only trust certificates signed by the cluster CA. Unless TLS encryption for external listeners is disabled, client applications must trust certificates signed by the cluster CA. This is also true for client applications that perform mutual TLS authentication.
By default, AMQ Streams automatically generates and renews CA certificates issued by the cluster CA or clients CA. You can configure the management of these CA certificates in the Kafka.spec.clusterCa
and Kafka.spec.clientsCa
objects. Certificates provided by users are not renewed.
You can provide your own CA certificates for the cluster CA or clients CA. For more information, see Section 11.1.2, “Installing your own CA certificates”. If you provide your own certificates, you must manually renew them when needed.
11.1.2. Installing your own CA certificates
This procedure describes how to install your own CA certificates and keys instead of using the CA certificates and private keys generated by the Cluster Operator.
The Cluster Operator automatically generates and renews the following secrets:
CLUSTER-NAME-cluster-ca
- The cluster secret that contains the private key for the cluster CA.
CLUSTER-NAME-cluster-ca-cert
- The cluster secret that contains a cluster CA certificate. The certificate contains a public key to validate the identity of Kafka brokers.
CLUSTER-NAME-clients-ca
- The client secret that contains the private key for the client CA.
CLUSTER-NAME-clients-ca-cert
- The client secret that contains a client CA certificate. The certificate contains a public key to validate the identity of clients accessing the Kafka brokers.
AMQ Streams uses these secrets by default.
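For example, for a Kafka cluster named my-cluster (an assumed name), you might confirm that these secrets exist before replacing them:

oc get secret my-cluster-cluster-ca my-cluster-cluster-ca-cert my-cluster-clients-ca my-cluster-clients-ca-cert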
This procedure describes the steps to replace the secrets to use your own cluster or client CA certificates.
Prerequisites
- The Cluster Operator is running.
- A Kafka cluster is not yet deployed.
- Your own X.509 certificates and keys in PEM format for the cluster CA or clients CA.
If you want to use a cluster or clients CA which is not a Root CA, you have to include the whole chain in the certificate file. The chain should be in the following order:
- The cluster or clients CA
- One or more intermediate CAs
- The root CA
- All CAs in the chain should be configured using the X509v3 Basic Constraints extension. Basic Constraints limit the path length of a certificate chain.
- The OpenSSL TLS management tool for converting certificates.
Before you begin
The Cluster Operator generates the following files for the CLUSTER-NAME-cluster-ca-cert secret:
- ca.crt cluster certificate in PEM format
- ca.p12 cluster certificate in PKCS #12 format
- ca.password to access the PKCS #12 file
Some applications cannot use PEM certificates and support only PKCS #12 certificates. You can also add your own cluster certificate in PKCS #12 format.
If you don’t have a cluster certificate in PKCS #12 format, use the OpenSSL TLS management tool to generate one from your ca.crt
file.
Example certificate generation command
openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:P12-PASSWORD -caname ca.crt
Replace P12-PASSWORD with your own password.
You can do the same for the CLUSTER-NAME-clients-ca-cert
secret, which also contains certificates in PEM and PKCS #12 format by default.
Procedure
Replace the CA certificate generated by the Cluster Operator.
Delete the existing secret.
oc delete secret CA-CERTIFICATE-SECRET
CA-CERTIFICATE-SECRET is the name of the Secret:
- CLUSTER-NAME-cluster-ca-cert for the cluster CA certificate
- CLUSTER-NAME-clients-ca-cert for the clients CA certificate

Replace CLUSTER-NAME with the name of your Kafka cluster.

Ignore any "Not Exists" errors.
Create the new secret.
Client secret creation with a certificate in PEM format only
oc create secret generic CLUSTER-NAME-clients-ca-cert --from-file=ca.crt=ca.crt
Cluster secret creation with certificates in PEM and PKCS #12 format
oc create secret generic CLUSTER-NAME-cluster-ca-cert \
  --from-file=ca.crt=ca.crt \
  --from-file=ca.p12=ca.p12 \
  --from-literal=ca.password=P12-PASSWORD
Replace the private key generated by the Cluster Operator.
Delete the existing secret.
oc delete secret CA-KEY-SECRET
CA-KEY-SECRET is the name of the CA key Secret:
- CLUSTER-NAME-cluster-ca for the cluster CA key
- CLUSTER-NAME-clients-ca for the clients CA key
Create the new secret.
oc create secret generic CA-KEY-SECRET --from-file=ca.key=ca.key
Label the secrets.
oc label secret CA-CERTIFICATE-SECRET strimzi.io/kind=Kafka strimzi.io/cluster=CLUSTER-NAME
oc label secret CA-KEY-SECRET strimzi.io/kind=Kafka strimzi.io/cluster=CLUSTER-NAME
- Label strimzi.io/kind=Kafka identifies the Kafka custom resource.
- Label strimzi.io/cluster=CLUSTER-NAME identifies the Kafka cluster.
Annotate the secrets
oc annotate secret CA-CERTIFICATE-SECRET strimzi.io/ca-cert-generation=CA-CERTIFICATE-GENERATION
oc annotate secret CA-KEY-SECRET strimzi.io/ca-key-generation=CA-KEY-GENERATION
- Annotation strimzi.io/ca-cert-generation=CA-CERTIFICATE-GENERATION defines the generation of a new CA certificate.
- Annotation strimzi.io/ca-key-generation=CA-KEY-GENERATION defines the generation of a new CA key.

If you are replacing CA certificates automatically generated by the Cluster Operator, use the next higher incremental value from the existing annotation and follow the replacing CA keys procedure. If there are no CA certificates automatically generated by the Cluster Operator, start from 0 (zero) as the incremental value (strimzi.io/ca-cert-generation=0) for your own CA certificate. Set a higher incremental value when you renew the certificates.
Create the Kafka resource for your cluster, configuring either the Kafka.spec.clusterCa or the Kafka.spec.clientsCa object to not use generated CAs.

Example fragment of a Kafka resource configuring the cluster CA to use certificates you supply yourself

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  clusterCa:
    generateCertificateAuthority: false
Additional resources
- To renew CA certificates you have previously installed, see Section 11.3.5, “Renewing your own CA certificates”.
- To replace the private keys of CA certificates you have previously installed, see Section 11.3.6, “Replacing private keys used by your own CA certificates”.
- Section 11.7.1, “Providing your own Kafka listener certificates”.
11.2. Secrets
AMQ Streams uses secrets to store private and public key certificates for Kafka clusters, clients, and users. Secrets are used for establishing TLS encrypted connections between Kafka brokers, and between brokers and clients. They are also used for mutual TLS authentication.
Cluster and clients secrets are always pairs: one contains the public key and one contains the private key.
- Cluster secret
- A cluster secret contains the cluster CA to sign Kafka broker certificates. Connecting clients use the certificate to establish a TLS encrypted connection with a Kafka cluster. The certificate verifies broker identity.
- Client secret
- A client secret contains the clients CA for a user to sign its own client certificate. This allows mutual authentication against the Kafka cluster. The broker validates a client’s identity through the certificate.
- User secret
- A user secret contains a private key and certificate. The secret is created and signed by the clients CA when a new user is created. The key and certificate are used to authenticate and authorize the user when accessing the cluster.
11.2.1. Secrets in PEM and PKCS #12 formats
Secrets provide private keys and certificates in PEM and PKCS #12 formats. Use the format that’s suitable for your client. Using private keys and certificates in PEM format means that users have to get them from the secrets, and generate a corresponding truststore or keystore to use in their applications. PKCS #12 storage provides a truststore or keystore that can be used directly.
PKCS #12 defines an archive file format (.p12) for storing cryptography objects in a single file with password protection. You can use PKCS #12 to manage certificates and keys in one place.

Each secret contains fields specific to PKCS #12:
- The .p12 field contains the certificates and keys.
- The .password field is the password that protects the archive.
All keys are 2048 bits in size and are valid by default for 365 days from initial generation. You can change the validity period.
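For example, assuming you have already extracted the ca.p12 archive and ca.password file from a <cluster_name>-cluster-ca-cert secret (as shown later in this chapter), you could inspect the archive and check its validity dates with keytool:

keytool -list -v -keystore ca.p12 -storetype PKCS12 -storepass "$(cat ca.password)"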
11.2.2. Secrets generated by the Cluster Operator
The Cluster Operator generates the following certificates, which are saved as secrets in the OpenShift cluster. AMQ Streams uses these secrets by default.
The cluster CA and clients CA have separate secrets for the private key and public key.
<cluster_name>-cluster-ca
- Contains the private key of the cluster CA. AMQ Streams and Kafka components use the private key to sign server certificates.
<cluster_name>-cluster-ca-cert
- Contains the public key of the cluster CA. Kafka clients use the public key to verify the identity of the Kafka brokers they are connecting to with TLS server authentication.
<cluster_name>-clients-ca
- Contains the private key of the clients CA. Kafka clients use the private key to sign new user certificates for TLS client authentication when connecting to Kafka brokers.
<cluster_name>-clients-ca-cert
- Contains the public key of the client CA. Kafka brokers use the public key to verify the identity of clients accessing the Kafka brokers when TLS client authentication is used.
Secrets for communication between AMQ Streams components contain a private key and a public key certificate signed by the cluster CA.
<cluster_name>-kafka-brokers
- Contains the private and public keys for Kafka brokers.
<cluster_name>-zookeeper-nodes
- Contains the private and public keys for ZooKeeper nodes.
<cluster_name>-cluster-operator-certs
- Contains the private and public keys for encrypting communication between the Cluster Operator and Kafka or ZooKeeper.
<cluster_name>-entity-topic-operator-certs
- Contains the private and public keys for encrypting communication between the Topic Operator and Kafka or ZooKeeper.
<cluster_name>-entity-user-operator-certs
- Contains the private and public keys for encrypting communication between the User Operator and Kafka or ZooKeeper.
<cluster_name>-cruise-control-certs
- Contains the private and public keys for encrypting communication between Cruise Control and Kafka or ZooKeeper.
<cluster_name>-kafka-exporter-certs
- Contains the private and public keys for encrypting communication between Kafka Exporter and Kafka or ZooKeeper.
You can provide your own server certificates and private keys to connect to Kafka brokers using Kafka listener certificates rather than certificates signed by the cluster CA or clients CA.
11.2.3. Cluster CA secrets
Cluster CA secrets are managed by the Cluster Operator in a Kafka cluster.
Only the <cluster_name>-cluster-ca-cert
secret is required by clients. All other cluster secrets are accessed by AMQ Streams components. You can enforce this using OpenShift role-based access controls, if necessary.
The CA certificates in <cluster_name>-cluster-ca-cert
must be trusted by Kafka client applications so that they validate the Kafka broker certificates when connecting to Kafka brokers over TLS.
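For example, a Java-based client that manages its own truststore might import the extracted ca.crt with keytool (a sketch; the truststore file name and password are placeholders, and the extraction of ca.crt is described later in this chapter):

keytool -importcert -alias cluster-ca -file ca.crt \
  -keystore client-truststore.p12 -storetype PKCS12 -storepass changeit -noprompt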
Fields in the <cluster_name>-cluster-ca secret

Field | Description
---|---
ca.key | The current private key for the cluster CA.

Fields in the <cluster_name>-cluster-ca-cert secret

Field | Description
---|---
ca.p12 | PKCS #12 archive file for storing certificates and keys.
ca.password | Password for protecting the PKCS #12 archive file.
ca.crt | The current certificate for the cluster CA.
Fields in the <cluster_name>-kafka-brokers secret

Field | Description
---|---
<cluster_name>-kafka-<num>.p12 | PKCS #12 archive file for storing certificates and keys.
<cluster_name>-kafka-<num>.password | Password for protecting the PKCS #12 archive file.
<cluster_name>-kafka-<num>.crt | Certificate for a Kafka broker pod <num>. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
<cluster_name>-kafka-<num>.key | Private key for a Kafka broker pod <num>.

Fields in the <cluster_name>-zookeeper-nodes secret

Field | Description
---|---
<cluster_name>-zookeeper-<num>.p12 | PKCS #12 archive file for storing certificates and keys.
<cluster_name>-zookeeper-<num>.password | Password for protecting the PKCS #12 archive file.
<cluster_name>-zookeeper-<num>.crt | Certificate for ZooKeeper node <num>. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
<cluster_name>-zookeeper-<num>.key | Private key for ZooKeeper pod <num>.
Fields in the <cluster_name>-cluster-operator-certs secret

Field | Description
---|---
cluster-operator.p12 | PKCS #12 archive file for storing certificates and keys.
cluster-operator.password | Password for protecting the PKCS #12 archive file.
cluster-operator.crt | Certificate for TLS communication between the Cluster Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
cluster-operator.key | Private key for TLS communication between the Cluster Operator and Kafka or ZooKeeper.

Fields in the <cluster_name>-entity-topic-operator-certs secret

Field | Description
---|---
entity-operator.p12 | PKCS #12 archive file for storing certificates and keys.
entity-operator.password | Password for protecting the PKCS #12 archive file.
entity-operator.crt | Certificate for TLS communication between the Topic Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
entity-operator.key | Private key for TLS communication between the Topic Operator and Kafka or ZooKeeper.

Fields in the <cluster_name>-entity-user-operator-certs secret

Field | Description
---|---
entity-operator.p12 | PKCS #12 archive file for storing certificates and keys.
entity-operator.password | Password for protecting the PKCS #12 archive file.
entity-operator.crt | Certificate for TLS communication between the User Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
entity-operator.key | Private key for TLS communication between the User Operator and Kafka or ZooKeeper.

Fields in the <cluster_name>-cruise-control-certs secret

Field | Description
---|---
cruise-control.p12 | PKCS #12 archive file for storing certificates and keys.
cruise-control.password | Password for protecting the PKCS #12 archive file.
cruise-control.crt | Certificate for TLS communication between Cruise Control and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
cruise-control.key | Private key for TLS communication between Cruise Control and Kafka or ZooKeeper.

Fields in the <cluster_name>-kafka-exporter-certs secret

Field | Description
---|---
kafka-exporter.p12 | PKCS #12 archive file for storing certificates and keys.
kafka-exporter.password | Password for protecting the PKCS #12 archive file.
kafka-exporter.crt | Certificate for TLS communication between Kafka Exporter and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name>-cluster-ca.
kafka-exporter.key | Private key for TLS communication between Kafka Exporter and Kafka or ZooKeeper.
11.2.4. Client CA secrets
Clients CA secrets are managed by the Cluster Operator in a Kafka cluster.
The certificates in <cluster_name>-clients-ca-cert
are those which the Kafka brokers trust.
The <cluster_name>-clients-ca
secret is used to sign the certificates of client applications. This secret must be accessible to the AMQ Streams components and for administrative access if you are intending to issue application certificates without using the User Operator. You can enforce this using OpenShift role-based access controls, if necessary.
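For example, if you issue application certificates yourself, you might extract the clients CA certificate and key and use OpenSSL to sign a client CSR (a sketch; the cluster name my-cluster and the file names are placeholders, and client.csr is assumed to exist already):

# Extract the clients CA certificate and private key (cluster name is an assumption)
oc get secret my-cluster-clients-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > clients-ca.crt
oc get secret my-cluster-clients-ca -o jsonpath='{.data.ca\.key}' | base64 -d > clients-ca.key

# Sign an existing client CSR with the clients CA
openssl x509 -req -in client.csr -CA clients-ca.crt -CAkey clients-ca.key \
  -CAcreateserial -out client.crt -days 365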
Fields in the <cluster_name>-clients-ca secret

Field | Description
---|---
ca.key | The current private key for the clients CA.

Fields in the <cluster_name>-clients-ca-cert secret

Field | Description
---|---
ca.p12 | PKCS #12 archive file for storing certificates and keys.
ca.password | Password for protecting the PKCS #12 archive file.
ca.crt | The current certificate for the clients CA.
11.2.5. User secrets
User secrets are managed by the User Operator.
When a user is created using the User Operator, a secret is generated using the name of the user.
Secret name | Field within secret | Description
---|---|---
<user_name> | user.p12 | PKCS #12 archive file for storing certificates and keys.
 | user.password | Password for protecting the PKCS #12 archive file.
 | user.crt | Certificate for the user, signed by the clients CA.
 | user.key | Private key for the user.
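For example, the credentials for a user named my-user (an assumed name) could be extracted from its secret for use in a client keystore:

oc get secret my-user -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
oc get secret my-user -o jsonpath='{.data.user\.password}' | base64 -d > user.password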
11.2.6. Adding labels and annotations to cluster CA secrets
By configuring the clusterCaCert
template property in the Kafka
custom resource, you can add custom labels and annotations to the Cluster CA secrets created by the Cluster Operator. Labels and annotations are useful for identifying objects and adding contextual information. You configure template properties in AMQ Streams custom resources.
Example template customization to add labels and annotations to secrets
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      clusterCaCert:
        metadata:
          labels:
            label1: value1
            label2: value2
          annotations:
            annotation1: value1
            annotation2: value2
    # ...
For more information on configuring template properties, see Section 2.6, “Customizing OpenShift resources”.
11.2.7. Disabling ownerReference in the CA secrets
By default, the Cluster and Client CA secrets are created with an ownerReference
property that is set to the Kafka
custom resource. This means that, when the Kafka
custom resource is deleted, the CA secrets are also deleted (garbage collected) by OpenShift.
If you want to reuse the CA for a new cluster, you can disable the ownerReference
by setting the generateSecretOwnerReference
property for the Cluster and Client CA secrets to false
in the Kafka
configuration. When the ownerReference
is disabled, CA secrets are not deleted by OpenShift when the corresponding Kafka
custom resource is deleted.
Example Kafka configuration with disabled ownerReference
for Cluster and Client CAs
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
# ...
spec:
  # ...
  clusterCa:
    generateSecretOwnerReference: false
  clientsCa:
    generateSecretOwnerReference: false
  # ...
11.3. Certificate renewal and validity periods
Cluster CA and clients CA certificates are only valid for a limited time period, known as the validity period. This is usually defined as a number of days since the certificate was generated.
For CA certificates automatically created by the Cluster Operator, you can configure the validity period of:
- Cluster CA certificates in Kafka.spec.clusterCa.validityDays
- Client CA certificates in Kafka.spec.clientsCa.validityDays
The default validity period for both certificates is 365 days. Manually-installed CA certificates should have their own validity periods defined.
When a CA certificate expires, components and clients that still trust that certificate will not accept TLS connections from peers whose certificates were signed by the CA private key. The components and clients need to trust the new CA certificate instead.
To allow the renewal of CA certificates without a loss of service, the Cluster Operator will initiate certificate renewal before the old CA certificates expire.
You can configure the renewal period of the certificates created by the Cluster Operator:
- Cluster CA certificates in Kafka.spec.clusterCa.renewalDays
- Client CA certificates in Kafka.spec.clientsCa.renewalDays
The default renewal period for both certificates is 30 days.
The renewal period is measured backwards, from the expiry date of the current certificate.
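For example, with the default validityDays of 365 and renewalDays of 30, a certificate enters its renewal period 335 days after generation, and the Cluster Operator renews it at some point during the final 30 days before it expires.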
Validity period against renewal period
Not Before                                     Not After
    |                                              |
    |<--------------- validityDays --------------->|
                          <--- renewalDays --->|
To make a change to the validity and renewal periods after creating the Kafka cluster, you configure and apply the Kafka
custom resource, and manually renew the CA certificates. If you do not manually renew the certificates, the new periods will be used the next time the certificate is renewed automatically.
Example Kafka configuration for certificate validity and renewal periods
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
# ...
spec:
  # ...
  clusterCa:
    renewalDays: 30
    validityDays: 365
    generateCertificateAuthority: true
  clientsCa:
    renewalDays: 30
    validityDays: 365
    generateCertificateAuthority: true
  # ...
The behavior of the Cluster Operator during the renewal period depends on the setting of the generateCertificateAuthority certificate generation property for the cluster CA and clients CA.

true
- If the property is set to true, a CA certificate is generated automatically by the Cluster Operator, and renewed automatically within the renewal period.
false
- If the property is set to false, a CA certificate is not generated by the Cluster Operator. Use this option if you are installing your own certificates.
11.3.1. Renewal process with automatically generated CA certificates
The Cluster Operator performs the following process to renew CA certificates:
- Generate a new CA certificate, but retain the existing key. The new certificate replaces the old one with the name ca.crt within the corresponding Secret.
- Generate new client certificates (for ZooKeeper nodes, Kafka brokers, and the Entity Operator). This is not strictly necessary because the signing key has not changed, but it keeps the validity period of the client certificate in sync with the CA certificate.
- Restart ZooKeeper nodes so that they will trust the new CA certificate and use the new client certificates.
- Restart Kafka brokers so that they will trust the new CA certificate and use the new client certificates.
- Restart the Topic and User Operators so that they will trust the new CA certificate and use the new client certificates.
11.3.2. Client certificate renewal
The Cluster Operator is not aware of the client applications using the Kafka cluster.
When connecting to the cluster, and to ensure they operate correctly, client applications must:
- Trust the cluster CA certificate published in the <cluster>-cluster-ca-cert Secret.
- Use the credentials published in their <user-name> Secret to connect to the cluster.
The User Secret provides credentials in PEM and PKCS #12 format, or it can provide a password when using SCRAM-SHA authentication. The User Operator creates the user credentials when a user is created.
You must ensure clients continue to work after certificate renewal. The renewal process depends on how the clients are configured.
If you are provisioning client certificates and keys manually, you must generate new client certificates and ensure the new certificates are used by clients within the renewal period. Failure to do this by the end of the renewal period could result in client applications being unable to connect to the cluster.
For workloads running inside the same OpenShift cluster and namespace, Secrets can be mounted as a volume so the client Pods construct their keystores and truststores from the current state of the Secrets. For more details on this procedure, see Configuring internal clients to trust the cluster CA.
11.3.3. Manually renewing the CA certificates generated by the Cluster Operator
Cluster and clients CA certificates generated by the Cluster Operator auto-renew at the start of their respective certificate renewal periods. However, you can use the strimzi.io/force-renew
annotation to manually renew one or both of these certificates before the certificate renewal period starts. You might do this for security reasons, or if you have changed the renewal or validity periods for the certificates.
A renewed certificate uses the same private key as the old certificate.
If you are using your own CA certificates, the force-renew
annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates.
Prerequisites
- The Cluster Operator is running.
- A Kafka cluster in which CA certificates and private keys are installed.
Procedure
Apply the strimzi.io/force-renew annotation to the Secret that contains the CA certificate that you want to renew.

Table 11.13. Annotation for the Secret that forces renewal of certificates

Certificate | Secret | Annotate command
---|---|---
Cluster CA | KAFKA-CLUSTER-NAME-cluster-ca-cert | oc annotate secret KAFKA-CLUSTER-NAME-cluster-ca-cert strimzi.io/force-renew=true
Clients CA | KAFKA-CLUSTER-NAME-clients-ca-cert | oc annotate secret KAFKA-CLUSTER-NAME-clients-ca-cert strimzi.io/force-renew=true

At the next reconciliation the Cluster Operator will generate a new CA certificate for the Secret that you annotated. If maintenance time windows are configured, the Cluster Operator will generate the new CA certificate at the first reconciliation within the next maintenance time window.

Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator.
Check the period the CA certificate is valid.

For example, using an openssl command:

oc get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data.CA-CERTIFICATE}' | base64 -d | openssl x509 -subject -issuer -startdate -enddate -noout

CA-CERTIFICATE-SECRET is the name of the Secret, which is KAFKA-CLUSTER-NAME-cluster-ca-cert for the cluster CA certificate and KAFKA-CLUSTER-NAME-clients-ca-cert for the clients CA certificate.

CA-CERTIFICATE is the name of the CA certificate, such as jsonpath={.data.ca\.crt}.

The command returns a notBefore and notAfter date, which is the validity period for the CA certificate.

For example, for a cluster CA certificate:

subject=O = io.strimzi, CN = cluster-ca v0
issuer=O = io.strimzi, CN = cluster-ca v0
notBefore=Jun 30 09:43:54 2020 GMT
notAfter=Jun 30 09:43:54 2021 GMT
Delete old certificates from the Secret.
When components are using the new certificates, older certificates might still be active. Delete the old certificates to remove any potential security risk.
11.3.4. Replacing private keys used by the CA certificates generated by the Cluster Operator
You can replace the private keys used by the cluster CA and clients CA certificates generated by the Cluster Operator. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key.
If you are using your own CA certificates, the force-replace
annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates.
Prerequisites
- The Cluster Operator is running.
- A Kafka cluster in which CA certificates and private keys are installed.
Procedure
Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to renew.

Table 11.14. Commands for replacing private keys

Private key for | Secret | Annotate command
---|---|---
Cluster CA | CLUSTER-NAME-cluster-ca | oc annotate secret CLUSTER-NAME-cluster-ca strimzi.io/force-replace=true
Clients CA | CLUSTER-NAME-clients-ca | oc annotate secret CLUSTER-NAME-clients-ca strimzi.io/force-replace=true
At the next reconciliation the Cluster Operator will:
- Generate a new private key for the Secret that you annotated
- Generate a new CA certificate
If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the next maintenance time window.
Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator.
11.3.5. Renewing your own CA certificates
This procedure describes how to renew CA certificates that you are using instead of the certificates generated by the Cluster Operator.
If you are not changing the corresponding CA keys, perform the steps in this procedure. Otherwise, perform the steps to replace private keys used by your own CA certificates.
If you are using your own certificates, the Cluster Operator will not renew them automatically. Therefore, it is important that you follow this procedure during the renewal period of the certificate in order to replace CA certificates that will soon expire.
The procedure describes the renewal of CA certificates in PEM format.
Prerequisites
- The Cluster Operator is running.
- Your own CA certificates and private keys are installed.
- You have new cluster or clients X.509 certificates in PEM format.
Procedure
Update the Secret for the CA certificate.

Edit the existing secret to add the new CA certificate and update the certificate generation annotation value.

oc edit secret <ca_certificate_secret_name>

<ca_certificate_secret_name> is the name of the Secret, which is <kafka_cluster_name>-cluster-ca-cert for the cluster CA certificate and <kafka_cluster_name>-clients-ca-cert for the clients CA certificate.

The following example shows a secret for a cluster CA certificate that's associated with a Kafka cluster named my-cluster.

Example secret configuration for a cluster CA certificate

apiVersion: v1
kind: Secret
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F...
metadata:
  annotations:
    strimzi.io/ca-cert-generation: "0"
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca-cert
  #...
type: Opaque
Encode your new CA certificate into base64.

cat <path_to_new_certificate> | base64

Update the CA certificate.

Copy the base64-encoded CA certificate from the previous step as the value for the ca.crt property under data.

Increase the value of the CA certificate generation annotation.

Update the strimzi.io/ca-cert-generation annotation with a higher incremental value. For example, change strimzi.io/ca-cert-generation=0 to strimzi.io/ca-cert-generation=1. If the Secret is missing the annotation, the value is treated as 0, so add the annotation with a value of 1.

When AMQ Streams generates certificates, the certificate generation annotation is automatically incremented by the Cluster Operator. For manual renewal of your own CA certificates, set the annotation with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates. The strimzi.io/ca-cert-generation annotation has to be incremented on each CA certificate renewal.

Save the secret with the new CA certificate and certificate generation annotation value.
Example secret configuration updated with a new CA certificate
apiVersion: v1
kind: Secret
data:
  ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F...
metadata:
  annotations:
    strimzi.io/ca-cert-generation: "1"
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca-cert
  #...
type: Opaque
On the next reconciliation, the Cluster Operator performs a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate.
If maintenance time windows are configured, the Cluster Operator will roll the pods at the first reconciliation within the next maintenance time window.
11.3.6. Replacing private keys used by your own CA certificates
This procedure describes how to renew CA certificates and private keys that you are using instead of the certificates and keys generated by the Cluster Operator.
Perform the steps in this procedure when you are also changing the corresponding CA keys. Otherwise, perform the steps to renew your own CA certificates.
If you are using your own certificates, the Cluster Operator will not renew them automatically. Therefore, it is important that you follow this procedure during the renewal period of the certificate in order to replace CA certificates that will soon expire.
The procedure describes the renewal of CA certificates in PEM format.
Before going through the following steps, make sure that the CN (Common Name) of the new CA certificate is different from the current one. For example, when the Cluster Operator renews certificates automatically it adds a v<version_number> suffix to identify a version. Do the same with your own CA certificate by adding a different suffix on each renewal. By using a different key to generate a new CA certificate, you retain the current CA certificate stored in the Secret.
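For illustration, a new CA key and a self-signed CA certificate with a versioned CN might be created with OpenSSL as follows (a sketch; the file names and CN are placeholders):

# Generate a new CA private key and a self-signed CA certificate with a versioned CN
# (-addext requires OpenSSL 1.1.1 or later)
openssl genrsa -out new-ca.key 2048
openssl req -x509 -new -key new-ca.key -subj "/CN=my-own-cluster-ca v1" \
  -addext "basicConstraints=critical,CA:TRUE" -days 365 -out new-ca.crt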
Prerequisites
- The Cluster Operator is running.
- Your own CA certificates and private keys are installed.
- You have new cluster or clients X.509 certificates and keys in PEM format.
Procedure
Pause the reconciliation of the Kafka custom resource.

Annotate the custom resource in OpenShift, setting the pause-reconciliation annotation to true:

oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation="true"

For example, for a Kafka custom resource named my-cluster:

oc annotate Kafka my-cluster strimzi.io/pause-reconciliation="true"

Check that the status conditions of the custom resource show a change to ReconciliationPaused:

oc describe Kafka <name_of_custom_resource>

The type condition changes to ReconciliationPaused at the lastTransitionTime.
Update the Secret for the CA certificate.

Edit the existing secret to add the new CA certificate and update the certificate generation annotation value.

oc edit secret <ca_certificate_secret_name>

<ca_certificate_secret_name> is the name of the Secret, which is <kafka_cluster_name>-cluster-ca-cert for the cluster CA certificate and <kafka_cluster_name>-clients-ca-cert for the clients CA certificate.

The following example shows a secret for a cluster CA certificate that's associated with a Kafka cluster named my-cluster.

Example secret configuration for a cluster CA certificate

apiVersion: v1
kind: Secret
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F...
metadata:
  annotations:
    strimzi.io/ca-cert-generation: "0"
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca-cert
  #...
type: Opaque
Rename the current CA certificate to retain it.

Rename the current ca.crt property under data as ca-<date>.crt, where <date> is the certificate expiry date in the format YEAR-MONTH-DAYTHOUR-MINUTE-SECONDZ; for example, ca-2022-01-26T17-32-00Z.crt. Leave the value for the property as it is to retain the current CA certificate.

Encode your new CA certificate into base64.
cat <path_to_new_certificate> | base64
Update the CA certificate.

Create a new ca.crt property under data and copy the base64-encoded CA certificate from the previous step as the value for the ca.crt property.

Increase the value of the CA certificate generation annotation.

Update the strimzi.io/ca-cert-generation annotation with a higher incremental value. For example, change strimzi.io/ca-cert-generation=0 to strimzi.io/ca-cert-generation=1. If the Secret is missing the annotation, the value is treated as 0, so add the annotation with a value of 1.

When AMQ Streams generates certificates, the certificate generation annotation is automatically incremented by the Cluster Operator. For manual renewal of your own CA certificates, set the annotation with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates. The strimzi.io/ca-cert-generation annotation has to be incremented on each CA certificate renewal.

Save the secret with the new CA certificate and certificate generation annotation value.
Example secret configuration updated with a new CA certificate
apiVersion: v1
kind: Secret
data:
  ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F...
  ca-2022-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F...
metadata:
  annotations:
    strimzi.io/ca-cert-generation: "1"
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca-cert
  #...
type: Opaque
Update the Secret for the CA key used to sign your new CA certificate.

Edit the existing secret to add the new CA key and update the key generation annotation value.

oc edit secret <ca_key_name>

<ca_key_name> is the name of the CA key Secret, which is <kafka_cluster_name>-cluster-ca for the cluster CA key and <kafka_cluster_name>-clients-ca for the clients CA key.

The following example shows a secret for a cluster CA key that's associated with a Kafka cluster named my-cluster.

Example secret configuration for a cluster CA key

apiVersion: v1
kind: Secret
data:
  ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS...
metadata:
  annotations:
    strimzi.io/ca-key-generation: "0"
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca
  #...
type: Opaque
Encode the CA key into base64.
cat <path_to_new_key> | base64
Update the CA key.

Copy the base64-encoded CA key from the previous step as the value for the ca.key property under data.

Increase the value of the CA key generation annotation.

Update the strimzi.io/ca-key-generation annotation with a higher incremental value. For example, change strimzi.io/ca-key-generation=0 to strimzi.io/ca-key-generation=1. If the Secret is missing the annotation, it is treated as 0, so add the annotation with a value of 1.

When AMQ Streams generates certificates, the key generation annotation is automatically incremented by the Cluster Operator. For manual renewal of your own CA certificates together with a new CA key, set the annotation with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates and keys. The strimzi.io/ca-key-generation annotation has to be incremented on each CA certificate renewal.
Save the secret with the new CA key and key generation annotation value.
Example secret configuration updated with a new CA key
apiVersion: v1
kind: Secret
data:
  ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F...
metadata:
  annotations:
    strimzi.io/ca-key-generation: "1"
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca
  #...
type: Opaque
Resume from the pause.

To resume the Kafka custom resource reconciliation, set the pause-reconciliation annotation to false.

oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation="false"

You can also do the same by removing the pause-reconciliation annotation.

oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation-
On the next reconciliation, the Cluster Operator performs a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate. When the rolling update is complete, the Cluster Operator will start a new one to generate new server certificates signed by the new CA key.
If maintenance time windows are configured, the Cluster Operator will roll the pods at the first reconciliation within the next maintenance time window.
11.4. TLS connections
11.4.1. ZooKeeper communication
Communication between the ZooKeeper nodes on all ports, as well as between clients and ZooKeeper, is encrypted using TLS.
Communication between Kafka brokers and ZooKeeper nodes is also encrypted.
11.4.2. Kafka inter-broker communication
Communication between Kafka brokers is always encrypted using TLS.
Unless the ControlPlaneListener
feature gate is enabled, all inter-broker communication goes through an internal listener on port 9091. If you enable the feature gate, traffic from the control plane goes through an internal control plane listener on port 9090. Traffic from the data plane continues to use the existing internal listener on port 9091.
These internal listeners are not available to Kafka clients.
11.4.3. Topic and User Operators
All Operators use encryption for communication with both Kafka and ZooKeeper. In Topic and User Operators, a TLS sidecar is used when communicating with ZooKeeper.
11.4.4. Cruise Control
Cruise Control uses encryption for communication with both Kafka and ZooKeeper. A TLS sidecar is used when communicating with ZooKeeper.
11.4.5. Kafka Client connections
Encrypted or unencrypted communication between Kafka brokers and clients is configured using the tls property for spec.kafka.listeners.
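For example, a fragment of a Kafka resource might leave one internal listener unencrypted and enable TLS on another (a sketch consistent with the listener examples later in this chapter):

spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true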
11.5. Configuring internal clients to trust the cluster CA
This procedure describes how to configure a Kafka client that resides inside the OpenShift cluster — connecting to a TLS listener — to trust the cluster CA certificate.
The easiest way to achieve this for an internal client is to use a volume mount to access the Secrets
containing the necessary certificates and keys.
Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs.
Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 (.p12) or PEM (.crt).
The steps describe how to mount the Cluster Secret that verifies the identity of the Kafka cluster to the client pod.
Prerequisites
- The Cluster Operator must be running.
- There needs to be a Kafka resource within the OpenShift cluster.
- You need a Kafka client application inside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate.
- The client application must be running in the same namespace as the Kafka resource.
Using PKCS #12 format (.p12)
Mount the cluster Secret as a volume when defining the client pod.
For example:
kind: Pod
apiVersion: v1
metadata:
  name: client-pod
spec:
  containers:
    - name: client-name
      image: client-name
      volumeMounts:
        - name: secret-volume
          mountPath: /data/p12
      env:
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: my-password
  volumes:
    - name: secret-volume
      secret:
        secretName: my-cluster-cluster-ca-cert
Here we’re mounting:
- The PKCS #12 file into an exact path, which can be configured
- The password into an environment variable, where it can be used for Java configuration
Configure the Kafka client with the following properties, as shown in the example after this list:

A security protocol option:
- security.protocol: SSL when using TLS for encryption (with or without TLS authentication).
- security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS.

- ssl.truststore.location with the truststore location where the certificates were imported.
- ssl.truststore.password with the password for accessing the truststore.
- ssl.truststore.type=PKCS12 to identify the truststore type.
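As an illustration, with the /data/p12 mount path and the SECRET_PASSWORD environment variable from the Pod definition above, the resulting client configuration might look like the following sketch (your application must resolve the password value, for example by reading the environment variable):

security.protocol=SSL
ssl.truststore.location=/data/p12/ca.p12
ssl.truststore.password=<value of SECRET_PASSWORD>
ssl.truststore.type=PKCS12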
Using PEM format (.crt)
Mount the cluster Secret as a volume when defining the client pod.
For example:
kind: Pod
apiVersion: v1
metadata:
  name: client-pod
spec:
  containers:
    - name: client-name
      image: client-name
      volumeMounts:
        - name: secret-volume
          mountPath: /data/crt
  volumes:
    - name: secret-volume
      secret:
        secretName: my-cluster-cluster-ca-cert
- Use the certificate with clients that use certificates in X.509 format.
11.6. Configuring external clients to trust the cluster CA
This procedure describes how to configure a Kafka client that resides outside the OpenShift cluster – connecting to an external
listener – to trust the cluster CA certificate. Follow this procedure when setting up the client and during the renewal period, when the old clients CA certificate is replaced.
Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs.
Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 (.p12) or PEM (.crt).
The steps describe how to obtain the certificate from the Cluster Secret that verifies the identity of the Kafka cluster.
The <cluster-name>-cluster-ca-cert
Secret
will contain more than one CA certificate during the CA certificate renewal period. Clients must add all of them to their truststores.
Prerequisites
- The Cluster Operator must be running.
- There needs to be a Kafka resource within the OpenShift cluster.
- You need a Kafka client application outside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate.
Using PKCS #12 format (.p12)
Extract the cluster CA certificate and password from the CLUSTER-NAME-cluster-ca-cert Secret of the Kafka cluster.

oc get secret CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12
oc get secret CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password
Replace CLUSTER-NAME with the name of the Kafka cluster.
Configure the Kafka client with the following properties, as shown in the example after this list:

A security protocol option:
- security.protocol: SSL when using TLS for encryption (with or without TLS authentication).
- security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS.

- ssl.truststore.location with the truststore location where the certificates were imported.
- ssl.truststore.password with the password for accessing the truststore. This property can be omitted if it is not needed by the truststore.
- ssl.truststore.type=PKCS12 to identify the truststore type.
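As an illustration, using the ca.p12 and ca.password files extracted above, the client configuration might look like the following sketch (the truststore path is a placeholder):

security.protocol=SSL
ssl.truststore.location=/path/to/ca.p12
ssl.truststore.password=<contents of ca.password>
ssl.truststore.type=PKCS12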
Using PEM format (.crt)
Extract the cluster CA certificate from the CLUSTER-NAME-cluster-ca-cert Secret of the Kafka cluster.

oc get secret CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
- Use the certificate with clients that use certificates in X.509 format.
11.7. Kafka listener certificates
You can provide your own server certificates and private keys for any listener with TLS encryption enabled. These user-provided certificates are called Kafka listener certificates.
Providing Kafka listener certificates allows you to leverage existing security infrastructure, such as your organization’s private CA or a public CA. Kafka clients will need to trust the CA which was used to sign the listener certificate.
You must manually renew Kafka listener certificates when needed.
11.7.1. Providing your own Kafka listener certificates
This procedure shows how to configure a listener to use your own private key and server certificate, called a Kafka listener certificate.
Your client applications should use the CA public key as a trusted certificate in order to verify the identity of the Kafka broker.
Prerequisites
- An OpenShift cluster.
- The Cluster Operator is running.
- For each listener, a compatible server certificate signed by an external CA.
- Provide an X.509 certificate in PEM format.
- Specify the correct Subject Alternative Names (SANs) for each listener. For more information, see Section 11.7.2, “Alternative subjects in server certificates for Kafka listeners”.
- You can provide a certificate that includes the whole CA chain in the certificate file.
Procedure
Create a Secret containing your private key and server certificate:

oc create secret generic my-secret --from-file=my-listener-key.key --from-file=my-listener-certificate.crt

Edit the Kafka resource for your cluster. Configure the listener to use your Secret, certificate file, and private key file in the configuration.brokerCertChainAndKey property.

Example configuration for a loadbalancer external listener with TLS encryption enabled

# ...
listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      brokerCertChainAndKey:
        secretName: my-secret
        certificate: my-listener-certificate.crt
        key: my-listener-key.key
# ...
Example configuration for a TLS listener
# ...
listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls
    configuration:
      brokerCertChainAndKey:
        secretName: my-secret
        certificate: my-listener-certificate.crt
        key: my-listener-key.key
# ...
Apply the new configuration to create or update the resource:
oc apply -f kafka.yaml
The Cluster Operator starts a rolling update of the Kafka cluster, which updates the configuration of the listeners.
Note: A rolling update is also started if you update a Kafka listener certificate in a Secret that is already used by a TLS or external listener.
11.7.2. Alternative subjects in server certificates for Kafka listeners
In order to use TLS hostname verification with your own Kafka listener certificates, you must use the correct Subject Alternative Names (SANs) for each listener. The certificate SANs must specify hostnames for:
- All of the Kafka brokers in your cluster
- The Kafka cluster bootstrap service
You can use wildcard certificates if they are supported by your CA.
11.7.2.1. TLS listener SAN examples
Use the following examples to help you specify hostnames of the SANs in your certificates for TLS listeners.
Wildcards example
// Kafka brokers
*.<cluster-name>-kafka-brokers
*.<cluster-name>-kafka-brokers.<namespace>.svc

// Bootstrap service
<cluster-name>-kafka-bootstrap
<cluster-name>-kafka-bootstrap.<namespace>.svc
Non-wildcards example
// Kafka brokers
<cluster-name>-kafka-0.<cluster-name>-kafka-brokers
<cluster-name>-kafka-0.<cluster-name>-kafka-brokers.<namespace>.svc
<cluster-name>-kafka-1.<cluster-name>-kafka-brokers
<cluster-name>-kafka-1.<cluster-name>-kafka-brokers.<namespace>.svc
# ...

// Bootstrap service
<cluster-name>-kafka-bootstrap
<cluster-name>-kafka-bootstrap.<namespace>.svc
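As an illustration, a private key and a CSR containing the wildcard SANs above might be generated with OpenSSL before being signed by your CA (a sketch; the cluster name my-cluster, namespace my-namespace, and file names are placeholders):

# Generate a key and a CSR with the bootstrap and broker SANs (-addext requires OpenSSL 1.1.1 or later)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout my-listener-key.key -out my-listener-certificate.csr \
  -subj "/CN=my-cluster-kafka" \
  -addext "subjectAltName=DNS:my-cluster-kafka-bootstrap,DNS:my-cluster-kafka-bootstrap.my-namespace.svc,DNS:*.my-cluster-kafka-brokers,DNS:*.my-cluster-kafka-brokers.my-namespace.svc"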
11.7.2.2. External listener SAN examples
For external listeners which have TLS encryption enabled, the hostnames you need to specify in certificates depend on the external listener type.
External listener type | In the SANs, specify…
---|---
route | Addresses of all Kafka broker Routes and the address of the bootstrap Route. You can use a matching wildcard name.
loadbalancer | Addresses of all Kafka broker loadbalancers and the bootstrap loadbalancer address. You can use a matching wildcard name.
nodeport | Addresses of all OpenShift worker nodes that the Kafka broker pods might be scheduled to. You can use a matching wildcard name.