Chapter 3. Configuring the Record Encryption filter
This section describes at a high level how to configure the Record Encryption filter using a previously prepared KMS. Subsections provide in-depth details.
Prerequisites
- An instance of Streams for Apache Kafka Proxy.
  For information on deploying Streams for Apache Kafka Proxy, see the Deploying and Managing Streams for Apache Kafka Proxy on OpenShift guide.
- A KMS has been prepared for use by the filter, with KEKs to encrypt records set up for topics.
Procedure
Configure the plugin for your supported KMS, as required.
Create a filter configuration that references the configured KMS plugins.
Apply the filter configuration:
- In an OpenShift deployment, using a KafkaProtocolFilter resource. See Section 3.5, “Example KafkaProtocolFilter resource”.
3.1. HashiCorp Vault plugin configuration
For HashiCorp Vault, the KMS configuration used by the filter looks like this. Use the Vault Token and Vault Transit Engine URL values from the KMS setup.
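The example block appears to have been lost in extraction. A minimal sketch of a Vault KMS configuration, assuming the plugin's vaultTransitEngineUrl and vaultToken options; the URL and token file path shown are placeholders to replace with values from your KMS setup:

```yaml
kms: VaultKmsService                                              # name of the KMS provider
kmsConfig:
  vaultTransitEngineUrl: https://vault.example.com:8200/v1/transit  # Vault Transit Engine URL
  vaultToken:
    passwordFile: /opt/vault/token                                # file containing the Vault token
```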
A TLS client certificate can be specified using a PKCS#12 or JKS key store file.
Example TLS client certificate configuration using a PKCS#12 key store file
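The example body is missing here. A sketch of what a client-certificate key store configuration might look like, assuming a tls.key block that mirrors the trust block shown below; the paths are placeholders:

```yaml
tls:
  key:
    storeFile: /opt/cert/client.p12        # PKCS#12 key store holding the client certificate
    storeType: PKCS12
    storePassword:
      passwordFile: /opt/cert/store.password  # file containing the key store password
```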
A set of trust anchors for the TLS client can be specified using a PKCS#12 or JKS key store file.
Example TLS client trust anchor configuration using a PKCS#12 key store file
trust:
  storeFile: /opt/cert/server.p12
  storeType: PKCS12
  storePassword:
    passwordFile: /opt/cert/store.password
3.2. AWS KMS plugin configuration
For AWS KMS, the configuration for authenticating with AWS KMS services looks like this:
Configuration for authenticating with a long-term IAM identity
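The example block is missing from this extraction. A sketch of the long-term IAM credentials configuration, with numbered comments matching the callout descriptions that follow; the field names under longTermCredentials and the file paths are best-effort assumptions, so check the plugin reference for the exact schema:

```yaml
kms: AwsKmsService                                    # 1: name of the KMS provider
kmsConfig:
  endpointUrl: https://kms.us-east-1.amazonaws.com    # 2: endpoint URL, including the https:// scheme
  tls:                                                # 3: (optional) TLS trust configuration
    trust:
      storeFile: /opt/cert/server.p12
      storeType: PKCS12
  longTermCredentials:
    accessKeyId:
      passwordFile: /opt/aws/accessKey                # 4: file containing the AWS access key ID
    secretAccessKey:
      passwordFile: /opt/aws/secretKey                # 5: file containing the AWS secret access key
  region: us-east-1                                   # 6: AWS region identifier
```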
1. Specifies the name of the KMS provider. Use AwsKmsService.
2. AWS KMS endpoint URL, which must include the https:// scheme.
3. (Optional) TLS trust configuration.
4. File containing the AWS access key ID.
5. File containing the AWS secret access key.
6. The AWS region identifier, such as us-east-1, specifying where your KMS resources are located. This must match the region of the KMS endpoint you’re using.
A TLS client certificate can be specified using a PKCS#12 or JKS key store file.
Example TLS client certificate configuration using a PKCS#12 key store file
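The example body is missing here. A sketch of what a client-certificate key store configuration might look like, assuming a tls.key block that mirrors the trust block shown below; the paths are placeholders:

```yaml
tls:
  key:
    storeFile: /opt/cert/client.p12        # PKCS#12 key store holding the client certificate
    storeType: PKCS12
    storePassword:
      passwordFile: /opt/cert/store.password  # file containing the key store password
```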
A set of trust anchors for the TLS client can be specified using a PKCS#12 or JKS key store file.
Example TLS client trust anchor configuration using a PKCS#12 key store file
trust:
  storeFile: /opt/cert/server.p12
  storeType: PKCS12
  storePassword:
    passwordFile: /opt/cert/store.password
3.3. Azure Key Vault plugin configuration
For Azure Key Vault, the configuration for authenticating with Microsoft Identity Platform looks like this:
Configuration for authenticating with Microsoft Identity Platform via OAuth 2.0
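The example block is missing from this extraction. A sketch assembled from the property descriptions that follow; the nesting under kmsConfig and all values shown are placeholders and assumptions, so consult the plugin reference for the exact schema:

```yaml
kms: AzureKeyVault                             # name of the KMS provider
kmsConfig:
  keyVaultName: my-vault                       # name of the key vault the filter uses
  keyVaultHost: vault.azure.net                # key vault host name, without the vault name prefix
  oauthEndpoint: https://login.microsoftonline.com  # OAuth 2.0 Client Credentials flow URL
  tenantId: <tenant-id>                        # 32-character Microsoft Entra tenant identifier
  clientId:
    passwordFile: /opt/azure/client-id         # file containing the OAuth client ID
  clientSecret:
    passwordFile: /opt/azure/client-secret     # file containing the OAuth client secret
  scope: https://vault.azure.net/.default      # App ID URI of the target resource
```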
- kms specifies the name of the KMS provider. Use AzureKeyVault.
- keyVaultName is the name of the key vault the filter uses.
- keyVaultHost is the key vault host name, without the key vault name prefix.
- oauthEndpoint is the URL used for the OAuth 2.0 Client Credentials flow.
- tenantId is the 32-character identifier for the Microsoft Entra tenant where the OAuth credentials were created.
- clientId.passwordFile specifies the file that contains the OAuth client ID.
- clientSecret.passwordFile specifies the file that contains the OAuth client secret.
- scope is the App ID URI of the target resource that the proxy authenticates to (your Azure Key Vault URI).
A TLS client certificate can be specified using a PKCS#12 or JKS key store file.
Example TLS client certificate configuration using a PKCS#12 key store file
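The example body is missing here. A sketch of what a client-certificate key store configuration might look like, assuming a tls.key block that mirrors the trust block shown below; the paths are placeholders:

```yaml
tls:
  key:
    storeFile: /opt/cert/client.p12        # PKCS#12 key store holding the client certificate
    storeType: PKCS12
    storePassword:
      passwordFile: /opt/cert/store.password  # file containing the key store password
```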
A set of trust anchors for the TLS client can be specified using a PKCS#12 or JKS key store file.
Example TLS client trust anchor configuration using a PKCS#12 key store file
trust:
  storeFile: /opt/cert/server.p12
  storeType: PKCS12
  storePassword:
    passwordFile: /opt/cert/store.password
3.4. Filter configuration
This procedure describes how to configure the Record Encryption filter. Provide the filter configuration and the Key Encryption Key (KEK) selector to use. The KEK selector maps topic names to key names. The filter looks up the resulting key name in the KMS.
Prerequisites
- An instance of Streams for Apache Kafka Proxy.
  For information on deploying Streams for Apache Kafka Proxy, see the Deploying and Managing Streams for Apache Kafka Proxy on OpenShift guide.
- A KMS is installed and set up for the filter, with KEKs to encrypt records set up for topics.
Procedure
Configure a RecordEncryption type filter.
Example Record Encryption filter configuration
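The example block appears to have been lost in extraction. A sketch using the Vault KMS plugin, with numbered comments matching the callout descriptions that follow; the placement of the DEK settings under an experimental block and all values shown are assumptions and placeholders:

```yaml
type: RecordEncryption
config:
  kms: VaultKmsService                              # 1: the KMS service name
  kmsConfig:                                        # 2: configuration specific to the KMS provider
    vaultTransitEngineUrl: https://vault.example.com:8200/v1/transit
    vaultToken:
      passwordFile: /opt/vault/token
  selector: TemplateKekSelector                     # 3: the KEK selector to use
  selectorConfig:
    template: "KEK-$(topicName)"                    # 4: template for deriving the KEK from the topic name
  unresolvedKeyPolicy: PASSTHROUGH_UNENCRYPTED      # 5: (optional) behaviour when the KMS has no key
  experimental:
    encryptionDekRefreshAfterWriteSeconds: 3600     # 6: DEK age before rotation eligibility
    encryptionDekExpireAfterWriteSeconds: 7200      # 7: DEK age before removal from the cache
    maxEncryptionsPerDek: 5000000                   # 8: maximum encryptions per DEK
```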
1. The KMS service name.
2. Configuration specific to the KMS provider.
3. The Key Encryption Key (KEK) selector to use. $(topicName) is a literal understood by the proxy. For example, if using the TemplateKekSelector with the template KEK-$(topicName), create a key for every topic that is to be encrypted, with the key name matching the topic name prefixed by the string KEK-.
4. The template for deriving the KEK, based on a specific topic name.
5. Optional policy governing the behaviour when the KMS does not contain a key. The default is PASSTHROUGH_UNENCRYPTED, which causes the record to be forwarded, unencrypted, to the target cluster. Alternatively, specify REJECT, which causes the entire produce request to be rejected. This is a safer alternative if you know that all traffic sent to the virtual cluster should be encrypted, because unencrypted data is never forwarded.
6. How long after creation of a DEK before it becomes eligible for rotation. On the next encryption request, the cache asynchronously creates a new DEK. Encryption requests continue to use the old DEK until the new DEK is ready.
7. How long after creation of a DEK until it is removed from the cache. This setting puts an upper bound on how long a DEK can remain cached.
8. The maximum number of records any DEK should be used to encrypt. After this limit is reached, that DEK is destroyed and a new one created.
The encryptionDekRefreshAfterWriteSeconds and encryptionDekExpireAfterWriteSeconds properties govern the originator usage period of the DEK, which is the amount of time the DEK remains valid for encrypting records. Shortening this period helps limit the impact if the DEK key material is leaked. However, shorter periods increase the number of KMS API calls, which might affect produce and consume latency and raise KMS provider costs. maxEncryptionsPerDek helps prevent key exhaustion by placing an upper limit on the number of times that a DEK may be used to encrypt records.
- Verify that the encryption has been applied to the specified topics by producing messages through the proxy and then consuming directly and indirectly from the Kafka cluster.
If the filter is unable to find the key in the KMS, the filter passes through the records belonging to that topic in the produce request without encrypting them.
3.5. Example KafkaProtocolFilter resource
If your instance of Streams for Apache Kafka Proxy runs on OpenShift, you must use a KafkaProtocolFilter resource to contain the filter configuration.
Here’s a complete example of a KafkaProtocolFilter resource configured for record encryption with Vault KMS:
Example KafkaProtocolFilter resource
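The example resource appears to have been lost in extraction. A sketch of what such a resource might look like, using the Vault KMS plugin; the apiVersion, the configTemplate field name, the resource name, and all values are assumptions and placeholders to verify against your installed CRDs:

```yaml
apiVersion: filter.kroxylicious.io/v1alpha1   # assumed API group/version
kind: KafkaProtocolFilter
metadata:
  name: my-record-encryption                  # placeholder name
spec:
  type: RecordEncryption
  configTemplate:                             # assumed field for the filter configuration
    kms: VaultKmsService
    kmsConfig:
      vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit
      vaultToken:
        passwordFile: /opt/vault/token
    selector: TemplateKekSelector
    selectorConfig:
      template: "KEK-$(topicName)"
```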
Refer to the Deploying and Managing Streams for Apache Kafka Proxy on OpenShift guide for more information about configuration on OpenShift.