Record Validation filter guide
Validate producer records being sent to Kafka
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
  - The URL of the page where you found the issue.
  - A detailed description of the issue.
  You can leave the information in any other fields at their default values.
- Add a reporter name.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Technology preview
The Record Validation filter is a technology preview.
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.
About this guide
This guide covers using the Streams for Apache Kafka Proxy Record Validation filter to validate records sent by Kafka clients to Kafka brokers. Refer to other Streams for Apache Kafka Proxy guides for information on running the proxy or for advanced topics such as plugin development.
The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker. This filter can be used to prevent poison messages—such as those containing corrupted data or invalid formats—from entering the Kafka system, which may otherwise lead to consumer failure.
The filter currently supports two modes of operation:
- Schema Validation ensures the content of the record conforms to a schema stored in an Apicurio Registry.
- JSON Syntax Validation ensures the content of the record contains syntactically valid JSON.
Validation rules can be applied to check the content of the Kafka record key or value.
If the validation fails, the produce request is rejected and the producing application receives an error response. The broker does not receive the rejected records.
This filter is currently in incubation and available as a preview. We do not recommend using it in a production environment.
Chapter 1. (Preview) Setting up the Record Validation filter
This procedure describes how to set up the Record Validation filter. Provide the filter configuration and rules that the filter uses to check against Kafka record keys and values.
Prerequisites
- An instance of Streams for Apache Kafka Proxy. For information on deploying Streams for Apache Kafka Proxy, see the Deploying and Managing Streams for Apache Kafka Proxy on OpenShift guide.
- A config map for Streams for Apache Kafka Proxy that includes the configuration for creating a virtual cluster.
- Apicurio Registry (if you want to use schema validation).
Procedure
Configure a RecordValidation type filter:
- In an OpenShift deployment, use a KafkaProtocolFilter resource. See Section 1.1, "Example KafkaProtocolFilter resource".
Replace the token <rule definition> in the YAML configuration with either a Schema Validation rule or a JSON Syntax Validation rule depending on your requirements.
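As a sketch of where the <rule definition> token sits, the filter configuration might look as follows. The rule layout shown here (rules, topicNames, keyRule, valueRule) and the topic name are assumptions for illustration, not confirmed by this guide; check the configuration reference for your release.

```yaml
# Hypothetical RecordValidation filter configuration sketch.
# Property names below are assumptions; verify against your release.
type: RecordValidation
config:
  rules:
    - topicNames:
        - my-validated-topic     # hypothetical topic to validate
      keyRule:
        <rule definition>        # replace with a Schema Validation or
      valueRule:                 # JSON Syntax Validation rule
        <rule definition>
```

A rule applied under keyRule checks the record key; a rule under valueRule checks the record value.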
Example Schema Validation Rule Definition
The Schema Validation rule validates that the key or value matches a schema identified by its global ID within an Apicurio Schema Registry.
If the key or value does not adhere to the schema, the record will be rejected.
Additionally, if the Kafka producer has embedded a global ID within the record, it is validated against the global ID defined by the rule. If they do not match, the record is rejected. See the Apicurio documentation for details on how the global ID can be embedded into the record. The filter supports extracting IDs from either the Apicurio globalId record header or from the initial bytes of the serialized content itself.
schemaValidationConfig:
apicurioGlobalId: 1001
apicurioRegistryUrl: http://registry.local:8080
allowNulls: true
allowEmpty: true
Schema validation mode can currently enforce only JSON schemas.
Example JSON Syntax Validation Rule Definition
The JSON Syntax Validation rule validates that the key or value contains only syntactically correct JSON.
syntacticallyCorrectJson:
validateObjectKeysUnique: true
allowNulls: true
allowEmpty: true
1.1. Example KafkaProtocolFilter resource
If your instance of Streams for Apache Kafka Proxy runs on OpenShift, you must use a KafkaProtocolFilter resource to contain the filter configuration.
Here’s a complete example of a KafkaProtocolFilter resource configured for record validation:
Example KafkaProtocolFilter resource for record validation
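A sketch of such a resource is shown below. The apiVersion, filter type name, and configuration layout are assumptions based on the upstream preview API and may differ in your release; the topic name is hypothetical.

```yaml
# Hypothetical example; verify apiVersion, type name, and config
# structure against your Streams for Apache Kafka Proxy release.
apiVersion: filter.kroxylicious.io/v1alpha1
kind: KafkaProtocolFilter
metadata:
  name: record-validation
spec:
  type: RecordValidation
  configTemplate:
    rules:
      - topicNames:
          - my-validated-topic        # hypothetical topic to validate
        valueRule:
          syntacticallyCorrectJson:   # JSON Syntax Validation rule
            validateObjectKeysUnique: true
            allowNulls: true
            allowEmpty: true
```

Reference the resource from the virtual cluster configuration in your proxy config map so that the filter is applied to producer traffic.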
Refer to the Deploying and Managing Streams for Apache Kafka Proxy on OpenShift guide for more information about configuration on OpenShift.
Appendix A. Using your subscription
Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
A.1. Accessing Your Account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
A.2. Activating a Subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
A.3. Downloading Zip and Tar Files
To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
- Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
- Click the Download link for your component.
A.4. Installing packages with DNF
To install a package and all the package dependencies, use:
dnf install <package_name>
To install a previously-downloaded package from a local directory, use:
dnf install <path_to_download_package>
Revised on 2025-12-16 10:58:00 UTC