Authorization filter guide


Red Hat Streams for Apache Kafka 3.1

Enforce authorization policies on Kafka resources at the proxy

Abstract

Streams for Apache Kafka Proxy is a protocol-aware proxy that extends and secures Kafka-based systems with a flexible filtering mechanism. This guide explains how to use the Authorization filter.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.

Procedure

  1. Click Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave the other fields at their default values.
  4. Add a reporter name.
  5. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

About this guide

This guide covers using the Streams for Apache Kafka Proxy Authorization filter to enforce authorization rules on client requests before they reach the Kafka brokers. Refer to other Streams for Apache Kafka Proxy guides for information about running the proxy or for advanced topics such as plugin development.

Chapter 1. Authorization overview

The Authorization filter enables the proxy to enforce an authorization policy on Kafka resources. These authorization checks are performed in addition to any authorization checks made by the broker itself. For an action to be allowed, both the Authorization filter and the broker must allow it.

In general, the Authorization filter makes access decisions in the same manner as Kafka itself. A client cannot distinguish between authorization enforced at the proxy and authorization enforced by the Kafka cluster itself.

To use the Authorization filter, the proxy must be able to determine the authenticated subject. The authenticated subject is the verified identity of the client, derived from its successful authentication.

  • If your applications use SASL authentication, configure the SASL inspection filter to build the authenticated subject from the successful SASL exchange between the client and the broker.

Chapter 2. Authorization Model

In Kafka, clients perform operations on resources.

The following tables list the resource types and the operations that apply to them.

2.1. Resource types and operations

The authorization filter enforces the following operations for each resource type:

Resource type: Topic

  • READ: Required for a consuming client to fetch records.
  • WRITE: Required for a producing client to produce records.
  • CREATE, DELETE, ALTER: Required for an admin client to create, delete, or alter topics.
  • DESCRIBE: Required for an admin client to perform the describe operations that refer to topic resources.
  • DESCRIBE_CONFIGS: Required for an admin client to perform describe config operations that refer to topic configuration.
  • ALTER_CONFIGS: Required for an admin client to perform alter config operations that relate to topic configuration.

NOTE: Other Kafka resource types will be included in a future release.
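For example, a minimal rule file granting a producing client and a consuming client access to a single topic might look like this (the user and topic names are illustrative):

```
from io.kroxylicious.filter.authorization import TopicResource as Topic;

// The producer needs WRITE to produce records to the topic.
allow User with name = "producer-app" to WRITE Topic with name = "orders";

// The consumer needs READ to fetch records from the topic.
allow User with name = "consumer-app" to READ Topic with name = "orders";

otherwise deny;
```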

2.2. Implied operation permissions

In the authorization model, some operations imply permission to perform other operations. The higher-level operations and the lower-level operations they imply are as follows:

Resource type: Topic

  • READ, WRITE, DELETE, and ALTER each imply DESCRIBE.
  • ALTER_CONFIGS implies DESCRIBE_CONFIGS.
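For example, because READ implies DESCRIBE, a rule allowing READ also allows the client to describe the same topic; no separate DESCRIBE rule is needed (names below are illustrative):

```
from io.kroxylicious.filter.authorization import TopicResource as Topic;

// READ implies DESCRIBE, so this rule also lets alice describe the topic.
allow User with name = "alice" to READ Topic with name = "invoices";

otherwise deny;
```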

Chapter 3. Authorization rules

The authorization rules define which principals can perform specific operations on specific Kafka resources.

3.1. Outline of a rule file

The following example shows the overall outline of a rule file. The sections that follow give more details.

// Comment
from io.kroxylicious.filter.authorization import TopicResource as Topic;

deny User with name = "alice" to * Topic with name = "payments-received";
allow User with name = "alice" to * Topic with name like "payments-*";

otherwise deny;

3.2. Comments

Both line and block comments are supported. Line comments are preceded by //. Block comments are bracketed by /* … */ markers. Comments are ignored.

3.3. Imports

Resource types must be imported before use. This is achieved using a from / import statement.

from <package> import <element> [as <alias>][, ..., <elementn> [as <aliasn>]];

where:

  • <package> is io.kroxylicious.filter.authorization
  • <element> is a ResourceType implementation name.
  • <alias> is an optional alias for the resource type.

For example, TopicResource is the implementation that represents Kafka topics. To import it, use an import statement like this:

from io.kroxylicious.filter.authorization import TopicResource;

To declare it with an alias, use an import statement like this:

from io.kroxylicious.filter.authorization import TopicResource as Topic;

3.4. Rules

The basic form of a rule is as follows:

<allow|deny> User with <user predicate> to <operation> <resource type> with <resource predicate>;

where:

  • <allow|deny> indicates whether to allow or deny the action.
  • <user predicate> matches the user principal performing the action.
  • <resource type> identifies the type of resource being acted upon. This can be either the resource type name or an alias for it.
  • <resource predicate> identifies the resource.
  • <operation> identifies the operation(s) to be performed on the resource.

Rules must be ordered so that any deny rules precede the allow rules.

When rules are evaluated, they are considered from top to bottom, with the first matching rule taking precedence.
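Because the first matching rule takes precedence, an earlier deny rule overrides a later, broader allow rule. In the following sketch (names are illustrative), the deny rule prevents access to one topic even though the allow rule's prefix also matches it:

```
from io.kroxylicious.filter.authorization import TopicResource as Topic;

// Evaluated first: alice is denied all operations on this one topic.
deny User with name = "alice" to * Topic with name = "payments-received";

// Evaluated second: this rule also matches "payments-received", but the
// deny rule above has already matched, so it never applies to that topic.
allow User with name = "alice" to * Topic with name like "payments-*";

otherwise deny;
```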

3.5. Otherwise deny

Rules files must end with the statement otherwise deny. This stipulation means that all rules files have deny-by-default semantics.

...
otherwise deny;

3.6. User predicates

The following User predicates are supported:

  • = (equals), for example: name = "alice"
  • IN (set inclusion), for example: name in {"alice", "bob"}
  • LIKE (prefix match; the wildcard * is permitted only at the end of the prefix), for example: name like "bob*"
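For example, the following rules (user and topic names are illustrative) use each of the user predicates:

```
from io.kroxylicious.filter.authorization import TopicResource as Topic;

// Equality: matches exactly one user.
allow User with name = "alice" to READ Topic with name = "metrics";

// Set inclusion: matches any user named in the set.
allow User with name in {"alice", "bob"} to READ Topic with name = "audit";

// Prefix match: matches any user whose name starts with "svc-".
allow User with name like "svc-*" to WRITE Topic with name = "events";

otherwise deny;
```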

3.7. Resource Predicates

The following resource predicates are supported:

  • = (equality), for example: name = "mytopic"
  • IN (set inclusion), for example: name in {"topic1", "topic2"}
  • LIKE (prefix match; the wildcard * is permitted only at the end), for example: name like "finance*"
  • MATCHING (regular expression match), for example: name matching "a+"
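For example, the following rules (user names, topic names, and the regular expression are illustrative) use the prefix and regular-expression resource predicates:

```
from io.kroxylicious.filter.authorization import TopicResource as Topic;

// Prefix match: any topic whose name starts with "finance".
allow User with name = "analyst" to READ Topic with name like "finance*";

// Regular expression match: topics such as "logs-2025" or "logs-42".
allow User with name = "auditor" to READ Topic with name matching "logs-[0-9]+";

otherwise deny;
```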

3.8. Operations

Operations in rules can be specified in the following ways:

  • As a single operation, for example READ
  • As a set of operations, for example {READ, WRITE}
  • As a wildcard that matches any operation, for example *
NOTE: The Authorization filter does not support the keyword ALL.
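The three forms can be sketched as follows (user and topic names are illustrative):

```
from io.kroxylicious.filter.authorization import TopicResource as Topic;

// A single operation.
allow User with name = "reporting" to READ Topic with name = "sales";

// A set of operations.
allow User with name = "pipeline" to {READ, WRITE} Topic with name = "staging";

// The wildcard * matches any operation.
allow User with name = "admin" to * Topic with name like "internal-*";

otherwise deny;
```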

Chapter 4. Configuring the Authorization filter

This procedure describes how to set up the Authorization filter by configuring it in Streams for Apache Kafka Proxy.

Prerequisites

Procedure

  1. Configure an Authorization type filter.

  2. Configure the ACL rules.

4.1. Example KafkaProtocolFilter resource

If your instance of Streams for Apache Kafka Proxy runs on OpenShift, you must use a KafkaProtocolFilter resource to contain the filter configuration.

Here’s an example of a KafkaProtocolFilter resource configured for authorization:

kind: KafkaProtocolFilter
metadata:
  name: my-authorization
spec:
  type: Authorization
  configTemplate:
    authorizer: AclAuthorizerService
    authorizerConfig:
      aclFile: ${configmap:acl-rules:acl-rules.txt}
  • authorizer is the name of the authorizer service implementation. Currently, this must be AclAuthorizerService.
  • aclFile references the file containing the ACL rules. You can use an interpolation reference to load rules stored in a Kubernetes ConfigMap or Secret resource.

4.2. Example ACL Rules

If your instance of Streams for Apache Kafka Proxy runs on OpenShift, you must use a ConfigMap resource to contain the ACL rules.

Here’s a complete example of a ConfigMap resource containing the ACL rules:

apiVersion: v1
kind: ConfigMap
metadata:
  name: acl-rules
data:
  acl-rules.txt: |
    from io.kroxylicious.filter.authorization import TopicResource as Topic;
    deny User with name = "alice" to * Topic with name = "payments-received";
    allow User with name = "alice" to * Topic with name like "payments-*";
    allow User with name = "bob" to * Topic with name = "payments-received";
    otherwise deny;

Chapter 5. Glossary

Glossary of terms used in the Authorization guide.

Subject
The identity of the client for the purposes of applying policies within the proxy. Whether this is the same as the broker’s notion of subject depends on how authentication is configured in the proxy.
Principal
A component of a subject.
Resource
An entity which an authorizer can control access to. Resources are identified by a type and a name. Examples include Kafka topics and consumer groups (where the group ID is treated as the resource name).
Operations
The things that can be done to resources of a particular type. For example, a Kafka topic resource has operations which include describe, read and write.
Action
A resource and an operation, such as "read the topic called invoices".
Authorizer
A component that makes an authorization decision, usually based on some kind of access policy.
Decision
The outcome of the authorization of a particular action. This is either allow or deny.

Appendix A. Using your subscription

Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

A.1. Accessing Your Account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

A.2. Activating a Subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

A.3. Downloading Zip and Tar Files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
  3. Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
  4. Click the Download link for your component.

A.4. Installing packages with DNF

To install a package and all the package dependencies, use:

dnf install <package_name>

To install a previously-downloaded package from a local directory, use:

dnf install <path_to_download_package>

Revised on 2025-12-16 10:57:38 UTC

Legal Notice

Copyright © Red Hat.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.