Chapter 6. Using the User Operator


The User Operator provides a way of managing Kafka users via OpenShift resources.

6.1. Overview of the User Operator component

The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users and ensuring that they are configured properly in the Kafka cluster. For example:

  • if a KafkaUser is created, the User Operator will create the user it describes
  • if a KafkaUser is deleted, the User Operator will delete the user it describes
  • if a KafkaUser is changed, the User Operator will update the user it describes

Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster back to the OpenShift resources. Whereas Kafka topics might be created by applications directly in Kafka, users are not expected to be managed directly in the Kafka cluster in parallel with the User Operator, so this kind of synchronization should not be needed.

The User Operator allows you to declare a KafkaUser as part of your application’s deployment. When the user is created, its credentials are created in a Secret. Your application uses these credentials to authenticate with the Kafka cluster and to produce or consume messages.

In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user’s rights in the KafkaUser declaration.
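
For example, once a user has been declared, you can check both the KafkaUser resource and the Secret holding its credentials with oc (a minimal sketch; my-user is the user name used in the examples in this chapter):

oc get kafkauser my-user
oc get secret my-user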

6.2. Mutual TLS authentication

Mutual TLS authentication is always used for the communication between Kafka brokers and ZooKeeper pods.

Mutual authentication or two-way authentication is when both the server and the client present certificates. AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. When you configure mutual authentication, the broker authenticates the client and the client authenticates the broker.

Note

TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the browser obtains proof of the identity of the web server.

6.2.1. When to use mutual TLS authentication for clients

Mutual TLS authentication is recommended for authenticating Kafka clients when:

  • The client supports authentication using mutual TLS authentication
  • It is necessary to use TLS certificates rather than passwords
  • You can reconfigure and restart client applications periodically so that they do not use expired certificates

6.3. Creating a Kafka user with mutual TLS authentication

Prerequisites

  • A running Kafka cluster configured with a listener using TLS authentication
  • A running User Operator

Procedure

  1. Prepare a YAML file containing the KafkaUser to be created.

    An example KafkaUser

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaUser
    metadata:
      name: my-user
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: tls
      authorization:
        type: simple
        acls:
          - resource:
              type: topic
              name: my-topic
              patternType: literal
            operation: Read
          - resource:
              type: topic
              name: my-topic
              patternType: literal
            operation: Describe
          - resource:
              type: group
              name: my-group
              patternType: literal
            operation: Read

  2. Create the KafkaUser resource in OpenShift.

    This can be done using oc apply:

    oc apply -f your-file
  3. Use the credentials from the my-user secret in your application.
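
    For example, the public key, private key, and clients CA certificate can be extracted from the Secret and base64-decoded with oc (a minimal sketch; the output file names are illustrative):

    oc get secret my-user -o jsonpath='{.data.user\.crt}' | base64 --decode > user.crt
    oc get secret my-user -o jsonpath='{.data.user\.key}' | base64 --decode > user.key
    oc get secret my-user -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt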

6.4. SCRAM-SHA authentication

SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and TLS-encrypted client connections. TLS authentication is always used internally between Kafka brokers and ZooKeeper nodes. When used with a TLS client connection, the TLS protocol provides encryption, but is not used for authentication.

The following properties of SCRAM make it safe to use SCRAM-SHA even on unencrypted connections:

  • The passwords are not sent in the clear over the communication channel. Instead, the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user.
  • The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks.

6.4.1. Supported SCRAM credentials

AMQ Streams supports SCRAM-SHA-512 only. When KafkaUser.spec.authentication.type is configured with scram-sha-512, the User Operator generates a random 12-character password consisting of upper- and lowercase ASCII letters and numbers.

6.4.2. When to use SCRAM-SHA authentication for clients

SCRAM-SHA is recommended for authenticating Kafka clients when:

  • The client supports authentication using SCRAM-SHA-512
  • It is necessary to use passwords rather than TLS certificates
  • Authentication for unencrypted communication is required

6.5. Creating a Kafka user with SCRAM-SHA authentication

Prerequisites

  • A running Kafka cluster configured with a listener using SCRAM-SHA authentication
  • A running User Operator

Procedure

  1. Prepare a YAML file containing the KafkaUser to be created.

    An example KafkaUser

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaUser
    metadata:
      name: my-user
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: scram-sha-512
      authorization:
        type: simple
        acls:
          - resource:
              type: topic
              name: my-topic
              patternType: literal
            operation: Read
          - resource:
              type: topic
              name: my-topic
              patternType: literal
            operation: Describe
          - resource:
              type: group
              name: my-group
              patternType: literal
            operation: Read

  2. Create the KafkaUser resource in OpenShift.

    This can be done using oc apply:

    oc apply -f your-file
  3. Use the credentials from the my-user secret in your application.
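
    For example, the generated password can be retrieved from the Secret and decoded with oc (a minimal sketch):

    oc get secret my-user -o jsonpath='{.data.password}' | base64 --decode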

6.6. Editing a Kafka user

This procedure describes how to change the configuration of an existing Kafka user by using a KafkaUser OpenShift resource.

Prerequisites

  • A running Kafka cluster
  • A running User Operator
  • An existing Kafka user to be changed

Procedure

  1. Prepare a YAML file containing the desired KafkaUser.

    An example KafkaUser

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaUser
    metadata:
      name: my-user
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: tls
      authorization:
        type: simple
        acls:
          - resource:
              type: topic
              name: my-topic
              patternType: literal
            operation: Read
          - resource:
              type: topic
              name: my-topic
              patternType: literal
            operation: Describe
          - resource:
              type: group
              name: my-group
              patternType: literal
            operation: Read

  2. Update the KafkaUser resource in OpenShift.

    This can be done using oc apply:

    oc apply -f your-file
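
    Alternatively, for small changes, oc edit opens the live KafkaUser resource in an editor and applies your changes when you save:

    oc edit kafkauser my-user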
  3. Use the updated credentials from the my-user secret in your application.

6.7. Deleting a Kafka user

This procedure describes how to delete a Kafka user created with the KafkaUser OpenShift resource.

Prerequisites

  • A running Kafka cluster
  • A running User Operator
  • An existing Kafka user to be deleted

Procedure

  • Delete the KafkaUser resource in OpenShift.

    This can be done using oc delete:

    oc delete kafkauser your-user-name
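
    When the KafkaUser resource is deleted, the User Operator deletes the user from the Kafka cluster and removes the Secret holding its credentials. You can confirm the cleanup with oc; once the deletion is complete, the command reports that the secret is not found:

    oc get secret your-user-name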

6.8. Kafka User resource

The KafkaUser resource is used to declare a user with its authentication mechanism, authorization mechanism, and access rights.

6.8.1. Authentication

Authentication is configured using the authentication property in KafkaUser.spec. The authentication mechanism enabled for the user is specified in the type field. Currently, the only supported authentication mechanisms are TLS client authentication and SCRAM-SHA-512.

When no authentication mechanism is specified, the User Operator will not create the user or its credentials.

6.8.1.1. TLS Client Authentication

To use TLS client authentication, set the type field to tls.

An example of KafkaUser with enabled TLS Client Authentication

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  # ...

When the User Operator creates the user, it also creates a new Secret with the same name as the KafkaUser resource. The Secret contains a public and private key to be used for TLS client authentication. Bundled with them is the public key of the client certification authority (CA) that was used to sign the user certificate. All keys are in X.509 format.

An example of the Secret with user credentials

apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: # Public key of the Clients CA
  user.crt: # Public key of the user
  user.key: # Private key of the user
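
Java clients typically consume these credentials from a keystore rather than from PEM files. As a sketch, assuming the three entries have been extracted from the Secret and base64-decoded into user.crt, user.key, and ca.crt (illustrative file names), openssl can bundle them into a PKCS #12 keystore:

# Create a password-protected PKCS #12 keystore with the user certificate and private key
openssl pkcs12 -export -in user.crt -inkey user.key -certfile ca.crt -name my-user -out user.p12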

6.8.1.2. SCRAM-SHA-512 Authentication

To use the SCRAM-SHA-512 authentication mechanism, set the type field to scram-sha-512.

An example of KafkaUser with enabled SCRAM-SHA-512 authentication

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  # ...

When the User Operator creates the user, it also creates a new Secret with the same name as the KafkaUser resource. The Secret contains the generated password in the password key, encoded with base64. In order to use the password, it must be decoded.

An example of the Secret with user credentials

apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  password: Z2VuZXJhdGVkcGFzc3dvcmQ= # Generated password

To decode the generated password:

echo "Z2VuZXJhdGVkcGFzc3dvcmQ=" | base64 --decode

6.8.2. Authorization

Authorization is configured using the authorization property in KafkaUser.spec. The authorization type enabled for a user is specified using the type field. Currently, the only supported authorization type is simple authorization.

If no authorization is specified, the User Operator does not provision any access rights for the user.

6.8.2.1. Simple authorization

Simple authorization uses the default Kafka authorization plugin, SimpleAclAuthorizer.

To use simple authorization, set the type property to simple in KafkaUser.spec.

ACL rules

SimpleAclAuthorizer uses ACL rules to manage access to Kafka brokers.

ACL rules grant access rights to the user, which you specify in the acls property.

An AclRule is specified as a set of properties:

resource

The resource property specifies the resource that the rule applies to.

Simple authorization supports four resource types, which are specified in the type property:

  • Topics (topic)
  • Consumer Groups (group)
  • Clusters (cluster)
  • Transactional IDs (transactionalId)

For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property.

Cluster type resources have no name.

A name is specified as a literal or a prefix using the patternType property.

  • Literal names are taken exactly as they are specified in the name field.
  • Prefix names use the value from the name field as a prefix; the rule then applies to all resources whose names start with that value.

type

The type property specifies the type of ACL rule, allow or deny.

The type field is optional. If type is unspecified, the ACL rule is treated as an allow rule.
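
For example, a sketch of an explicit deny rule as an entry in the acls list (the topic name restricted-topic is illustrative):

  - resource:
      type: topic
      name: restricted-topic
      patternType: literal
    operation: Read
    type: deny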

operation

The operation property specifies the operation to allow or deny.

The following operations are supported:

  • Read
  • Write
  • Delete
  • Alter
  • Describe
  • All
  • IdempotentWrite
  • ClusterAction
  • Create
  • AlterConfigs
  • DescribeConfigs

Only certain operations work with each resource.

For more details about SimpleAclAuthorizer, ACLs and supported combinations of resources and operations, see Authorization and ACLs.

host

The host property specifies a remote host from which the rule is allowed or denied.

Use an asterisk (*) to allow or deny the operation from all hosts. The host field is optional. If host is unspecified, the * value is used by default.
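
For example, a sketch of a rule that applies only to requests from a single host (the IP address is illustrative):

  - resource:
      type: topic
      name: my-topic
      patternType: literal
    operation: Write
    host: 192.168.1.1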

For more information about the AclRule object, see AclRule schema reference.

An example KafkaUser

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: prefix
        operation: Read

6.8.2.2. Super user access to Kafka brokers

If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints defined in ACLs.
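
As a sketch, super users are declared in the Kafka resource that configures the brokers. The following fragment is illustrative; the cluster name and principal name are assumptions:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    authorization:
      type: simple
      superUsers:
        - CN=my-user
    # ...
  # ...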

For more information on configuring super users, see authentication and authorization of Kafka brokers.
