
Chapter 2. Deploying the console


Deploy the console using the dedicated operator. After installing the operator, you can create instances of the console.

For each console instance, the operator needs a Prometheus instance to collect and display Kafka cluster metrics. You can configure the console to use an existing Prometheus source. If no source is set, the operator creates a private Prometheus instance when the console is deployed. However, this default setup is not recommended for production and should only be used for development or evaluation purposes.

Connect the console to one or more Kafka clusters to provide visibility into topics, Kafka nodes, and consumer groups.

Configure the console to integrate with related services, including:

  • Authentication providers for securing access to Kafka clusters
  • Kafka Connect clusters for viewing connector and configuration details
  • Metrics providers for monitoring Kafka cluster performance
  • Schema registries for validating and decoding messages using data schemas

Define these integrations in the Console custom resource configuration YAML file.
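For orientation, the following is a minimal sketch of such a Console resource; the field names follow the full examples later in this chapter, and all values are placeholders to adapt to your environment:

```yaml
# Minimal Console custom resource sketch (placeholder values).
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain> # hostname used to access the console
  kafkaClusters:
    - name: console-kafka   # name of the Kafka resource
      namespace: kafka      # namespace of the Kafka cluster
      listener: secure      # listener the console connects through
```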

2.1. Deployment prerequisites

To deploy the console, you need the following:

  • An OpenShift cluster running a tested version (4.16–4.20) or a supported version (4.12, 4.14).
  • The oc command-line tool is installed and configured to connect to the OpenShift cluster.
  • Access to the OpenShift cluster using an account with cluster-admin permissions, such as system:admin.
  • A Kafka cluster managed by Streams for Apache Kafka, running on the OpenShift cluster.

Example files are provided for installing a Kafka cluster managed by Streams for Apache Kafka, along with a Kafka user representing the console. These files offer the fastest way to set up and try the console, but you can also use your own Streams for Apache Kafka managed Kafka deployment.

2.1.1. Using your own Kafka cluster

If you use your own Streams for Apache Kafka deployment, verify the configuration by comparing it with the example deployment files provided with the console.

For each Kafka cluster, the Kafka resource used to install the cluster must be configured with the following:

  • Sufficient authorization for the console to connect
  • Metrics properties for the console to be able to display certain data

    The metrics configuration must match the properties specified in the example Kafka (console-kafka) and ConfigMap (console-kafka-metrics) resources.
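In a Streams for Apache Kafka deployment, the relevant part of the Kafka resource typically looks like the following sketch. The ConfigMap name must match the console-kafka-metrics example resource; the key name shown here is an assumption, so check it against the example files:

```yaml
# Sketch of the metrics wiring in the Kafka resource (Strimzi-style
# metricsConfig); compare with the provided example resources.
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-kafka-metrics    # must match the example ConfigMap
          key: kafka-metrics-config.yml  # key name is an assumption
```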

2.1.2. Deploying a new Kafka cluster

If you already have Streams for Apache Kafka installed but want to create a new Kafka cluster for use with the console, example deployment resources are available to help you get started.

These resources create the following:

  • A Kafka cluster in KRaft mode with SCRAM-SHA-512 authentication.
  • A Streams for Apache Kafka KafkaNodePool resource to manage the cluster nodes.
  • A KafkaUser resource to enable authenticated and authorized console connections to the Kafka cluster.

The KafkaUser custom resource in the 040-KafkaUser-console-kafka-user1.yaml file includes the necessary ACL types to provide authorized access for the console to the Kafka cluster.

The minimum required ACL rules are configured as follows:

  • Describe, DescribeConfigs permissions for the cluster resource
  • Read, Describe, DescribeConfigs permissions for all topic resources
  • Read, Describe permissions for all group resources
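Expressed in the Strimzi-style KafkaUser acls format, the minimum rules above correspond to a sketch like the following; compare with the provided 040-KafkaUser-console-kafka-user1.yaml file rather than relying on this sketch:

```yaml
# Sketch of the minimum ACL rules for the console user.
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: cluster
        operations: [Describe, DescribeConfigs]
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations: [Read, Describe, DescribeConfigs]
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations: [Read, Describe]
```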
Note

To ensure the console has the necessary access to function, a minimum level of authorization must be configured for the principal used in each Kafka cluster connection. The specific permissions may vary based on the authorization framework in use, such as ACLs, Keycloak authorization, OPA, or a custom solution.

When configuring the KafkaUser authentication and authorization, ensure they match the corresponding Kafka configuration:

  • KafkaUser.spec.authentication should match Kafka.spec.kafka.listeners[*].authentication.
  • KafkaUser.spec.authorization should match Kafka.spec.kafka.authorization.
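For example, with a SCRAM-SHA-512 listener and simple authorization, the two resources line up as in this sketch (listener name and port are placeholders):

```yaml
# Kafka resource: the listener the console connects through.
spec:
  kafka:
    listeners:
      - name: secure
        port: 9093
        type: route
        tls: true
        authentication:
          type: scram-sha-512   # must match KafkaUser.spec.authentication
    authorization:
      type: simple              # must match KafkaUser.spec.authorization
---
# KafkaUser resource for the console.
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
```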

Prerequisites

  • An OpenShift cluster running a tested version (4.16–4.20) or a supported version (4.12, 4.14).
  • Access to the OpenShift web console using an account with cluster-admin permissions, such as system:admin.
  • The oc command-line tool is installed and configured to connect to the OpenShift cluster.

Procedure

  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the release page.

    The artifacts provide the deployment YAML files to install the Kafka cluster. Use the sample installation files located in examples/console/resources/kafka.

  2. Set environment variables to update the installation files:

    export NAMESPACE=kafka 1
    export LISTENER_TYPE=route 2
    export CLUSTER_DOMAIN=<domain_name> 3

    1 The namespace in which you want to deploy the Kafka operator.
    2 The listener type used to expose Kafka to the console.
    3 The cluster domain name for your OpenShift cluster.

    In this example, the namespace variable is defined as kafka and the listener type is route.

  3. Install the Kafka cluster.

    Run the following command to apply the YAML files and deploy the Kafka cluster to the defined namespace:

    cat examples/console/resources/kafka/*.yaml | envsubst | oc apply -n ${NAMESPACE} -f -

    This command reads the YAML files, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace.

  4. Check the status of the deployment:

    oc get pods -n kafka

    Output shows the operators and cluster readiness

    NAME                              READY   STATUS   RESTARTS
    strimzi-cluster-operator          1/1     Running  0
    console-kafka-console-nodepool-0  1/1     Running  0
    console-kafka-console-nodepool-1  1/1     Running  0
    console-kafka-console-nodepool-2  1/1     Running  0

    • console-kafka is the name of the cluster.
    • console-nodepool is the name of the node pool.

      A node ID identifies the nodes created.

      With the default deployment, you install three nodes.

      READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

2.2. Installing the console operator

Install the console operator using one of the following methods:

  • Using the Operator Lifecycle Manager (OLM) command line interface (CLI)
  • From the OperatorHub in the OpenShift web console (OpenShift clusters only)
  • By applying the Console RBAC, deployment, and Custom Resource Definition (CRD) resources
Note

OLM and OperatorHub install options will become available after the operator is submitted to and approved for the OperatorHub. See Issue 1526 for tracking progress.

The recommended approach is to install the operator via the OpenShift CLI (oc) using the Operator Lifecycle Manager (OLM) resources. If OLM is not suitable for your environment, you can install the operator by applying the CRD and deployment resources directly.

2.2.1. Deploying the console operator using a CRD

This procedure describes how to install the Streams for Apache Kafka Console operator using a Custom Resource Definition (CRD).

Prerequisites

Procedure

  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the release page.

    The artifacts include a Custom Resource Definition (CRD) file (console-operator.yaml) to install the operator without the OLM.

  2. Set an environment variable to define the namespace where you want to install the operator:

    export NAMESPACE=operator-namespace

    In this example, the namespace variable is defined as operator-namespace.

  3. Install the console operator with the CRD.

    Use the sample installation files located in install/console-operator/non-olm. These resources install the operator with cluster-wide scope, allowing it to manage console resources across all namespaces. Run the following command to apply the YAML file:

    cat install/console-operator/non-olm/console-operator.yaml | envsubst | oc apply -n ${NAMESPACE} -f -

    This command reads the YAML file, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace.

  4. Check the status of the deployment:

    oc get pods -n operator-namespace

    Output shows the deployment name and readiness

    NAME              READY  STATUS   RESTARTS
    console-operator  1/1    Running  1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

  5. Use the console operator to deploy the console and connect to a Kafka cluster.

2.2.2. Deploying the console

Use the console operator to deploy the Streams for Apache Kafka Console to the same OpenShift cluster as a Kafka cluster managed by Streams for Apache Kafka. Then use the console to connect to the Kafka cluster.

Prerequisites

Procedure

  1. Create a Console custom resource in the desired namespace.

    If you deployed the example Kafka cluster provided with the installation artifacts, you can use the configuration specified in the examples/console/resources/console/010-Console-example.yaml configuration file unchanged.

    Otherwise, configure the resource to connect to your Kafka cluster.

    Example console configuration

    apiVersion: console.streamshub.github.com/v1alpha1
    kind: Console
    metadata:
      name: my-console
    spec:
      hostname: my-console.<cluster_domain> 1
      kafkaClusters:
        - name: console-kafka 2
          namespace: kafka 3
          listener: secure 4
          properties:
            values: [] 5
            valuesFrom: [] 6
          credentials:
            kafkaUser:
              name: console-kafka-user1 7

    1 Hostname used to access the console over HTTP.
    2 Name of the Kafka resource representing the cluster.
    3 Namespace of the Kafka cluster.
    4 Listener used to expose the Kafka cluster for the console connection.
    5 (Optional) Additional connection properties, if needed.
    6 (Optional) References to config maps or secrets, if needed.
    7 (Optional) Kafka user created for authenticated access to the Kafka cluster.
  2. Apply the Console configuration to install the console.

    In this example, the console is deployed to the console-namespace namespace:

    oc apply -f examples/console/resources/console/010-Console-example.yaml -n console-namespace
  3. Check the status of the deployment:

    oc get pods -n console-namespace

    Output shows the deployment name and readiness

    NAME           READY  STATUS   RESTARTS
    console-kafka  1/1    Running  0

  4. Access the console.

    When the console is running, use the hostname specified in the Console resource (spec.hostname) to access the user interface.

2.3.1. Securing console connections to Kafka clusters

Enable secure console connections to Kafka clusters using an OIDC provider. Configure the console deployment to connect to any Identity Provider (IdP) that supports OpenID Connect (OIDC), such as Keycloak or Dex, and define the subjects and roles used for user authorization. To use group-based authorization as shown in the examples, configure an OIDC provider that includes a group membership claim, such as groups, in the access tokens it issues. Security profiles can be configured globally for all Kafka cluster connections, and you can also add roles and rules for specific Kafka clusters.

An example configuration is provided in the following file: examples/console/resources/console/console-security-oidc.yaml.

The configuration introduces the following additional properties for console deployment:

security
Properties that define the connection details for the console to connect with the OIDC provider.
subjects
Specifies the subjects (users or groups) and their roles in terms of JWT claims or explicit subject names, determining access permissions.
roles
Defines the roles and associated access rules for users, specifying which resources (like Kafka clusters) they can interact with and what operations they are permitted to perform.

Example security configuration for all clusters

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  security:
    oidc:
      authServerUrl: <OIDC_discovery_URL> 1
      clientId: <client_id> 2
      clientSecret: 3
        valueFrom:
          secretKeyRef:
            name: my-oidc-secret 4
            key: client-secret 5
      trustStore: 6
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-oidc-configmap
              key: ca.jks
        password: 7
          value: truststore-password
    subjects:
      - claim: groups 8
        include: 9
          - <team_name_1>
          - <team_name_2>
        roleNames: 10
          - developers
      - claim: groups
        include:
          - <team_name_3>
        roleNames:
          - administrators
      - include: 11
          - <user_1>
          - <user_2>
        roleNames:
          - administrators
    roles:
      - name: developers
        rules:
          - resources: 12
              - kafkas
            resourceNames: 13
              # exact
              - dev-cluster-a
              # wildcard
              - com.example.team.*
              # regular expression
              - /qa-cluster-[xy]/
            privileges: 14
              - 'ALL'
      - name: administrators
        rules:
          - resources:
              - kafkas
            privileges:
              - 'ALL'
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1

1 The OIDC provider’s issuer URI for endpoint discovery.
2 The client ID that identifies the console to the OIDC provider. This value is obtained from the client credentials in your OIDC provider.
3 The client secret, which authenticates the client when used with the client ID. This value is also obtained from the client credentials in your OIDC provider.
4 The name of the OpenShift Secret where the client secret is stored.
5 The key within the Secret that holds the client secret value.
6 Optional truststore used to validate the OIDC provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
7 Optional password for the truststore. Can be provided as a plaintext value (as shown) or more securely by reference to a Secret. Plaintext values are not recommended for production.
8 JWT claim types or names to identify the users or groups.
9 Users or groups included under the specified claim.
10 Roles assigned to the specified users or groups.
11 Specific users included by name when no claim is specified.
12 Resources that the assigned role can access.
13 Resource name filters that identify which resources the assigned role can access. You can specify exact names (for example, dev-cluster-a), wildcard patterns (for example, com.example.team.*), or regular expressions. When a value is enclosed in slashes (/), the console switches to regular-expression mode (for example, /qa-cluster-[xy]/).
14 Privileges granted to the assigned role for the specified resources.

If you want to specify roles and rules for individual Kafka clusters, add the details under kafkaClusters[].security.roles[]. In the following example, the console-kafka cluster allows developers to list and view selected Kafka resources. Administrators can also update certain resources.

Example security configuration for an individual cluster

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
      security:
        roles:
          - name: developers
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                privileges:
                  - GET
                  - LIST
          - name: administrators
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                  - nodes/configs
                privileges:
                  - GET
                  - LIST
              - resources:
                  - consumerGroups
                  - rebalances
                privileges:
                  - UPDATE

Optional OIDC authentication properties

The following properties can be used to further configure OIDC authentication. They apply to any part of the console configuration that supports authentication.oidc, such as schema registries or metrics providers.

grantType

Specifies the OIDC grant type to use. Required when using non-interactive authentication flows, where no user login is involved. Supported values:

  • CLIENT: Requires a client ID and secret.
  • PASSWORD: Requires a client ID and secret, plus user credentials (username and password) provided through grantOptions.
grantOptions

Additional parameters specific to the selected grant type. Use grantOptions to provide properties such as username and password when using the PASSWORD grant type.

oidc:
  grantOptions:
    username: my-user
    password: <my_password>
method

Method for passing the client ID and secret to the OIDC provider. Supported values:

  • BASIC: (default) Uses HTTP Basic authentication.
  • POST: Sends credentials as form parameters.
scopes

Optional list of access token scopes to request from the OIDC provider. Defaults are usually defined by the OIDC client configuration. Specify this property if access to the target service requires additional or alternative scopes not granted by default.

oidc:
  scopes:
    - openid
    - registry:read
    - registry:write
absoluteExpiresIn
Optional boolean. If set to true, the expires_in token property is treated as an absolute timestamp instead of a duration.
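For example, the following fragment enables this behavior:

```yaml
oidc:
  absoluteExpiresIn: true  # treat expires_in as an absolute timestamp
```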

2.3.2. Adding Kafka Connect clusters

Integrate Kafka Connect clusters with the console to view available connectors and their configurations. You can associate one or more Kafka Connect clusters with one or more Kafka clusters that are already defined in the console configuration. The console displays Connect cluster and connector information but does not allow modification.

A placeholder for adding Connect clusters is provided in: examples/console/resources/console/010-Console-example.yaml.

You can define Connect clusters globally as part of the console configuration using the kafkaConnectClusters property.

kafkaConnectClusters
Defines one or more Kafka Connect clusters that the console can connect to in order to retrieve connector information. Each entry includes a name, an endpoint url, and a list of Kafka clusters with which the Connect cluster is associated.
kafkaClusters
Lists the Kafka clusters associated with the Kafka Connect cluster. Each Kafka cluster is referenced by its <namespace>/<name> combination as defined in the kafkaClusters configuration. For standalone Kafka clusters without a namespace, specify only the cluster name.

Example Kafka Connect cluster configuration

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  kafkaConnectClusters:
    - name: my-connect-cluster 1
      url: http://my-connect-cluster.example.com/ 2
      kafkaClusters: 3
        - kafka/console-kafka
    - name: my-mm2-cluster 4
      url: http://my-mm2-cluster.example.com/
      kafkaClusters: 5
        - ns1/kafka1
        - ns2/kafka2
  # ...

1 A unique name for the Connect cluster.
2 Base URL of the Kafka Connect REST API endpoint.
3 Associates the Connect cluster with one or more Kafka clusters configured in the console.
4 Example entry for a MirrorMaker 2 Connect cluster.
5 Associates this Kafka Connect cluster with multiple Kafka clusters in different namespaces.

When the console is deployed with these settings, Connect clusters and connector details can be viewed from the Kafka Connect page of the console.

2.3.3. Enabling a metrics provider

Configure the console deployment to enable a metrics provider. You can configure one of the following sources to scrape metrics from Kafka clusters using Prometheus:

  • OpenShift’s built-in user workload monitoring
    Use OpenShift’s workload monitoring, incorporating the Prometheus operator, to monitor console services and workloads without the need for an additional monitoring solution.
  • A standalone Prometheus instance
    Provide the details and credentials to connect with your own Prometheus instance.
  • An embedded Prometheus instance (default)
    Deploy a private Prometheus instance for use only by the console instance. The instance is configured to retrieve metrics from all Streams for Apache Kafka managed Kafka instances in the same OpenShift cluster. Using embedded metrics is intended for development and evaluation only, not production.

Example configuration for OpenShift monitoring and a standalone Prometheus instance is provided in the following files:

  • examples/console/resources/console/console-openshift-metrics.yaml
  • examples/console/resources/console/console-standalone-prometheus.yaml

You can define Prometheus sources globally as part of the console configuration using metricsSources properties:

metricsSources
Declares one or more metrics providers that the console can use to collect metrics.
type

Specifies the type of metrics source. Valid options:

  • openshift-monitoring
  • standalone (external Prometheus)
  • embedded (console-managed Prometheus)
url
For standalone sources, specifies the base URL of the Prometheus instance.
authentication
(For standalone and openshift-monitoring only) Configures access to the metrics provider using basic, bearer token, or oidc authentication.
trustStore
(Optional, for standalone only) Specifies a truststore for verifying TLS certificates when connecting to the metrics provider. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.

Assign the metrics source to a Kafka cluster using the kafkaClusters.metricsSource property. The value of metricsSource is the name of the entry in the metricsSources array.

The openshift-monitoring and embedded source types require no configuration beyond the type.

Example metrics configuration for OpenShift monitoring

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-ocp-prometheus
      type: openshift-monitoring
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...

Example metrics configuration for standalone Prometheus monitoring

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-custom-prometheus
      type: standalone
      url: <prometheus_instance_address> 1
      authentication: 2
        username: my-user
        password: my-password
      trustStore: 3
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-prometheus-configmap
              key: ca.jks
        password: 4
          value: truststore-password
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-custom-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...

1 URL of the standalone Prometheus instance for metrics collection.
2 Authentication credentials for accessing the Prometheus instance. Supported methods are basic (shown), bearer token, and OIDC authentication.
3 Optional truststore used to validate the metrics provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
4 Optional password for the truststore. Can be provided as a plaintext value (as shown) or via a Secret. Plaintext values are not recommended for production.

2.3.4. Using a schema registry with Kafka

Integrate a schema registry with the console to centrally manage schemas for Kafka data. The console currently supports integration with Apicurio Registry to reference and validate schemas used in Kafka data streams. Requests to the registry can be authenticated using supported methods, including OIDC.

A placeholder for adding schema registries is provided in: examples/console/resources/console/010-Console-example.yaml.

You can define schema registry connections globally as part of the console configuration using schemaRegistries properties:

schemaRegistries
Defines external schema registries that the console can connect to for schema validation and management.
authentication
Configures access to the schema registry using basic, bearer token, or oidc authentication.
trustStore
(Optional) Specifies a truststore for verifying TLS certificates when connecting to the schema registry. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.

Assign the schema registry source to a Kafka cluster using the kafkaClusters.schemaRegistry property. The value of schemaRegistry is the name of the entry in the schemaRegistries array.

Example schema registry configuration with OIDC authentication

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  schemaRegistries:
    - name: my-registry 1
      url: <schema_registry_URL> 2
      authentication: 3
        oidc:
          authServerUrl: <OIDC_discovery_URL>
          clientId: <client_id>
          clientSecret:
            valueFrom:
              secretKeyRef:
                name: my-oidc-secret
                key: client-secret
          method: POST
          grantType: CLIENT
          trustStore: 4
            type: JKS
            content:
              valueFrom:
                configMapKeyRef:
                  name: my-oidc-configmap
                  key: ca.jks
            password: 5
              value: truststore-password
      trustStore: 6
        type: PEM
        content:
          valueFrom:
            configMapKeyRef:
              name: my-apicurio-configmap
              key: cert-chain.pem
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      schemaRegistry: my-registry
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...

1 A unique name for the schema registry connection.
2 Base URL of the schema registry API. This is typically the REST endpoint, such as http://<host>/apis/registry/v2.
3 Authentication credentials for accessing the schema registry. Supported methods are basic, bearer token, and OIDC (shown).
4 Optional truststore used to validate the OIDC provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
5 Optional password for the truststore. Can be provided as a plaintext value (as shown) or via a Secret. Plaintext values are not recommended for production.
6 Optional truststore used to validate the schema registry’s TLS certificate. Configuration format and source options are the same as for the OIDC truststore.