Chapter 2. Deploying the console
Deploy the console using the dedicated operator. After installing the operator, you can create instances of the console.
For each console instance, the operator needs a Prometheus instance to collect and display Kafka cluster metrics. You can configure the console to use an existing Prometheus source. If no source is set, the operator creates a private Prometheus instance when the console is deployed. However, this default setup is not recommended for production and should only be used for development or evaluation purposes.
Connect the console to one or more Kafka clusters to provide visibility into topics, Kafka nodes, and consumer groups.
Configure the console to integrate with related services, including:
- Authentication providers for securing access to Kafka clusters
- Kafka Connect clusters for viewing connector and configuration details
- Metrics providers for monitoring Kafka cluster performance
- Schema registries for validating and decoding messages using data schemas
Define these integrations in the Console custom resource configuration YAML file.
2.1. Deployment prerequisites
To deploy the console, you need the following:
- An OpenShift cluster, version 4.16–4.20 (tested) or 4.12, 4.14 (supported).
- The oc command-line tool, installed and configured to connect to the OpenShift cluster.
- Access to the OpenShift cluster using an account with cluster-admin permissions, such as system:admin.
- A Kafka cluster managed by Streams for Apache Kafka, running on the OpenShift cluster.
Example files are provided for installing a Kafka cluster managed by Streams for Apache Kafka, along with a Kafka user representing the console. These files offer the fastest way to set up and try the console, but you can also use your own Streams for Apache Kafka managed Kafka deployment.
2.1.1. Using your own Kafka cluster
If you use your own Streams for Apache Kafka deployment, verify the configuration by comparing it with the example deployment files provided with the console.
For each Kafka cluster, the Kafka resource used to install the cluster must be configured with the following:
- Sufficient authorization for the console to connect
- Metrics properties for the console to be able to display certain data

The metrics configuration must match the properties specified in the example Kafka (console-kafka) and ConfigMap (console-kafka-metrics) resources.
2.1.2. Deploying a new Kafka cluster
If you already have Streams for Apache Kafka installed but want to create a new Kafka cluster for use with the console, example deployment resources are available to help you get started.
These resources create the following:
- A Kafka cluster in KRaft mode with SCRAM-SHA-512 authentication.
- A Streams for Apache Kafka KafkaNodePool resource to manage the cluster nodes.
- A KafkaUser resource to enable authenticated and authorized console connections to the Kafka cluster.
The KafkaUser custom resource in the 040-KafkaUser-console-kafka-user1.yaml file includes the necessary ACL types to provide authorized access for the console to the Kafka cluster.
The minimum required ACL rules are configured as follows:
- Describe and DescribeConfigs permissions for the cluster resource
- Read, Describe, and DescribeConfigs permissions for all topic resources
- Read and Describe permissions for all group resources
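As an illustration, these rules might look like the following fragment of a Strimzi-style KafkaUser resource. This is a minimal sketch, not the shipped file; the names console-kafka and console-kafka-user1 follow the examples in this chapter, and the 040-KafkaUser-console-kafka-user1.yaml file provided with the installation artifacts remains the authoritative version:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-kafka-user1
  labels:
    strimzi.io/cluster: console-kafka   # binds the user to the Kafka cluster
spec:
  authentication:
    type: scram-sha-512                 # must match the listener authentication
  authorization:
    type: simple                        # ACL-based authorization
    acls:
      - resource:
          type: cluster
        operations: [Describe, DescribeConfigs]
      - resource:
          type: topic
          name: "*"                     # all topic resources
          patternType: literal
        operations: [Read, Describe, DescribeConfigs]
      - resource:
          type: group
          name: "*"                     # all group resources
          patternType: literal
        operations: [Read, Describe]
```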
To ensure the console has the necessary access to function, a minimum level of authorization must be configured for the principal used in each Kafka cluster connection. The specific permissions may vary based on the authorization framework in use, such as ACLs, Keycloak authorization, OPA, or a custom solution.
When configuring the KafkaUser authentication and authorization, ensure they match the corresponding Kafka configuration:
- KafkaUser.spec.authentication should match Kafka.spec.kafka.listeners[*].authentication.
- KafkaUser.spec.authorization should match Kafka.spec.kafka.authorization.
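For example, the following paired fragments (a hypothetical sketch, not complete resources) show SCRAM-SHA-512 authentication and simple authorization matching on both sides:

```yaml
# Kafka resource (fragment)
spec:
  kafka:
    listeners:
      - name: secure
        authentication:
          type: scram-sha-512   # matches KafkaUser.spec.authentication
    authorization:
      type: simple              # matches KafkaUser.spec.authorization
---
# KafkaUser resource (fragment)
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
```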
Prerequisites
- An OpenShift cluster, version 4.16–4.20 (tested) or 4.12, 4.14 (supported).
- Access to the OpenShift web console using an account with cluster-admin permissions, such as system:admin.
- The oc command-line tool, installed and configured to connect to the OpenShift cluster.
Procedure
1. Download and extract the console installation artifacts.

   The artifacts are included with installation and example files available from the release page. They provide the deployment YAML files to install the Kafka cluster. Use the sample installation files located in examples/console/resources/kafka.

2. Set environment variables to update the installation files:

   export NAMESPACE=kafka
   export LISTENER_TYPE=route
   export CLUSTER_DOMAIN=<domain_name>

   In this example, the namespace variable is defined as kafka and the listener type is route.

3. Install the Kafka cluster.

   Run the following command to apply the YAML files and deploy the Kafka cluster to the defined namespace:

   cat examples/console/resources/kafka/*.yaml | envsubst | oc apply -n ${NAMESPACE} -f -

   This command reads the YAML files, replaces the environment variables, and applies the resulting configuration to the specified OpenShift namespace.

4. Check the status of the deployment:

   oc get pods -n kafka

   Output shows the operators and cluster readiness:

   NAME                               READY   STATUS    RESTARTS
   strimzi-cluster-operator           1/1     Running   0
   console-kafka-console-nodepool-0   1/1     Running   0
   console-kafka-console-nodepool-1   1/1     Running   0
   console-kafka-console-nodepool-2   1/1     Running   0

   In this output, console-kafka is the name of the cluster and console-nodepool is the name of the node pool; a node ID identifies each node created. With the default deployment, you install three nodes.

   READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays Running.
2.2. Installing the console operator
Install the console operator using one of the following methods:
- Using the Operator Lifecycle Manager (OLM) command-line interface (CLI)
- From the OperatorHub in the OpenShift web console (OpenShift clusters only)
- By applying the Console RBAC, deployment, and Custom Resource Definition (CRD) resources
OLM and OperatorHub install options will become available after the operator is submitted to and approved for the OperatorHub. See Issue 1526 for tracking progress.
The recommended approach is to install the operator via the OpenShift CLI (oc) using the Operator Lifecycle Manager (OLM) resources. If using the OLM is not suitable for your environment, you can install the operator by applying the CRD directly.
2.2.1. Deploying the console operator using a CRD
This procedure describes how to install the Streams for Apache Kafka Console operator using a Custom Resource Definition (CRD).
Prerequisites
Procedure
1. Download and extract the console installation artifacts.

   The artifacts are included with installation and example files available from the release page. They include a Custom Resource Definition (CRD) file (console-operator.yaml) to install the operator without the OLM.

2. Set an environment variable to define the namespace where you want to install the operator:

   export NAMESPACE=operator-namespace

   In this example, the namespace variable is defined as operator-namespace.

3. Install the console operator with the CRD.

   Use the sample installation files located in install/console-operator/non-olm. These resources install the operator with cluster-wide scope, allowing it to manage console resources across all namespaces. Run the following command to apply the YAML file:

   cat install/console-operator/non-olm/console-operator.yaml | envsubst | oc apply -n ${NAMESPACE} -f -

   This command reads the YAML file, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace.

4. Check the status of the deployment:

   oc get pods -n operator-namespace

   Output shows the deployment name and readiness:

   NAME               READY   STATUS    RESTARTS
   console-operator   1/1     Running   1

   READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays Running.

5. Use the console operator to deploy the console and connect to a Kafka cluster.
Use the console operator to deploy the Streams for Apache Kafka Console to the same OpenShift cluster as a Kafka cluster managed by Streams for Apache Kafka. Use the console to connect to the Kafka cluster.
Prerequisites
- Deployment prerequisites.
- The console operator is deployed to the OpenShift cluster.
Procedure
1. Create a Console custom resource in the desired namespace.

   If you deployed the example Kafka cluster provided with the installation artifacts, you can use the configuration specified in the examples/console/resources/console/010-Console-example.yaml configuration file unchanged. Otherwise, configure the resource to connect to your Kafka cluster.
Example console configuration
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>   # 1
  kafkaClusters:
    - name: console-kafka                 # 2
      namespace: kafka                    # 3
      listener: secure                    # 4
      properties:
        values: []                        # 5
        valuesFrom: []                    # 6
      credentials:
        kafkaUser:
          name: console-kafka-user1       # 7

1. Hostname to access the console by HTTP.
2. Name of the Kafka resource representing the cluster.
3. Namespace of the Kafka cluster.
4. Listener to expose the Kafka cluster for console connection.
5. (Optional) Additional connection properties, if needed.
6. (Optional) References to config maps or secrets, if needed.
7. (Optional) Kafka user created for authenticated access to the Kafka cluster.
2. Apply the Console configuration to install the console.

   In this example, the console is deployed to the console-namespace namespace:

   oc apply -f examples/console/resources/console/010-Console-example.yaml -n console-namespace

3. Check the status of the deployment:

   oc get pods -n console-namespace

   Output shows the deployment name and readiness:

   NAME            READY   STATUS
   console-kafka   1/1     Running

4. Access the console.

   When the console is running, use the hostname specified in the Console resource (spec.hostname) to access the user interface.
2.3.1. Using an OIDC provider to secure access to Kafka clusters

Enable secure console connections to Kafka clusters using an OIDC provider. Configure the console deployment to connect to any Identity Provider (IdP) that supports OpenID Connect (OIDC), such as Keycloak or Dex, and define the subjects and roles for user authorization. To use group-based authorization as shown in the examples, configure an OIDC provider that includes a group membership claim, such as groups, in the generated access tokens.

Security profiles can be configured for all Kafka cluster connections at a global level, and you can also add roles and rules for specific Kafka clusters.
An example configuration is provided in the following file: examples/console/resources/console/console-security-oidc.yaml.
The configuration introduces the following additional properties for console deployment:
- security: Properties that define the connection details for the console to connect with the OIDC provider.
- subjects: Specifies the subjects (users or groups) and their roles in terms of JWT claims or explicit subject names, determining access permissions.
- roles: Defines the roles and associated access rules for users, specifying which resources (like Kafka clusters) they can interact with and what operations they are permitted to perform.
Example security configuration for all clusters
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  security:
    oidc:
      authServerUrl: <OIDC_discovery_URL>    # 1
      clientId: <client_id>                  # 2
      clientSecret:                          # 3
        valueFrom:
          secretKeyRef:
            name: my-oidc-secret             # 4
            key: client-secret               # 5
      trustStore:                            # 6
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-oidc-configmap
              key: ca.jks
        password:                            # 7
          value: truststore-password
    subjects:
      - claim: groups                        # 8
        include:                             # 9
          - <team_name_1>
          - <team_name_2>
        roleNames:                           # 10
          - developers
      - claim: groups
        include:
          - <team_name_3>
        roleNames:
          - administrators
      - include:                             # 11
          - <user_1>
          - <user_2>
        roleNames:
          - administrators
    roles:
      - name: developers
        rules:
          - resources:                       # 12
              - kafkas
            resourceNames:                   # 13
              # exact
              - dev-cluster-a
              # wildcard
              - com.example.team.*
              # regular expression
              - /qa-cluster-[xy]/
            privileges:                      # 14
              - 'ALL'
      - name: administrators
        rules:
          - resources:
              - kafkas
            privileges:
              - 'ALL'
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
1. The OIDC provider’s issuer URI for endpoint discovery.
2. The client ID that identifies the console to the OIDC provider. This value is obtained from the client credentials in your OIDC provider.
3. The client secret, which authenticates the client when used with the client ID. This value is also obtained from the client credentials in your OIDC provider.
4. The name of the OpenShift Secret where the client secret is stored.
5. The key within the Secret that holds the client secret value.
6. Optional truststore used to validate the OIDC provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
7. Optional password for the truststore. Can be provided as a plaintext value (as shown) or, more securely, by reference to a Secret. Plaintext values are not recommended for production.
8. JWT claim types or names that identify the users or groups.
9. Users or groups included under the specified claim.
10. Roles assigned to the specified users or groups.
11. Specific users included by name when no claim is specified.
12. Resources that the assigned role can access.
13. Resource name filters that identify which resources the assigned role can access. You can specify exact names (for example, dev-cluster-a), wildcard patterns (for example, com.example.team.*), or regular expressions. When a value is enclosed in slashes (/), the console switches to regular-expression mode (for example, /qa-cluster-[xy]/).
14. Privileges granted to the assigned role for the specified resources.
If you want to specify roles and rules for individual Kafka clusters, add the details under kafkaClusters[].security.roles[]. In the following example, the console-kafka cluster allows developers to list and view selected Kafka resources. Administrators can also update certain resources.
Example security configuration for an individual cluster
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
      security:
        roles:
          - name: developers
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                privileges:
                  - GET
                  - LIST
          - name: administrators
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                  - nodes/configs
                privileges:
                  - GET
                  - LIST
              - resources:
                  - consumerGroups
                  - rebalances
                privileges:
                  - UPDATE
Optional OIDC authentication properties

The following properties can be used to further configure OIDC authentication. They apply to any part of the console configuration that supports authentication.oidc, such as schema registries or metrics providers.

- grantType: Specifies the OIDC grant type to use. Required when using non-interactive authentication flows, where no user login is involved. Supported values:
  - CLIENT: Requires a client ID and secret.
  - PASSWORD: Requires a client ID and secret, plus user credentials (username and password) provided through grantOptions.
- grantOptions: Additional parameters specific to the selected grant type. Use grantOptions to provide properties such as username and password when using the PASSWORD grant type.

  oidc:
    grantOptions:
      username: my-user
      password: <my_password>

- method: Method for passing the client ID and secret to the OIDC provider. Supported values:
  - BASIC (default): Uses HTTP Basic authentication.
  - POST: Sends credentials as form parameters.
- scopes: Optional list of access token scopes to request from the OIDC provider. Defaults are usually defined by the OIDC client configuration. Specify this property if access to the target service requires additional or alternative scopes not granted by default.

  oidc:
    scopes:
      - openid
      - registry:read
      - registry:write

- absoluteExpiresIn: Optional boolean. If set to true, the expires_in token property is treated as an absolute timestamp instead of a duration.
2.3.2. Adding Kafka Connect clusters
Integrate Kafka Connect clusters with the console to view available connectors and their configurations. You can associate one or more Kafka Connect clusters with one or more Kafka clusters that are already defined in the console configuration. The console displays Connect cluster and connector information but does not allow modification.
A placeholder for adding Connect clusters is provided in: examples/console/resources/console/010-Console-example.yaml.
You can define Connect clusters globally as part of the console configuration using the kafkaConnectClusters property.
- kafkaConnectClusters: Defines one or more Kafka Connect clusters that the console can connect to in order to retrieve connector information. Each entry includes a name, an endpoint url, and a list of Kafka clusters with which the Connect cluster is associated.
- kafkaClusters: Lists the Kafka clusters associated with the Kafka Connect cluster. Each Kafka cluster is referenced by its <namespace>/<name> combination as defined in the kafkaClusters configuration. For standalone Kafka clusters without a namespace, specify only the cluster name.
Example Kafka Connect cluster configuration
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  kafkaConnectClusters:
    - name: my-connect-cluster                      # 1
      url: http://my-connect-cluster.example.com/   # 2
      kafkaClusters:                                # 3
        - kafka/console-kafka
    - name: my-mm2-cluster                          # 4
      url: http://my-mm2-cluster.example.com/
      kafkaClusters:                                # 5
        - ns1/kafka1
        - ns2/kafka2
  # ...
1. A unique name for the Connect cluster.
2. Base URL of the Kafka Connect REST API endpoint.
3. Associates the Connect cluster with one or more Kafka clusters configured in the console.
4. Example entry for a MirrorMaker 2 Connect cluster.
5. Associates this Kafka Connect cluster with multiple Kafka clusters in different namespaces.
When the console is deployed with these settings, Connect clusters and connector details can be viewed from the Kafka Connect page of the console.
2.3.3. Enabling a metrics provider
Configure the console deployment to enable a metrics provider. You can set up configuration to use one of the following sources to scrape metrics from Kafka clusters using Prometheus:
- OpenShift’s built-in user workload monitoring: Use OpenShift’s workload monitoring, incorporating the Prometheus operator, to monitor console services and workloads without the need for an additional monitoring solution.
- A standalone Prometheus instance: Provide the details and credentials to connect with your own Prometheus instance.
- An embedded Prometheus instance (default): Deploy a private Prometheus instance for use only by the console instance. The instance is configured to retrieve metrics from all Streams for Apache Kafka managed Kafka instances in the same OpenShift cluster. Using embedded metrics is intended for development and evaluation only, not production.
Example configuration for OpenShift monitoring and a standalone Prometheus instance is provided in the following files:
- examples/console/resources/console/console-openshift-metrics.yaml
- examples/console/resources/console/console-standalone-prometheus.yaml
You can define Prometheus sources globally as part of the console configuration using metricsSources properties:
- metricsSources: Declares one or more metrics providers that the console can use to collect metrics.
- type: Specifies the type of metrics source. Valid options:
  - openshift-monitoring
  - standalone (external Prometheus)
  - embedded (console-managed Prometheus)
- url: For standalone sources, specifies the base URL of the Prometheus instance.
- authentication: (For standalone and openshift-monitoring only) Configures access to the metrics provider using basic, bearer token, or oidc authentication.
- trustStore: (Optional, for standalone only) Specifies a truststore for verifying TLS certificates when connecting to the metrics provider. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.
Assign the metrics source to a Kafka cluster using the kafkaClusters.metricsSource property. The value of metricsSource is the name of the entry in the metricsSources array.
Sources of type openshift-monitoring and embedded require no configuration other than the type.
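For instance, an embedded source (suitable for development and evaluation only) needs nothing beyond a name and the type. This is a sketch; the name my-embedded-prometheus is illustrative:

```yaml
# fragment of Console spec
metricsSources:
  - name: my-embedded-prometheus
    type: embedded              # console-managed private Prometheus
kafkaClusters:
  - name: console-kafka
    namespace: kafka
    listener: secure
    metricsSource: my-embedded-prometheus   # reference the source by name
```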
Example metrics configuration for OpenShift monitoring
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-ocp-prometheus
      type: openshift-monitoring
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
Example metrics configuration for standalone Prometheus monitoring
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-custom-prometheus
      type: standalone
      url: <prometheus_instance_address>   # 1
      authentication:                      # 2
        username: my-user
        password: my-password
      trustStore:                          # 3
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-prometheus-configmap
              key: ca.jks
        password:                          # 4
          value: truststore-password
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-custom-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
1. URL of the standalone Prometheus instance for metrics collection.
2. Authentication credentials for accessing the Prometheus instance. Supported authentication methods:
   - basic: Requires username and password.
   - bearer: Requires token.
   - oidc: See Using an OIDC provider to secure access to Kafka clusters for details.
3. Optional truststore used to validate the metrics provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
4. Optional password for the truststore. Can be provided as a plaintext value (as shown) or via a Secret. Plaintext values are not recommended for production.
2.3.4. Using a schema registry with Kafka
Integrate a schema registry with the console to centrally manage schemas for Kafka data. The console currently supports integration with Apicurio Registry to reference and validate schemas used in Kafka data streams. Requests to the registry can be authenticated using supported methods, including OIDC.
A placeholder for adding schema registries is provided in: examples/console/resources/console/010-Console-example.yaml.
You can define schema registry connections globally as part of the console configuration using schemaRegistries properties:
- schemaRegistries: Defines external schema registries that the console can connect to for schema validation and management.
- authentication: Configures access to the schema registry using basic, bearer token, or oidc authentication.
- trustStore: (Optional) Specifies a truststore for verifying TLS certificates when connecting to the schema registry. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.
Assign the schema registry source to a Kafka cluster using the kafkaClusters.schemaRegistry property. The value of schemaRegistry is the name of the entry in the schemaRegistries array.
Example schema registry configuration with OIDC authentication
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  schemaRegistries:
    - name: my-registry                  # 1
      url: <schema_registry_URL>         # 2
      authentication:                    # 3
        oidc:
          authServerUrl: <OIDC_discovery_URL>
          clientId: <client_id>
          clientSecret:
            valueFrom:
              secretKeyRef:
                name: my-oidc-secret
                key: client-secret
          method: POST
          grantType: CLIENT
          trustStore:                    # 4
            type: JKS
            content:
              valueFrom:
                configMapKeyRef:
                  name: my-oidc-configmap
                  key: ca.jks
            password:                    # 5
              value: truststore-password
      trustStore:                        # 6
        type: PEM
        content:
          valueFrom:
            configMapKeyRef:
              name: my-apicurio-configmap
              key: cert-chain.pem
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      schemaRegistry: my-registry
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
1. A unique name for the schema registry connection.
2. Base URL of the schema registry API. This is typically the REST endpoint, such as http://<host>/apis/registry/v2.
3. Authentication credentials for accessing the schema registry. Supported authentication methods:
   - basic: Requires username and password.
   - bearer: Requires token.
   - oidc: See Using an OIDC provider to secure access to Kafka clusters for details.
4. Optional truststore used to validate the OIDC provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
5. Optional password for the truststore. Can be provided as a plaintext value (as shown) or via a Secret. Plaintext values are not recommended for production.
6. Optional truststore used to validate the schema registry’s TLS certificate. Configuration format and source options are the same as for the OIDC truststore.