Chapter 2. Connecting the Streams for Apache Kafka Console to a Kafka cluster
Deploy the Streams for Apache Kafka Console to the same OpenShift cluster as the Kafka cluster managed by Streams for Apache Kafka. Use the installation files provided with the Streams for Apache Kafka Console.
For each Kafka cluster, the configuration of the Kafka resource used to install the cluster requires the following:

- Sufficient authorization for the console to connect to the cluster.
- Prometheus enabled and able to scrape metrics from the cluster.
- Metrics configuration (through a ConfigMap) for exporting metrics in a format suitable for Prometheus.
The Streams for Apache Kafka Console requires a Kafka user, configured as a KafkaUser custom resource, for the console to access the cluster as an authenticated and authorized user. When you configure the KafkaUser authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration:
- KafkaUser.spec.authentication matches Kafka.spec.kafka.listeners[*].authentication
- KafkaUser.spec.authorization matches Kafka.spec.kafka.authorization
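As a sketch of this matching requirement, the fragments below pair a SCRAM-SHA-512 listener in the Kafka resource with a KafkaUser that uses the same authentication type (values taken from the examples later in this chapter):

```yaml
# Kafka.spec.kafka.listeners[*].authentication in the Kafka resource
listeners:
  - name: route1
    port: 9094
    tls: true
    type: route
    authentication:
      type: scram-sha-512   # must match the KafkaUser below
---
# KafkaUser.spec.authentication must use the same type
spec:
  authentication:
    type: scram-sha-512
```

The same principle applies to authorization: simple authorization in Kafka.spec.kafka.authorization pairs with type: simple ACLs in KafkaUser.spec.authorization.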
Prometheus must be installed and configured to scrape metrics from Kubernetes and Kafka clusters and populate the metrics graphs in the console.
Prerequisites
- Installation requires an OpenShift user with the cluster-admin role, such as system:admin.
- An OpenShift 4.12 to 4.15 cluster.
- A Kafka cluster managed by Streams for Apache Kafka running on the OpenShift cluster.
- The Prometheus Operator, which must be a separate operator from the one deployed for OpenShift monitoring.
- The oc command-line tool is installed and configured to connect to the OpenShift cluster.
- Secret values for session management and authentication within the console. You can use the OpenSSL TLS management tool to generate the values as follows:
SESSION_SECRET=$(LC_CTYPE=C openssl rand -base64 32)
echo "Generated SESSION_SECRET: $SESSION_SECRET"

NEXTAUTH_SECRET=$(LC_CTYPE=C openssl rand -base64 32)
echo "Generated NEXTAUTH_SECRET: $NEXTAUTH_SECRET"
Use openssl help for command-line descriptions of the options used.
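If you want to sanity-check a generated value, note that base64-encoding 32 random bytes always produces a 44-character string (a minimal sketch, assuming openssl is on your PATH):

```shell
# Generate a secret and check its length; base64 output for 32 random
# bytes is always 44 characters.
SESSION_SECRET=$(LC_CTYPE=C openssl rand -base64 32)
echo "${#SESSION_SECRET}"   # 44
```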
In addition to the files to install the console, pre-configured files to install the Streams for Apache Kafka Operator, the Prometheus Operator, a Prometheus instance, and a Kafka cluster are also included with the Streams for Apache Kafka Console installation artifacts. In this procedure, we assume the operators are installed. The installation files offer the quickest way to set up and try the console, though you can use your own deployments of Streams for Apache Kafka and Prometheus.
Procedure
Download and extract the Streams for Apache Kafka Console installation artifacts.
The artifacts are included with installation and example files available from the Streams for Apache Kafka software downloads page.
The files contain the deployment configuration required for the console, the Kafka cluster, and Prometheus.
The example Kafka configuration creates a route listener that the console uses to connect to the Kafka cluster. As the console and the Kafka cluster are deployed on the same OpenShift cluster, you can use the internal bootstrap address of the Kafka cluster instead of a route.
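For example, following the Strimzi service-naming convention, the internal bootstrap address can be derived from the cluster name and namespace (a sketch; the port depends on which internal listener you configure):

```shell
# Internal bootstrap service: <cluster>-kafka-bootstrap.<namespace>.svc
CLUSTER_NAME=console-kafka
NAMESPACE=my-project
PORT=9093   # conventional internal TLS listener port; 9092 for plain
BOOTSTRAP="${CLUSTER_NAME}-kafka-bootstrap.${NAMESPACE}.svc:${PORT}"
echo "$BOOTSTRAP"   # console-kafka-kafka-bootstrap.my-project.svc:9093
```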
Create a Prometheus instance with the configuration required by the console by applying the Prometheus installation files:
Edit ${NAMESPACE} in the console-prometheus-server.clusterrolebinding.yaml file to use the namespace that the Prometheus instance is going to be installed into:

sed -i 's/${NAMESPACE}/'"my-project"'/g' <resource_path>/console-prometheus-server.clusterrolebinding.yaml

For example, in this procedure we are installing to the my-project namespace. The configuration binds the role for Prometheus with its service account.

Create the Prometheus instance by applying the installation files in this order:
# Prometheus security resources
oc apply -n my-project -f <resource_path>/prometheus/console-prometheus-server.clusterrole.yaml
oc apply -n my-project -f <resource_path>/prometheus/console-prometheus-server.serviceaccount.yaml
oc apply -n my-project -f <resource_path>/prometheus/console-prometheus-server.clusterrolebinding.yaml

# Prometheus PodMonitor and Kubernetes scrape configurations
oc apply -n my-project -f <resource_path>/prometheus/kafka-resources.podmonitor.yaml
oc apply -n my-project -f <resource_path>/prometheus/kubernetes-scrape-configs.secret.yaml

# Prometheus instance
oc apply -n my-project -f <resource_path>/prometheus/console-prometheus.prometheus.yaml
The instance is named console-prometheus, and the URL of the service for connecting the console is http://prometheus-operated.my-project.svc.cluster.local:9090, with my-project taken from the namespace name.

Note: No route is deployed for the console-prometheus instance, as it does not need to be accessible from outside the OpenShift cluster.
Create and deploy a Kafka cluster.
If you are using the console with a Kafka cluster operating in KRaft mode, update the metrics configuration for the cluster in the console-kafka-metrics.configmap.yaml file:

- Uncomment the KRaft-related metrics configuration.
- Comment out the ZooKeeper-related metrics configuration.

This file contains the metrics configuration required by the console.
Edit the KafkaUser custom resource in the console-kafka-user1.kafkauser.yaml file by adding ACL types to provide authorized access for the console to the Kafka cluster.

At a minimum, the Kafka user requires the following ACL rules:

- Describe and DescribeConfigs permissions for the cluster resource
- Read, Describe, and DescribeConfigs permissions for all topic resources
- Read and Describe permissions for all group resources

Example user authorization settings
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-cluster-user1
  labels:
    strimzi.io/cluster: console-kafka
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: cluster
          name: ""
          patternType: literal
        operations:
          - Describe
          - DescribeConfigs
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations:
          - Read
          - Describe
          - DescribeConfigs
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations:
          - Read
          - Describe
Edit the console-kafka.kafka.yaml file to replace the placeholders:

sed -i 's/type: ${LISTENER_TYPE}/type: route/g' console-kafka.kafka.yaml
sed -i 's/\${CLUSTER_DOMAIN}/'"<my_router_base_domain>"'/g' console-kafka.kafka.yaml

This file contains the Kafka custom resource configuration to create the Kafka cluster.

These commands do the following:

- Replace type: ${LISTENER_TYPE} with type: route. While this example uses a route type, you can replace ${LISTENER_TYPE} with any valid listener type for your deployment.
- Replace ${CLUSTER_DOMAIN} with the base domain required to specify the route listener hosts used by the bootstrap and per-broker services. By default, route listener hosts are assigned automatically by OpenShift, but you can override the assigned hosts by specifying hosts in the listener configuration.
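To see what the substitutions do, you can run the same sed expressions against a small stand-in file before editing the real installation file (a sketch; the fragment and domain below are illustrative, not the full Kafka resource):

```shell
# Stand-in fragment containing both placeholders
cat > /tmp/console-kafka-fragment.yaml <<'EOF'
listeners:
  - name: route1
    port: 9094
    tls: true
    type: ${LISTENER_TYPE}
    configuration:
      bootstrap:
        host: bootstrap.${CLUSTER_DOMAIN}
EOF

# Same substitutions as in the procedure, with an illustrative domain
sed -i 's/type: ${LISTENER_TYPE}/type: route/g' /tmp/console-kafka-fragment.yaml
sed -i 's/\${CLUSTER_DOMAIN}/'"apps.example.com"'/g' /tmp/console-kafka-fragment.yaml

# No unresolved ${...} placeholders should remain
cat /tmp/console-kafka-fragment.yaml
```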
Alternatively, you can copy the example configuration to your own Kafka deployment.
Create the Kafka cluster by applying the installation files in this order:
# Metrics configuration
oc apply -n my-project -f <resource_path>/console-kafka-metrics.configmap.yaml

# Create the cluster
oc apply -n my-project -f <resource_path>/console-kafka.kafka.yaml

# Create a user for the cluster
oc apply -n my-project -f <resource_path>/console-kafka-user1.kafkauser.yaml
If you are using your own Kafka cluster, apply the updated Kafka resource configuration instead of console-kafka.kafka.yaml.

The installation files create a Kafka cluster, as well as the Kafka user and metrics configuration the console requires to connect to the cluster. A Kafka user and metrics configuration are required for each Kafka cluster you want to monitor through the console, and each Kafka user requires a unique name.
If the Kafka cluster is in a different namespace from your Prometheus instance, modify the kafka-resources.podmonitor.yaml file to include a namespaceSelector:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kafka-resources
  labels:
    app: console-kafka-monitor
spec:
  namespaceSelector:
    matchNames:
      - <kafka_namespace>
  # ...
This ensures that Prometheus can monitor the Kafka pods. Replace <kafka_namespace> with the actual namespace where your Kafka cluster is deployed.
Check the status of the deployment:
oc get pods -n <my_console_namespace>
Output shows the operators and cluster readiness
NAME                       READY   STATUS    RESTARTS
strimzi-cluster-operator   1/1     Running   0
console-kafka-kafka-0      1/1     Running   0
console-kafka-kafka-1      1/1     Running   0
console-kafka-kafka-2      1/1     Running   0
prometheus-operator-...    1/1     Running   0
console-prometheus         1/1     Running   0
Here, console-kafka is the name of the cluster, and a pod ID identifies each pod created. With the default deployment, you install three Kafka pods.

READY shows the number of replicas that are ready out of the number expected. The deployment is successful when STATUS displays Running.
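As a quick scripted check, you can count the pods that are not yet Running (a sketch using simulated output; in practice, pipe in the real oc get pods -n my-project output):

```shell
# Simulated `oc get pods` output; replace with the real command output
PODS='NAME READY STATUS RESTARTS
strimzi-cluster-operator 1/1 Running 0
console-kafka-kafka-0 1/1 Running 0
console-kafka-kafka-1 1/1 Running 0
console-kafka-kafka-2 1/1 Pending 0'

# Skip the header row and count pods whose STATUS is not Running
NOT_RUNNING=$(echo "$PODS" | awk 'NR > 1 && $3 != "Running" {n++} END {print n+0}')
echo "$NOT_RUNNING pod(s) not yet Running"   # 1 pod(s) not yet Running
```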
Install the Streams for Apache Kafka Console.
Edit the console-server.clusterrolebinding.yaml file to use the namespace that the console instance is going to be installed into:

sed -i 's/${NAMESPACE}/'"my-project"'/g' <resource_path>/console-server.clusterrolebinding.yaml

The configuration binds the role for the console with its service account.
Install the console user interface and a route to the interface by applying the installation files in this order:
# Console security resources
oc apply -n my-project -f <resource_path>/console-server.clusterrole.yaml
oc apply -n my-project -f <resource_path>/console-server.serviceaccount.yaml
oc apply -n my-project -f <resource_path>/console-server.clusterrolebinding.yaml

# Console user interface service
oc apply -n my-project -f <resource_path>/console-ui.service.yaml

# Console route
oc apply -n my-project -f <resource_path>/console-ui.route.yaml
The install creates the role, role binding, service account, services, and route necessary to run the console user interface.
Create a Secret called console-ui-secrets containing two secret values (as described in the prerequisites) for session management and authentication within the console:

oc create secret generic console-ui-secrets -n my-project \
  --from-literal=SESSION_SECRET="<session_secret_value>" \
  --from-literal=NEXTAUTH_SECRET="<next_secret_value>"
The secrets are mounted as environment variables when the console is deployed.
Get the hostname for the route created for the console user interface:
oc get route console-ui-route -n my-project -o jsonpath='{.spec.host}'
The hostname is required for access to the console user interface.
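The hostname feeds directly into the console configuration; for example, the NEXTAUTH_URL value set later is simply the route hostname with an https:// scheme (a sketch with an illustrative hostname):

```shell
# Illustrative route hostname; in practice use the value returned by
# `oc get route console-ui-route -n my-project -o jsonpath='{.spec.host}'`
ROUTE_HOSTNAME=console-ui-route-my-project.router.com
NEXTAUTH_URL="https://${ROUTE_HOSTNAME}"
echo "$NEXTAUTH_URL"   # https://console-ui-route-my-project.router.com
```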
Edit the console.deployment.yaml file to replace the placeholders:

sed -i 's/${CONSOLE_HOSTNAME}/'"<route_hostname>"'/g' console.deployment.yaml
sed -i 's/${NAMESPACE}/'"my-project"'/g' console.deployment.yaml

These commands do the following:

- Replace https://${CONSOLE_HOSTNAME} with https://<route_hostname>, which is the route used to access the console user interface.
- Replace ${NAMESPACE} with the my-project namespace name in http://prometheus-operated.${NAMESPACE}.svc.cluster.local:9090, which is the URL of the Prometheus instance used by the console.
If you are using your own Kafka cluster, ensure that the environment variables are configured with the correct values. The values enable the console to connect with the cluster and retrieve metrics.
Install the console:
oc apply -n my-project -f <resource_path>/console.deployment.yaml
Output shows the console readiness
NAME                       READY   STATUS    RESTARTS
strimzi-cluster-operator   1/1     Running   0
console-kafka-kafka-0      1/1     Running   0
console-kafka-kafka-1      1/1     Running   0
console-kafka-kafka-2      1/1     Running   0
prometheus-operator-...    1/1     Running   0
console-prometheus         1/1     Running   0
console-...                2/2     Running   0
Adding the example configuration to your own Kafka cluster
If you already have a Kafka cluster installed, you can update the Kafka resource with the required configuration. When applying the cluster configuration files, use the updated Kafka resource rather than the Kafka resource provided with the Streams for Apache Kafka Console installation files.

The Kafka resource requires the following configuration:
- A route listener to expose the cluster for console connection.
- Prometheus metrics enabled for retrieving metrics on the cluster. Add the same configuration for ZooKeeper if you are using ZooKeeper for metadata management.

The Prometheus metrics configuration must reference the ConfigMap that provides the metrics configuration required by the console. The metrics configuration is provided in the console-cluster-metrics.configmap.yaml resource configuration file.
Example Kafka cluster configuration for console connection
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: console-kafka
  namespace: my-project
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    authorization:
      type: simple
    config:
      allow.everyone.if.no.acl.found: 'true'
      default.replication.factor: 3
      inter.broker.protocol.version: '3.6'
      min.insync.replicas: 2
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    listeners: # (1)
      - name: route1
        port: 9094
        tls: true
        type: route
        authentication:
          type: scram-sha-512
    replicas: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 10Gi
          deleteClaim: false
    metricsConfig: # (2)
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-cluster-metrics
          key: kafka-metrics-config.yml
    version: 3.6.0
  zookeeper:
    replicas: 3
    storage:
      deleteClaim: false
      size: 10Gi
      type: persistent-claim
    metricsConfig: # (3)
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-cluster-metrics
          key: zookeeper-metrics-config.yml
(1) Listener to expose the cluster for console connection. In this example, a route listener is configured.
(2) Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX Exporter.
(3) Add ZooKeeper configuration only if you are using Streams for Apache Kafka with ZooKeeper for cluster management. It is not required in KRaft mode.
Checking the console deployment environment variables
If you are using your own Kafka cluster, check that the deployment configuration for the console has the required environment variables.
The following prefixes determine the scope of the environment variable values:
- KAFKA represents configuration for all Kafka clusters.
- CONSOLE_KAFKA_<UNIQUE_NAME_ID_FOR_CLUSTER> represents configuration for each specific cluster.
Example console deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
spec:
  replicas: 1
  # ...
  template:
    metadata:
      labels:
        app: console
    spec:
      # ...
      containers:
        - name: console-api
          # ...
          env:
            - name: KAFKA_SECURITY_PROTOCOL # (1)
              value: SASL_SSL
            - name: KAFKA_SASL_MECHANISM # (2)
              value: SCRAM-SHA-512
            - name: CONSOLE_KAFKA_CLUSTER1 # (3)
              value: my-project/console-kafka
            - name: CONSOLE_KAFKA_CLUSTER1_BOOTSTRAP_SERVERS # (4)
              value: console-kafka-route1-bootstrap-my-project.router.com:443
            - name: CONSOLE_KAFKA_CLUSTER1_SASL_JAAS_CONFIG # (5)
              valueFrom:
                secretKeyRef:
                  name: console-cluster-user1
                  key: sasl.jaas.config
        - name: console-ui
          # ...
          env:
            - name: NEXTAUTH_SECRET # (6)
              valueFrom:
                secretKeyRef:
                  name: console-ui-secrets
                  key: NEXTAUTH_SECRET
            - name: SESSION_SECRET # (7)
              valueFrom:
                secretKeyRef:
                  name: console-ui-secrets
                  key: SESSION_SECRET
            - name: NEXTAUTH_URL # (8)
              value: 'https://console-ui-route-my-project.router.com'
            - name: BACKEND_URL # (9)
              value: 'http://127.0.0.1:8080'
            - name: CONSOLE_METRICS_PROMETHEUS_URL # (10)
              value: 'http://prometheus-operated.my-project.svc.cluster.local:9090'
(1) The security protocol used for communication with Kafka brokers.
(2) The SASL mechanism for console (client) authentication to the Kafka brokers.
(3) The namespace and name specified for the cluster in its Kafka resource configuration.
(4) The host and port pair of the bootstrap broker address used to discover and connect to all brokers in the Kafka cluster. In this example, a route listener address configured in the Kafka resource is used.
(5) Authentication credentials for the Kafka user, mounted as a Secret to the Streams for Apache Kafka Console deployment. The sasl.jaas.config property within the secret contains the SASL credentials, such as usernames and passwords.
(6) Secret for authentication within the console.
(7) Secret for session management within the console.
(8) The URL to connect to the Streams for Apache Kafka Console user interface and for users to access the console.
(9) The backend server that the console user interface communicates with for data retrieval.
(10) The URL to connect to the Prometheus instance, which includes the namespace (my-project) of the Kafka resource.