Chapter 14. Setting up client access to a Kafka cluster
After you have deployed Streams for Apache Kafka, you can set up client access to your Kafka cluster. To verify the deployment, you can deploy example producer and consumer clients. Otherwise, create listeners that provide client access within or outside the OpenShift cluster.
14.1. Deploying example clients
Send and receive messages from a Kafka cluster installed on OpenShift.
This procedure describes how to deploy Kafka clients to the OpenShift cluster, then produce and consume messages to test your installation. The clients are deployed using the Kafka container image.
Prerequisites
- The Kafka cluster is available for the clients.
Procedure
Deploy a Kafka producer.
This example deploys a Kafka producer that connects to the Kafka cluster my-cluster. A topic named my-topic is created.

Deploying a Kafka producer to OpenShift
oc run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-38-rhel9:2.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
- Type a message into the console where the producer is running.
- Press Enter to send the message.
Deploy a Kafka consumer.
The consumer should consume messages produced to my-topic in the Kafka cluster my-cluster.

Deploying a Kafka consumer to OpenShift
oc run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-38-rhel9:2.8.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
- Confirm that you see the incoming messages in the consumer console.
14.2. Configuring listeners to connect to Kafka
Use listeners to enable client connections to Kafka. Streams for Apache Kafka provides a generic GenericKafkaListener schema with properties to configure listeners through the Kafka resource.

When configuring a Kafka cluster, you specify a listener type based on your requirements, environment, and infrastructure. Services, routes, load balancers, and ingresses for clients to connect to a cluster are created according to the listener type.
Internal and external listener types are supported.
- Internal listeners
  Use internal listener types to connect clients within an OpenShift cluster.
  - internal to connect within the same OpenShift cluster
  - cluster-ip to expose Kafka using per-broker ClusterIP services

  Internal listeners use a headless service and the DNS names assigned to the broker pods. By default, they do not use the OpenShift service DNS domain (typically .cluster.local). However, you can customize this configuration using the useServiceDnsDomain property. Consider using a cluster-ip type listener if routing through the headless service isn’t feasible or if you require a custom access mechanism, such as when integrating with specific Ingress controllers or the OpenShift Gateway API.
-
- External listeners
  Use external listener types to connect clients outside an OpenShift cluster.
  - nodeport to use ports on OpenShift nodes
  - loadbalancer to use loadbalancer services
  - ingress to use Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes (Kubernetes only)
  - route to use OpenShift Route and the default HAProxy router (OpenShift only)

  External listeners handle access to a Kafka cluster from networks that require different connection mechanisms. For example, loadbalancers might not be suitable for certain infrastructure, such as bare metal, where node ports provide a better option.
Do not use the built-in ingress controller on OpenShift; use the route type instead. The Ingress NGINX Controller is only intended for use on Kubernetes. The route type is only supported on OpenShift.
Each listener is defined as an array in the Kafka resource.
Example listener configuration
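The listeners property of a Kafka resource might combine internal and external listeners as in the following illustrative sketch; the listener names, ports, and types are examples you would adapt to your environment, not requirements:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # internal listener for clients inside the OpenShift cluster
      - name: plain
        port: 9092
        type: internal
        tls: false
      # internal listener with TLS encryption enabled
      - name: tls
        port: 9093
        type: internal
        tls: true
      # external listener exposing the cluster through OpenShift routes
      - name: external1
        port: 9094
        type: route
        tls: true
    # ...
```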
You can configure as many listeners as required, as long as their names and ports are unique. You can also configure listeners for secure connection using authentication.
Depending on the configuration, scaling your Kafka cluster while using external listeners might trigger a rolling update of all Kafka brokers.
14.3. Listener naming conventions
From the listener configuration, the resulting listener bootstrap and per-broker service names are structured according to the following naming conventions:
Listener type | Bootstrap service name | Per-broker service name
---|---|---
internal | <cluster_name>-kafka-bootstrap | Not applicable
external | <cluster_name>-kafka-<listener_name>-bootstrap | <cluster_name>-kafka-<listener_name>-<idx>
For example, my-cluster-kafka-bootstrap, my-cluster-kafka-external1-bootstrap, and my-cluster-kafka-external1-0. The names are assigned to the services, routes, load balancers, and ingresses created through the listener configuration.
You can use certain backwards-compatible names and port numbers to transition listeners initially configured under the retired KafkaListeners schema. The resulting external listener naming convention varies slightly. The specific combinations of listener name and port configuration values in the following table are backwards compatible.
Listener name | Port | Bootstrap service name | Per-broker service name
---|---|---|---
plain | 9092 | <cluster_name>-kafka-bootstrap | Not applicable
tls | 9093 | <cluster_name>-kafka-bootstrap | Not applicable
external | 9094 | <cluster_name>-kafka-bootstrap | <cluster_name>-kafka-bootstrap-<idx>
14.4. Accessing Kafka using node ports
Use node ports to access a Kafka cluster from an external client outside the OpenShift cluster.
To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption.
The procedure shows basic nodeport listener configuration. You can use listener properties to enable TLS encryption (tls) and specify a client authentication mechanism (authentication). Add additional configuration using configuration properties. For example, you can use the following configuration properties with nodeport listeners:
preferredNodePortAddressType
- Specifies the first address type that’s checked as the node address.
externalTrafficPolicy
- Specifies whether the service routes external traffic to node-local or cluster-wide endpoints.
nodePort
- Overrides the assigned node port numbers for the bootstrap and broker services.
For more information on listener configuration, see the GenericKafkaListener schema reference.
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster. The name of the listener is external4.
Procedure
Configure a Kafka resource with an external listener set to the nodeport type. For example:
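A minimal sketch of such a configuration; the listener name external4 matches this procedure, and the remaining values are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: external4
        port: 9094
        type: nodeport
        tls: true
        # authentication and further configuration properties go here
    # ...
```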
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret my-cluster-cluster-ca-cert.

NodePort type services are created for each Kafka broker, as well as an external bootstrap service.

Node port services created for the bootstrap and brokers
NAME                                   TYPE       CLUSTER-IP       PORT(S)
my-cluster-kafka-external4-0           NodePort   172.30.55.13     9094:31789/TCP
my-cluster-kafka-external4-1           NodePort   172.30.250.248   9094:30028/TCP
my-cluster-kafka-external4-2           NodePort   172.30.115.81    9094:32650/TCP
my-cluster-kafka-external4-bootstrap   NodePort   172.30.30.23     9094:32650/TCP
The bootstrap address used for client connection is propagated to the status of the Kafka resource.

Example status for the bootstrap address
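The status fragment below is an illustrative sketch; the node address and port are taken from the earlier example output and will differ in your cluster:

```yaml
status:
  # ...
  listeners:
    - addresses:
        - host: ip-10-0-224-199.us-west-2.compute.internal
          port: 32650
      bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650'
      name: external4
  # ...
```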
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external4")].bootstrapServers}{"\n"}'

ip-10-0-224-199.us-west-2.compute.internal:32650
Extract the cluster CA certificate.

oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.

- Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, ip-10-0-224-199.us-west-2.compute.internal:32650.
- Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.

If you enabled a client authentication mechanism, you also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
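For a Java-based Kafka client, the connection settings above might translate into a configuration like the following sketch; the truststore path and password are placeholders, and the keytool command in the comment is one possible way to import the extracted certificate:

```properties
bootstrap.servers=ip-10-0-224-199.us-west-2.compute.internal:32650
security.protocol=SSL
# Truststore containing the extracted cluster CA certificate, created for example with:
#   keytool -importcert -trustcacerts -alias cluster-ca -file ca.crt -keystore client.truststore.jks
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore_password>
```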
14.5. Accessing Kafka using loadbalancers
Use loadbalancers to access a Kafka cluster from an external client outside the OpenShift cluster.
To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption.
The procedure shows basic loadbalancer listener configuration. You can use listener properties to enable TLS encryption (tls) and specify a client authentication mechanism (authentication). Add additional configuration using configuration properties. For example, you can use the following configuration properties with loadbalancer listeners:
loadBalancerSourceRanges
- Restricts traffic to a specified list of CIDR (Classless Inter-Domain Routing) ranges.
externalTrafficPolicy
- Specifies whether the service routes external traffic to node-local or cluster-wide endpoints.
loadBalancerIP
- Requests a specific IP address when creating a loadbalancer.
For more information on listener configuration, see the GenericKafkaListener schema reference.
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster. The name of the listener is external3.
Procedure
Configure a Kafka resource with an external listener set to the loadbalancer type. For example:
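A minimal sketch of such a configuration; the listener name external3 matches this procedure, and the remaining values, including the commented CIDR range, are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: external3
        port: 9094
        type: loadbalancer
        tls: true
        # configuration:
        #   loadBalancerSourceRanges:
        #     - 10.0.0.0/8
    # ...
```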
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret my-cluster-cluster-ca-cert.

loadbalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service.

Loadbalancer services and loadbalancers created for the bootstrap and brokers
The bootstrap address used for client connection is propagated to the status of the Kafka resource.

Example status for the bootstrap address
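The status fragment below is an illustrative sketch; the DNS name matches the example retrieved later in this procedure and will differ in your cluster:

```yaml
status:
  # ...
  listeners:
    - addresses:
        - host: >-
            a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com
          port: 9094
      bootstrapServers: >-
        a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094
      name: external3
  # ...
```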
The DNS addresses used for client connection are propagated to the status of each loadbalancer service.

Example status for the bootstrap loadbalancer
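The service status fragment below is an illustrative sketch using the same DNS name; on some infrastructure the ingress entry contains an IP address instead of a hostname:

```yaml
status:
  loadBalancer:
    ingress:
      - hostname: >-
          a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com
# ...
```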
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external3")].bootstrapServers}{"\n"}'

a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094
Extract the cluster CA certificate.

oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.

- Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094.
- Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.

If you enabled a client authentication mechanism, you also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
14.6. Accessing Kafka using OpenShift routes
Use OpenShift routes to access a Kafka cluster from clients outside the OpenShift cluster.
To be able to use routes, add configuration for a route type listener in the Kafka custom resource. When applied, the configuration creates a dedicated route and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap route, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific routes and services.
To connect to a broker, you specify a hostname for the route bootstrap address, as well as the certificate used for TLS encryption. For access using routes, the port is always 443.
An OpenShift route address comprises the Kafka cluster name, the listener name, the project name, and the domain of the router. For example, my-cluster-kafka-external1-bootstrap-my-project.domain.com (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>.<domain>). Each DNS label (between periods “.”) must not exceed 63 characters, and the total length of the address must not exceed 255 characters.
The procedure shows basic listener configuration. TLS encryption (tls) must be enabled. You can also specify a client authentication mechanism (authentication). Add additional configuration using configuration properties. For example, you can use the host configuration property with route listeners to specify the hostnames used by the bootstrap and per-broker services.
For more information on listener configuration, see the GenericKafkaListener schema reference.
TLS passthrough
TLS passthrough is enabled for routes created by Streams for Apache Kafka. Kafka uses a binary protocol over TCP, but routes are designed to work with the HTTP protocol. To be able to route TCP traffic through routes, Streams for Apache Kafka uses TLS passthrough with Server Name Indication (SNI).

SNI helps with identifying and passing connections to Kafka brokers. In passthrough mode, TLS encryption is always used. Because the connection passes through to the brokers, the listeners use TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property.
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster. The name of the listener is external1.
Procedure
Configure a Kafka resource with an external listener set to the route type. For example:
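A minimal sketch of such a configuration; the listener name external1 matches this procedure, and the remaining values are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: external1
        port: 9094
        type: route
        tls: true  # TLS encryption must be enabled for route listeners
    # ...
```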
1. For route type listeners, TLS encryption must be enabled (true).
Create or update the resource.
oc apply -f <kafka_configuration_file>
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret my-cluster-cluster-ca-cert.

ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service.

A route is also created for each service, with a DNS address (host/port) to expose them using the default OpenShift HAProxy router. The routes are preconfigured with TLS passthrough.
Routes created for the bootstrap and brokers
NAME                                   HOST/PORT                                                    SERVICES                               PORT   TERMINATION
my-cluster-kafka-external1-0           my-cluster-kafka-external1-0-my-project.router.com           my-cluster-kafka-external1-0           9094   passthrough
my-cluster-kafka-external1-1           my-cluster-kafka-external1-1-my-project.router.com           my-cluster-kafka-external1-1           9094   passthrough
my-cluster-kafka-external1-2           my-cluster-kafka-external1-2-my-project.router.com           my-cluster-kafka-external1-2           9094   passthrough
my-cluster-kafka-external1-bootstrap   my-cluster-kafka-external1-bootstrap-my-project.router.com   my-cluster-kafka-external1-bootstrap   9094   passthrough
The DNS addresses used for client connection are propagated to the status of each route.

Example status for the bootstrap route
status:
  ingress:
    - host: >-
        my-cluster-kafka-external1-bootstrap-my-project.router.com
# ...
Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client tool.

openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The server name is the Server Name Indication (SNI) for passing the connection to the broker.
If the connection is successful, the certificates for the broker are returned.
Certificates for the broker
Certificate chain
 0 s:O = io.strimzi, CN = my-cluster-kafka
   i:O = io.strimzi, CN = cluster-ca v0
Retrieve the address of the bootstrap service from the status of the Kafka resource.

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external1")].bootstrapServers}{"\n"}'

my-cluster-kafka-external1-bootstrap-my-project.router.com:443
The address comprises the Kafka cluster name, the listener name, the project name, and the domain of the router (router.com in this example).

Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.
- Specify the address for the bootstrap service and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster.
Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
If you enabled a client authentication mechanism, you will also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
14.7. Discovering connection details for clients
Service discovery makes it easier for client applications running in the same OpenShift cluster as Streams for Apache Kafka to interact with a Kafka cluster.
A service discovery label and annotation are created for the following services:
- Internal Kafka bootstrap service
- Kafka Bridge service
- Service discovery label
  The service discovery label, strimzi.io/discovery, is set to true for Service resources to make them discoverable for client connections.
- Service discovery annotation
  The service discovery annotation provides connection details in JSON format for each service for client applications to use to establish connections.
Example internal Kafka bootstrap service
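The sketch below illustrates what such a service might look like; the ports, protocols, and auth values in the annotation depend on your listener configuration and are examples, not fixed values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-cluster-kafka-bootstrap
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/discovery: "true"
  annotations:
    strimzi.io/discovery: |-
      [ {
        "port" : 9092,
        "tls" : false,
        "protocol" : "kafka",
        "auth" : "scram-sha-512"
      }, {
        "port" : 9093,
        "tls" : true,
        "protocol" : "kafka",
        "auth" : "tls"
      } ]
spec:
  # ...
```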
Example Kafka Bridge service
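A corresponding illustrative sketch for a Kafka Bridge service; the bridge name my-bridge and the annotation values are assumptions for the example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-bridge-bridge-service
  labels:
    strimzi.io/cluster: my-bridge
    strimzi.io/discovery: "true"
  annotations:
    strimzi.io/discovery: |-
      [ {
        "port" : 8080,
        "tls" : false,
        "auth" : "none",
        "protocol" : "http"
      } ]
spec:
  # ...
```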
Find services by specifying the discovery label when fetching services from the command line or a corresponding API call.
Returning services using the discovery label
oc get service -l strimzi.io/discovery=true
Connection details are returned when retrieving the service discovery label.