Chapter 5. Accessing Kafka outside of the OpenShift cluster
Use an external listener to expose your AMQ Streams Kafka cluster to a client outside an OpenShift environment.
Specify the connection type to expose Kafka in the external listener configuration.
- nodeport uses a NodePort type Service
- loadbalancer uses a LoadBalancer type Service
- ingress uses a Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes
- route uses OpenShift Routes and the HAProxy router
For more information on listener configuration, see GenericKafkaListener schema reference.
If you want to know more about the pros and cons of each connection type, refer to Accessing Apache Kafka in Strimzi.
Note that the route type is only supported on OpenShift.
5.1. Accessing Kafka using node ports
This procedure describes how to access an AMQ Streams Kafka cluster from an external client using node ports.
To connect to a broker, you need a hostname and port number for the Kafka bootstrap address, as well as the certificate used for authentication.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the nodeport type.
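A minimal sketch of such a configuration, assuming the cluster name my-cluster and the listener name external used later in this procedure:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # External listener exposed through a NodePort service on each node
      - name: external
        port: 9094
        type: nodeport
        tls: true
    # ...
```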
Create or update the resource.
oc apply -f <kafka_configuration_file>

NodePort type services are created for each Kafka broker, as well as an external bootstrap service. The bootstrap service routes external traffic to the Kafka brokers. Node addresses used for connection are propagated to the status of the Kafka custom resource. The cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret <cluster_name>-cluster-ca-cert.
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}'

For example:

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'

If TLS encryption is enabled, extract the public certificate of the broker certification authority.

oc get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you also need to configure it in your client.
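To illustrate the client side, the SSL settings for a Java-based Kafka client might look like the following sketch. The bootstrap address, truststore path, and password are placeholder values; building the truststore from ca.crt with keytool is one common approach, not the only one.

```properties
# Bootstrap address taken from the status of the Kafka resource (placeholder)
bootstrap.servers=<node_address>:<node_port>
security.protocol=SSL
# Truststore built from the extracted cluster CA certificate, for example:
#   keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore_password>
```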
5.2. Accessing Kafka using loadbalancers
This procedure describes how to access an AMQ Streams Kafka cluster from an external client using loadbalancers.
To connect to a broker, you need the address of the bootstrap loadbalancer, as well as the certificate used for TLS encryption.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the loadbalancer type.
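A minimal sketch of such a configuration, assuming the cluster name my-cluster and the listener name external used later in this procedure:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # External listener exposed through a LoadBalancer service per broker
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
    # ...
```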
Create or update the resource.
oc apply -f <kafka_configuration_file>

LoadBalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service. The bootstrap service routes external traffic to all Kafka brokers. DNS names and IP addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret <cluster_name>-cluster-ca-cert.
Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}'

For example:

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'

If TLS encryption is enabled, extract the public certificate of the broker certification authority.

oc get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you also need to configure it in your client.
5.3. Accessing Kafka using an Ingress NGINX Controller for Kubernetes
Use an Ingress NGINX Controller for Kubernetes to access an AMQ Streams Kafka cluster from clients outside the OpenShift cluster.
To be able to use an Ingress NGINX Controller for Kubernetes, add configuration for an ingress type listener in the Kafka custom resource. When applied, the configuration creates a dedicated ingress and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap ingress, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific ingresses and services.
To connect to a broker, you specify a hostname for the ingress bootstrap address, as well as the TLS certificate. Authentication is optional.
For access using an ingress, the port used in the Kafka client is typically 443.
TLS passthrough
Make sure that you enable TLS passthrough in your Ingress NGINX Controller for Kubernetes deployment. Kafka uses a binary protocol over TCP, but the Ingress NGINX Controller is designed to work with the HTTP protocol. To be able to route TCP traffic through ingresses, AMQ Streams uses TLS passthrough with Server Name Indication (SNI).
SNI helps identify and pass the connection to the Kafka brokers. In passthrough mode, TLS encryption is always used. Because the connection passes through to the brokers, the listeners use the TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property.
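For illustration, a listener configured with its own certificate through the brokerCertChainAndKey property might look like the following sketch. The secret name and key names are placeholders, and the secret must exist in the same namespace as the Kafka cluster.

```yaml
listeners:
  - name: external
    port: 9094
    type: ingress
    tls: true
    configuration:
      # Custom server certificate and private key for this listener,
      # stored in an existing Secret (placeholder names)
      brokerCertChainAndKey:
        secretName: my-listener-certificate
        certificate: tls.crt
        key: tls.key
      # ... bootstrap and broker host configuration ...
```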
For more information about enabling TLS passthrough, see the TLS passthrough documentation.
Prerequisites
- An Ingress NGINX Controller for Kubernetes is running with TLS passthrough enabled
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster.
Procedure
Configure a Kafka resource with an external listener set to the ingress type.
Specify an ingress hostname for the bootstrap service and each of the Kafka brokers in the Kafka cluster. Add any hostname to the bootstrap and broker-<index> prefixes that identify the bootstrap and brokers.
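The listener configuration might look like the following sketch, assuming the cluster name my-cluster, the listener name external, and the ingress hostnames shown later in this procedure:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9094
        type: ingress
        tls: true
        configuration:
          # Ingress class of the Ingress NGINX Controller deployment
          class: nginx
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
            - broker: 0
              host: broker-0.myingress.com
            - broker: 1
              host: broker-1.myingress.com
            - broker: 2
              host: broker-2.myingress.com
    # ...
```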
Create or update the resource.
oc apply -f <kafka_configuration_file>

A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret my-cluster-cluster-ca-cert. ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. An ingress is also created for each service, with a DNS address to expose them using the Ingress NGINX Controller for Kubernetes.
Ingresses created for the bootstrap and brokers
NAME                         CLASS   HOSTS                     ADDRESS                PORTS
my-cluster-kafka-0           nginx   broker-0.myingress.com    external.ingress.com   80,443
my-cluster-kafka-1           nginx   broker-1.myingress.com    external.ingress.com   80,443
my-cluster-kafka-2           nginx   broker-2.myingress.com    external.ingress.com   80,443
my-cluster-kafka-bootstrap   nginx   bootstrap.myingress.com   external.ingress.com   80,443

The DNS addresses used for client connection are propagated to the status of each ingress.
Status for the bootstrap ingress
status:
  loadBalancer:
    ingress:
      - hostname: external.ingress.com
  # ...

Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client tool.

openssl s_client -connect broker-0.myingress.com:443 -servername broker-0.myingress.com -showcerts

The server name is the SNI for passing the connection to the broker.
If the connection is successful, the certificates for the broker are returned.
Certificates for the broker
Certificate chain
 0 s:O = io.strimzi, CN = my-cluster-kafka
   i:O = io.strimzi, CN = cluster-ca v0

Extract the cluster CA certificate.

oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Configure your client to connect to the brokers.
- Specify the bootstrap host (from the listener configuration) and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, bootstrap.myingress.com:443.
- Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
- If you enabled any authentication, you also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
5.4. Accessing Kafka using OpenShift routes
Use OpenShift routes to access an AMQ Streams Kafka cluster from clients outside the OpenShift cluster.
To be able to use routes, add configuration for a route type listener in the Kafka custom resource. When applied, the configuration creates a dedicated route and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap route, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific routes and services.
To connect to a broker, you specify a hostname for the route bootstrap address, as well as the certificate used for authentication.
For access using routes, the port is always 443.
An OpenShift route address comprises the name of the Kafka cluster, the name of the listener, and the name of the project it is created in. For example, my-cluster-kafka-listener1-bootstrap-myproject (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Make sure that the whole length of the address does not exceed the maximum limit of 63 characters.
TLS passthrough
TLS passthrough is enabled for routes created by AMQ Streams. Kafka uses a binary protocol over TCP, but routes are designed to work with the HTTP protocol. To be able to route TCP traffic through routes, AMQ Streams uses TLS passthrough with Server Name Indication (SNI).
SNI helps identify and pass the connection to the Kafka brokers. In passthrough mode, TLS encryption is always used. Because the connection passes through to the brokers, the listeners use TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property.
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster.
Procedure
Configure a Kafka resource with an external listener set to the route type.
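As a sketch, a Kafka resource with a route listener might look like the following, assuming the cluster name my-cluster and the listener name listener1 used later in this procedure:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: listener1
        port: 9094
        type: route
        # TLS encryption must be enabled for route type listeners
        tls: true
    # ...
```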
For route type listeners, TLS encryption must be enabled (tls: true).
Create or update the resource.
oc apply -f <kafka_configuration_file>

A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret my-cluster-cluster-ca-cert. ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. A route is also created for each service, with a DNS address (host/port) to expose them using the default OpenShift HAProxy router. The routes are preconfigured with TLS passthrough.
Routes created for the bootstrap and brokers
NAME                                   HOST/PORT                                                    SERVICES                               PORT   TERMINATION
my-cluster-kafka-listener1-0           my-cluster-kafka-listener1-0-my-project.router.com           my-cluster-kafka-listener1-0           9094   passthrough
my-cluster-kafka-listener1-1           my-cluster-kafka-listener1-1-my-project.router.com           my-cluster-kafka-listener1-1           9094   passthrough
my-cluster-kafka-listener1-2           my-cluster-kafka-listener1-2-my-project.router.com           my-cluster-kafka-listener1-2           9094   passthrough
my-cluster-kafka-listener1-bootstrap   my-cluster-kafka-listener1-bootstrap-my-project.router.com   my-cluster-kafka-listener1-bootstrap   9094   passthrough

The DNS addresses used for client connection are propagated to the status of each route.
Example status for the bootstrap route
status:
  ingress:
    - host: >-
        my-cluster-kafka-listener1-bootstrap-my-project.router.com
  # ...

Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client tool.

openssl s_client -connect my-cluster-kafka-listener1-0-my-project.router.com:443 -servername my-cluster-kafka-listener1-0-my-project.router.com -showcerts

The server name is the SNI for passing the connection to the broker.
If the connection is successful, the certificates for the broker are returned.
Certificates for the broker
Certificate chain
 0 s:O = io.strimzi, CN = my-cluster-kafka
   i:O = io.strimzi, CN = cluster-ca v0

Retrieve the address of the bootstrap service from the status of the Kafka resource.

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="listener1")].bootstrapServers}{"\n"}'

my-cluster-kafka-listener1-bootstrap-my-project.router.com:443

The address comprises the cluster name, the listener name, the project name, and the domain of the router (router.com in this example).
Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Configure your client to connect to the brokers.
- Specify the address of the bootstrap service and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster.
- Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
- If you enabled any authentication, you also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.