Chapter 5. Accessing Kafka outside of the OpenShift cluster
Use an external listener to expose your AMQ Streams Kafka cluster to a client outside an OpenShift environment.
Specify the connection type to expose Kafka in the external listener configuration.
- nodeport uses a NodePort type Service
- loadbalancer uses a Loadbalancer type Service
- ingress uses Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes
- route uses OpenShift Routes and the HAProxy router
For more information on listener configuration, see GenericKafkaListener schema reference.
If you want to know more about the pros and cons of each connection type, refer to Accessing Apache Kafka in Strimzi.
Note: route is only supported on OpenShift.
5.1. Accessing Kafka using node ports
This procedure describes how to access an AMQ Streams Kafka cluster from an external client using node ports.
To connect to a broker, you need a hostname and port number for the Kafka bootstrap address, as well as the certificate used for authentication.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the nodeport type.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9094
        type: nodeport
        tls: true
        authentication:
          type: tls
    # ...
  # ...
  zookeeper:
    # ...
Create or update the resource.
oc apply -f <kafka_configuration_file>
NodePort type services are created for each Kafka broker, as well as an external bootstrap service. The bootstrap service routes external traffic to the Kafka brokers. Node addresses used for connection are propagated to the status of the Kafka custom resource. The cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret <cluster_name>-cluster-ca-cert.
Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource.
oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}'
For example:
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'
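The jsonpath expression simply filters the listeners array in the resource status by name. As an illustrative sketch (not part of AMQ Streams), the same lookup can be done in Python over the JSON output of oc get kafka my-cluster -o json; the status values below are made-up examples:

```python
import json

# Sample of the relevant part of a Kafka custom resource status, as
# returned by: oc get kafka my-cluster -o json
# The listener names and addresses here are illustrative examples.
kafka_cr = {
    "status": {
        "listeners": [
            {"name": "plain", "bootstrapServers": "my-cluster-kafka-bootstrap:9092"},
            {"name": "external", "bootstrapServers": "192.168.1.10:31234"},
        ]
    }
}

def bootstrap_address(cr: dict, listener_name: str) -> str:
    """Return bootstrapServers for the named listener, mirroring the
    jsonpath filter .status.listeners[?(@.name=="<listener_name>")]."""
    for listener in cr["status"]["listeners"]:
        if listener["name"] == listener_name:
            return listener["bootstrapServers"]
    raise KeyError(f"no listener named {listener_name!r}")

print(bootstrap_address(kafka_cr, "external"))  # 192.168.1.10:31234
```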
If TLS encryption is enabled, extract the public certificate of the broker certification authority.
oc get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure it in your client.
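As a sketch of what the client side might look like, the following Java client properties configure a consumer or producer for TLS using a truststore built from the extracted ca.crt. The bootstrap address, truststore file name, and password are example values, not output of AMQ Streams:

```properties
# Bootstrap address: a node address and nodeport taken from the Kafka
# resource status (example value).
bootstrap.servers=192.168.1.10:31234
security.protocol=SSL
# Truststore containing the extracted cluster CA certificate (ca.crt),
# e.g. imported with the JDK keytool. File name and password are examples.
ssl.truststore.location=/path/to/client.truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=changeit
```

If the listener also enables tls client authentication, the client additionally needs a keystore containing a certificate issued by the clients CA.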
5.2. Accessing Kafka using loadbalancers
This procedure describes how to access an AMQ Streams Kafka cluster from an external client using loadbalancers.
To connect to a broker, you need the address of the bootstrap loadbalancer, as well as the certificate used for TLS encryption.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the loadbalancer type.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
    # ...
  # ...
  zookeeper:
    # ...
Create or update the resource.
oc apply -f <kafka_configuration_file>
loadbalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service. The bootstrap service routes external traffic to all Kafka brokers. DNS names and IP addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the Kafka brokers is also created in the secret <cluster_name>-cluster-ca-cert.
Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource.
oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}'
For example:
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}'
If TLS encryption is enabled, extract the public certificate of the broker certification authority.
oc get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure it in your client.
5.3. Accessing Kafka using an Ingress NGINX Controller for Kubernetes
Use an Ingress NGINX Controller for Kubernetes to access an AMQ Streams Kafka cluster from clients outside the OpenShift cluster.
To be able to use an Ingress NGINX Controller for Kubernetes, add configuration for an ingress type listener in the Kafka custom resource. When applied, the configuration creates a dedicated ingress and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap ingress, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific ingresses and services.
To connect to a broker, you specify a hostname for the ingress bootstrap address, as well as the TLS certificate. Authentication is optional.
For access using an ingress, the port used in the Kafka client is typically 443.
TLS passthrough
Make sure that you enable TLS passthrough in your Ingress NGINX Controller for Kubernetes deployment. Kafka uses a binary protocol over TCP, but the Ingress NGINX Controller for Kubernetes is designed to work with the HTTP protocol. To be able to route TCP traffic through ingresses, AMQ Streams uses TLS passthrough with Server Name Indication (SNI).
SNI helps with identifying and passing the connection to Kafka brokers. In passthrough mode, TLS encryption is always used. Because the connection passes to the brokers, the listeners use the TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property.
For more information about enabling TLS passthrough, see the TLS passthrough documentation.
Prerequisites
- An Ingress NGINX Controller for Kubernetes is running with TLS passthrough enabled
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster.
Procedure
Configure a Kafka resource with an external listener set to the ingress type.
Specify an ingress hostname for the bootstrap service and each of the Kafka brokers in the Kafka cluster. Add any hostname to the bootstrap and broker-<index> prefixes that identify the bootstrap and brokers.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners:
      - name: external
        port: 9094
        type: ingress
        tls: true 1
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
            - broker: 0
              host: broker-0.myingress.com
            - broker: 1
              host: broker-1.myingress.com
            - broker: 2
              host: broker-2.myingress.com
          class: nginx 2
    # ...
  zookeeper:
    # ...
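Listing every broker host by hand scales poorly for larger clusters. As an illustrative helper (not part of AMQ Streams), a short script can generate the per-broker host entries for the listener configuration; the domain and broker count are assumptions:

```python
# Generate the ingress listener host configuration for a cluster with
# n_brokers brokers under a chosen domain. Both values are examples.
domain = "myingress.com"
n_brokers = 3

lines = [
    "configuration:",
    "  bootstrap:",
    f"    host: bootstrap.{domain}",
    "  brokers:",
]
for i in range(n_brokers):
    lines.append(f"    - broker: {i}")
    lines.append(f"      host: broker-{i}.{domain}")

snippet = "\n".join(lines)
print(snippet)
```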
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret my-cluster-cluster-ca-cert. ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. An ingress is also created for each service, with a DNS address to expose them using the Ingress NGINX Controller for Kubernetes.
Ingresses created for the bootstrap and brokers
NAME                         CLASS   HOSTS                     ADDRESS                PORTS
my-cluster-kafka-0           nginx   broker-0.myingress.com    external.ingress.com   80,443
my-cluster-kafka-1           nginx   broker-1.myingress.com    external.ingress.com   80,443
my-cluster-kafka-2           nginx   broker-2.myingress.com    external.ingress.com   80,443
my-cluster-kafka-bootstrap   nginx   bootstrap.myingress.com   external.ingress.com   80,443
The DNS addresses used for client connection are propagated to the status of each ingress.
Status for the bootstrap ingress
status:
  loadBalancer:
    ingress:
      - hostname: external.ingress.com
# ...
Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client tool.
openssl s_client -connect broker-0.myingress.com:443 -servername broker-0.myingress.com -showcerts
The server name is the SNI for passing the connection to the broker.
If the connection is successful, the certificates for the broker are returned.
Certificates for the broker
Certificate chain
 0 s:O = io.strimzi, CN = my-cluster-kafka
   i:O = io.strimzi, CN = cluster-ca v0
Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.
- Specify the bootstrap host (from the listener configuration) and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, bootstrap.myingress.com:443.
- Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
If you enabled any authentication, you will also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.
5.4. Accessing Kafka using OpenShift routes
Use OpenShift routes to access an AMQ Streams Kafka cluster from clients outside the OpenShift cluster.
To be able to use routes, add configuration for a route type listener in the Kafka custom resource. When applied, the configuration creates a dedicated route and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap route, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific routes and services.
To connect to a broker, you specify a hostname for the route bootstrap address, as well as the certificate used for authentication.
For access using routes, the port is always 443.
An OpenShift route address comprises the name of the Kafka cluster, the name of the listener, and the name of the project it is created in. For example, my-cluster-kafka-listener1-bootstrap-myproject (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters.
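To check the limit up front, the naming pattern can be reproduced in a few lines. The helper below is illustrative and not part of AMQ Streams:

```python
def route_name(cluster: str, listener: str, namespace: str) -> str:
    """Build the bootstrap route name following the pattern
    <cluster_name>-kafka-<listener_name>-bootstrap-<namespace>."""
    return f"{cluster}-kafka-{listener}-bootstrap-{namespace}"

def within_limit(name: str, limit: int = 63) -> bool:
    # The route address segment must not exceed 63 characters.
    return len(name) <= limit

name = route_name("my-cluster", "listener1", "myproject")
print(name, within_limit(name))  # my-cluster-kafka-listener1-bootstrap-myproject True
```

Long cluster, listener, or project names push the generated address over the limit, so it is worth checking before applying the configuration.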
TLS passthrough
TLS passthrough is enabled for routes created by AMQ Streams. Kafka uses a binary protocol over TCP, but routes are designed to work with the HTTP protocol. To be able to route TCP traffic through routes, AMQ Streams uses TLS passthrough with Server Name Indication (SNI).
SNI helps with identifying and passing the connection to Kafka brokers. In passthrough mode, TLS encryption is always used. Because the connection passes to the brokers, the listeners use TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property.
Prerequisites
- A running Cluster Operator
In this procedure, the Kafka cluster name is my-cluster.
Procedure
Configure a Kafka resource with an external listener set to the route type.
For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    listeners:
      - name: listener1
        port: 9094
        type: route
        tls: true 1
    # ...
  # ...
  zookeeper:
    # ...
1. For route type listeners, TLS encryption must be enabled (true).
Create or update the resource.
oc apply -f <kafka_configuration_file>
A cluster CA certificate to verify the identity of the Kafka brokers is created in the secret my-cluster-cluster-ca-cert. ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. A route is also created for each service, with a DNS address (host/port) to expose them using the default OpenShift HAProxy router. The routes are preconfigured with TLS passthrough.
Routes created for the bootstrap and brokers
NAME                                   HOST/PORT                                                    SERVICES                               PORT   TERMINATION
my-cluster-kafka-listener1-0           my-cluster-kafka-listener1-0-my-project.router.com           my-cluster-kafka-listener1-0           9094   passthrough
my-cluster-kafka-listener1-1           my-cluster-kafka-listener1-1-my-project.router.com           my-cluster-kafka-listener1-1           9094   passthrough
my-cluster-kafka-listener1-2           my-cluster-kafka-listener1-2-my-project.router.com           my-cluster-kafka-listener1-2           9094   passthrough
my-cluster-kafka-listener1-bootstrap   my-cluster-kafka-listener1-bootstrap-my-project.router.com   my-cluster-kafka-listener1-bootstrap   9094   passthrough
The DNS addresses used for client connection are propagated to the status of each route.
Example status for the bootstrap route
status:
  ingress:
    - host: my-cluster-kafka-listener1-bootstrap-my-project.router.com
# ...
Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client tool.
openssl s_client -connect my-cluster-kafka-listener1-0-my-project.router.com:443 -servername my-cluster-kafka-listener1-0-my-project.router.com -showcerts
The server name is the SNI for passing the connection to the broker.
If the connection is successful, the certificates for the broker are returned.
Certificates for the broker
Certificate chain
 0 s:O = io.strimzi, CN = my-cluster-kafka
   i:O = io.strimzi, CN = cluster-ca v0
Retrieve the address of the bootstrap service from the status of the Kafka resource.
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="listener1")].bootstrapServers}{"\n"}'
my-cluster-kafka-listener1-bootstrap-my-project.router.com:443
The address comprises the cluster name, the listener name, the project name, and the domain of the router (router.com in this example).
Extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
Configure your client to connect to the brokers.
- Specify the address for the bootstrap service and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster.
Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection.
If you enabled any authentication, you will also need to configure it in your client.
If you are using your own listener certificates, check whether you need to add the CA certificate to the client’s truststore configuration. If it is a public (external) CA, you usually won’t need to add it.