Chapter 3. Configuring external listeners
Use an external listener to expose your AMQ Streams Kafka cluster to a client outside an OpenShift environment.
Specify the connection type to expose Kafka in the external listener configuration.
- nodeport uses NodePort type Services
- loadbalancer uses Loadbalancer type Services
- ingress uses Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes
- route uses OpenShift Routes and the HAProxy router
For more information on listener configuration, see GenericKafkaListener schema reference.
Note: route is only supported on OpenShift.
3.1. Accessing Kafka using node ports
This procedure describes how to access an AMQ Streams Kafka cluster from an external client using node ports.
To connect to a broker, you need a hostname and port number for the Kafka bootstrap address, as well as the certificate used for authentication.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the nodeport type. For example:
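A minimal sketch of such a configuration (the cluster name my-cluster, the listener name external, and the port are placeholders; the exact apiVersion depends on your AMQ Streams version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # ...
      - name: external
        port: 9094
        type: nodeport
        tls: true
    # ...
  # ...
```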
Create or update the resource.

oc apply -f KAFKA-CONFIG-FILE

NodePort type services are created for each Kafka broker, as well as an external bootstrap service. The bootstrap service routes external traffic to the Kafka brokers. Node addresses used for connection are propagated to the status of the Kafka custom resource. The cluster CA certificate to verify the identity of the Kafka brokers is also created with the same name as the Kafka resource.

Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}'

If TLS encryption is enabled, extract the public certificate of the broker certification authority.

oc get secret KAFKA-CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
3.2. Accessing Kafka using loadbalancers
This procedure describes how to access an AMQ Streams Kafka cluster from an external client using loadbalancers.
To connect to a broker, you need the address of the bootstrap loadbalancer, as well as the certificate used for TLS encryption.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the loadbalancer type. For example:
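A minimal sketch of such a configuration (the cluster name my-cluster, the listener name external, and the port are placeholders; the exact apiVersion depends on your AMQ Streams version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # ...
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
    # ...
  # ...
```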
Create or update the resource.

oc apply -f KAFKA-CONFIG-FILE

loadbalancer type services and load balancers are created for each Kafka broker, as well as an external bootstrap service. The bootstrap service routes external traffic to all Kafka brokers. DNS names and IP addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the Kafka brokers is also created with the same name as the Kafka resource.

Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}'

If TLS encryption is enabled, extract the public certificate of the broker certification authority.

oc get secret KAFKA-CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
3.3. Accessing Kafka using ingress
This procedure shows how to access an AMQ Streams Kafka cluster from an external client outside of OpenShift using NGINX Ingress.
To connect to a broker, you need a hostname (advertised address) for the Ingress bootstrap address, as well as the certificate used for authentication.
For access using Ingress, the port is always 443.
TLS passthrough
Kafka uses a binary protocol over TCP, but the NGINX Ingress Controller for Kubernetes is designed to work with the HTTP protocol. To be able to pass the Kafka connections through the Ingress, AMQ Streams uses the TLS passthrough feature of the NGINX Ingress Controller for Kubernetes. Ensure TLS passthrough is enabled in your NGINX Ingress Controller for Kubernetes deployment.
Because it uses the TLS passthrough functionality, TLS encryption cannot be disabled when exposing Kafka using Ingress.
For more information about enabling TLS passthrough, see TLS passthrough documentation.
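For the community NGINX Ingress Controller for Kubernetes, TLS passthrough is typically enabled with the controller's --enable-ssl-passthrough command-line flag. A sketch of the relevant fragment of the controller Deployment (the exact layout depends on how the controller was installed):

```yaml
# Fragment of the NGINX Ingress Controller Deployment spec.
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            # Enables routing of TLS connections based on SNI,
            # without terminating TLS at the controller.
            - --enable-ssl-passthrough
```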
Prerequisites
- An OpenShift cluster
- Deployed NGINX Ingress Controller for Kubernetes with TLS passthrough enabled
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the ingress type. Specify the Ingress hosts for the bootstrap service and Kafka brokers. For example:
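A minimal sketch of such a configuration, with Ingress hosts set for the bootstrap service and each broker (the cluster name my-cluster, the listener name external, and all hostnames are placeholders; the exact apiVersion depends on your AMQ Streams version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # ...
      - name: external
        port: 9094
        type: ingress
        tls: true
        configuration:
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
            - broker: 0
              host: broker-0.myingress.com
            - broker: 1
              host: broker-1.myingress.com
            - broker: 2
              host: broker-2.myingress.com
    # ...
  # ...
```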
Create or update the resource.
oc apply -f KAFKA-CONFIG-FILE

ClusterIP type services are created for each Kafka broker, as well as an additional bootstrap service. These services are used by the Ingress controller to route traffic to the Kafka brokers. An Ingress resource is also created for each service to expose them using the Ingress controller. The Ingress hosts are propagated to the status of each service. The cluster CA certificate to verify the identity of the Kafka brokers is also created with the same name as the Kafka resource.

Use the address for the bootstrap host you specified in the configuration and port 443 (BOOTSTRAP-HOST:443) in your Kafka client as the bootstrap address to connect to the Kafka cluster.

Extract the public certificate of the broker certificate authority.

oc get secret KAFKA-CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
3.4. Accessing Kafka using OpenShift routes
This procedure describes how to access an AMQ Streams Kafka cluster from an external client outside of OpenShift using routes.
To connect to a broker, you need a hostname for the route bootstrap address, as well as the certificate used for TLS encryption.
For access using routes, the port is always 443.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Configure a Kafka resource with an external listener set to the route type. For example:
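A minimal sketch of such a configuration (the cluster name my-cluster, the listener name external, and the port are placeholders; the exact apiVersion depends on your AMQ Streams version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # ...
      - name: external
        port: 9094
        type: route
        tls: true
    # ...
  # ...
```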
Warning: An OpenShift Route address comprises the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-myproject (CLUSTER-NAME-kafka-LISTENER-NAME-bootstrap-NAMESPACE). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters.

Create or update the resource.
oc apply -f KAFKA-CONFIG-FILE

ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. The services route the traffic from the OpenShift Routes to the Kafka brokers. An OpenShift Route resource is also created for each service to expose them using the HAProxy load balancer. DNS addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the Kafka brokers is also created with the same name as the Kafka resource.

Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource.

oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}'

Extract the public certificate of the broker certification authority.

oc get secret KAFKA-CLUSTER-NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.