Chapter 3. Creating a Docker image from the Kafka Connect base image
An alternative to using Kafka Connect S2I is to build your own CDC container image with Docker. You can use the Kafka container image from the Red Hat Container Catalog as a base image for creating your own custom image that includes additional connector plug-ins.
The following procedure explains how to create a custom image and add your connector plug-ins to the /opt/kafka/plugins directory. At startup, the Change Data Capture version of Kafka Connect loads any third-party connector plug-ins contained in the /opt/kafka/plugins directory.
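Before building the image, stage the connector plug-ins in a local directory that the Dockerfile can copy from. The sketch below assumes a `my-plugins` directory and uses a hypothetical Debezium MySQL connector as an example; the connector names are illustrative, not required by the procedure.

```shell
# Stage connector plug-ins in a local my-plugins directory.
# One subdirectory per connector keeps plug-ins isolated from each other.
mkdir -p my-plugins/debezium-connector-mysql

# In practice, copy the connector's JARs from its distribution archive, e.g.:
#   cp debezium-connector-mysql/*.jar my-plugins/debezium-connector-mysql/
# Here a placeholder file stands in for the real JAR so the sketch is self-contained.
touch my-plugins/debezium-connector-mysql/debezium-connector-mysql.jar

# Verify the layout that will be copied into /opt/kafka/plugins/ in the image:
ls my-plugins/debezium-connector-mysql
```

At startup, Kafka Connect scans each subdirectory of /opt/kafka/plugins, so this layout is what the `COPY ./my-plugins/ /opt/kafka/plugins/` step in the Dockerfile below reproduces inside the image.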
Prerequisites
- AMQ Streams Cluster Operator is deployed
Procedure
1. Create a new `Dockerfile` using `registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0` as the base image:

   ```
   FROM registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0
   USER root:root
   COPY ./my-plugins/ /opt/kafka/plugins/
   USER jboss:jboss
   ```

2. Build the container image:

   ```
   docker build -t my-new-container-image:latest .
   ```

3. Push your custom image to your container registry:

   ```
   docker push my-new-container-image:latest
   ```

4. Point to the new container image.
You can either:
- Edit the `KafkaConnect.spec.image` property of the `KafkaConnect` custom resource. If set, this property overrides the `STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE` variable in the Cluster Operator.

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaConnect
  metadata:
    name: my-connect-cluster
  spec:
    #...
    image: my-new-container-image
  ```

  or
- In the `install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml` file, edit the `STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE` variable to point to the new container image, and then reinstall the Cluster Operator. If you edit this file, you must apply the changes to your OpenShift cluster.
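The second option can be sketched as a small edit-and-verify flow. To keep the commands self-contained, the sketch below creates a minimal stand-in for the Deployment file; in practice you would edit the real `050-Deployment-strimzi-cluster-operator.yaml` from the AMQ Streams installation files, and the final `oc apply` requires cluster access.

```shell
# Minimal stand-in fragment of 050-Deployment-strimzi-cluster-operator.yaml
# (real file has the full Deployment; only the env entry matters here).
cat > 050-Deployment-strimzi-cluster-operator.yaml <<'EOF'
        env:
          - name: STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
            value: registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0
EOF

# Point the variable at the custom image built earlier:
sed -i 's|value: registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0|value: my-new-container-image|' \
  050-Deployment-strimzi-cluster-operator.yaml

# Confirm the change before reinstalling the Cluster Operator:
grep -A1 STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE 050-Deployment-strimzi-cluster-operator.yaml

# Then apply the edited installation files to the OpenShift cluster:
#   oc apply -f install/cluster-operator/
```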
Additional resources
- For more information on the `KafkaConnect.spec.image` property and the `STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE` variable, see Using AMQ Streams on OpenShift.