Chapter 3. Installing Service Registry
This chapter explains how to set up storage in AMQ Streams and how to install and run Service Registry:
- Section 3.1, “Setting up AMQ Streams storage on OpenShift”
- Section 3.2, “Installing Service Registry with AMQ Streams storage on OpenShift”
Prerequisites
You can install more than one instance of Service Registry, depending on your environment. The number of instances depends on your storage option, for example, your Kafka cluster configuration, and on the number and type of artifacts stored in the registry.
3.1. Setting up AMQ Streams storage on OpenShift
This topic explains how to install and configure Red Hat AMQ Streams storage for Service Registry on OpenShift. The following versions are supported:
- AMQ Streams 1.4 or 1.3
- OpenShift 4.3, 4.2, or 3.11
You can install Service Registry in an existing Kafka cluster or create a new Kafka cluster, depending on your environment.
Prerequisites
- You must have an OpenShift cluster.
- You must have installed AMQ Streams using the instructions in Using AMQ Streams on OpenShift.
Alternatively, to install using the simple demonstration example shown in this section, you must have:
- Downloaded AMQ Streams from the Red Hat customer portal
- OpenShift cluster administrator access
Procedure
If you do not already have AMQ Streams installed, install AMQ Streams on your OpenShift cluster. For example, enter the following command from your AMQ Streams download directory:
oc apply -f install/cluster-operator/
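Before continuing, you can confirm that the Cluster Operator is running. The label selector below assumes the default deployment name used by the AMQ Streams installation files, strimzi-cluster-operator; adjust it if your installation differs:
$ oc get pods -l name=strimzi-cluster-operator
The operator pod should be in the Running state before you create a Kafka cluster.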
If you do not already have a Kafka cluster set up, create a new Kafka cluster with AMQ Streams. For example:
$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      external:
        type: route
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
EOF
This simple example creates a cluster with 3 Zookeeper nodes and 3 Kafka nodes using ephemeral storage. All data is lost when the Pods are no longer running on OpenShift.
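You can wait for the cluster to become ready before creating topics. The following command is a sketch that assumes the example cluster name my-cluster used above and an oc client that supports the wait command:
$ oc wait kafka/my-cluster --for=condition=Ready --timeout=300s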
Create the required storage-topic to store Service Registry artifacts in AMQ Streams. For example:
$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: storage-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    cleanup.policy: compact
EOF
Create the required global-id-topic to store Service Registry global IDs in AMQ Streams. For example:
$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: global-id-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    cleanup.policy: compact
EOF
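To confirm that both topics were created, you can list the KafkaTopic resources in the same namespace; the exact output columns depend on your AMQ Streams version:
$ oc get kafkatopics
Both storage-topic and global-id-topic should appear in the list, along with any internal topics created by AMQ Streams.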
Additional resources
- For more details on installing AMQ Streams and on creating Kafka clusters and topics, see Using AMQ Streams on OpenShift.
3.2. Installing Service Registry with AMQ Streams storage on OpenShift
This topic explains how to install and run Service Registry with storage in Red Hat AMQ Streams using an OpenShift template.
The following versions are supported:
- AMQ Streams 1.4 or 1.3
- OpenShift 4.3, 4.2, or 3.11
Prerequisites
- You must have an OpenShift cluster with cluster administrator access.
- You must have already installed AMQ Streams and configured your Kafka cluster on OpenShift. See Section 3.1, “Setting up AMQ Streams storage on OpenShift”.
- Ensure that you can access the Service Registry image in the Red Hat Container Catalog:
  - Create a service account and pull secret for the image. For details, see Container Service Accounts.
  - Download the pull secret and submit it to your OpenShift cluster. For example:
$ oc create -f 11223344_service-registry-secret.yaml --namespace=myproject
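Depending on your project configuration, you might also need to link the new pull secret to the service account that pulls images. The secret name below is a placeholder; substitute the name of the secret created by the downloaded file:
$ oc get secrets --namespace=myproject
$ oc secrets link default <pull-secret-name> --for=pull --namespace=myproject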
Procedure
- Get the Service Registry OpenShift template.
- Enter the following command to get the name of the Kafka bootstrap service running in AMQ Streams on your OpenShift cluster:
$ oc get services | grep .*kafka-bootstrap
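With the example cluster created in Section 3.1, the bootstrap service is named after the cluster, so the output includes an entry similar to the following; the cluster IP, ports, and age vary by environment:
my-cluster-kafka-bootstrap   ClusterIP   <cluster-ip>   <none>   9092/TCP ...   ...
Use this service name, together with the Kafka broker port, as the KAFKA_BOOTSTRAP_SERVERS value in the next step.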
- Create a new OpenShift application using the oc new-app command. For example:
$ oc new-app service-registry-template.yml \
   -p KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap:9092 \
   -p REGISTRY_ROUTE=my-cluster-service-registry-myproject.example.com \
   -p APPLICATION_ID=my-kafka-streams-app
You must specify the following arguments:
- service-registry-template.yml: The OpenShift template file for Service Registry.
- KAFKA_BOOTSTRAP_SERVERS: The name of the Kafka bootstrap service on your OpenShift cluster, followed by the Kafka broker port. For example: my-cluster-kafka-bootstrap:9092.
- REGISTRY_ROUTE: The name of the OpenShift route to expose Service Registry, which is based on your OpenShift cluster environment. For example: my-cluster-service-registry-myproject.example.com.
- APPLICATION_ID: The name of your AMQ Streams application. For example: my-kafka-streams-app.
You can also specify the following environment variables using the -e option, as shown in the example after this list:
- APPLICATION_SERVER_HOST: The IP address of your Kafka Streams application server host, which is required in a multi-node Kafka configuration. Defaults to $(POD_IP).
- APPLICATION_SERVER_PORT: The port number of your Kafka Streams application server, which is required in a multi-node Kafka configuration. Defaults to 9000.
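For example, a command that also sets the optional application server port explicitly, reusing the example parameter values shown above, might look like this:
$ oc new-app service-registry-template.yml \
   -p KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap:9092 \
   -p REGISTRY_ROUTE=my-cluster-service-registry-myproject.example.com \
   -p APPLICATION_ID=my-kafka-streams-app \
   -e APPLICATION_SERVER_PORT=9000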
- Verify the command output when complete. For example:
Deploying template "myproject/service-registry" for "service-registry-template.yml" to project myproject

     service-registry
     ---------
     Congratulations on deploying Service Registry into OpenShift!

     All components have been deployed and configured.

     * With parameters:
        * Registry Route Name=my-cluster-service-registry-myproject.example.com
        * Registry Max Memory Limit=1300Mi
        * Registry Memory Requests=600Mi
        * Registry Max CPU Limit=1
        * Registry CPU Requests=100m
        * Kafka Bootstrap Servers=my-cluster-kafka-bootstrap:9092
        * Kafka Application ID=my-kafka-streams-app

--> Creating resources ...
    imagestream.image.openshift.io "registry" created
    service "service-registry" created
    deploymentconfig.apps.openshift.io "service-registry" created
    route.route.openshift.io "service-registry" created
--> Success
    Access your application via route 'my-cluster-service-registry-myproject.example.com'
- Enter oc status to view your Service Registry installation on OpenShift.
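You can also check that the registry pod is running and, once the route is live, call the registry over HTTP. The path shown below is an assumption based on the Service Registry REST API; see the REST API documentation in the additional resources for the exact endpoints, and replace the hostname with your own route:
$ oc get pods | grep service-registry
$ curl http://my-cluster-service-registry-myproject.example.com/api/artifacts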
Additional resources
- For sample REST API requests, see the Registry REST API documentation.
- For details on example client applications: