Chapter 3. Deploying Service Registry storage in AMQ Streams
This chapter explains how to install and configure Service Registry data storage in AMQ Streams.
- Section 3.1, “Installing AMQ Streams from the OpenShift OperatorHub”
- Section 3.2, “Configuring Service Registry with Kafka storage on OpenShift”
- Section 3.3, “Configuring Kafka storage with TLS security”
- Section 3.4, “Configuring Kafka storage with SCRAM security”
- Section 3.5, “Configuring OAuth authentication for Kafka storage”
Prerequisites
3.1. Installing AMQ Streams from the OpenShift OperatorHub
If you do not already have AMQ Streams installed, you can install the AMQ Streams Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. For more details, see Understanding OperatorHub.
Prerequisites
- You must have cluster administrator access to an OpenShift cluster.
- See Deploying and Upgrading AMQ Streams on OpenShift for detailed information on installing AMQ Streams. This section shows a simple example of installing using the OpenShift OperatorHub.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- Change to the OpenShift project in which you want to install AMQ Streams. For example, from the Project drop-down, select my-project.
- In the left navigation menu, click Operators and then OperatorHub.
- In the Filter by keyword text box, enter AMQ Streams to find the Red Hat Integration - AMQ Streams Operator.
- Read the information about the Operator, and click Install to display the Operator subscription page.
- Select your subscription settings, for example (see the declarative sketch after this procedure):
  - Update Channel: amq-streams-2.1.x
  - Installation Mode: Select one of the following:
    - All namespaces on the cluster (default)
    - A specific namespace on the cluster > my-project
  - Approval Strategy: Select Automatic or Manual
- Click Install, and wait a few moments until the Operator is ready for use.
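The subscription settings in this procedure can also be expressed declaratively as an Operator Lifecycle Manager Subscription resource. The following is a minimal sketch, not taken from this guide: the amq-streams package name, the redhat-operators catalog source, and the openshift-operators namespace are assumptions that you should verify against your cluster's OperatorHub catalog.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators         # for "All namespaces"; use my-project for a single-namespace install
spec:
  channel: amq-streams-2.1.x             # Update Channel selected in the procedure
  name: amq-streams                      # package name in the catalog (assumption)
  source: redhat-operators               # catalog source (assumption)
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic         # or Manual
```

A single-namespace installation also requires an OperatorGroup that targets that namespace.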
Additional resources
3.2. Configuring Service Registry with Kafka storage on OpenShift
This section explains how to configure Kafka-based storage for Service Registry using AMQ Streams on OpenShift. The kafkasql storage option uses Kafka storage with an in-memory H2 database. This storage option is suitable for production environments when persistent storage is configured for the Kafka cluster on OpenShift.
You can install Service Registry in an existing Kafka cluster or create a new Kafka cluster, depending on your environment.
Prerequisites
- You must have an OpenShift cluster with cluster administrator access.
- You must have already installed Service Registry. See Chapter 2, Installing Service Registry on OpenShift.
- You must have already installed AMQ Streams. See Section 3.1, “Installing AMQ Streams from the OpenShift OperatorHub”.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- If you do not already have a Kafka cluster configured, create a new Kafka cluster using AMQ Streams. For example, in the OpenShift OperatorHub:
  - Click Installed Operators and then Red Hat Integration - AMQ Streams.
  - Under Provided APIs and then Kafka, click Create Instance to create a new Kafka cluster.
  - Edit the custom resource definition as appropriate, and click Create. A sample Kafka resource is sketched after this procedure.
    Warning: The default example creates a cluster with 3 ZooKeeper nodes and 3 Kafka nodes with ephemeral storage. This temporary storage is suitable for development and testing only, and not for production. For more details, see Deploying and Upgrading AMQ Streams on OpenShift.
- After the cluster is ready, click Provided APIs > Kafka > my-cluster > YAML.
- In the status block, make a copy of the bootstrapServers value, which you will use later to deploy Service Registry. An example status block is shown after this procedure.
- Click Installed Operators > Red Hat Integration - Service Registry > ApicurioRegistry > Create ApicurioRegistry.
- Paste in a custom resource definition that uses the kafkasql persistence option, and use the bootstrapServers value that you copied earlier. A sketch of this custom resource appears after this procedure.
- Click Create and wait for the Service Registry route to be created on OpenShift.
- Click Networking > Route to access the new route for the Service Registry web console. For example:
  http://example-apicurioregistry-kafkasql.my-project.my-domain-name.com/
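The Kafka custom resource that you edit when creating the cluster resembles the following minimal sketch. This is not the exact default provided by the Operator: the cluster name my-cluster, the listener ports, and the replica and storage values are illustrative and vary by AMQ Streams version.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral        # development and testing only; use persistent-claim storage for production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```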
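When the cluster is ready, the status block of the Kafka resource lists the listener addresses. The following trimmed sketch shows where the bootstrapServers value appears; the host name depends on your cluster name and namespace, and the exact field layout can differ between AMQ Streams versions.

```yaml
status:
  listeners:
    - type: plain
      addresses:
        - host: my-cluster-kafka-bootstrap.my-project.svc
          port: 9092
      bootstrapServers: 'my-cluster-kafka-bootstrap.my-project.svc:9092'
```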
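The ApicurioRegistry custom resource that you paste in follows this general shape, assuming the registry.apicur.io/v1 API of the Service Registry Operator. The resource name is arbitrary, and the bootstrapServers value must be replaced with the one you copied from the Kafka status block.

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry-kafkasql
spec:
  configuration:
    persistence: kafkasql    # store data in Kafka with an in-memory H2 database
    kafkasql:
      bootstrapServers: 'my-cluster-kafka-bootstrap.my-project.svc:9092'   # replace with your copied value
```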
3.3. Configuring Kafka storage with TLS security
You can configure the AMQ Streams Operator and Service Registry Operator to use an encrypted Transport Layer Security (TLS) connection.
Prerequisites
- You must install the Service Registry Operator using the OperatorHub or command line.
- You must install the AMQ Streams Operator or have Kafka accessible from your OpenShift cluster.
This section assumes that the AMQ Streams Operator is available; however, you can use any Kafka deployment. In that case, you must manually create the OpenShift secrets that the Service Registry Operator expects.
Procedure
- In the OpenShift web console, click Installed Operators, select the AMQ Streams Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Service Registry storage.
- Configure the authorization and tls fields to use TLS authentication for the Kafka cluster (an example Kafka resource is sketched after this procedure).
  The default Kafka topic name that Service Registry uses to store data is kafkasql-journal. This topic is created automatically by Service Registry. You can override this behavior or the default topic name by setting the appropriate environment variables (default values):
  - REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true
  - REGISTRY_KAFKASQL_TOPIC=kafkasql-journal
  If you decide not to create the Kafka topic manually, skip the next step.
- Click the Kafka Topic tab, and then Create Kafka Topic to create the kafkasql-journal topic (see the KafkaTopic sketch after this procedure).
- Create a Kafka User resource to configure authentication and authorization for the Service Registry user. You can specify a user name in the metadata section or use the default my-user (see the KafkaUser sketch after this procedure).
  Note: You must configure the authorization specifically for the topics and resources that Service Registry requires. The sketch after this procedure is a simple permissive example.
- Click Workloads and then Secrets to find two secrets that AMQ Streams creates for Service Registry to connect to the Kafka cluster:
  - my-cluster-cluster-ca-cert - contains the PKCS12 truststore for the Kafka cluster
  - my-user - contains the user's keystore
  Note: The name of the secret can vary based on your cluster or user name.
- If you create the secrets manually, they must contain the following key-value pairs:
  - my-cluster-ca-cert
    - ca.p12 - truststore in PKCS12 format
    - ca.password - truststore password
  - my-user
    - user.p12 - keystore in PKCS12 format
    - user.password - keystore password
- Configure the Service Registry deployment to use the TLS truststore and keystore secrets. An example ApicurioRegistry resource is sketched after this procedure.
  You must use a different bootstrapServers address than in the plain insecure use case. The address must support TLS connections, and is found in the specified Kafka resource under the type: tls field.
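The authorization and tls settings referenced in this procedure can look like the following minimal Kafka resource sketch. The cluster name, port, and replica values are illustrative; the key points are the TLS-authenticated listener and simple authorization.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls            # clients authenticate with TLS client certificates
    authorization:
      type: simple             # ACLs are managed through KafkaUser resources
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral          # development and testing only
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```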
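The KafkaTopic and KafkaUser resources created in this procedure can be sketched as follows. The partition and replica counts are assumptions, and the ACLs are deliberately permissive; restrict them to the topics and groups that Service Registry actually uses.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: kafkasql-journal
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka cluster name
spec:
  partitions: 3
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls                        # the User Operator generates a keystore in the my-user secret
  authorization:
    type: simple
    acls:                            # permissive example only
      - operation: All
        resource:
          type: topic
          name: '*'
          patternType: literal
      - operation: All
        resource:
          type: group
          name: '*'
          patternType: literal
      - operation: All
        resource:
          type: cluster
```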
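The Service Registry deployment for the TLS case can be configured with a resource like the following sketch, assuming the registry.apicur.io/v1 API. Verify the security.tls field names against the ApicurioRegistry CRD for your Service Registry Operator version, and use the TLS listener address from your Kafka resource.

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry-kafkasql-tls
spec:
  configuration:
    persistence: kafkasql
    kafkasql:
      bootstrapServers: 'my-cluster-kafka-bootstrap.my-project.svc:9093'   # TLS listener address
      security:
        tls:
          keystoreSecretName: my-user                        # secret with user.p12 and user.password
          truststoreSecretName: my-cluster-cluster-ca-cert   # secret with ca.p12 and ca.password
```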
3.4. Configuring Kafka storage with SCRAM security
You can configure the AMQ Streams Operator and Service Registry Operator to use Salted Challenge Response Authentication Mechanism (SCRAM-SHA-512) for the Kafka cluster.
Prerequisites
- You must install the Service Registry Operator using the OperatorHub or command line.
- You must install the AMQ Streams Operator or have Kafka accessible from your OpenShift cluster.
This section assumes that the AMQ Streams Operator is available; however, you can use any Kafka deployment. In that case, you must manually create the OpenShift secrets that the Service Registry Operator expects.
Procedure
- In the OpenShift web console, click Installed Operators, select the AMQ Streams Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Service Registry storage.
- Configure the authorization and tls fields to use SCRAM-SHA-512 authentication for the Kafka cluster (an example Kafka resource is sketched after this procedure).
  The default Kafka topic name that Service Registry uses to store data is kafkasql-journal. This topic is created automatically by Service Registry. You can override this behavior or the default topic name by setting the appropriate environment variables (default values):
  - REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true
  - REGISTRY_KAFKASQL_TOPIC=kafkasql-journal
  If you decide not to create the Kafka topic manually, skip the next step.
- Click the Kafka Topic tab, and then Create Kafka Topic to create the kafkasql-journal topic (see the KafkaTopic sketch after this procedure).
- Create a Kafka User resource to configure SCRAM authentication and authorization for the Service Registry user. You can specify a user name in the metadata section or use the default my-user (see the KafkaUser sketch after this procedure).
  Note: You must configure the authorization specifically for the topics and resources that Service Registry requires. The sketch after this procedure is a simple permissive example.
- Click Workloads and then Secrets to find two secrets that AMQ Streams creates for Service Registry to connect to the Kafka cluster:
  - my-cluster-cluster-ca-cert - contains the PKCS12 truststore for the Kafka cluster
  - my-user - contains the user's password
  Note: The name of the secret can vary based on your cluster or user name.
- If you create the secrets manually, they must contain the following key-value pairs:
  - my-cluster-ca-cert
    - ca.p12 - truststore in PKCS12 format
    - ca.password - truststore password
  - my-user
    - password - user password
- Configure the Service Registry deployment to use SCRAM authentication with the truststore and password secrets. An example ApicurioRegistry resource is sketched after this procedure.
  You must use a different bootstrapServers address than in the plain insecure use case. The address must support TLS connections, and is found in the specified Kafka resource under the type: tls field.
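A Kafka resource with a SCRAM-SHA-512 listener can be sketched as follows; as before, the cluster name, port, and replica values are illustrative.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: scram-sha-512   # clients authenticate with SCRAM credentials over TLS
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral           # development and testing only
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```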
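The KafkaTopic resource is the same as in the TLS case; the KafkaUser switches to SCRAM-SHA-512 authentication. The ACLs below are again a permissive placeholder.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: kafkasql-journal
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512          # the User Operator generates a password key in the my-user secret
  authorization:
    type: simple
    acls:                        # permissive example only
      - operation: All
        resource:
          type: topic
          name: '*'
          patternType: literal
      - operation: All
        resource:
          type: group
          name: '*'
          patternType: literal
      - operation: All
        resource:
          type: cluster
```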
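The Service Registry deployment for the SCRAM case can be configured with a resource like the following sketch. The field names under security.scram (user, passwordSecretName, truststoreSecretName) are assumptions based on the ApicurioRegistry CRD; verify them for your Service Registry Operator version.

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry-kafkasql-scram
spec:
  configuration:
    persistence: kafkasql
    kafkasql:
      bootstrapServers: 'my-cluster-kafka-bootstrap.my-project.svc:9093'   # TLS listener address
      security:
        scram:
          user: my-user                                      # SCRAM user name
          passwordSecretName: my-user                        # secret with the password key
          truststoreSecretName: my-cluster-cluster-ca-cert   # secret with ca.p12 and ca.password
```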
3.5. Configuring OAuth authentication for Kafka storage
When using Kafka-based storage in AMQ Streams, Service Registry supports accessing a Kafka cluster that requires OAuth authentication. To enable this support, you must set some environment variables in your Service Registry deployment.
When you set these environment variables, the Kafka producer and consumer applications in Service Registry will use this configuration to authenticate to the Kafka cluster over OAuth.
Prerequisites
- You must have already configured Kafka-based storage of Service Registry data in AMQ Streams. See Section 3.2, “Configuring Service Registry with Kafka storage on OpenShift”.
Procedure
Set the following environment variables in your Service Registry deployment:
| Environment variable | Description | Default value |
|---|---|---|
| ENABLE_KAFKA_SASL | Enables SASL OAuth authentication for Service Registry storage in Kafka. You must set this variable to true for the other variables to have effect. | false |
| CLIENT_ID | The client ID used to authenticate to Kafka. | - |
| CLIENT_SECRET | The client secret used to authenticate to Kafka. | - |
| OAUTH_TOKEN_ENDPOINT_URI | The URL of the OAuth identity server. | http://localhost:8090 |
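For illustration, the following trimmed excerpt shows how these variables might appear in the container spec of the Service Registry Deployment. The client credentials, secret name, and token endpoint URL are placeholders for your environment; see the cross-reference below for how to edit environment variables on OpenShift.

```yaml
# Excerpt of the Service Registry Deployment container spec (values are placeholders)
env:
  - name: ENABLE_KAFKA_SASL
    value: "true"
  - name: CLIENT_ID
    value: registry-client                    # placeholder OAuth client ID
  - name: CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: registry-client-secret          # placeholder secret holding the client secret
        key: client-secret
  - name: OAUTH_TOKEN_ENDPOINT_URI
    value: https://sso.example.com/realms/kafka/protocol/openid-connect/token
```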
Additional resources
- For an example of how to set Service Registry environment variables on OpenShift, see Section 6.1, “Configuring Service Registry health checks on OpenShift”