Installing and deploying Apicurio Registry on OpenShift
Install, deploy, and configure Apicurio Registry 2.5
Abstract
Preface
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following link: Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
  - The URL of the page where you found the issue.
  - A detailed description of the issue.
  You can leave the information in any other fields at their default values.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. Apicurio Registry Operator quickstart
You can quickly install the Apicurio Registry Operator on the command line by using Custom Resource Definitions (CRDs).
The quickstart example deploys your Apicurio Registry instance with storage in an SQL database.
The recommended installation option for production environments is the OpenShift OperatorHub. The recommended storage option is an SQL database for performance, stability, and data management.
1.1. Quickstart Apicurio Registry Operator installation
You can quickly install and deploy the Apicurio Registry Operator on the command line, without the Operator Lifecycle Manager, by using a downloaded set of installation files and example CRDs.
Prerequisites
- You are logged in to an OpenShift cluster with administrator access.
- You have the OpenShift oc command-line client installed. For more details, see the OpenShift CLI documentation.
Procedure
- Browse to Red Hat Software Downloads, select the product version, and download the examples in the Apicurio Registry CRDs .zip file.
- Extract the downloaded CRDs .zip file and change to the apicurio-registry-install-examples directory.
- Create an OpenShift project for the Apicurio Registry Operator installation, for example:

  export NAMESPACE="apicurio-registry"
  oc new-project "$NAMESPACE"
- Enter the following command to apply the example CRD in the install/install.yaml file:

  cat install/install.yaml | sed "s/apicurio-registry-operator-namespace/$NAMESPACE/g" | oc apply -f -
- Enter oc get deployment to check the readiness of the Apicurio Registry Operator. For example, the output should be as follows:

  NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
  apicurio-registry-operator   1/1     1            1           XmYs
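  If the Operator deployment does not report 1/1 ready, you can inspect its logs. This is an optional check, assuming the default deployment name shown in the output above:

  oc logs deployment/apicurio-registry-operator -n "$NAMESPACE"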
1.2. Quickstart Apicurio Registry instance deployment
To create your Apicurio Registry instance deployment, use the SQL database storage option to connect to an existing PostgreSQL database.
Prerequisites
- Ensure that the Apicurio Registry Operator is installed.
- You have a PostgreSQL database that is reachable from your OpenShift cluster.
Procedure
- Open the examples/apicurioregistry_sql_cr.yaml file in an editor and view the ApicurioRegistry custom resource (CR):

  Example CR for SQL storage

  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-sql
  spec:
    configuration:
      persistence: "sql"
      sql:
        dataSource:
          url: "jdbc:postgresql://<service name>.<namespace>.svc:5432/<database name>"
          userName: "postgres"
          password: "<password>" # Optional
- In the dataSource section, replace the example settings with your database connection details. For example:

  dataSource:
    url: "jdbc:postgresql://postgresql.apicurio-registry.svc:5432/registry"
    userName: "pgadmin"
    password: "pgpass"
- Enter the following commands to apply the updated ApicurioRegistry CR in the namespace with the Apicurio Registry Operator, and wait for the Apicurio Registry instance to deploy:

  oc project "$NAMESPACE"
  oc apply -f ./examples/apicurioregistry_sql_cr.yaml
- Enter oc get deployment to check the readiness of the Apicurio Registry instance. For example, the output should be as follows:

  NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
  example-apicurioregistry-sql-deployment   1/1     1            1           XmYs
- Enter oc get routes to get the HOST/PORT URL to launch the Apicurio Registry web console in your browser. For example:

  example-apicurioregistry-sql.apicurio-registry.router-default.apps.mycluster.myorg.mycompany.com
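  As an optional sanity check, you can also call the core REST API on the same host. This is a minimal sketch that assumes the example route shown above and the standard v2 API path:

  curl http://example-apicurioregistry-sql.apicurio-registry.router-default.apps.mycluster.myorg.mycompany.com/apis/registry/v2/system/info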
Chapter 2. Installing Apicurio Registry on OpenShift
This chapter explains how to install Apicurio Registry on OpenShift Container Platform.
Prerequisites
- Read the introduction in the Red Hat build of Apicurio Registry User Guide.
2.1. Installing Apicurio Registry from the OpenShift OperatorHub
You can install the Apicurio Registry Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. For more details, see Understanding OperatorHub.
You can install more than one instance of Apicurio Registry depending on your environment. The number of instances depends on the number and type of artifacts stored in Apicurio Registry and on your chosen storage option.
Prerequisites
- You must have cluster administrator access to an OpenShift cluster.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- Create a new OpenShift project:
  - In the left navigation menu, click Home, Project, and then Create Project.
  - Enter a project name, for example, my-project, and click Create.
- In the left navigation menu, click Operators and then OperatorHub.
- In the Filter by keyword text box, enter registry to find the Red Hat Integration - Service Registry Operator.
- Read the information about the Operator, and click Install to display the Operator subscription page.
- Select your subscription settings, for example:
  - Update Channel: Select one of the following:
    - 2.x: Includes all minor and patch updates, such as 2.3.0 and 2.0.3. For example, an installation on 2.0.x will upgrade to 2.3.x.
    - 2.0.x: Includes patch updates only, such as 2.0.1 and 2.0.2. For example, an installation on 2.0.x will ignore 2.3.x.
  - Installation Mode: Select one of the following:
    - All namespaces on the cluster (default)
    - A specific namespace on the cluster and then my-project
  - Approval Strategy: Select Automatic or Manual
- Click Install, and wait a few moments until the Operator is ready for use.
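If you prefer to script this installation, you can create an equivalent OperatorHub subscription from the command line. The following is a sketch only: the package name, channel, and catalog source are assumptions that you should verify against the entry shown in your OperatorHub:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: service-registry-operator        # assumed package name; confirm in OperatorHub
    namespace: openshift-operators
  spec:
    channel: 2.x                           # assumed update channel
    name: service-registry-operator
    source: redhat-operators               # assumed catalog source
    sourceNamespace: openshift-marketplace
    installPlanApproval: Automatic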
Additional resources
Chapter 3. Deploying Apicurio Registry storage in AMQ Streams
This chapter explains how to install and configure Apicurio Registry data storage in AMQ Streams.
- Section 3.1, “Installing AMQ Streams from the OpenShift OperatorHub”
- Section 3.2, “Configuring Apicurio Registry with Kafka storage on OpenShift”
- Section 3.3, “Configuring Kafka storage with TLS security”
- Section 3.4, “Configuring Kafka storage with SCRAM security”
- Section 3.5, “Configuring OAuth authentication for Kafka storage”
Prerequisites
3.1. Installing AMQ Streams from the OpenShift OperatorHub
If you do not already have AMQ Streams installed, you can install the AMQ Streams Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. For more details, see Understanding OperatorHub.
Prerequisites
- You must have cluster administrator access to an OpenShift cluster.
- See Deploying and Managing AMQ Streams on OpenShift for detailed information on installing AMQ Streams. This section shows a simple example of installing using the OpenShift OperatorHub.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- Change to the OpenShift project in which you want to install AMQ Streams. For example, from the Project drop-down, select my-project.
- In the left navigation menu, click Operators and then OperatorHub.
- In the Filter by keyword text box, enter AMQ Streams to find the Red Hat Integration - AMQ Streams Operator.
- Read the information about the Operator, and click Install to display the Operator subscription page.
- Select your subscription settings, for example:
  - Update Channel and then amq-streams-2.6.x
  - Installation Mode: Select one of the following:
    - All namespaces on the cluster (default)
    - A specific namespace on the cluster and then my-project
  - Approval Strategy: Select Automatic or Manual
- Click Install, and wait a few moments until the Operator is ready for use.
Additional resources
3.2. Configuring Apicurio Registry with Kafka storage on OpenShift
This section explains how to configure Kafka-based storage for Apicurio Registry using AMQ Streams on OpenShift. The kafkasql storage option uses Kafka storage with an in-memory H2 database for caching. This storage option is suitable for production environments when persistent storage is configured for the Kafka cluster on OpenShift.
You can install Apicurio Registry in an existing Kafka cluster or create a new Kafka cluster, depending on your environment.
Prerequisites
- You must have an OpenShift cluster with cluster administrator access.
- You must have already installed Apicurio Registry. See Chapter 2, Installing Apicurio Registry on OpenShift.
- You must have already installed AMQ Streams. See Section 3.1, “Installing AMQ Streams from the OpenShift OperatorHub”.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- If you do not already have a Kafka cluster configured, create a new Kafka cluster using AMQ Streams. For example, in the OpenShift OperatorHub:
  - Click Installed Operators and then Red Hat Integration - AMQ Streams.
  - Under Provided APIs and then Kafka, click Create Instance to create a new Kafka cluster.
  - Edit the custom resource definition as appropriate, and click Create.

    Warning: The default example creates a cluster with 3 Zookeeper nodes and 3 Kafka nodes with ephemeral storage. This temporary storage is suitable for development and testing only, and not for production. For more details, see Deploying and Managing AMQ Streams on OpenShift.
- After the cluster is ready, click Provided APIs > Kafka > my-cluster > YAML.
- In the status block, make a copy of the bootstrapServers value, which you will use later to deploy Apicurio Registry. For example:

  status:
    ...
    conditions:
    ...
    listeners:
      - addresses:
          - host: my-cluster-kafka-bootstrap.my-project.svc
            port: 9092
        bootstrapServers: 'my-cluster-kafka-bootstrap.my-project.svc:9092'
        type: plain
    ...
- Click Installed Operators > Red Hat Integration - Service Registry > ApicurioRegistry > Create ApicurioRegistry.
- Paste in the following custom resource definition, but use your bootstrapServers value that you copied earlier:

  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql
  spec:
    configuration:
      persistence: 'kafkasql'
      kafkasql:
        bootstrapServers: 'my-cluster-kafka-bootstrap.my-project.svc:9092'
- Click Create and wait for the Apicurio Registry route to be created on OpenShift.
- Click Networking > Route to access the new route for the Apicurio Registry web console. For example:

  http://example-apicurioregistry-kafkasql.my-project.my-domain-name.com/
- To configure the Kafka topic that Apicurio Registry uses to store data, click Installed Operators > Red Hat Integration - AMQ Streams > Provided APIs > Kafka Topic > kafkasql-journal > YAML. For example:

  apiVersion: kafka.strimzi.io/v1beta2
  kind: KafkaTopic
  metadata:
    name: kafkasql-journal
    labels:
      strimzi.io/cluster: my-cluster
    namespace: ...
  spec:
    partitions: 3
    replicas: 3
    config:
      cleanup.policy: compact

  Warning: You must configure the Kafka topic used by Apicurio Registry (named kafkasql-journal by default) with a compaction cleanup policy, otherwise data loss might occur.
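  To confirm that the compaction policy is applied, you can query the KafkaTopic resource from the command line. This is an optional check that assumes the example topic name and the my-project namespace used above:

  oc get kafkatopic kafkasql-journal -n my-project -o jsonpath='{.spec.config.cleanup\.policy}'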
Additional resources
- For more details on creating Kafka clusters and topics using AMQ Streams, see Deploying and Managing AMQ Streams on OpenShift.
3.3. Configuring Kafka storage with TLS security
You can configure the AMQ Streams Operator and Apicurio Registry Operator to use an encrypted Transport Layer Security (TLS) connection.
Prerequisites
- You have installed the Apicurio Registry Operator using the OperatorHub or command line.
- You have installed the AMQ Streams Operator or have Kafka accessible from your OpenShift cluster.
This section assumes that the AMQ Streams Operator is available; however, you can use any Kafka deployment. In that case, you must manually create the OpenShift secrets that the Apicurio Registry Operator expects.
Procedure
- In the OpenShift web console, click Installed Operators, select the AMQ Streams Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Apicurio Registry storage.
- Configure the authorization and tls fields to use TLS authentication for the Kafka cluster, for example:

  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: my-cluster
    namespace: registry-example-kafkasql-tls # Change or remove the explicit namespace
  spec:
    kafka:
      config:
        offsets.topic.replication.factor: 3
        transaction.state.log.replication.factor: 3
        transaction.state.log.min.isr: 2
        log.message.format.version: '2.7'
        inter.broker.protocol.version: '2.7'
      version: 2.7.0
      storage:
        type: ephemeral
      replicas: 3
      listeners:
        - name: tls
          port: 9093
          type: internal
          tls: true
          authentication:
            type: tls
      authorization:
        type: simple
    entityOperator:
      topicOperator: {}
      userOperator: {}
    zookeeper:
      storage:
        type: ephemeral
      replicas: 3
  The default Kafka topic name automatically created by Apicurio Registry to store data is kafkasql-journal. You can override this behavior or the default topic name by setting environment variables (see the sketch at the end of this section). The default values are as follows:

  - REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true
  - REGISTRY_KAFKASQL_TOPIC=kafkasql-journal

  If you decide not to create the Kafka topic manually, skip the next step.
- Click the Kafka Topic tab, and then Create Kafka Topic to create the kafkasql-journal topic:

  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaTopic
  metadata:
    name: kafkasql-journal
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-tls
  spec:
    partitions: 2
    replicas: 1
    config:
      cleanup.policy: compact
- Create a Kafka User resource to configure authentication and authorization for the Apicurio Registry user. You can specify a user name in the metadata section or use the default my-user.

  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaUser
  metadata:
    name: my-user
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-tls
  spec:
    authentication:
      type: tls
    authorization:
      acls:
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: topic
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: cluster
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: transactionalId
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: group
      type: simple
  Note: This simple example assumes admin permissions and creates the Kafka topic automatically. You must configure the authorization section specifically for the topics and resources that the Apicurio Registry requires. The following example shows the minimum configuration required when the Kafka topic is created manually:

  ...
  authorization:
    acls:
      - operations:
          - Read
          - Write
        resource:
          name: kafkasql-journal
          patternType: literal
          type: topic
      - operations:
          - Read
          - Write
        resource:
          name: apicurio-registry-
          patternType: prefix
          type: group
    type: simple
- Click Workloads and then Secrets to find two secrets that AMQ Streams creates for Apicurio Registry to connect to the Kafka cluster:
  - my-cluster-cluster-ca-cert - contains the PKCS12 truststore for the Kafka cluster
  - my-user - contains the user’s keystore

  Note: The name of the secret can vary based on your cluster or user name.
- If you create the secrets manually, they must contain the following key-value pairs:
  - my-cluster-cluster-ca-cert
    - ca.p12 - truststore in PKCS12 format
    - ca.password - truststore password
  - my-user
    - user.p12 - keystore in PKCS12 format
    - user.password - keystore password
- Use the following example configuration to deploy Apicurio Registry:

  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql-tls
  spec:
    configuration:
      persistence: "kafkasql"
      kafkasql:
        bootstrapServers: "my-cluster-kafka-bootstrap.registry-example-kafkasql-tls.svc:9093"
        security:
          tls:
            keystoreSecretName: my-user
            truststoreSecretName: my-cluster-cluster-ca-cert
You must use a different bootstrapServers address than in the plain insecure use case. The address must support TLS connections and is found in the specified Kafka resource under the type: tls field.
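The topic override mentioned earlier can be applied by setting environment variables on the Apicurio Registry deployment. The following is a minimal sketch only: the deployment name is an assumption based on the example ApicurioRegistry name (check it with oc get deployment), and because the Operator manages this deployment, confirm that the change persists after reconciliation:

  oc set env deployment/example-apicurioregistry-kafkasql-tls-deployment \
    REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=false \
    REGISTRY_KAFKASQL_TOPIC=my-registry-journal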
3.4. Configuring Kafka storage with SCRAM security
You can configure the AMQ Streams Operator and Apicurio Registry Operator to use Salted Challenge Response Authentication Mechanism (SCRAM-SHA-512) for the Kafka cluster.
Prerequisites
- You have installed the Apicurio Registry Operator using the OperatorHub or command line.
- You have installed the AMQ Streams Operator or have Kafka accessible from your OpenShift cluster.
This section assumes that the AMQ Streams Operator is available; however, you can use any Kafka deployment. In that case, you must manually create the OpenShift secrets that the Apicurio Registry Operator expects.
Procedure
- In the OpenShift web console, click Installed Operators, select the AMQ Streams Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Apicurio Registry storage.
- Configure the authorization and tls fields to use SCRAM-SHA-512 authentication for the Kafka cluster, for example:

  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: my-cluster
    namespace: registry-example-kafkasql-scram # Change or remove the explicit namespace
  spec:
    kafka:
      config:
        offsets.topic.replication.factor: 3
        transaction.state.log.replication.factor: 3
        transaction.state.log.min.isr: 2
        log.message.format.version: '2.7'
        inter.broker.protocol.version: '2.7'
      version: 2.7.0
      storage:
        type: ephemeral
      replicas: 3
      listeners:
        - name: tls
          port: 9093
          type: internal
          tls: true
          authentication:
            type: scram-sha-512
      authorization:
        type: simple
    entityOperator:
      topicOperator: {}
      userOperator: {}
    zookeeper:
      storage:
        type: ephemeral
      replicas: 3
  The default Kafka topic name automatically created by Apicurio Registry to store data is kafkasql-journal. You can override this behavior or the default topic name by setting environment variables. The default values are as follows:

  - REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true
  - REGISTRY_KAFKASQL_TOPIC=kafkasql-journal

  If you decide not to create the Kafka topic manually, skip the next step.
- Click the Kafka Topic tab, and then Create Kafka Topic to create the kafkasql-journal topic:

  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaTopic
  metadata:
    name: kafkasql-journal
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-scram
  spec:
    partitions: 2
    replicas: 1
    config:
      cleanup.policy: compact
- Create a Kafka User resource to configure SCRAM authentication and authorization for the Apicurio Registry user. You can specify a user name in the metadata section or use the default my-user.

  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaUser
  metadata:
    name: my-user
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-scram
  spec:
    authentication:
      type: scram-sha-512
    authorization:
      acls:
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: topic
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: cluster
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: transactionalId
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: group
      type: simple
  Note: This simple example assumes admin permissions and creates the Kafka topic automatically. You must configure the authorization section specifically for the topics and resources that the Apicurio Registry requires. The following example shows the minimum configuration required when the Kafka topic is created manually:

  ...
  authorization:
    acls:
      - operations:
          - Read
          - Write
        resource:
          name: kafkasql-journal
          patternType: literal
          type: topic
      - operations:
          - Read
          - Write
        resource:
          name: apicurio-registry-
          patternType: prefix
          type: group
    type: simple
- Click Workloads and then Secrets to find two secrets that AMQ Streams creates for Apicurio Registry to connect to the Kafka cluster:
  - my-cluster-cluster-ca-cert - contains the PKCS12 truststore for the Kafka cluster
  - my-user - contains the user’s password

  Note: The name of the secret can vary based on your cluster or user name.
- If you create the secrets manually, they must contain the following key-value pairs:
  - my-cluster-cluster-ca-cert
    - ca.p12 - truststore in PKCS12 format
    - ca.password - truststore password
  - my-user
    - password - user password
- Use the following example configuration to deploy Apicurio Registry:

  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql-scram
  spec:
    configuration:
      persistence: "kafkasql"
      kafkasql:
        bootstrapServers: "my-cluster-kafka-bootstrap.registry-example-kafkasql-scram.svc:9093"
        security:
          scram:
            truststoreSecretName: my-cluster-cluster-ca-cert
            user: my-user
            passwordSecretName: my-user
You must use a different bootstrapServers address than in the plain insecure use case. The address must support TLS connections, and is found in the specified Kafka resource under the type: tls field.
3.5. Configuring OAuth authentication for Kafka storage
When using Kafka-based storage in AMQ Streams, Apicurio Registry supports accessing a Kafka cluster that requires OAuth authentication. To enable this support, you must set some environment variables in your Apicurio Registry deployment.
When you set these environment variables, the Kafka producer and consumer applications in Apicurio Registry will use this configuration to authenticate to the Kafka cluster over OAuth.
Prerequisites
- You must have already configured Kafka-based storage of Apicurio Registry data in AMQ Streams. See Section 3.2, “Configuring Apicurio Registry with Kafka storage on OpenShift”.
Procedure
Set the following environment variables in your Apicurio Registry deployment:
| Environment variable | Description | Default value |
|---|---|---|
| ENABLE_KAFKA_SASL | Enables SASL OAuth authentication for Apicurio Registry storage in Kafka. You must set this variable to true for the other variables to have effect. | false |
| CLIENT_ID | The client ID used to authenticate to Kafka. | - |
| CLIENT_SECRET | The client secret used to authenticate to Kafka. | - |
| OAUTH_TOKEN_ENDPOINT_URI | The URL of the OAuth identity server. | http://localhost:8090 |
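The following is an illustrative fragment only, showing how these variables might appear in the env section of the Apicurio Registry Deployment. The client ID, Secret name, and token endpoint are placeholders, not values defined by this guide:

  env:
    - name: ENABLE_KAFKA_SASL
      value: "true"
    - name: CLIENT_ID
      value: registry-kafka-client            # placeholder OAuth client ID
    - name: CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: kafka-oauth-client            # placeholder Secret holding the client secret
          key: clientSecret
    - name: OAUTH_TOKEN_ENDPOINT_URI
      value: https://my-sso.example.com/token # placeholder OAuth token endpoint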
Additional resources
- For an example of how to set Apicurio Registry environment variables on OpenShift, see Section 6.1, “Configuring Apicurio Registry health checks on OpenShift”
Chapter 4. Deploying Apicurio Registry storage in a PostgreSQL database
This chapter explains how to install, configure, and manage Apicurio Registry data storage in a PostgreSQL database.
Prerequisites
4.1. Installing a PostgreSQL database from the OpenShift OperatorHub
If you do not already have a PostgreSQL database Operator installed, you can install a PostgreSQL Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. For more details, see Understanding OperatorHub.
Prerequisites
- You must have cluster administrator access to an OpenShift cluster.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- Change to the OpenShift project in which you want to install the PostgreSQL Operator. For example, from the Project drop-down, select my-project.
- In the left navigation menu, click Operators and then OperatorHub.
- In the Filter by keyword text box, enter PostgreSQL to find an Operator suitable for your environment, for example, Crunchy PostgreSQL for OpenShift.
- Read the information about the Operator, and click Install to display the Operator subscription page.
- Select your subscription settings, for example:
  - Update Channel: stable
  - Installation Mode: A specific namespace on the cluster and then my-project
  - Approval Strategy: Select Automatic or Manual
- Click Install, and wait a few moments until the Operator is ready for use.

  Important: You must read the documentation from your chosen PostgreSQL Operator for details on how to create and manage your database.
Additional resources
4.2. Configuring Apicurio Registry with PostgreSQL database storage on OpenShift
This section explains how to configure storage for Apicurio Registry on OpenShift using a PostgreSQL database Operator. You can install Apicurio Registry in an existing database or create a new database, depending on your environment. This section shows a simple example using the PostgreSQL Operator by Dev4Ddevs.com.
Prerequisites
- You must have an OpenShift cluster with cluster administrator access.
- You must have already installed Apicurio Registry. See Chapter 2, Installing Apicurio Registry on OpenShift.
- You must have already installed a PostgreSQL Operator on OpenShift. For example, see Section 4.1, “Installing a PostgreSQL database from the OpenShift OperatorHub”.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- Change to the OpenShift project in which Apicurio Registry and your PostgreSQL Operator are installed. For example, from the Project drop-down, select my-project.
- Create a PostgreSQL database for your Apicurio Registry storage. For example, click Installed Operators, PostgreSQL Operator by Dev4Ddevs.com, and then Create database.
- Click YAML and edit the database settings as follows:
  - name: Change the value to registry
  - image: Change the value to centos/postgresql-12-centos7
  - Edit any other database settings as needed depending on your environment, for example:

    apiVersion: postgresql.dev4devs.com/v1alpha1
    kind: Database
    metadata:
      name: registry
      namespace: my-project
    spec:
      databaseCpu: 30m
      databaseCpuLimit: 60m
      databaseMemoryLimit: 512Mi
      databaseMemoryRequest: 128Mi
      databaseName: example
      databaseNameKeyEnvVar: POSTGRESQL_DATABASE
      databasePassword: postgres
      databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD
      databaseStorageRequest: 1Gi
      databaseUser: postgres
      databaseUserKeyEnvVar: POSTGRESQL_USER
      image: centos/postgresql-12-centos7
      size: 1
- Click Create, and wait until the database is created.
- Click Installed Operators > Red Hat Integration - Service Registry > ApicurioRegistry > Create ApicurioRegistry.
- Paste in the following custom resource definition, and edit the values for the database url and credentials to match your environment:

  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-sql
  spec:
    configuration:
      persistence: 'sql'
      sql:
        dataSource:
          url: 'jdbc:postgresql://<service name>.<namespace>.svc:5432/<database name>'
          # e.g. url: 'jdbc:postgresql://acid-minimal-cluster.my-project.svc:5432/registry'
          userName: 'postgres'
          password: '<password>' # Optional
- Click Create and wait for the Apicurio Registry route to be created on OpenShift.
- Click Networking > Route to access the new route for the Apicurio Registry web console. For example:

  http://example-apicurioregistry-sql.my-project.my-domain-name.com/
Additional resources
4.3. Backing up Apicurio Registry PostgreSQL storage
When using storage in a PostgreSQL database, you must ensure that the data stored by Apicurio Registry is backed up regularly.
SQL Dump is a simple procedure that works with any PostgreSQL installation. This uses the pg_dump utility to generate a file with SQL commands that you can use to recreate the database in the same state that it was in at the time of the dump.
pg_dump is a regular PostgreSQL client application, which you can execute from any remote host that has access to the database. Like any other client, the operations that it can perform are limited by the permissions of the user.
Procedure
- Use the pg_dump command to redirect the output to a file:

  $ pg_dump dbname > dumpfile
  You can specify the database server that pg_dump connects to using the -h host and -p port options.

- You can reduce large dump files using a compression tool, such as gzip, for example:

  $ pg_dump dbname | gzip > filename.gz
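  For example, a minimal sketch of backing up the registry database created earlier, run from a host that can reach the database service; the host, user, and database names are the example values used in this guide:

  $ pg_dump -h postgresql.apicurio-registry.svc -p 5432 -U postgres registry > registry-backup.sql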
Additional resources
- For details on client authentication, see the PostgreSQL documentation.
- For details on importing and exporting registry content, see Managing Apicurio Registry content using the REST API.
4.4. Restoring Apicurio Registry PostgreSQL storage
You can restore SQL dump files created by pg_dump using the psql utility.
Prerequisites
- You must have already backed up your PostgreSQL database using pg_dump. See Section 4.3, “Backing up Apicurio Registry PostgreSQL storage”.
Procedure
- Enter the following command to create the database:

  $ createdb -T template0 dbname
- Enter the following command to restore the SQL dump:

  $ psql dbname < dumpfile
- Run ANALYZE on each database so the query optimizer has useful statistics.
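  For example, a minimal sketch of running ANALYZE from the command line, using the same dbname placeholder as above:

  $ psql dbname -c "ANALYZE;"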
Chapter 5. Securing Apicurio Registry deployments
Apicurio Registry provides authentication and authorization by using Red Hat Single Sign-On based on OpenID Connect (OIDC) and HTTP basic. You can configure the required settings automatically using the Red Hat Single Sign-On Operator, or manually configure them in Red Hat Single Sign-On and Apicurio Registry.
Apicurio Registry also provides authentication and authorization by using Microsoft Azure Active Directory based on OpenID Connect (OIDC) and the OAuth Authorization Code Flow. You can configure the required settings manually in Azure AD and Apicurio Registry.
In addition to role-based authorization options with Red Hat Single Sign-On or Azure AD, Apicurio Registry also provides content-based authorization at the schema or API level, where only the artifact creator has write access. You can also configure an HTTPS connection to Apicurio Registry from inside or outside an OpenShift cluster.
This chapter explains how to configure the following security options for your Apicurio Registry deployment on OpenShift:
- Section 5.1, “Securing Apicurio Registry using the Red Hat Single Sign-On Operator”
- Section 5.2, “Configuring Apicurio Registry authentication and authorization with Red Hat Single Sign-On”
- Section 5.3, “Configuring Apicurio Registry authentication and authorization with Microsoft Azure Active Directory”
- Section 5.4, “Apicurio Registry authentication and authorization configuration options”
- Section 5.5, “Configuring an HTTPS connection to Apicurio Registry from inside the OpenShift cluster”
- Section 5.6, “Configuring an HTTPS connection to Apicurio Registry from outside the OpenShift cluster”
Additional resources
For details on security configuration for Java client applications, see the following:
5.1. Securing Apicurio Registry using the Red Hat Single Sign-On Operator
The following procedure shows how to configure the Apicurio Registry REST API and web console to be protected by Red Hat Single Sign-On.
Apicurio Registry supports the following user roles:
| Name | Capabilities |
|---|---|
| sr-admin | Full access, no restrictions. |
| sr-developer | Create artifacts and configure artifact rules. Cannot modify global rules, perform import/export, or use the /admin REST API endpoints. |
| sr-readonly | View and search only. Cannot modify artifacts or rules, perform import/export, or use the /admin REST API endpoints. |
There is a related configuration option in the ApicurioRegistry CRD that you can use to set the web console to read-only mode. However, this configuration does not affect the REST API.
Prerequisites
- You must have already installed the Apicurio Registry Operator.
- You must install the Red Hat Single Sign-On Operator or have Red Hat Single Sign-On accessible from your OpenShift cluster.
The example configuration in this procedure is intended for development and testing only. To keep the procedure simple, it does not use HTTPS and other defenses recommended for a production environment. For more details, see the Red Hat Single Sign-On documentation.
Procedure
- In the OpenShift web console, click Installed Operators and Red Hat Single Sign-On Operator, and then the Keycloak tab.
- Click Create Keycloak to provision a new Red Hat Single Sign-On instance for securing an Apicurio Registry deployment. You can use the default values, for example:

  apiVersion: keycloak.org/v1alpha1
  kind: Keycloak
  metadata:
    name: example-keycloak
    labels:
      app: sso
  spec:
    instances: 1
    externalAccess:
      enabled: True
    podDisruptionBudget:
      enabled: True
- Wait until the instance has been created, and click Networking and then Routes to access the new route for the Keycloak instance.
- Click the Location URL and copy the displayed URL value for later use when deploying Apicurio Registry.
- Click Installed Operators and Red Hat Single Sign-On Operator, and click the Keycloak Realm tab, and then Create Keycloak Realm to create a registry example realm:

  apiVersion: keycloak.org/v1alpha1
  kind: KeycloakRealm
  metadata:
    name: registry-keycloakrealm
    labels:
      app: sso
  spec:
    instanceSelector:
      matchLabels:
        app: sso
    realm:
      displayName: Registry
      enabled: true
      id: registry
      realm: registry
      sslRequired: none
      roles:
        realm:
          - name: sr-admin
          - name: sr-developer
          - name: sr-readonly
      clients:
        - clientId: registry-client-ui
          implicitFlowEnabled: true
          redirectUris:
            - '*'
          standardFlowEnabled: true
          webOrigins:
            - '*'
          publicClient: true
        - clientId: registry-client-api
          implicitFlowEnabled: true
          redirectUris:
            - '*'
          standardFlowEnabled: true
          webOrigins:
            - '*'
          publicClient: true
      users:
        - credentials:
            - temporary: false
              type: password
              value: changeme
          enabled: true
          realmRoles:
            - sr-admin
          username: registry-admin
        - credentials:
            - temporary: false
              type: password
              value: changeme
          enabled: true
          realmRoles:
            - sr-developer
          username: registry-developer
        - credentials:
            - temporary: false
              type: password
              value: changeme
          enabled: true
          realmRoles:
            - sr-readonly
          username: registry-user
  Important: You must customize this KeycloakRealm resource with values suitable for your environment if you are deploying to production. You can also create and manage realms using the Red Hat Single Sign-On web console.
- If your cluster does not have a valid HTTPS certificate configured, you can create the following HTTP Service and Ingress resources as a temporary workaround:
  - Click Networking and then Services, and click Create Service using the following example:

    apiVersion: v1
    kind: Service
    metadata:
      name: keycloak-http
      labels:
        app: keycloak
    spec:
      ports:
        - name: keycloak-http
          protocol: TCP
          port: 8080
          targetPort: 8080
      selector:
        app: keycloak
        component: keycloak
      type: ClusterIP
      sessionAffinity: None
    status:
      loadBalancer: {}
  - Click Networking and then Ingresses, and click Create Ingress using the following example:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: keycloak-http
      labels:
        app: keycloak
    spec:
      rules:
        - host: KEYCLOAK_HTTP_HOST
          http:
            paths:
              - path: /
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: keycloak-http
                    port:
                      number: 8080
    Modify the host value to create a route accessible for the Apicurio Registry user, and use it instead of the HTTPS route created by the Red Hat Single Sign-On Operator.
- Click the Apicurio Registry Operator, and on the ApicurioRegistry tab, click Create ApicurioRegistry, using the following example, but replace the values in the keycloak section with your own:

  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql-keycloak
  spec:
    configuration:
      security:
        keycloak:
          url: "http://keycloak-http-<namespace>.apps.<cluster host>"
          # ^ Required. Use an HTTP URL in development.
          realm: "registry"
          # apiClientId: "registry-client-api"
          # ^ Optional (default value)
          # uiClientId: "registry-client-ui"
          # ^ Optional (default value)
      persistence: 'kafkasql'
      kafkasql:
        bootstrapServers: '<my-cluster>-kafka-bootstrap.<my-namespace>.svc:9092'
5.2. Configuring Apicurio Registry authentication and authorization with Red Hat Single Sign-On
This section explains how to manually configure authentication and authorization options for Apicurio Registry and Red Hat Single Sign-On.
Alternatively, for details on how to configure these settings automatically, see Section 5.1, “Securing Apicurio Registry using the Red Hat Single Sign-On Operator”.
The Apicurio Registry web console and core REST API support authentication in Red Hat Single Sign-On based on OAuth and OpenID Connect (OIDC). The same Red Hat Single Sign-On realm and users are federated across the Apicurio Registry web console and core REST API using OpenID Connect so that you only require one set of credentials.
Apicurio Registry provides role-based authorization for default admin, write, and read-only user roles. Apicurio Registry provides content-based authorization at the schema or API level, where only the creator of the registry artifact can update or delete it. Apicurio Registry authentication and authorization settings are disabled by default.
Prerequisites
- Red Hat Single Sign-On is installed and running. For more details, see the Red Hat Single Sign-On user documentation.
- Apicurio Registry is installed and running.
Procedure
- In the Red Hat Single Sign-On Admin Console, create a Red Hat Single Sign-On realm for Apicurio Registry. By default, Apicurio Registry expects a realm name of registry. For details on creating realms, see the Red Hat Single Sign-On user documentation.
- Create a Red Hat Single Sign-On client for the Apicurio Registry API. By default, Apicurio Registry expects the following settings:
  - Client ID: registry-api
  - Client Protocol: openid-connect
  - Access Type: bearer-only

  You can use the defaults for the other client settings.

  Note: If you are using Red Hat Single Sign-On service accounts, the client Access Type must be confidential instead of bearer-only.
- Create a Red Hat Single Sign-On client for the Apicurio Registry web console. By default, Apicurio Registry expects the following settings:
  - Client ID: apicurio-registry
  - Client Protocol: openid-connect
  - Access Type: public
  - Valid Redirect URLs: http://my-registry-url:8080/*
  - Web Origins: +

  You can use the defaults for the other client settings.
- In your Apicurio Registry deployment on OpenShift, set the following Apicurio Registry environment variables to configure authentication using Red Hat Single Sign-On:

  Table 5.2. Configuration for Apicurio Registry authentication with Red Hat Single Sign-On

  | Environment variable | Description | Type | Default |
  |---|---|---|---|
  | AUTH_ENABLED | Enables authentication for Apicurio Registry. When set to true, the environment variables that follow are required for authentication using Red Hat Single Sign-On. | String | false |
  | KEYCLOAK_URL | The URL of the Red Hat Single Sign-On authentication server. For example, http://localhost:8080. | String | - |
  | KEYCLOAK_REALM | The Red Hat Single Sign-On realm for authentication. For example, registry. | String | - |
  | KEYCLOAK_API_CLIENT_ID | The client ID for the Apicurio Registry REST API. | String | registry-api |
  | KEYCLOAK_UI_CLIENT_ID | The client ID for the Apicurio Registry web console. | String | apicurio-registry |
  Tip: For an example of setting environment variables on OpenShift, see Section 6.1, “Configuring Apicurio Registry health checks on OpenShift”.
- Set the following option to true to enable Apicurio Registry user roles in Red Hat Single Sign-On:

  Table 5.3. Configuration for Apicurio Registry role-based authorization

  | Environment variable | Java system property | Type | Default value |
  |---|---|---|---|
  | ROLE_BASED_AUTHZ_ENABLED | registry.auth.role-based-authorization | Boolean | false |
When Apicurio Registry user roles are enabled, you must assign Apicurio Registry users to at least one of the following default user roles in your Red Hat Single Sign-On realm:
  Table 5.4. Default user roles for registry authentication and authorization

  | Role | Read artifacts | Write artifacts | Global rules | Summary |
  |---|---|---|---|---|
  | sr-admin | Yes | Yes | Yes | Full access to all create, read, update, and delete operations. |
  | sr-developer | Yes | Yes | No | Access to create, read, update, and delete operations, except configuring global rules. This role can configure artifact-specific rules. |
  | sr-readonly | Yes | No | No | Access to read and search operations only. This role cannot configure any rules. |
- Set the following option to true to enable owner-only authorization for updates to schema and API artifacts in Apicurio Registry:

  Table 5.5. Configuration for owner-only authorization

  | Environment variable | Java system property | Type | Default value |
  |---|---|---|---|
  | REGISTRY_AUTH_OBAC_ENABLED | registry.auth.owner-only-authorization | Boolean | false |
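  For illustration, the following sketch enables both options with oc set env. The deployment name is an assumption based on the earlier example (check the actual name with oc get deployment), and the Operator may reconcile manually set values:

  oc set env deployment/example-apicurioregistry-kafkasql-keycloak-deployment \
    ROLE_BASED_AUTHZ_ENABLED=true \
    REGISTRY_AUTH_OBAC_ENABLED=true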
Additional resources
- For details on configuring non-default user role names, see Section 5.4, “Apicurio Registry authentication and authorization configuration options”.
- For an open source example application and Keycloak realm, see Docker Compose example of Apicurio Registry with Keycloak.
- For details on how to use Red Hat Single Sign-On in a production environment, see the Red Hat Single Sign-On documentation.
5.3. Configuring Apicurio Registry authentication and authorization with Microsoft Azure Active Directory
This section explains how to manually configure authentication and authorization options for Apicurio Registry and Microsoft Azure Active Directory (Azure AD).
The Apicurio Registry web console and core REST API support authentication in Azure AD based on OpenID Connect (OIDC) and the OAuth Authorization Code Flow. Apicurio Registry provides role-based authorization for default admin, write, and read-only user roles. Apicurio Registry authentication and authorization settings are disabled by default.
To secure Apicurio Registry with Azure AD, you require a valid directory in Azure AD with specific configuration. This involves registering the Apicurio Registry application in the Azure AD portal with recommended settings and configuring environment variables in Apicurio Registry.
Prerequisites
- Azure AD is installed and running. For more details, see the Microsoft Azure AD user documentation.
- Apicurio Registry is installed and running.
Procedure
- Log in to the Azure AD portal using your email address or GitHub account.
- In the navigation menu, select Manage > App registrations > New registration, and complete the following settings:
  - Name: Enter your application name. For example: apicurio-registry-example
  - Supported account types: Click Accounts in any organizational directory.
  - Redirect URI: Select Single-page application from the list, and enter your Apicurio Registry web console application host. For example: https://test-registry.com/ui/

  Important: You must register your Apicurio Registry application host as a Redirect URI. When logging in, users are redirected from Apicurio Registry to Azure AD for authentication, and you want to send them back to your application afterwards. Azure AD does not allow any redirect URLs that are not registered.
- Click Register. You can view your app registration details by selecting Manage > App registrations > apicurio-registry-example.
- Select Manage > Authentication and ensure that the application is configured with your redirect URLs and tokens as follows:
  - Redirect URIs: For example: https://test-registry.com/ui/
  - Implicit grant and hybrid flows: Click ID tokens (used for implicit and hybrid flows)
- Select Azure AD > Admin > App registrations > Your app > Application (client) ID. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901
- Select Azure AD > Admin > App registrations > Your app > Directory (tenant) ID. For example: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0
- In Apicurio Registry, configure the following environment variables with your Azure AD settings:

  Table 5.6. Configuration for Azure AD settings in Apicurio Registry

  | Environment variable | Description | Setting |
  |---|---|---|
  | KEYCLOAK_API_CLIENT_ID | The client application ID for the Apicurio Registry REST API. | Your Azure AD Application (client) ID obtained in step 5. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 |
  | REGISTRY_OIDC_UI_CLIENT_ID | The client application ID for the Apicurio Registry web console. | Your Azure AD Application (client) ID obtained in step 5. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 |
  | REGISTRY_AUTH_URL_CONFIGURED | The URL for authentication in Azure AD. | Your Azure AD Application (tenant) ID obtained in step 6. For example: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0 |

- In Apicurio Registry, configure the following environment variables for Apicurio Registry-specific settings:
Table 5.7. Configuration for Apicurio Registry-specific settings
Environment variable | Description | Setting |
---|---|---|
REGISTRY_AUTH_ENABLED | Enables authentication for Apicurio Registry. | true |
REGISTRY_UI_AUTH_TYPE | The Apicurio Registry authentication type. | oidc |
CORS_ALLOWED_ORIGINS | The host for your Apicurio Registry deployment for cross-origin resource sharing (CORS). | For example: https://test-registry.com |
REGISTRY_OIDC_UI_REDIRECT_URL | The host for your Apicurio Registry web console. | For example: https://test-registry.com/ui |
ROLE_BASED_AUTHZ_ENABLED | Enables role-based authorization in Apicurio Registry. | true |
QUARKUS_OIDC_ROLES_ROLE_CLAIM_PATH | The name of the claim in which Azure AD stores roles. | roles |
Note: When you enable roles in Apicurio Registry, you must also create the same roles in Azure AD as application roles. The default roles expected by Apicurio Registry are sr-admin, sr-developer, and sr-readonly.
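The environment variables in Tables 5.6 and 5.7 can be set on an OpenShift deployment by using the spec.configuration.env field of the ApicurioRegistry CR, as described in Section 6.3, “Managing Apicurio Registry environment variables”. The following is a minimal sketch that reuses the example values from this procedure; the client ID, tenant URL, and host names are placeholders that you must replace with your own Azure AD settings:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      # Example values from Tables 5.6 and 5.7; replace with your own Azure AD settings.
      - name: REGISTRY_AUTH_ENABLED
        value: "true"
      - name: REGISTRY_UI_AUTH_TYPE
        value: oidc
      - name: KEYCLOAK_API_CLIENT_ID
        value: 123456a7-b8c9-012d-e3f4-5fg67h8i901
      - name: REGISTRY_OIDC_UI_CLIENT_ID
        value: 123456a7-b8c9-012d-e3f4-5fg67h8i901
      - name: REGISTRY_AUTH_URL_CONFIGURED
        value: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0
      - name: CORS_ALLOWED_ORIGINS
        value: https://test-registry.com
      - name: REGISTRY_OIDC_UI_REDIRECT_URL
        value: https://test-registry.com/ui
      - name: ROLE_BASED_AUTHZ_ENABLED
        value: "true"
      - name: QUARKUS_OIDC_ROLES_ROLE_CLAIM_PATH
        value: roles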
Additional resources
- For details on configuring non-default user role names, see Section 5.4, “Apicurio Registry authentication and authorization configuration options”.
- For more details on using Azure AD, see the Microsoft Azure AD user documentation.
5.4. Apicurio Registry authentication and authorization configuration options
Apicurio Registry provides authentication options for OpenID Connect with Red Hat Single Sign-On and HTTP basic authentication.
Apicurio Registry provides authorization options for role-based and content-based approaches:
- Role-based authorization for default admin, write, and read-only user roles.
- Content-based authorization for schema or API artifacts, where only the owner of the artifacts or artifact group can update or delete artifacts.
All authentication and authorization options in Apicurio Registry are disabled by default. Before enabling any of these options, you must first set the AUTH_ENABLED
option to true
.
This chapter provides details on the following configuration options:
- Apicurio Registry authentication by using OpenID Connect with Red Hat Single Sign-On
- Apicurio Registry authentication by using HTTP basic
- Apicurio Registry role-based authorization
- Apicurio Registry owner-only authorization
- Apicurio Registry authenticated read access
- Apicurio Registry anonymous read-only access
Apicurio Registry authentication by using OpenID Connect with Red Hat Single Sign-On
You can set the following environment variables to configure authentication for the Apicurio Registry web console and API with Red Hat Single Sign-On:
Environment variable | Description | Type | Default |
---|---|---|---|
|
Enables authentication for Apicurio Registry. When set to | String |
|
|
The URL of the Red Hat Single Sign-On authentication server. For example, | String | - |
|
The Red Hat Single Sign-On realm for authentication. For example, | String | - |
| The client ID for the Apicurio Registry REST API. | String |
|
| The client ID for the Apicurio Registry web console. | String |
|
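Alternatively, when you deploy with the Apicurio Registry Operator, the Red Hat Single Sign-On settings can be provided through the high-level spec.configuration.security.keycloak section of the ApicurioRegistry CR, described in Chapter 7. The following is a minimal sketch; the server URL, realm, and client names are example values only:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    security:
      keycloak:
        # Example values; replace with your Red Hat Single Sign-On server URL and realm.
        url: https://keycloak.example.com/auth
        realm: registry
        # Example client names; the clients must exist in the configured realm.
        apiClientId: registry-client-api
        uiClientId: registry-client-ui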
Apicurio Registry authentication by using HTTP basic
By default, Apicurio Registry supports authentication by using OpenID Connect. Users or API clients must obtain an access token to make authenticated calls to the Apicurio Registry REST API. However, because some tools do not support OpenID Connect, you can also configure Apicurio Registry to support HTTP basic authentication by setting the following configuration options to true
:
Environment variable | Java system property | Type | Default value |
---|---|---|---|
|
| Boolean |
|
|
| Boolean |
|
Apicurio Registry HTTP basic client credentials cache expiry
You can also configure the HTTP basic client credentials cache expiry time. By default, when using HTTP basic authentication, Apicurio Registry caches JWT tokens, and does not issue a new token when there is no need. You can configure the cache expiry time for JWT tokens, which is set to 10 minutes by default.
When using Red Hat Single Sign-On, it is best to set this configuration to your Red Hat Single Sign-On JWT expiry time minus one minute. For example, if you have the expiry time set to 5 minutes in Red Hat Single Sign-On, you should set the following configuration option to 4 minutes:
Environment variable | Java system property | Type | Default value |
---|---|---|---|
|
| Integer |
|
Apicurio Registry role-based authorization
You can set the following options to true
to enable role-based authorization in Apicurio Registry:
Environment variable | Java system property | Type | Default value |
---|---|---|---|
|
| Boolean |
|
|
| Boolean |
|
You can then configure role-based authorization to use roles included in the user’s authentication token (for example, granted when authenticating by using Red Hat Single Sign-On), or to use role mappings managed internally by Apicurio Registry.
Use roles assigned in Red Hat Single Sign-On
To enable using roles assigned by Red Hat Single Sign-On, set the following environment variables:
Environment variable | Description | Type | Default |
---|---|---|---|
|
When set to | String |
|
| The name of the role that indicates a user is an admin. | String |
|
| The name of the role that indicates a user is a developer. | String |
|
| The name of the role that indicates a user has read-only access. | String |
|
When Apicurio Registry is configured to use roles from Red Hat Single Sign-On, you must assign Apicurio Registry users to at least one of the following user roles in Red Hat Single Sign-On. However, you can configure different user role names by using the environment variables in Table 5.12, “Configuration for Apicurio Registry role-based authorization by using Red Hat Single Sign-On”.
Role name | Read artifacts | Write artifacts | Global rules | Description |
---|---|---|---|---|
| Yes | Yes | Yes | Full access to all create, read, update, and delete operations. |
| Yes | Yes | No | Access to create, read, update, and delete operations, except configuring global rules and import/export. This role can configure artifact-specific rules only. |
| Yes | No | No | Access to read and search operations only. This role cannot configure any rules. |
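For example, the following minimal sketch enables role-based authorization by using the spec.configuration.env field of the ApicurioRegistry CR; the sr-admin, sr-developer, and sr-readonly roles referenced in the comment are the default role names described above:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      - name: ROLE_BASED_AUTHZ_ENABLED
        value: "true"
      # When roles are taken from the authentication token, users must be assigned
      # one of the default roles (sr-admin, sr-developer, sr-readonly) in
      # Red Hat Single Sign-On, unless custom role names are configured.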
Manage roles directly in Apicurio Registry
To enable using roles managed internally by Apicurio Registry, set the following environment variable:
Environment variable | Description | Type | Default |
---|---|---|---|
|
When set to | String |
|
When using internally managed role mappings, users can be assigned a role by using the /admin/roleMappings
endpoint in the Apicurio Registry REST API. For more details, see Apicurio Registry REST API documentation.
Users can be granted exactly one role: ADMIN
, DEVELOPER
, or READ_ONLY
. Only users with admin privileges can grant access to other users.
Apicurio Registry admin-override configuration
Because there are no default admin users in Apicurio Registry, it is usually helpful to configure another way for users to be identified as admins. You can configure this admin-override feature by using the following environment variables:
Environment variable | Description | Type | Default |
---|---|---|---|
| Enables the admin-override feature. | String |
|
|
Where to look for admin-override information. Only | String |
|
|
The type of information used to determine if a user is an admin. Values depend on the value of the FROM variable, for example, | String |
|
| The name of the role that indicates a user is an admin. | String |
|
| The name of a JWT token claim to use for determining admin-override. | String |
|
| The value that the JWT token claim indicated by the CLAIM variable must be for the user to be granted admin-override. | String |
|
For example, you can use this admin-override feature to assign the sr-admin
role to a single user in Red Hat Single Sign-On, which grants that user the admin role. That user can then use the /admin/roleMappings
REST API (or associated UI) to grant roles to additional users (including additional admins).
Apicurio Registry owner-only authorization
You can set the following options to true
to enable owner-only authorization for updates to artifacts or artifact groups in Apicurio Registry:
Environment variable | Java system property | Type | Default value |
---|---|---|---|
|
| Boolean |
|
|
| Boolean |
|
|
| Boolean |
|
When owner-only authorization is enabled, only the user who created an artifact can modify or delete that artifact.
When owner-only authorization and group owner-only authorization are both enabled, only the user who created an artifact group has write access to that artifact group, for example, to add or remove artifacts in that group.
Apicurio Registry authenticated read access
When the authenticated read access option is enabled, Apicurio Registry grants at least read-only access to requests from any authenticated user in the same organization, regardless of their user role.
To enable authenticated read access, you must first enable role-based authorization, and then ensure that the following options are set to true
:
Environment variable | Java system property | Type | Default value |
---|---|---|---|
|
| Boolean |
|
|
| Boolean |
|
For more details, see the section called “Apicurio Registry role-based authorization”.
Apicurio Registry anonymous read-only access
In addition to the two main types of authorization (role-based and owner-based authorization), Apicurio Registry supports an anonymous read-only access option.
To allow anonymous users, such as REST API calls with no authentication credentials, to make read-only calls to the REST API, set the following options to true
:
Environment variable | Java system property | Type | Default value |
---|---|---|---|
|
| Boolean |
|
|
| Boolean |
|
Additional resources
- For an example of how to set environment variables in your Apicurio Registry deployment on OpenShift, see Section 6.3, “Managing Apicurio Registry environment variables”
- For details on configuring custom authentication for Apicurio Registry, see the Quarkus OpenID Connect documentation
5.5. Configuring an HTTPS connection to Apicurio Registry from inside the OpenShift cluster
The following procedure shows how to configure your Apicurio Registry deployment to expose a port for HTTPS connections from inside the OpenShift cluster.
This kind of connection is not directly available outside of the cluster. Routing is based on hostname, which is encoded in the case of an HTTPS connection. Therefore, edge termination or other configuration is still needed. See Section 5.6, “Configuring an HTTPS connection to Apicurio Registry from outside the OpenShift cluster”.
Prerequisites
- You must have already installed the Apicurio Registry Operator.
Procedure
Generate a
keystore
with a self-signed certificate. You can skip this step if you are using your own certificates.
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout tls.key -out tls.crt
Create a new secret to hold the certificate and the private key.
- In the left navigation menu of the OpenShift web console, click Workloads > Secrets > Create Key/Value Secret.
-
Use the following values:
Name:https-cert-secret
Key 1:tls.key
Value 1: tls.key (uploaded file)
Key 2:tls.crt
Value 2: tls.crt (uploaded file)
Alternatively, create the secret by using the following command:
oc create secret generic https-cert-secret --from-file=tls.key --from-file=tls.crt
Edit the
spec.configuration.security.https
section of theApicurioRegistry
CR for your Apicurio Registry deployment, for example:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    security:
      https:
        secretName: https-cert-secret
Verify that the connection is working:
Connect to a pod on the cluster by using SSH (you can use the Apicurio Registry pod):
oc rsh example-apicurioregistry-deployment-6f788db977-2wzpw
Find the cluster IP of the Apicurio Registry pod from the Service resource (see the Location column in the web console). Afterward, execute a test request (because this example uses a self-signed certificate, the insecure flag is required):
curl -k https://172.30.230.78:8443/health
In the Kubernetes secret containing the HTTPS certificate and key, the names tls.crt
and tls.key
must be used for the provided values. This is currently not configurable.
Disabling HTTP
If you enabled HTTPS by using the procedure in this section, you can also disable the default HTTP connection by setting the spec.configuration.security.https.disableHttp
to true
. This removes the HTTP port 8080 from the Apicurio Registry pod container, Service
, and the NetworkPolicy
(if present).
Importantly, Ingress
is also removed because the Apicurio Registry Operator currently does not support configuring HTTPS in Ingress
. Users must create an Ingress
for HTTPS connections manually.
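For example, extending the ApicurioRegistry CR from the procedure above, a minimal sketch that keeps the HTTPS certificate secret and disables the HTTP port:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    security:
      https:
        secretName: https-cert-secret
        # Removes HTTP port 8080 from the pod container, Service, and NetworkPolicy,
        # and also removes the Ingress (create an Ingress for HTTPS manually if needed).
        disableHttp: true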
Additional resources
5.6. Configuring an HTTPS connection to Apicurio Registry from outside the OpenShift cluster
The following procedure shows how to configure your Apicurio Registry deployment to expose an HTTPS edge-terminated route for connections from outside the OpenShift cluster.
Prerequisites
- You must have already installed the Apicurio Registry Operator.
- Read the OpenShift documentation for creating secured routes.
Procedure
Add a second Route in addition to the HTTP route created by the Apicurio Registry Operator. For example:
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  [...]
  labels:
    app: example-apicurioregistry
    [...]
spec:
  host: example-apicurioregistry-default.apps.example.com
  to:
    kind: Service
    name: example-apicurioregistry-service-9whd7
    weight: 100
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
Note: Make sure the
insecureEdgeTerminationPolicy: Redirect
configuration property is set. If you do not specify a certificate, OpenShift will use a default. Alternatively, you can generate a custom self-signed certificate by using the following commands:
openssl genrsa 2048 > tls.key && openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt
Then create a route using the OpenShift CLI:
oc create route edge \ --service=example-apicurioregistry-service-9whd7 \ --cert=tls.crt --key=tls.key \ --hostname=example-apicurioregistry-default.apps.example.com \ --insecure-policy=Redirect \ -n default
Chapter 6. Configuring and managing Apicurio Registry deployments
This chapter explains how to configure and manage optional settings for your Apicurio Registry deployment on OpenShift:
- Section 6.1, “Configuring Apicurio Registry health checks on OpenShift”
- Section 6.2, “Environment variables for Apicurio Registry health checks”
- Section 6.3, “Managing Apicurio Registry environment variables”
- Section 6.4, “Configuring Apicurio Registry deployment using PodTemplate”
- Section 6.5, “Configuring the Apicurio Registry web console”
- Section 6.6, “Configuring Apicurio Registry logging”
- Section 6.7, “Configuring Apicurio Registry event sourcing”
6.1. Configuring Apicurio Registry health checks on OpenShift
You can configure optional environment variables for liveness and readiness probes to monitor the health of the Apicurio Registry server on OpenShift:
- Liveness probes test if the application can make progress. If the application cannot make progress, OpenShift automatically restarts the failing Pod.
- Readiness probes test if the application is ready to process requests. If the application is not ready, it can become overwhelmed by requests, and OpenShift stops sending requests for the time that the probe fails. If other Pods are OK, they continue to receive requests.
The default values of the liveness and readiness environment variables are designed for most cases and should only be changed if required by your environment. Any changes to the defaults depend on your hardware, network, and amount of data stored. These values should be kept as low as possible to avoid unnecessary overhead.
Prerequisites
- You must have an OpenShift cluster with cluster administrator access.
- You must have already installed Apicurio Registry on OpenShift.
- You must have already installed and configured your chosen Apicurio Registry storage in AMQ Streams or PostgreSQL.
Procedure
- In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges.
- Click Installed Operators > Red Hat Integration - Service Registry Operator.
- On the ApicurioRegistry tab, click the Operator custom resource for your deployment, for example, example-apicurioregistry.
-
In the main overview page, find the Deployment Name section and the corresponding
DeploymentConfig
name for your Apicurio Registry deployment, for example, example-apicurioregistry. -
In the left navigation menu, click Workloads > Deployment Configs, and select your
DeploymentConfig
name. Click the Environment tab, and enter your environment variables in the Single values env section, for example:
- NAME: LIVENESS_STATUS_RESET
- VALUE: 350
Click Save at the bottom.
Alternatively, you can perform these steps using the OpenShift
oc
command. For more details, see the OpenShift CLI documentation.
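If you prefer to manage this setting declaratively, the same environment variable can be set in the spec.configuration.env field of the ApicurioRegistry CR, as described in Section 6.3, “Managing Apicurio Registry environment variables”. A minimal sketch using the example value above:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      - name: LIVENESS_STATUS_RESET
        value: "350"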
6.2. Environment variables for Apicurio Registry health checks
This section describes the available environment variables for Apicurio Registry health checks on OpenShift. These include liveness and readiness probes to monitor the health of the Apicurio Registry server on OpenShift. For an example procedure, see Section 6.1, “Configuring Apicurio Registry health checks on OpenShift”.
The following environment variables are provided for reference only. The default values are designed for most cases and should only be changed if required by your environment. Any changes to the defaults depend on your hardware, network, and amount of data stored. These values should be kept as low as possible to avoid unnecessary overhead.
Liveness environment variables
Name | Description | Type | Default |
---|---|---|---|
| Number of liveness issues or errors that can occur before the liveness probe fails. | Integer |
|
| Period in which the threshold number of errors must occur. For example, if this value is 60 and the threshold is 1, the check fails after two errors occur in 1 minute. | Seconds |
|
| Number of seconds that must elapse without any more errors for the liveness probe to reset to OK status. | Seconds |
|
| Comma-separated list of ignored liveness exceptions. | String |
|
Because OpenShift automatically restarts a Pod that fails a liveness check, the liveness settings, unlike readiness settings, do not directly affect behavior of Apicurio Registry on OpenShift.
Readiness environment variables
Name | Description | Type | Default |
---|---|---|---|
| Number of readiness issues or errors that can occur before the readiness probe fails. | Integer |
|
| Period in which the threshold number of errors must occur. For example, if this value is 60 and the threshold is 1, the check fails after two errors occur in 1 minute. | Seconds |
|
| Number of seconds that must elapse without any more errors for the readiness probe to reset to OK status. In this case, this means how long the Pod stays not ready, until it returns to normal operation. | Seconds |
|
| Readiness tracks the timeout of two operations:
If these operations take more time than the configured timeout, this is counted as a readiness issue or error. This value controls the timeouts for both operations. | Seconds |
|
6.3. Managing Apicurio Registry environment variables
The Apicurio Registry Operator manages the most common Apicurio Registry configuration, but there are some options that it does not support yet. If a high-level configuration option is not available in the ApicurioRegistry
CR, you can use an environment variable to adjust it. You can update these by setting an environment variable directly in the ApicurioRegistry
CR, in the spec.configuration.env
field. These are then forwarded to the Deployment
resource of Apicurio Registry.
Procedure
You can manage Apicurio Registry environment variables by using the OpenShift web console or CLI.
- OpenShift web console
- Select the Installed Operators tab, and then Red Hat Integration - Service Registry Operator.
-
On the Apicurio Registry tab, click the
ApicurioRegistry
CR for your Apicurio Registry deployment. Click the YAML tab and then edit the
spec.configuration.env
section as needed. The following example shows how to set default global content rules:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      - name: REGISTRY_RULES_GLOBAL_VALIDITY
        value: FULL # One of: NONE, SYNTAX_ONLY, FULL
      - name: REGISTRY_RULES_GLOBAL_COMPATIBILITY
        value: FULL # One of: NONE, BACKWARD, BACKWARD_TRANSITIVE, FORWARD, FORWARD_TRANSITIVE, FULL, FULL_TRANSITIVE
- OpenShift CLI
- Select the project where Apicurio Registry is installed.
-
Run
oc get apicurioregistry
to get the list of ApicurioRegistry
CRs.
Run
oc edit apicurioregistry
on the CR representing the Apicurio Registry instance that you want to configure. Add or modify the environment variable in the
spec.configuration.env
section. The Apicurio Registry Operator might attempt to set an environment variable that is already explicitly specified in the
spec.configuration.env
field. If an environment variable configuration has a conflicting value, the value set by the Apicurio Registry Operator takes precedence. You can avoid this conflict by either using the high-level configuration for the feature, or only using the explicitly specified environment variables. The following is an example of a conflicting configuration:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    ui:
      readOnly: true
    env:
      - name: REGISTRY_UI_FEATURES_READONLY
        value: false
This configuration results in the Apicurio Registry web console being in read-only mode.
6.4. Configuring Apicurio Registry deployment using PodTemplate
This is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
The ApicurioRegistry
CRD contains the spec.deployment.podTemplateSpecPreview
field, which has the same structure as the field spec.template
in a Kubernetes Deployment
resource (the PodTemplateSpec
struct).
With some restrictions, the Apicurio Registry Operator forwards the data from this field to the corresponding field in the Apicurio Registry deployment. This provides greater configuration flexibility, without the need for the Apicurio Registry Operator to natively support each use case.
The following table contains a list of subfields that are not accepted by the Apicurio Registry Operator, and result in a configuration error:
podTemplateSpecPreview subfield | Status | Details |
---|---|---|
| alternative exists |
|
| alternative exists |
|
| alternative exists |
|
| warning |
To configure the Apicurio Registry container, |
| alternative exists |
|
| reserved | - |
| alternative exists |
|
| alternative exists |
|
If you set a field in podTemplateSpecPreview
, its value must be valid, as if you configured it in the Apicurio Registry Deployment
directly. The Apicurio Registry Operator might still modify the values you provided, but it will not fix an invalid value or make sure a default value is present.
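For illustration only, the following sketch sets a plain PodSpec field through podTemplateSpecPreview. The terminationGracePeriodSeconds field is used here as a hypothetical example, on the assumption that it is not one of the restricted subfields listed above; any subfield that has a high-level alternative must be configured through that alternative instead:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  deployment:
    podTemplateSpecPreview:
      spec:
        # Hypothetical example value; it is forwarded to the Apicurio Registry
        # Deployment and must be valid there.
        terminationGracePeriodSeconds: 60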
Additional resources
6.5. Configuring the Apicurio Registry web console
You can set optional environment variables to configure the Apicurio Registry web console specifically for your deployment environment or to customize its behavior.
Prerequisites
- You have already installed Apicurio Registry.
Configuring the web console deployment environment
When you access the Apicurio Registry web console in your browser, some initial configuration settings are loaded. The following configuration settings are important:
- URL for core Apicurio Registry server REST API
- URL for Apicurio Registry web console client
Typically, Apicurio Registry automatically detects and generates these settings, but there are some deployment environments where this automatic detection can fail. If this happens, you can configure environment variables to explicitly set these URLs for your environment.
Procedure
Configure the following environment variables to override the default URLs, as shown in the example after this list:
-
REGISTRY_UI_CONFIG_APIURL
: Specifies the URL for the core Apicurio Registry server REST API. For example,https://registry.my-domain.com/apis/registry
-
REGISTRY_UI_CONFIG_UIURL
: Specifies the URL for the Apicurio Registry web console client. For example,https://registry.my-domain.com/ui
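For example, a minimal sketch that sets both URLs by using the spec.configuration.env field of the ApicurioRegistry CR; the registry.my-domain.com host is the example value used above and must be replaced with your own host:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      - name: REGISTRY_UI_CONFIG_APIURL
        value: https://registry.my-domain.com/apis/registry
      - name: REGISTRY_UI_CONFIG_UIURL
        value: https://registry.my-domain.com/ui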
Configuring the web console in read-only mode
You can configure the Apicurio Registry web console in read-only mode as an optional feature. This mode disables all features in the Apicurio Registry web console that allow users to make changes to registered artifacts. For example, this includes the following:
- Creating an artifact
- Uploading a new artifact version
- Updating artifact metadata
- Deleting an artifact
Procedure
Configure the following environment variable:
-
REGISTRY_UI_FEATURES_READONLY
: Set totrue
to enable read-only mode. Defaults tofalse
.
6.6. Configuring Apicurio Registry logging
You can set Apicurio Registry logging configuration at runtime. Apicurio Registry provides a REST endpoint to set the log level for specific loggers for finer grained logging. This section explains how to view and set Apicurio Registry log levels at runtime using the Apicurio Registry /admin
REST API.
Prerequisites
-
Get the URL to access your Apicurio Registry instance, or get your Apicurio Registry route if you have Apicurio Registry deployed on OpenShift. This simple example uses a URL of
localhost:8080
.
Procedure
Use this
curl
command to obtain the current log level for the loggerio.apicurio.registry.storage
$ curl -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage
HTTP/1.1 200 OK
[...]
Content-Type: application/json
{"name":"io.apicurio.registry.storage","level":"INFO"}
Use this
curl
command to change the log level for the loggerio.apicurio.registry.storage
toDEBUG
$ curl -X PUT -i -H "Content-Type: application/json" --data '{"level":"DEBUG"}' localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage
HTTP/1.1 200 OK
[...]
Content-Type: application/json
{"name":"io.apicurio.registry.storage","level":"DEBUG"}
Use this
curl
command to revert the log level for the loggerio.apicurio.registry.storage
to its default value:Copy to Clipboard Copied! Toggle word wrap Toggle overflow curl -X DELETE -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage
$ curl -X DELETE -i localhost:8080/apis/registry/v2/admin/loggers/io.apicurio.registry.storage
HTTP/1.1 200 OK
[...]
Content-Type: application/json
{"name":"io.apicurio.registry.storage","level":"INFO"}
6.7. Configuring Apicurio Registry event sourcing
This is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
You can configure Apicurio Registry to send events when changes are made to registry content. For example, Apicurio Registry can trigger events when schema or API artifacts, groups, or content rules are created, updated, deleted, and so on. You can configure Apicurio Registry to send events to your applications and to third-party integrations for these kinds of changes.
There are different protocols available for transporting events. The currently implemented protocols are HTTP and Apache Kafka. However, regardless of the protocol, the events are sent by using the CNCF CloudEvents specification. You can configure Apicurio Registry event sourcing by using Java system properties or the equivalent environment variables.
Apicurio Registry event types
All of the event types are defined in io.apicurio.registry.events.dto.RegistryEventType
. For example, these include the following event types:
-
io.apicurio.registry.artifact-created
-
io.apicurio.registry.artifact-updated
-
io.apicurio.registry.artifact-state-changed
-
io.apicurio.registry.artifact-rule-created
-
io.apicurio.registry.global-rule-created
-
io.apicurio.registry.group-created
Prerequisites
- You must have an application that you want to send Apicurio Registry cloud events to. For example, this can be a custom application or a third-party application.
Configuring Apicurio Registry event sourcing by using HTTP
The example in this section shows a custom application running on http://my-app-host:8888/events
.
Procedure
When using the HTTP protocol, set your Apicurio Registry configuration to send events to your application as follows (a CR-based sketch follows this list):
-
registry.events.sink.my-custom-consumer=http://my-app-host:8888/events
-
If required, you can configure multiple event consumers as follows:
-
registry.events.sink.my-custom-consumer=http://my-app-host:8888/events
-
registry.events.sink.other-consumer=http://my-consumer.com/events
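Because these options can be provided as Java system properties or as the equivalent environment variables, the HTTP sink can also be set in the ApicurioRegistry CR. The following minimal sketch assumes the usual Quarkus property-to-environment-variable name mapping (uppercase, with dots and dashes replaced by underscores); verify the resulting name against your deployment:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      # Assumed environment-variable form of registry.events.sink.my-custom-consumer
      - name: REGISTRY_EVENTS_SINK_MY_CUSTOM_CONSUMER
        value: http://my-app-host:8888/events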
Configuring Apicurio Registry event sourcing by using Apache Kafka
The example in this section shows a Kafka topic named my-registry-events
running on my-kafka-host:9092
.
Procedure
When using the Kafka protocol, set your Kafka topic as follows (a CR-based sketch follows this list):
-
registry.events.kafka.topic=my-registry-events
-
You can set the configuration for the Kafka producer by using the
KAFKA_BOOTSTRAP_SERVERS
environment variable: KAFKA_BOOTSTRAP_SERVERS=my-kafka-host:9092
Alternatively, you can set the properties for the Kafka producer by using the
registry.events.kafka.config
prefix, for example: registry.events.kafka.config.bootstrap.servers=my-kafka-host:9092
If required, you can also set the Kafka topic partition to use to produce events:
-
registry.events.kafka.topic-partition=1
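Similarly, a minimal sketch of the Kafka settings in the ApicurioRegistry CR. KAFKA_BOOTSTRAP_SERVERS is the environment variable named above; the topic variable name assumes the usual Quarkus property-to-environment-variable mapping of registry.events.kafka.topic:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    # ...
    env:
      - name: KAFKA_BOOTSTRAP_SERVERS
        value: my-kafka-host:9092
      # Assumed environment-variable form of registry.events.kafka.topic
      - name: REGISTRY_EVENTS_KAFKA_TOPIC
        value: my-registry-events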
Additional resources
- For more details, see the CNCF CloudEvents specification.
Chapter 7. Apicurio Registry Operator configuration reference
This chapter provides detailed information on the custom resource used to configure the Apicurio Registry Operator to deploy Apicurio Registry:
7.1. Apicurio Registry Custom Resource
The Apicurio Registry Operator defines an ApicurioRegistry
custom resource (CR) that represents a single deployment of Apicurio Registry on OpenShift.
These resource objects are created and maintained by users to instruct the Apicurio Registry Operator how to deploy and configure Apicurio Registry.
Example ApicurioRegistry CR
The following commands display and edit the
resource:
oc get apicurioregistry
oc edit apicurioregistry example-apicurioregistry
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
name: example-apicurioregistry
namespace: demo-kafka
# ...
spec:
configuration:
persistence: kafkasql
kafkasql:
bootstrapServers: 'my-cluster-kafka-bootstrap.demo-kafka.svc:9092'
deployment:
host: >-
example-apicurioregistry.demo-kafka.example.com
status:
conditions:
- lastTransitionTime: "2021-05-03T10:47:11Z"
message: ""
reason: Reconciled
status: "True"
type: Ready
info:
host: example-apicurioregistry.demo-kafka.example.com
managedResources:
- kind: Deployment
name: example-apicurioregistry-deployment
namespace: demo-kafka
- kind: Service
name: example-apicurioregistry-service
namespace: demo-kafka
- kind: Ingress
name: example-apicurioregistry-ingress
namespace: demo-kafka
By default, the Apicurio Registry Operator watches its own project namespace only. Therefore, you must create the ApicurioRegistry
CR in the same namespace if you are deploying the Operator manually. You can modify this behavior by updating the WATCH_NAMESPACE
environment variable in the Operator Deployment
resource.
Additional resources
7.2. Apicurio Registry CR spec
The spec
is the part of the ApicurioRegistry
CR that is used to provide the desired state or configuration for the Operator to achieve.
ApicurioRegistry CR spec contents
The following example block contains the full tree of possible spec
configuration options. Some fields might not be required or should not be defined at the same time.
spec:
configuration:
persistence: <string>
sql:
dataSource:
url: <string>
userName: <string>
password: <string>
kafkasql:
bootstrapServers: <string>
security:
tls:
truststoreSecretName: <string>
keystoreSecretName: <string>
scram:
mechanism: <string>
truststoreSecretName: <string>
user: <string>
passwordSecretName: <string>
ui:
readOnly: <string>
logLevel: <string>
registryLogLevel: <string>
security:
keycloak:
url: <string>
realm: <string>
apiClientId: <string>
uiClientId: <string>
https:
disableHttp: <bool>
secretName: <string>
env: <k8s.io/api/core/v1 []EnvVar>
deployment:
replicas: <int32>
host: <string>
affinity: <k8s.io/api/core/v1 Affinity>
tolerations: <k8s.io/api/core/v1 []Toleration>
imagePullSecrets: <k8s.io/api/core/v1 []LocalObjectReference>
metadata:
annotations: <map[string]string>
labels: <map[string]string>
managedResources:
disableIngress: <bool>
disableNetworkPolicy: <bool>
disablePodDisruptionBudget: <bool>
podTemplateSpecPreview: <k8s.io/api/core/v1 PodTemplateSpec>
The following table describes each configuration option:
Configuration option | type | Default value | Description |
---|---|---|---|
| - | - | Section for configuration of Apicurio Registry application |
| string | required |
Storage backend. One of |
| - | - | SQL storage backend configuration |
| - | - | Database connection configuration for SQL storage backend |
| string | required | Database connection URL string |
| string | required | Database connection user |
| string | empty | Database connection password |
| - | - | Kafka storage backend configuration |
| string | required | Kafka bootstrap server URL, for Streams storage backend |
| - | - | Section to configure TLS authentication for Kafka storage backend |
| string | required | Name of a secret containing TLS truststore for Kafka |
| string | required | Name of a secret containing user TLS keystore |
| string | required | Name of a secret containing TLS truststore for Kafka |
| string | required | SCRAM user name |
| string | required | Name of a secret containing SCRAM user password |
| string |
| SASL mechanism |
| - | - | Apicurio Registry web console settings |
| string |
| Set Apicurio Registry web console to read-only mode |
| string |
|
Apicurio Registry log level, for non-Apicurio components and libraries. One of |
| string |
|
Apicurio Registry log level, for Apicurio application components (excludes non-Apicurio components and libraries). One of |
| - | - | Apicurio Registry web console and REST API security settings |
| - | - | Web console and REST API security configuration using Red Hat Single Sign-On |
| string | required | Red Hat Single Sign-On URL |
| string | required | Red Hat Single Sign-On realm |
| string |
| Red Hat Single Sign-On client for REST API |
| string |
| Red Hat Single Sign-On client for web console |
| - | - | Configuration for HTTPS. For more details, see Configuring an HTTPS connection to Apicurio Registry from inside the OpenShift cluster. |
| string | empty |
Name of a Kubernetes Secret that contains the HTTPS certificate and key, which must be named |
| bool |
| Disable HTTP port and Ingress. HTTPS must be enabled as a prerequisite. |
| k8s.io/api/core/v1 []EnvVar | empty | Configure a list of environment variables to be provided to the Apicurio Registry pod. For more details, see Managing Apicurio Registry environment variables. |
| - | - | Section for Apicurio Registry deployment settings |
| positive integer |
| Number of Apicurio Registry pods to deploy |
| string | auto-generated | Host/URL where the Apicurio Registry console and API are available. If possible, the Apicurio Registry Operator attempts to determine the correct value based on the settings of your cluster router. The value is auto-generated only once, so the user can override it afterward. |
| k8s.io/api/core/v1 Affinity | empty | Apicurio Registry deployment affinity configuration |
| k8s.io/api/core/v1 []Toleration | empty | Apicurio Registry deployment tolerations configuration |
| k8s.io/api/core/v1 []LocalObjectReference | empty | Configure image pull secrets for Apicurio Registry deployment |
| - | - | Configure a set of labels or annotations for the Apicurio Registry pod. |
| map[string]string | empty | Configure a set of labels for Apicurio Registry pod |
| map[string]string | empty | Configure a set of annotations for Apicurio Registry pod |
| - | - | Section to configure how the Apicurio Registry Operator manages Kubernetes resources. For more details, see Apicurio Registry managed resources. |
| bool |
|
If set, the operator will not create and manage an |
| bool |
|
If set, the operator will not create and manage a |
| bool |
|
If set, the operator will not create and manage an |
| k8s.io/api/core/v1 PodTemplateSpec | empty | Configure parts of the Apicurio Registry deployment resource. For more details, see Configuring Apicurio Registry deployment using PodTemplate. |
If an option is marked as required, it might be conditional on other configuration options being enabled. Empty values might be accepted, but the Operator does not perform the specified action.
7.3. Apicurio Registry CR status
The status
is the section of the CR managed by the Apicurio Registry Operator that contains a description of the current deployment and application state.
ApicurioRegistry CR status contents
The status
section contains the following fields:
status:
info:
host: <string>
conditions: <list of:>
- type: <string>
status: <string, one of: True, False, Unknown>
reason: <string>
message: <string>
lastTransitionTime: <string, RFC-3339 timestamp>
managedResources: <list of:>
- kind: <string>
namespace: <string>
name: <string>
Status field | Type | Description |
---|---|---|
| - | Section with information about the deployed Apicurio Registry. |
| string | URL where the Apicurio Registry UI and REST API are accessible. |
| - | List of conditions that report the status of the Apicurio Registry, or the Operator with respect to that deployment. |
| string | Type of the condition. |
| string |
Status of the condition, one of |
| string | A programmatic identifier indicating the reason for the condition’s last transition. |
| string | A human-readable message indicating details about the transition. |
| string | The last time the condition transitioned from one status to another. |
| - | List of OpenShift resources managed by Apicurio Registry Operator |
| string | Resource kind. |
| string | Resource namespace. |
| string | Resource name. |
7.4. Apicurio Registry managed resources
The resources managed by the Apicurio Registry Operator when deploying Apicurio Registry are as follows:
-
Deployment
-
Ingress
(andRoute
) -
NetworkPolicy
-
PodDisruptionBudget
-
Service
You can disable the Apicurio Registry Operator from creating and managing some resources, so they can be configured manually. This provides greater flexibility when using features that the Apicurio Registry Operator does not currently support.
If you disable a resource type, its existing instance is deleted. If you enable a resource, the Apicurio Registry Operator attempts to find a resource using the app
label, for example, app=example-apicurioregistry
, and starts managing it. Otherwise, the Operator creates a new instance.
You can disable the following resource types in this way:
-
Ingress
(andRoute
) -
NetworkPolicy
-
PodDisruptionBudget
For example:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
name: example-apicurioregistry
spec:
deployment:
managedResources:
disableIngress: true
disableNetworkPolicy: true
disablePodDisruptionBudget: false # Can be omitted
7.5. Apicurio Registry Operator labels
Resources managed by the Apicurio Registry Operator are usually labeled as follows:
Label | Description |
---|---|
|
Name of the Apicurio Registry deployment that the resource belongs to, based on the name of the specified |
|
Type of the deployment: |
|
Name of the deployment: same value as |
| Version of the Apicurio Registry or the Apicurio Registry Operator |
| A set of recommended Kubernetes labels for application deployments. |
| Metering labels for Red Hat products. |
Custom labels and annotations
You can provide custom labels and annotation for the Apicurio Registry pod, using the spec.deployment.metadata.labels
and spec.deployment.metadata.annotations
fields, for example:
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
name: example-apicurioregistry
spec:
configuration:
# ...
deployment:
metadata:
labels:
example.com/environment: staging
annotations:
example.com/owner: my-team
Additional resources
Chapter 8. Apicurio Registry configuration reference
This chapter provides reference information on the configuration options that are available for Apicurio Registry.
Additional resources
-
For details on setting configuration options by using the Core Registry API, see the
/admin/config/properties
endpoint in the Apicurio Registry REST API documentation. - For details on client configuration options for Kafka serializers and deserializers, see the Red Hat build of Apicurio Registry User Guide.
8.1. Apicurio Registry configuration options
The following Apicurio Registry configuration options are available for each component category:
8.1.1. api
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Include stack trace in errors responses |
|
|
| Disable APIs |
8.1.2. auth
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Auth admin override claim |
|
|
|
| Auth admin override claim value |
|
|
|
| Auth admin override enabled |
|
|
|
| Auth admin override from |
|
|
|
| Auth admin override role |
|
|
|
| Auth admin override type |
|
|
|
| Anonymous read access |
|
|
|
| Prefix used for application audit logging. |
|
|
|
| Authenticated read access |
|
|
|
| Default client credentials token expiration time. |
|
|
|
| Client credentials token expiration offset from JWT expiration. |
|
|
|
| Enable basic auth client credentials |
|
|
| Client credentials scope. | |
|
|
| Client identifier used by the server for authentication. | |
|
|
| Client secret used by the server for authentication. | |
|
|
|
| Enable auth |
|
|
|
| Artifact owner-only authorization |
|
|
|
| Artifact group owner-only authorization |
|
|
|
| Enable role based authorization |
|
|
|
| Auth roles source |
|
|
| Header authorization name | |
|
|
|
| Auth roles admin |
|
|
|
| Auth roles developer |
|
|
|
| Auth roles readonly |
|
|
|
| Auth tenant owner admin enabled |
|
|
| Authentication server url. |
8.1.3. cache
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Registry cache enabled |
8.1.4. ccompat
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Legacy ID mode (compatibility API) |
|
|
|
| Maximum number of Subjects returned (compatibility API) |
|
|
|
| Canonical hash mode (compatibility API) |
8.1.5. download
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Download link expiry |
8.1.6. events
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
| Events Kafka sink enabled |
8.1.7. health
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
| Ignored liveness errors | |
|
|
|
| Counter reset window duration of persistence liveness check |
|
|
|
| Disable logging of persistence liveness check |
|
|
|
| Error threshold of persistence liveness check |
|
|
|
| Status reset window duration of persistence liveness check |
|
|
|
| Counter reset window duration of persistence readiness check |
|
|
|
| Error threshold of persistence readiness check |
|
|
|
| Status reset window duration of persistence readiness check |
|
|
|
| Timeout of persistence readiness check |
|
|
|
| Counter reset window duration of response liveness check |
|
|
|
| Disable logging of response liveness check |
|
|
|
| Error threshold of response liveness check |
|
|
|
| Status reset window duration of response liveness check |
|
|
|
| Counter reset window duration of response readiness check |
|
|
|
| Error threshold of response readiness check |
|
|
|
| Status reset window duration of response readiness check |
|
|
|
| Timeout of response readiness check |
|
|
|
| Storage metrics cache check period |
8.1.8. import
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
| The import URL |
8.1.9. kafka
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
| Events Kafka topic | |
|
|
| Events Kafka topic partition |
8.1.10. limits
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Max artifact labels |
|
|
|
| Max artifact properties |
|
|
|
| Max artifacts |
|
|
|
| Max artifact description length |
|
|
|
| Max artifact label size |
|
|
|
| Max artifact name length |
|
|
|
| Max artifact property key size |
|
|
|
| Max artifact property value size |
|
|
|
| Max artifact requests per second |
|
|
|
| Max schema size (bytes) |
|
|
|
| Max total schemas |
|
|
|
| Max versions per artifacts |
|
|
|
| Storage metrics cache max size. |
8.1.11. log
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
| Log level |
8.1.12. mt
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Enable multitenancy |
|
|
|
| Enable Standalone Multitenancy mode. In this mode, Registry provides basic multi-tenancy features, without dependencies on additional components to manage tenants and their metadata. A new tenant is simply created as soon as a tenant ID is extracted from the request for the first time. The tenant IDs must be managed externally, and tenants can be effectively deleted by deleting their data. |
|
|
|
| Enable multitenancy authorization |
|
|
| Multitenancy reaper every | |
|
|
|
| Multitenancy reaper max tenants reaped |
|
|
|
| Multitenancy reaper period seconds |
|
|
| Token claims used to resolve the tenant id | |
|
|
|
| Multitenancy context path type base path |
|
|
|
| Enable multitenancy context path type |
|
|
|
| Enable multitenancy request header type |
|
|
|
| Multitenancy request header type name |
|
|
|
| Enable multitenancy subdomain type |
|
|
|
| Multitenancy subdomain type header name |
|
|
|
| Multitenancy subdomain type location |
|
|
|
| Multitenancy subdomain type pattern |
|
|
|
| Enable multitenancy request header type |
|
|
| Organization ID claim name | |
|
|
| Tenant manager auth client ID | |
|
|
| Tenant manager auth client secret | |
|
|
| Tenant manager auth enabled | |
|
|
| Tenant manager auth token expiration reduction ms | |
|
|
| Tenant manager auth url configured | |
|
|
| Tenant manager SSL Ca path | |
|
|
| Tenant manager URL | |
|
|
|
| Tenants context cache check period |
|
|
|
| Tenants context cache max size |
8.1.13. redirects
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
| Enable redirects | |
|
|
| Registry redirects | |
|
|
| Override the hostname used for generating externally-accessible URLs. The host and port overrides are useful when deploying Registry with HTTPS passthrough Ingress or Route. In cases like these, the request URL (and port) that is then re-used for redirection does not belong to actual external URL used by the client, because the request is proxied. The redirection then fails because the target URL is not reachable. | |
|
|
| Override the port used for generating externally-accessible URLs. |
8.1.14. rest
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Enables artifact version deletion |
|
|
|
| Max size of the artifact allowed to be downloaded from URL |
|
|
|
| Skip SSL validation when downloading artifacts from URL |
8.1.15. store
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| Skip artifact versions with DISABLED state when retrieving latest artifact version |
|
|
|
| Datasource Db kind |
|
|
| Datasource jdbc URL | |
|
|
|
| SQL init |
8.1.16. ui
Name | Type | Default | Available from | Description |
---|---|---|---|---|
|
|
|
| UI OIDC tenant enabled |
|
|
| UI APIs URL | |
|
|
|
| UI auth OIDC client ID |
|
|
|
| UI auth OIDC redirect URL |
|
|
|
| UI auth OIDC URL |
|
|
|
| UI auth type |
|
|
|
| UI codegen enabled |
|
|
|
| UI context path |
|
|
|
| UI read-only mode |
|
|
|
| UI features settings |
|
|
| Overrides the UI root context (useful when relocating the UI context using an inbound proxy) |
Appendix A. Using your subscription
Apicurio Registry is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Accessing your account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
Activating a subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
Downloading ZIP and TAR files
To access ZIP or TAR files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Red Hat Integration entries in the Integration and Automation category.
- Select the desired Apicurio Registry product. The Software Downloads page opens.
- Click the Download link for your component.
Revised on 2024-05-13 11:12:38 UTC