Chapter 1. Service binding
This chapter provides information about service binding and workload projection, which were added to Red Hat build of Quarkus in version 2.7.5 and remain in Technology Preview in version 2.13.
Generally, OpenShift applications and services, also referred to as deployable workloads, need to be connected to other services for retrieving additional information, such as service URLs or credentials.
The Service Binding Operator manages the communication required for obtaining this information. The Operator determines the following:
- How a service consumer intends to bind to such a service
- The tools for application and service binding, such as the quarkus-kubernetes-service-binding extension
Quarkus supports the Service Binding Specification for Kubernetes to bind services to applications.
Specifically, Quarkus implements the Workload Projection part of the specification, allowing applications to bind to services, such as a Database or a Broker, without the need for user configuration.
To enable service binding for the available extensions, add the quarkus-kubernetes-service-binding extension to the application dependencies.
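For example, if your project was generated with the Quarkus Maven plugin, you can add the extension from the project directory with the following command; the short extension name shown here is an assumption, so adjust the list to the extensions that your application needs:
./mvnw quarkus:add-extension -Dextensions="kubernetes-service-binding"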
You can use the following extensions for service binding and for workload projection:
- quarkus-jdbc-mariadb
- quarkus-jdbc-mssql
- quarkus-jdbc-mysql
- quarkus-jdbc-postgresql
- quarkus-mongo-client - Technology Preview
- quarkus-kafka-client
- quarkus-smallrye-reactive-messaging-kafka
- quarkus-reactive-mssql-client - Technology Preview
- quarkus-reactive-mysql-client
- quarkus-reactive-pg-client
1.1. Workload projection
Workload projection is a process of obtaining the configuration for services from the Kubernetes cluster. This configuration takes the form of directory structures that follow certain conventions and is attached to an application or to a service as a mounted volume. The kubernetes-service-binding extension uses this directory structure to create configuration sources, which allows you to configure additional modules, such as databases or message brokers.
You can use workload projection during application development to connect your application to a development database or other locally run services without changing the actual application code or configuration.
For an example of a workload projection where the directory structure is included in the test resources and passed to the integration test, see the Kubernetes Service Binding datasource GitHub repository.
- The k8s-sb directory is the root of all service bindings. In this example, only one database, called fruit-db, is intended to be bound. This bound database has a type file that indicates postgresql as the database type, while the other files in the directory provide the information necessary to establish the connection, as illustrated in the example layout after this list.
- After your Quarkus project obtains information from the SERVICE_BINDING_ROOT environment variable that is set by OpenShift Container Platform, you can locate the generated configuration files that are present in the file system and use them to map the configuration-file values to properties of certain extensions.
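The following layout is only an illustrative sketch of such a projected binding for the fruit-db example. The type file is the one described above; the other file names (host, port, username, password) are typical of the Service Binding Specification and are assumptions rather than an exhaustive list:
k8s-sb/                  # root of all service bindings; SERVICE_BINDING_ROOT points here
└── fruit-db/            # one directory per bound service
    ├── type             # contains "postgresql", the database type
    ├── host             # the remaining files carry the connection details
    ├── port
    ├── username
    └── password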
1.2. Introduction to Service Binding Operator
The Service Binding Operator is an Operator that implements the Service Binding Specification for Kubernetes and is meant to simplify the binding of services to an application. Containerized applications that support workload projection obtain service binding information in the form of volume mounts. The Service Binding Operator reads the bound service information and mounts it to the application containers that need it.
The correlation between an application and its bound services is expressed through ServiceBinding resources, which declare the intent of which services are meant to be bound to which application.
The Service Binding Operator watches for ServiceBinding resources, which inform the Operator which applications are meant to be bound with which services. When a listed application is deployed, the Service Binding Operator collects all the binding information that must be passed to the application and then upgrades the application container by attaching a volume mount with the binding information.
The Service Binding Operator completes the following actions:
- Observes ServiceBinding resources for workloads intended to be bound to a particular service
- Applies the binding information to the workload by using volume mounts
The following sections describe the automatic and semi-automatic service binding approaches and their use cases. With either approach, the kubernetes-service-binding extension generates a ServiceBinding resource. With the semi-automatic approach, you must provide the configuration for the target services manually. With the automatic approach, no additional configuration is needed to generate the ServiceBinding resource for a limited set of services.
1.3. Semi-automatic service binding
A service binding process starts with a user specification of the required services that will be bound to a certain application. This expression is summarized in the ServiceBinding resource that is generated by the kubernetes-service-binding extension. The use of the kubernetes-service-binding extension helps users to generate ServiceBinding resources with minimal configuration, therefore simplifying the process overall.
The Service Binding Operator responsible for the binding process then reads the information from the ServiceBinding resource and mounts the required files to a container accordingly.
An example of the ServiceBinding resource:
apiVersion: binding.operators.coreos.com/v1beta1
kind: ServiceBinding
metadata:
  name: binding-request
  namespace: service-binding-demo
spec:
  application:
    name: java-app
    group: apps
    version: v1
    resource: deployments
  services:
    - group: postgres-operator.crunchydata.com
      version: v1beta1
      kind: Database
      name: db-demo
      id: postgresDB
Note: The quarkus-kubernetes-service-binding extension provides a more compact way of expressing the same information. For example:
quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database
After you add the earlier configuration properties to your application.properties file, the quarkus-kubernetes extension, in combination with the quarkus-kubernetes-service-binding extension, automatically generates the ServiceBinding resource.
The earlier mentioned db-demo property-configuration identifier now serves a double role and completes the following actions:
- Correlates and groups the api-version and kind properties together
- Defines the name property for the custom resource, which you can edit later if needed. For example:
quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database
quarkus.kubernetes-service-binding.services.db-demo.name=my-db
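Based on these properties, the services section of the generated ServiceBinding resource would contain coordinates similar to the following sketch; the exact output depends on the extension version and is shown here only for illustration:
services:
  - apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: Database
    name: my-db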
1.4. Generating a ServiceBinding custom resource by using the semi-automatic method
You can generate a ServiceBinding resource semi-automatically. The following procedure shows the OpenShift Container Platform deployment process, including how to install the Operators needed to configure and deploy an application.
With the following procedure, you install the Service Binding Operator and the PostgreSQL Operator from Crunchy Data.
The PostgreSQL Operator is a third-party component. For PostgreSQL Operator support policies and terms of use, contact the software vendor, Crunchy Data.
The procedure then creates a PostgreSQL cluster and a simple application, deploys the application, and binds it to the provisioned cluster.
Prerequisites
- You have created an OpenShift Container Platform 4.10 cluster.
- You have access to OperatorHub and the OpenShift Container Platform administrator privileges needed to install cluster-wide Operators from OperatorHub.
- You have installed:
  - The oc orchestration tool
  - Maven and Java
Procedure
The steps in the following procedure use the HOME (~) directory as a saving and installation destination.
Install the Service Binding Operator version 1.0 or later by using the Installing the Service Binding Operator from the OpenShift Container Platform web UI procedure.
Verify the installation:
oc get csv -n openshift-operators -w
- When the phase of the Service Binding Operator is set to Succeeded, proceed to the next step.
Install the Crunchy PostgreSQL Operator from OperatorHub by using the web console or CLI. For links to instructions, see the Deploy & use section.
Verify the installation:
oc get csv -n openshift-operators -w
- When the phase of the Operator is set to Succeeded, proceed to the next step.
Create a PostgreSQL cluster:
Create a new OpenShift Container Platform namespace in which you will create the cluster and later deploy your application. Throughout this procedure, the namespace is called demo.
oc new-project demo
Create the following custom resource and save it as pg-cluster.yml:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  openshift: true
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.2-1
  postgresVersion: 14
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi
Note: This YAML has been reused from the Service Binding Operator Quickstart.
Apply the created custom resource:
oc apply -f ~/pg-cluster.yml
Note: This command assumes that you saved the pg-cluster.yml file in your HOME directory.
Check the Pods to verify the installation:
oc get pods -n demo
- Wait for the Pods to reach the READY state, which signals that the installation is complete.
Create a Quarkus application that binds to the PostgreSQL database.
The application that you create is a simple todo application that connects to PostgreSQL by using Hibernate and Panache.
Generate the application:
mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.9.SP2-redhat-00003:create \
    -DplatformGroupId=com.redhat.quarkus.platform \
    -DplatformVersion=2.13.9.SP2-redhat-00003 \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=todo-example \
    -DclassName="org.acme.TodoResource" \
    -Dpath="/todo"
Add all required extensions for connecting to PostgreSQL, generating the required resources, and building a container image for the application:
./mvnw quarkus:add-extension -Dextensions="resteasy-reactive-jackson,jdbc-postgresql,hibernate-orm-panache,openshift,kubernetes-service-binding"
Create a simple entity, as outlined in the following example:
package org.acme;

import javax.persistence.Column;
import javax.persistence.Entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Todo extends PanacheEntity {

    @Column(length = 40, unique = true)
    public String title;

    public boolean completed;

    public Todo() {
    }

    public Todo(String title, Boolean completed) {
        this.title = title;
        // assign the completed flag passed to the constructor
        this.completed = completed;
    }
}
Expose the entity:
package org.acme;

import javax.transaction.Transactional;
import javax.ws.rs.*;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
import java.util.List;

@Path("/todo")
public class TodoResource {

    @GET
    @Path("/")
    public List<Todo> getAll() {
        return Todo.listAll();
    }

    @GET
    @Path("/{id}")
    public Todo get(@PathParam("id") Long id) {
        Todo entity = Todo.findById(id);
        if (entity == null) {
            throw new WebApplicationException("Todo with id of " + id + " does not exist.", Status.NOT_FOUND);
        }
        return entity;
    }

    @POST
    @Path("/")
    @Transactional
    public Response create(Todo item) {
        item.persist();
        return Response.status(Status.CREATED).entity(item).build();
    }

    @GET
    @Path("/{id}/complete")
    @Transactional
    public Response complete(@PathParam("id") Long id) {
        Todo entity = Todo.findById(id);
        entity.id = id;
        entity.completed = true;
        return Response.ok(entity).build();
    }

    @DELETE
    @Transactional
    @Path("/{id}")
    public Response delete(@PathParam("id") Long id) {
        Todo entity = Todo.findById(id);
        if (entity == null) {
            throw new WebApplicationException("Todo with id of " + id + " does not exist.", Status.NOT_FOUND);
        }
        entity.delete();
        return Response.noContent().build();
    }
}
Bind to the target PostgreSQL cluster by generating a ServiceBinding resource. Provide the following service coordinates to generate the binding and configure the data source:
- apiVersion: postgres-operator.crunchydata.com/v1beta1
- kind: PostgresCluster
- name: hippo
This is done by setting a quarkus.kubernetes-service-binding.services.<id>. prefix, as in the following example. The id is used to group properties together and can be anything.
quarkus.kubernetes-service-binding.services.my-db.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.my-db.kind=PostgresCluster
quarkus.kubernetes-service-binding.services.my-db.name=hippo
quarkus.datasource.db-kind=postgresql
quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.sql-load-script=import.sql
Create an import.sql script with some initial data:
INSERT INTO todo(id, title, completed) VALUES (nextval('hibernate_sequence'), 'Finish the blog post', false);
Deploy the application, including the ServiceBinding resource, and apply it to the cluster:
mvn clean install -Dquarkus.kubernetes.deploy=true -DskipTests
- Wait for the deployment to finish.
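If you want to inspect the generated resources, including the ServiceBinding resource, the build typically also writes the generated manifests under the target/kubernetes directory; the exact file name, such as openshift.yml, depends on the extensions in use, so treat the following command as a sketch:
cat target/kubernetes/openshift.yml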
Verification
Verify the deployment:
oc get pods -n demo -w
Verify the installation by forwarding the HTTP port locally and accessing the /todo endpoint:
oc port-forward service/todo-example 8080:80
Open the following URL in a browser:
http://localhost:8080/todo
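With the port forward active, you can also exercise the endpoint from the command line. The following request is only an illustrative example based on the Todo entity defined earlier in this procedure:
curl -X POST -H "Content-Type: application/json" \
    -d '{"title": "Try service binding", "completed": false}' \
    http://localhost:8080/todo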
Additional resources
- For more information, see the Service Binding Operator section of the Quick Start guide.
1.5. Automatic service binding
The quarkus-kubernetes-service-binding extension can generate the ServiceBinding resource automatically after detecting that an application requires access to external services that are provided by available bindable Operators.
Automatic service binding can be generated for a limited number of service types. To be consistent with established terminology for Kubernetes and Quarkus services, this chapter refers to these service types as kinds.
Service binding type | Operator | API version | Kind
postgresql | Crunchy PostgreSQL Operator | postgres-operator.crunchydata.com/v1beta1 | PostgresCluster
mysql | Percona XtraDB Cluster Operator | pxc.percona.com/v1-9-0 | PerconaXtraDBCluster
mongo | Percona Server for MongoDB Operator | psmdb.percona.com/v1-9-0 | PerconaServerMongoDB
- Red Hat build of Quarkus 2.13 support for MongoDB Operator is provided as a Technology Preview and applies to the client only.
- See the Quarkus application configurator page for a list of supported Panache extensions in Red Hat build of Quarkus 2.13.
1.5.1. Automatic datasource binding
For traditional databases, automatic binding is initiated whenever a datasource is configured as follows:
quarkus.datasource.db-kind=postgresql
The previous configuration, combined with the presence of the quarkus-datasource, quarkus-jdbc-postgresql, quarkus-kubernetes, and quarkus-kubernetes-service-binding extensions among the application dependencies, results in the generation of the ServiceBinding resource for the postgresql database type.
By using the apiVersion and kind properties of the Operator resource that matches the used postgresql Operator, the generated ServiceBinding resource binds the service or resource to the application.
When you do not specify a name for your database service, the value of the db-kind property is used as the default name.
services:
  - apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: postgresql
You can specify the name of the datasource as follows:
quarkus.datasource.fruits-db.db-kind=postgresql
The service in the generated ServiceBinding resource then displays as follows:
services:
  - apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: fruits-db
Similarly, if you use mysql, the name of the datasource can be specified as follows:
quarkus.datasource.fruits-db.db-kind=mysql
The generated service contains the following:
services:
  - apiVersion: pxc.percona.com/v1-9-0
    kind: PerconaXtraDBCluster
    name: fruits-db
1.5.1.1. Customizing automatic service binding
While the automatic service binding feature was developed to eliminate as much of the manual configuration as possible, there are scenarios in which you might need to modify the generated ServiceBinding resource manually. The generation process relies exclusively on information extracted from the application and on knowledge of the supported bindable Operators for popular service kinds, which might not reflect what is deployed in the cluster. The generated resource therefore follows a set of conventions that were developed to prevent possible mismatches, such as:
- The target resource name does not match the datasource name
- A specific Operator needs to be used rather than the default Operator for that service kind
- Version conflicts that occur when a user needs to use a version other than the default or latest version
Conventions
- The target resource coordinates are determined based on the type of Operator and the kind of service.
- The target resource name is set by default to match the service kind, such as postgresql, mysql, or mongo.
- For named datasources, the name of the datasource is used.
- For named mongo clients, the name of the client is used.
Example 1: Name mismatch
For cases in which you need to modify the generated ServiceBinding resource to fix a name mismatch, use the quarkus.kubernetes-service-binding.services properties and specify the service's name as the service key.
The service key is usually the name of the service, for example, the name of the datasource or the name of the mongo client. When this value is not available, the datasource type, such as postgresql, mysql, or mongo, is used instead.
To avoid naming conflicts between different types of services, prefix the service key with a specific datasource type, such as postgresql-<person>.
The following example shows how to customize the apiVersion property of the PostgresCluster resource:
quarkus.datasource.db-kind=postgresql
quarkus.kubernetes-service-binding.services.postgresql.api-version=postgres-operator.crunchydata.com/v1beta2
Example 2: Application of a custom name for a datasource
In Example 1, the db-kind (postgresql) was used as a service key. In this example, because the datasource is named, according to convention the datasource name (fruits-db) is used instead.
The following example shows that for a named datasource, the datasource name is used as the name of the target resource:
quarkus.datasource.fruits-db.db-kind=postgresql
This has the same effect as the following configuration:
quarkus.kubernetes-service-binding.services.fruits-db.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.fruits-db.kind=PostgresCluster
quarkus.kubernetes-service-binding.services.fruits-db.name=fruits-db
Additional resources
- For additional information about the available properties, see the Workload Projection part of the Service Binding specification.
Revised on 2024-04-16 11:36:03 UTC