Chapter 1. Service binding
Use service binding and workload projection in Quarkus to connect your applications to backing services with minimal configuration.
Deprecation of OpenShift Service Binding Operator
The OpenShift Service Binding Operator is deprecated in OpenShift Container Platform (OCP) 4.13 and later and is planned to be removed in a future OCP release.
This chapter provides information about service binding and workload projection, which were added to Red Hat build of Quarkus in version 2.7.5 and are provided as a Technology Preview in version 3.15.
Generally, OpenShift applications and services, also referred to as deployable workloads, need to be connected to other services to retrieve additional information, such as service URLs or credentials.
The Service Binding Operator facilitates retrieval of the necessary information, which is then made available through environment variables to applications and to service-binding tools such as the quarkus-kubernetes-service-binding extension, without directly influencing or determining the use of the extension tool itself.
Quarkus supports the Service Binding Specification for Kubernetes to bind services to applications.
Specifically, Quarkus implements the workload projection part of the specification, enabling applications to bind to services like databases or brokers, requiring only minimal configuration.
To enable service binding for the available extensions, add the quarkus-kubernetes-service-binding extension to the application dependencies.
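For example, in an existing Maven project you can add the extension with the Quarkus Maven plugin:

./mvnw quarkus:add-extension -Dextensions="kubernetes-service-binding"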
You can use the following extensions for service binding and for workload projection:
- quarkus-jdbc-mariadb
- quarkus-jdbc-mssql
- quarkus-jdbc-mysql
- quarkus-jdbc-postgresql
- quarkus-mongodb-client (Technology Preview)
- quarkus-kafka-client
- quarkus-messaging-kafka
- quarkus-reactive-mssql-client (Technology Preview)
- quarkus-reactive-mysql-client
- quarkus-reactive-pg-client
1.1. Workload projection
Workload projection is the process of obtaining the configuration for services from the Kubernetes cluster. This configuration takes the form of directory structures that follow certain conventions and are attached to an application or a service as a mounted volume.
The kubernetes-service-binding extension uses this directory structure to create configuration sources, which allow you to configure additional modules, such as databases or message brokers.
You can use workload projection during application development to connect your application to a development database or other locally run services without changing the application code or configuration.
For an example of a workload projection where the directory structure is included in the test resources and passed to an integration test, see the Kubernetes Service Binding datasource GitHub repository.
The k8s-sb directory is the root of all service bindings. In this example, only one database, called fruit-db, is intended to be bound. The fruit-db binding directory contains a type file, which specifies postgresql as the database type, while the other files in the directory provide the information necessary to establish the connection.
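For illustration, the mounted directory structure for the fruit-db binding might look like the following sketch; the exact set of files depends on what the backing service provides:

k8s-sb/
└── fruit-db/
    ├── type        # contains the value "postgresql"
    ├── host
    ├── port
    ├── username
    ├── password
    └── database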
When your Red Hat build of Quarkus project obtains information from the SERVICE_BINDING_ROOT environment variable that is set by OpenShift Container Platform, you can locate the generated configuration files that are present in the file system and use them to map the configuration-file values to properties of certain extensions.
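For example, assuming the fruit-db binding shown earlier, the extension derives datasource configuration roughly equivalent to the following properties; the exact properties that are set depend on the extension that consumes the binding:

# Illustrative mapping only; actual property names are derived by the extension
# for the specific client it configures.
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=<contents of k8s-sb/fruit-db/username>
quarkus.datasource.password=<contents of k8s-sb/fruit-db/password>
quarkus.datasource.jdbc.url=jdbc:postgresql://<host>:<port>/<database>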
1.2. Introduction to Service Binding Operator
The Service Binding Operator is an Operator that implements the Service Binding Specification for Kubernetes and is meant to simplify the binding of services to an application.
Containerized applications that support workload projection obtain service binding information in the form of volume mounts. The Service Binding Operator reads binding service information and mounts it to the application containers that need it.
The correlation between the application and the bound services is expressed through ServiceBinding resources, which declare which services are meant to be bound to which application.
The Service Binding Operator watches for ServiceBinding resources, which inform the Operator which applications are meant to be bound to which services. When a listed application is deployed, the Service Binding Operator collects all the binding information that must be passed to the application and then updates the application container by attaching a volume mount with the binding information.
The Service Binding Operator completes the following actions:
- Observes ServiceBinding resources for workloads bound to a particular service.
- Applies the binding information to the workload by using volume mounts.
The following sections describe the automatic and semi-automatic service binding approaches and their use cases. The kubernetes-service-binding extension generates a ServiceBinding resource with either approach. With the semi-automatic approach, you must manually provide the configuration for the target services. With the automatic approach, no additional configuration is needed to generate the ServiceBinding resource for a limited set of service types.
1.3. Semi-automatic service binding
A service binding process starts with the user specifying the required services that will be bound to a certain application. This intent is summarized in the ServiceBinding resource that the kubernetes-service-binding extension generates. Using the kubernetes-service-binding extension helps users generate ServiceBinding resources with minimal configuration, therefore simplifying the overall process.
The Service Binding Operator responsible for the binding process then reads the information from the ServiceBinding resource and mounts the required files to a container accordingly.
An example of the ServiceBinding resource:
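The generated resource depends on your application and the target service. Assuming the Service Binding Operator's binding.operators.coreos.com API group and illustrative application and resource names, it looks similar to the following sketch:

apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: binding-request        # illustrative name
spec:
  application:
    group: apps
    version: v1
    resource: deployments
    name: my-application       # illustrative workload name
  services:
    - group: postgres-operator.crunchydata.com
      version: v1beta1
      kind: Database
      name: db-demo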
Note: The quarkus-kubernetes-service-binding extension provides a more compact way of expressing the same information. For example:

quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database
After you add the earlier configuration properties to your application.properties file, the quarkus-kubernetes extension, in combination with the quarkus-kubernetes-service-binding extension, automatically generates the ServiceBinding resource.
The db-demo property-configuration identifier mentioned earlier has a double role and also completes the following actions:
- Correlates and groups the api-version and kind properties together.
- Defines the name property for the custom resource, which you can edit later if needed. For example:

quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database
quarkus.kubernetes-service-binding.services.db-demo.name=my-db
1.4. Generating a ServiceBinding custom resource by using the semi-automatic method
You can generate a ServiceBinding resource semi-automatically. The following procedure shows the OpenShift Container Platform deployment process, including the installation of operators for configuring and deploying an application.
In this procedure, you install the Service Binding Operator and the PostgreSQL Operator from Crunchy Data.
PostgreSQL Operator is a third-party component. For PostgreSQL Operator support policies and terms of use, contact the software vendor Crunchy Data.
Then, the procedure involves creating a PostgreSQL cluster, setting up a straightforward application, and subsequently deploying the application and binding it to the provisioned cluster.
Prerequisites
- You have created an OpenShift Container Platform 4.12 cluster.
- You have administrator access to OperatorHub and OpenShift Container Platform to install cluster-wide operators from OperatorHub.
You have installed:
- The OpenShift orchestration tool, oc
- Maven and Java
Procedure
The steps in the following procedure use the HOME (~) directory as the destination for saving and installing files.
Install the Service Binding Operator version 1.3.3 or later by using the Installing the Service Binding Operator from the OpenShift Container Platform web UI procedure.
Verify the installation:
oc get csv -w
Proceed to the next step when the phase of the Service Binding Operator is set to Succeeded.
Install the Crunchy PostgreSQL Operator from OperatorHub by using either the web console or CLI.
Verify the installation:
oc get csv -w
Proceed to the next step when the Operator's phase is set to Succeeded.
Create a PostgreSQL cluster:
Create a new OpenShift Container Platform namespace, which is used for creating a cluster and deploying your application later. This namespace is referred to as demo throughout the procedure:

oc new-project demo

Create the following custom resource and save it as pg-cluster.yml:

Note: This YAML has been reused from the Service Binding Operator quickstart.
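The custom resource is based on the PostgresCluster API from the Crunchy PostgreSQL Operator. The following is a minimal sketch; the PostgreSQL version and storage sizes are illustrative and depend on the Operator version you installed:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: pg-cluster
  namespace: demo            # optional when demo is the current project
spec:
  postgresVersion: 14        # illustrative; use a version supported by your Operator
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi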
Apply the created custom resource:
oc apply -f ~/pg-cluster.yml
Note: This command assumes that you saved the pg-cluster.yml file in the HOME directory.

Check the pods to verify the installation:
oc get pods -n demo
Wait for the pods to enter the READY state, which indicates that the installation is complete.
Create a Quarkus application that binds to the PostgreSQL database.
The application you are creating is a basic todo application that connects to PostgreSQL by using Hibernate ORM with Panache.

Generate the application:
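One way to generate the application is with the Quarkus Maven plugin. The coordinates, version, group ID, and artifact ID shown here are illustrative; substitute the values for your Red Hat build of Quarkus release:

mvn io.quarkus.platform:quarkus-maven-plugin:3.15.1:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=todo-example
cd todo-example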
Add all required extensions for connecting to PostgreSQL, generating all required resources, and building a container image for the application:
./mvnw quarkus:add-extension -Dextensions="rest-jackson,jdbc-postgresql,hibernate-orm-panache,openshift,kubernetes-service-binding"
Create a simple entity, as outlined in the following example:
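A minimal Panache entity is sufficient. The package and class names shown (org.acme, Todo) are illustrative, and the fields match the import.sql script used later:

package org.acme;

import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Entity;

// Minimal sketch of the todo entity; PanacheEntity supplies the id field.
@Entity
public class Todo extends PanacheEntity {
    public String title;
    public boolean completed;
}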
Expose the entity:
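A simple REST resource that lists the entities at the /todo path, which is used later in the verification step, is sufficient; the class name is illustrative:

package org.acme;

import java.util.List;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Minimal sketch of a resource that returns all Todo entities as JSON.
@Path("/todo")
public class TodoResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Todo> list() {
        return Todo.listAll();
    }
}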
Bind to the target PostgreSQL cluster by generating a ServiceBinding resource. Provide the service coordinates to generate the binding and configure the data source:

- apiVersion: postgres-operator.crunchydata.com/v1beta1
- kind: PostgresCluster
- name: pg-cluster

This is accomplished by setting properties with the quarkus.kubernetes-service-binding.services.<id>. prefix, as demonstrated in the example below. The id is used to group properties together and can be assigned any value.
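For example, using pg-cluster as the id, the configuration in application.properties looks similar to the following; the id value itself is arbitrary:

quarkus.kubernetes-service-binding.services.pg-cluster.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.pg-cluster.kind=PostgresCluster
quarkus.kubernetes-service-binding.services.pg-cluster.name=pg-cluster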
Create an import.sql script with some initial data:

INSERT INTO todo(id, title, completed) VALUES (nextval('hibernate_sequence'), 'Finish the blog post', false);
Deploy the application, including the ServiceBinding resource, and apply it to the cluster:

mvn clean install -Dquarkus.kubernetes.deploy=true -DskipTests

Wait for the deployment to finish.
Verification
Verify the deployment:
oc get pods -n demo -w
Verify the installation:
Port forward to the HTTP port locally, and then access the /todo endpoint:

oc port-forward service/todo-example 8080:80

Open the following URL in a web browser:

http://localhost:8080/todo
1.5. Automatic service binding
The quarkus-kubernetes-service-binding extension can automatically generate the ServiceBinding resource when it detects an application needing access to external services provided by compatible bindable operators.
Automatic service binding can only be generated for a limited set of service types.
In alignment with the established Kubernetes and Quarkus service terminology, this chapter uses the term "kinds" to refer to these service types.
| Service binding type | Operator | API version | Kind |
| postgresql | Crunchy PostgreSQL Operator | postgres-operator.crunchydata.com/v1beta1 | PostgresCluster |
| mysql | Percona XtraDB Cluster Operator | pxc.percona.com/v1-9-0 | PerconaXtraDBCluster |
| mongo | Percona Server for MongoDB Operator | psmdb.percona.com/v1-9-0 | PerconaServerMongoDB |
- Red Hat build of Quarkus 3.15 support for MongoDB Operator is provided as a Technology Preview and applies to the client only.
- See the Quarkus application configurator page for a list of supported Panache extensions in Red Hat build of Quarkus 3.15.
1.5.1. Automatic datasource binding
For traditional databases, automatic binding is initiated whenever a datasource is configured as follows:
quarkus.datasource.db-kind=postgresql
The configuration mentioned earlier, in conjunction with the presence of extensions such as quarkus-datasource, quarkus-jdbc-postgresql, quarkus-kubernetes, and quarkus-kubernetes-service-binding in the application, leads to the creation of the ServiceBinding resource for the postgresql database type.
By using the apiVersion and kind properties of the Operator resource that matches the postgresql Operator in use, the generated ServiceBinding resource binds the service or resource to the application.
When you do not specify a name for your database service, the value of the db-kind property is used as the default name.
services:
- apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
name: postgresql
If you specify the name of the datasource as follows:
quarkus.datasource.fruits-db.db-kind=postgresql
The service in the generated ServiceBinding then displays as follows:
services:
- apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
name: fruits-db
Similarly, if you use mysql, the name of the datasource can be specified as follows:
quarkus.datasource.fruits-db.db-kind=mysql
The generated service contains the following:
services:
- apiVersion: pxc.percona.com/v1-9-0
kind: PerconaXtraDBCluster
name: fruits-db
1.5.1.1. Customizing automatic service binding
While the automatic service binding feature was developed to eliminate as much of the manual configuration as possible, there are scenarios where you might need to modify the generated ServiceBinding resource manually.
The generation process exclusively relies on information extracted from the application and the knowledge of the supported Operators, which might not reflect what is deployed in the cluster.
The generated resource is based purely on knowledge of the supported bindable operators for popular service kinds and on a set of conventions. Mismatches that might require manual changes include the following:
- The target resource name does not match the datasource name.
- A specific Operator needs to be used rather than the default Operator for that service kind.
- Version conflicts occur when a user needs to use a version other than the default or the latest.
Conventions:
- Target resource coordinates are established according to the Operator type and service kind.
- By default, the target resource name aligns with the service kind, such as postgresql, mysql, or mongo.
- In the case of named datasources, the datasource name is used.
- The client’s name is used for named mongo clients.
Example 1: Name mismatch
For cases where you need to modify the generated ServiceBinding to fix a name mismatch, use the quarkus.kubernetes-service-binding.services properties and specify the service’s name as the service key.
The service key is usually the name of the service, for example, the name of the datasource or the name of the mongo client. When this value is unavailable, the datasource type, such as postgresql, mysql, or mongo, is used instead.
To avoid naming conflicts between different types of services, prefix the service key with a specific datasource type, such as postgresql-<person>.
The following example shows how to customize the apiVersion property of the PostgresCluster resource:
quarkus.datasource.db-kind=postgresql
quarkus.kubernetes-service-binding.services.postgresql.api-version=postgres-operator.crunchydata.com/v1beta2
Example 2: Application of a custom name for a datasource
In Example 1, the db-kind value (postgresql) was used as the service key. In this instance, because the datasource is named, the datasource name (fruits-db) is used instead, following the convention.
The following example shows that for a named datasource, the datasource name is used as the name of the target resource:
quarkus.datasource.fruits-db.db-kind=postgresql
This has the same effect as the following configuration:
quarkus.kubernetes-service-binding.services.fruits-db.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.fruits-db.kind=PostgresCluster
quarkus.kubernetes-service-binding.services.fruits-db.name=fruits-db
Revised on 2025-10-02 09:13:57 UTC