Building applications
Creating and managing applications on OpenShift Container Platform
Chapter 1. Building applications overview
Using OpenShift Container Platform, you can create, edit, delete, and manage applications using the web console or command line interface (CLI).
1.1. Working on a project
Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Container Platform.
After you create the project, you can grant or revoke access to a project and manage cluster roles for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects.
Using the CLI, you can create a project as a different user by impersonating a request to the OpenShift Container Platform API. When you make a request to create a new project, the OpenShift Container Platform uses an endpoint to provision the project according to a customizable template. As a cluster administrator, you can choose to prevent an authenticated user group from self-provisioning new projects.
1.2. Working on an application
1.2.1. Creating an application
To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using the Developer perspective in the web console, installed Operators, or the OpenShift CLI (oc). You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog.
You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift CLI (oc). With the OpenShift Container Platform web console, you can create an application from an Operator installed by a cluster administrator.
1.2.2. Maintaining an application
After you create the application, you can use the web console to monitor your project or application metrics. You can also edit or delete the application using the web console.
When the application is running, not all application resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption.
1.2.3. Connecting an application to services
An application uses backing services to build and connect workloads, which vary according to the service provider. Using the Service Binding Operator, as a developer, you can bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection. You can apply service binding also on IBM Power Systems, IBM Z, and LinuxONE environments.
1.2.4. Deploying an application
You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application.
You can also use Helm, a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters.
1.3. Using the Red Hat Marketplace
The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises.
Chapter 2. Projects
2.1. Working with projects
A project allows a community of users to organize and manage their content in isolation from other communities.
Projects starting with openshift- and kube- are default projects. These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
2.1.1. Creating a project
You can use the OpenShift Container Platform web console or the OpenShift CLI (oc) to create a project in your cluster.
2.1.1.1. Creating a project by using the web console
You can use the OpenShift Container Platform web console to create a project in your cluster.
Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- using the web console.
Prerequisites
- Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform.
Procedure
If you are using the Administrator perspective:
- Navigate to Home → Projects.
- Click Create Project.
- In the Create Project dialog box, enter a unique name, such as myproject, in the Name field.
- Optional: Add the Display name and Description details for the project.
- Click Create. The dashboard for your project is displayed.
- Optional: Select the Details tab to view the project details.
- Optional: If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke admin, edit, and view privileges for the project.
If you are using the Developer perspective:
- Click the Project menu and select Create Project.
  Figure 2.1. Create project
- In the Create Project dialog box, enter a unique name, such as myproject, in the Name field.
- Optional: Add the Display name and Description details for the project.
- Click Create.
- Optional: Use the left navigation panel to navigate to the Project view and see the dashboard for your project.
- Optional: In the project dashboard, select the Details tab to view the project details.
- Optional: If you have adequate permissions for a project, you can use the Project Access tab of the project dashboard to provide or revoke admin, edit, and view privileges for the project.
2.1.1.2. Creating a project by using the CLI
If allowed by your cluster administrator, you can create a new project.
Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command.
Procedure
Run:
$ oc new-project <project_name> \
    --description="<description>" --display-name="<display_name>"
For example:
$ oc new-project hello-openshift \
    --description="This is an example project" \
    --display-name="Hello OpenShift"
The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one.
2.1.2. Viewing a project
You can use the OpenShift Container Platform web console or the OpenShift CLI (oc) to view a project in your cluster.
2.1.2.1. Viewing a project by using the web console
You can view the projects that you have access to by using the OpenShift Container Platform web console.
Procedure
If you are using the Administrator perspective:
- Navigate to Home → Projects in the navigation menu.
- Select a project to view. The Overview tab includes a dashboard for your project.
- Select the Details tab to view the project details.
- Select the YAML tab to view and update the YAML configuration for the project resource.
- Select the Workloads tab to see workloads in the project.
- Select the RoleBindings tab to view and create role bindings for your project.
If you are using the Developer perspective:
- Navigate to the Project page in the navigation menu.
- Select All Projects from the Project drop-down menu at the top of the screen to list all of the projects in your cluster.
- Select a project to view. The Overview tab includes a dashboard for your project.
- Select the Details tab to view the project details.
- If you have adequate permissions for a project, select the Project access tab to view and update the privileges for the project.
2.1.2.2. Viewing a project using the CLI
When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy.
Procedure
To view a list of projects, run:
$ oc get projects
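Example output (the project names and display names shown here are illustrative):
NAME              DISPLAY NAME      STATUS
hello-openshift   Hello OpenShift   Active
myproject                           Active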
You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content:
$ oc project <project_name>
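For example, switching to a project named hello-openshift prints a confirmation similar to the following (the server URL is illustrative):
$ oc project hello-openshift
Now using project "hello-openshift" on server "https://api.example.com:6443".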
2.1.3. Providing access permissions to your project using the Developer perspective
You can use the Project view in the Developer perspective to grant or revoke access permissions to your project.
Prerequisites
- You have created a project.
Procedure
To add users to your project and provide Admin, Edit, or View access to them:
- In the Developer perspective, navigate to the Project page.
- Select your project from the Project menu.
- Select the Project Access tab.
- Click Add access to add a new row of permissions to the default ones.
  Figure 2.2. Project permissions
- Enter the user name, click the Select a role drop-down list, and select an appropriate role.
- Click Save to add the new permissions.
You can also use:
- The Select a role drop-down list, to modify the access permissions of an existing user.
- The Remove Access icon, to completely remove the access permissions of an existing user to the project.
Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective.
2.1.4. Customizing the available cluster roles using the web console
In the Developer perspective of the web console, the Project → Project access page enables a project administrator to grant roles to users in a project. By default, the available cluster roles that can be granted to users in a project are admin, edit, and view.
As a cluster administrator, you can define which cluster roles are available in the Project access page for all projects cluster-wide. You can specify the available roles by customizing the spec.customization.projectAccess.availableClusterRoles object in the Console configuration resource.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
- In the Administrator perspective, navigate to Administration → Cluster settings.
- Click the Configuration tab.
- From the Configuration resource list, select Console operator.openshift.io.
- Navigate to the YAML tab to view and edit the YAML code.
- In the YAML code under spec, customize the list of available cluster roles for project access. The following example specifies the default admin, edit, and view roles:
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
# ...
spec:
  customization:
    projectAccess:
      availableClusterRoles:
      - admin
      - edit
      - view
- Click Save to save the changes to the Console configuration resource.
Verification
- In the Developer perspective, navigate to the Project page.
- Select a project from the Project menu.
- Select the Project access tab.
- Click the menu in the Role column and verify that the available roles match the configuration that you applied to the Console resource configuration.
2.1.5. Adding to a project
You can add items to your project by using the +Add page in the Developer perspective.
Prerequisites
- You have created a project.
Procedure
- In the Developer perspective, navigate to the +Add page.
- Select your project from the Project menu.
- Click on an item on the +Add page and then follow the workflow.
You can also use the search feature on the +Add page to find additional items to add to your project: under Add at the top of the page, type the name of a component in the search field.
2.1.6. Checking the project status
You can use the OpenShift Container Platform web console or the OpenShift CLI (oc) to view the status of your project.
2.1.6.1. Checking project status by using the web console
You can review the status of your project by using the web console.
Prerequisites
- You have created a project.
Procedure
If you are using the Administrator perspective:
- Navigate to Home → Projects.
- Select a project from the list.
- Review the project status in the Overview page.
If you are using the Developer perspective:
- Navigate to the Project page.
- Select a project from the Project menu.
- Review the project status in the Overview page.
2.1.6.2. Checking project status by using the CLI
You can review the status of your project by using the OpenShift CLI (oc).
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have created a project.
Procedure
Switch to your project:
$ oc project <project_name> 1
- 1
- Replace <project_name> with the name of your project.
Obtain a high-level overview of the project:
$ oc status
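Example output, assuming a project named hello-openshift; the remaining lines summarize the services, deployments, and builds in the project and depend on what is actually deployed:
In project hello-openshift on server https://api.example.com:6443
...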
2.1.7. Deleting a project
You can use the OpenShift Container Platform web console or the OpenShift CLI (oc) to delete a project.
When you delete a project, the server changes the project status from Active to Terminating. The server then clears all content from a project in the Terminating state before finally removing the project. While a project is in the Terminating state, you cannot add new content to the project. Projects can be deleted from the CLI or the web console.
2.1.7.1. Deleting a project by using the web console
You can delete a project by using the web console.
Prerequisites
- You have created a project.
- You have the required permissions to delete the project.
Procedure
If you are using the Administrator perspective:
- Navigate to Home → Projects.
- Select a project from the list.
- Click the Actions drop-down menu for the project and select Delete Project.
  Note: The Delete Project option is not available if you do not have the required permissions to delete the project.
- In the Delete Project? pane, confirm the deletion by entering the name of your project.
- Click Delete.
If you are using the Developer perspective:
- Navigate to the Project page.
- Select the project that you want to delete from the Project menu.
- Click the Actions drop-down menu for the project and select Delete Project.
  Note: If you do not have the required permissions to delete the project, the Delete Project option is not available.
- In the Delete Project? pane, confirm the deletion by entering the name of your project.
- Click Delete.
2.1.7.2. Deleting a project by using the CLI
You can delete a project by using the OpenShift CLI (oc).
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have created a project.
- You have the required permissions to delete the project.
Procedure
Delete your project:
$ oc delete project <project_name> 1
- 1
- Replace <project_name> with the name of the project that you want to delete.
2.2. Creating a project as another user
Impersonation allows you to create a project as a different user.
2.2.1. API impersonation
You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.
2.2.2. Impersonating a user when you create a project
You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group.
Procedure
To create a project request on behalf of a different user:
$ oc new-project <project> --as=<user> \
    --as-group=system:authenticated --as-group=system:authenticated:oauth
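For example, the following command creates a project named demo-project as the user alice; both names are placeholders for your own values:
$ oc new-project demo-project --as=alice \
    --as-group=system:authenticated --as-group=system:authenticated:oauth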
2.3. Configuring project creation
In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or the oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized.
As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision, their own projects.
2.3.1. About project creation
The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster’s project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project.
When a project request is submitted, the API substitutes the following parameters into the template:
Parameter | Description
---|---
PROJECT_NAME | The name of the project. Required.
PROJECT_DISPLAYNAME | The display name of the project. May be empty.
PROJECT_DESCRIPTION | The description of the project. May be empty.
PROJECT_ADMIN_USER | The user name of the administrating user.
PROJECT_REQUESTING_USER | The user name of the requesting user.
Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default.
2.3.2. Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Log in as a user with cluster-admin privileges.
- Generate the default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
- Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects (an example snippet follows this procedure).
- The project template must be created in the openshift-config namespace. Load your modified template:
$ oc create -f template.yaml -n openshift-config
Edit the project configuration resource using the web console or CLI.
Using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
Using the CLI:
- Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
Project configuration resource with custom project template
apiVersion: config.openshift.io/v1
kind: Project
metadata:
# ...
spec:
  projectRequestTemplate:
    name: <template_name>
# ...
- After you save your changes, create a new project to verify that your changes were successfully applied.
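For example, a common customization is to add resource constraints to every new project by appending objects to the generated template.yaml. The following sketch adds a quota and a limit range under the template's objects list; the object names and values are illustrative and should be adapted to your cluster. Parameter substitution replaces ${PROJECT_NAME} with the name of each new project at creation time.
# Illustrative objects to append to the objects list in template.yaml
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: default-quota
    namespace: ${PROJECT_NAME}
  spec:
    hard:
      pods: "10"
      requests.cpu: "4"
      requests.memory: 8Gi
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: default-limits
    namespace: ${PROJECT_NAME}
  spec:
    limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi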
2.3.3. Disabling project self-provisioning
You can prevent an authenticated user group from self-provisioning new projects.
Procedure
- Log in as a user with cluster-admin privileges.
- View the self-provisioners cluster role binding usage by running the following command:
$ oc describe clusterrolebinding.rbac self-provisioners
Example output
Name:         self-provisioners
Labels:       <none>
Annotations:  rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:  ClusterRole
  Name:  self-provisioner
Subjects:
  Kind   Name                         Namespace
  ----   ----                         ---------
  Group  system:authenticated:oauth
- Review the subjects in the self-provisioners section.
- Remove the self-provisioner cluster role from the group system:authenticated:oauth.
If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command:
$ oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'
If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command:
$ oc adm policy \
    remove-cluster-role-from-group self-provisioner \
    system:authenticated:oauth
- Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state.
To update the role binding using the CLI:
- Run the following command:
$ oc edit clusterrolebinding.rbac self-provisioners
- In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false, as shown in the following example:
apiVersion: authorization.openshift.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "false"
# ...
To update the role binding by using a single command:
$ oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }'
Log in as an authenticated user and verify that the user can no longer self-provision a project:
$ oc new-project test
Example output
Error from server (Forbidden): You may not request a new project via this API.
Consider customizing this project request message to provide more helpful instructions specific to your organization.
2.3.4. Customizing the project request message
When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default:
You may not request a new project via this API.
Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example:
- To request a project, contact your system administrator at projectname@example.com.
- To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request.
To customize the project request message:
Procedure
Edit the project configuration resource using the web console or CLI.
Using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
Using the CLI:
- Log in as a user with cluster-admin privileges.
- Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
- Update the spec section to include the projectRequestMessage parameter and set the value to your custom message:
Project configuration resource with custom project request message
apiVersion: config.openshift.io/v1
kind: Project
metadata:
# ...
spec:
  projectRequestMessage: <message_string>
# ...
For example:
apiVersion: config.openshift.io/v1
kind: Project
metadata:
# ...
spec:
  projectRequestMessage: To request a project, contact your system administrator at projectname@example.com.
# ...
- After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
Chapter 3. Creating applications
3.1. Using templates
The following sections provide an overview of templates, as well as how to use and create them.
3.1.1. Understanding templates
A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template.
You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console.
3.1.2. Uploading a template
If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic.
Procedure
Upload a template using one of the following methods:
To upload a template to your current project’s template library, pass the JSON or YAML file with the following command:
$ oc create -f <filename>
To upload a template to a different project, use the -n option with the name of the project:
$ oc create -f <filename> -n <project>
The template is now available for selection using the web console or the CLI.
3.1.3. Creating an application by using the web console
You can use the web console to create an application from a template.
Procedure
- Select Developer from the context selector at the top of the web console navigation menu.
- While in the desired project, click +Add.
- Click All services in the Developer Catalog tile.
Click Builder Images under Type to see the available builder images.
Note: Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here:
kind: "ImageStream"
apiVersion: "v1"
metadata:
  name: "ruby"
  creationTimestamp: null
spec:
# ...
  tags:
  - name: "2.6"
    annotations:
      description: "Build and run Ruby 2.6 applications"
      iconClass: "icon-ruby"
      tags: "builder,ruby" 1
      supports: "ruby:2.6,ruby"
    version: "2.6"
# ...
- 1
- Including builder here ensures this image stream tag appears in the web console as a builder.
- Modify the settings in the new application screen to configure the objects to support your application.
3.1.4. Creating objects from templates by using the CLI
You can use the CLI to process templates and use the configuration that is generated to create objects.
3.1.4.1. Adding labels
Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template.
Procedure
Add labels in the template from the command line:
$ oc process -f <filename> -l name=otherLabel
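You can combine this with object creation by piping the labeled output to oc create; the label key and value here are illustrative:
$ oc process -f <filename> -l app=my-app | oc create -f -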
3.1.4.2. Listing parameters
The list of parameters that you can override is listed in the parameters section of the template.
Procedure
You can list parameters with the CLI by using the following command and specifying the file to be used:
$ oc process --parameters -f <filename>
Alternatively, if the template is already uploaded:
$ oc process --parameters -n <project> <template_name>
For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project:
$ oc process --parameters -n openshift rails-postgresql-example
Example output
NAME                        DESCRIPTION                                                                                        GENERATOR    VALUE
SOURCE_REPOSITORY_URL       The URL of the repository with your application source code                                                     https://github.com/sclorg/rails-ex.git
SOURCE_REPOSITORY_REF       Set this to a branch name, tag or other ref of your repository if you are not using the default branch
CONTEXT_DIR                 Set this to the relative path to your project if it is not in the root of your repository
APPLICATION_DOMAIN          The exposed hostname that will route to the Rails service                                                       rails-postgresql-example.openshiftapps.com
GITHUB_WEBHOOK_SECRET       A secret string used to configure the GitHub webhook                                              expression   [a-zA-Z0-9]{40}
SECRET_KEY_BASE             Your secret key for verifying the integrity of signed cookies                                     expression   [a-z0-9]{127}
APPLICATION_USER            The application user that is used within the sample application to authorize access on pages                   openshift
APPLICATION_PASSWORD        The application password that is used within the sample application to authorize access on pages               secret
DATABASE_SERVICE_NAME       Database service name                                                                                           postgresql
POSTGRESQL_USER             database username                                                                                  expression   user[A-Z0-9]{3}
POSTGRESQL_PASSWORD         database password                                                                                  expression   [a-zA-Z0-9]{8}
POSTGRESQL_DATABASE         database name                                                                                                   root
POSTGRESQL_MAX_CONNECTIONS  database max connections                                                                                        10
POSTGRESQL_SHARED_BUFFERS   database shared buffers                                                                                         12MB
The output identifies several parameters that are generated with a regular expression-like generator when the template is processed.
3.1.4.3. Generating a list of objects
Using the CLI, you can process a file defining a template to return the list of objects to standard output.
Procedure
Process a file defining a template to return the list of objects to standard output:
$ oc process -f <filename>
Alternatively, if the template has already been uploaded to the current project:
$ oc process <template_name>
Create objects from a template by processing the template and piping the output to oc create:
$ oc process -f <filename> | oc create -f -
Alternatively, if the template has already been uploaded to the current project:
$ oc process <template> | oc create -f -
You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference can appear in any text field inside the template items.
For example, in the following command, the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables:
Creating a List of objects from a template
$ oc process -f my-rails-postgresql \
    -p POSTGRESQL_USER=bob \
    -p POSTGRESQL_DATABASE=mydatabase
The output can either be redirected to a file or applied directly, without uploading the template, by piping the processed output to the oc create command:
$ oc process -f my-rails-postgresql \
    -p POSTGRESQL_USER=bob \
    -p POSTGRESQL_DATABASE=mydatabase \
    | oc create -f -
If you have a large number of parameters, you can store them in a file and then pass this file to oc process:
$ cat postgres.env
POSTGRESQL_USER=bob
POSTGRESQL_DATABASE=mydatabase
$ oc process -f my-rails-postgresql --param-file=postgres.env
You can also read the environment from standard input by using "-" as the argument to --param-file:
$ sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-
3.1.5. Modifying uploaded templates
You can edit a template that has already been uploaded to your project.
Procedure
Modify a template that has already been uploaded:
$ oc edit template <template>
3.1.6. Using instant app and quick start templates
OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global openshift project so you have access to them.
By default, the templates build using a public source repository on GitHub that contains the necessary application code.
Procedure
You can list the available default instant app and quick start templates with:
$ oc get templates -n openshift
To modify the source and build your own version of the application:
- Fork the repository referenced by the template’s default SOURCE_REPOSITORY_URL parameter.
- Override the value of the SOURCE_REPOSITORY_URL parameter when creating from the template, specifying your fork instead of the default value. By doing this, the build configuration created by the template points to your fork of the application code, and you can modify the code and rebuild the application at will (see the example command after this list).
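For example, assuming you forked the Rails quick start repository, you could point the rails-postgresql-example template at your fork when instantiating it; the fork URL is a placeholder:
$ oc new-app --template=rails-postgresql-example \
    -p SOURCE_REPOSITORY_URL=https://github.com/<your_user>/rails-ex.git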
Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason.
3.1.6.1. Quick start templates
A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application.
To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster, in which case you can simply select it from the web console.
Quick starts refer to a source repository that contains the application source code. To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application.
3.1.6.1.1. Web framework quick start templates
These quick start templates provide a basic application of the indicated framework and language:
- CakePHP: a PHP web framework that includes a MySQL database
- Dancer: a Perl web framework that includes a MySQL database
- Django: a Python web framework that includes a PostgreSQL database
- NodeJS: a NodeJS web application that includes a MongoDB database
- Rails: a Ruby web framework that includes a PostgreSQL database
3.1.7. Writing templates
You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects.
The following is an example of a simple template object definition (YAML):
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: redis-template
  annotations:
    description: "Description"
    iconClass: "icon-redis"
    tags: "database,nosql"
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: redis-master
  spec:
    containers:
    - env:
      - name: REDIS_PASSWORD
        value: ${REDIS_PASSWORD}
      image: dockerfile/redis
      name: master
      ports:
      - containerPort: 6379
        protocol: TCP
parameters:
- description: Password used for Redis authentication
  from: '[A-Z0-9]{8}'
  generate: expression
  name: REDIS_PASSWORD
labels:
  redis: master
3.1.7.1. Writing the template description
The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to for example, Java, PHP, Ruby, and so on.
The following is an example of template description metadata:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: cakephp-mysql-example 1
  annotations:
    openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2
    description: >-
      An example CakePHP application with a MySQL database. For more information
      about using this template, including OpenShift considerations, see
      https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any
      data stored will be lost upon pod destruction. Only use this template for
      testing." 3
    openshift.io/long-description: >-
      This template defines resources needed to develop a CakePHP application,
      including a build configuration, application DeploymentConfig, and
      database DeploymentConfig. The database is stored in non-persistent
      storage, so this configuration should be used for experimental purposes
      only. 4
    tags: "quickstart,php,cakephp" 5
    iconClass: icon-php 6
    openshift.io/provider-display-name: "Red Hat, Inc." 7
    openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8
    openshift.io/support-url: "https://access.redhat.com" 9
message: "Your admin credentials are ${ADMIN_USERNAME}:${ADMIN_PASSWORD}" 10
- 1
- The unique name of the template.
- 2
- A brief, user-friendly name, which can be employed by user interfaces.
- 3
- A description of the template. Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs.
- 4
- Additional template description. This may be displayed by the service catalog, for example.
- 5
- Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. The categories can also be customized for the whole cluster.
- 6
- An icon to be displayed with your template in the web console.
Example 3.1. Available icons
- icon-3scale
- icon-aerogear
- icon-amq
- icon-angularjs
- icon-ansible
- icon-apache
- icon-beaker
- icon-camel
- icon-capedwarf
- icon-cassandra
- icon-catalog-icon
- icon-clojure
- icon-codeigniter
- icon-cordova
- icon-datagrid
- icon-datavirt
- icon-debian
- icon-decisionserver
- icon-django
- icon-dotnet
- icon-drupal
- icon-eap
- icon-elastic
- icon-erlang
- icon-fedora
- icon-freebsd
- icon-git
- icon-github
- icon-gitlab
- icon-glassfish
- icon-go-gopher
- icon-golang
- icon-grails
- icon-hadoop
- icon-haproxy
- icon-helm
- icon-infinispan
- icon-jboss
- icon-jenkins
- icon-jetty
- icon-joomla
- icon-jruby
- icon-js
- icon-knative
- icon-kubevirt
- icon-laravel
- icon-load-balancer
- icon-mariadb
- icon-mediawiki
- icon-memcached
- icon-mongodb
- icon-mssql
- icon-mysql-database
- icon-nginx
- icon-nodejs
- icon-openjdk
- icon-openliberty
- icon-openshift
- icon-openstack
- icon-other-linux
- icon-other-unknown
- icon-perl
- icon-phalcon
- icon-php
- icon-play
- iconpostgresql
- icon-processserver
- icon-python
- icon-quarkus
- icon-rabbitmq
- icon-rails
- icon-redhat
- icon-redis
- icon-rh-integration
- icon-rh-spring-boot
- icon-rh-tomcat
- icon-ruby
- icon-scala
- icon-serverlessfx
- icon-shadowman
- icon-spring-boot
- icon-spring
- icon-sso
- icon-stackoverflow
- icon-suse
- icon-symfony
- icon-tomcat
- icon-ubuntu
- icon-vertx
- icon-wildfly
- icon-windows
- icon-wordpress
- icon-xamarin
- icon-zend
- 7
- The name of the person or organization providing the template.
- 8
- A URL referencing further documentation for the template.
- 9
- A URL where support can be obtained for the template.
- 10
- An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any next-steps documentation that users should follow.
3.1.7.2. Writing template labels
Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template.
The following is an example of template object labels:
kind: "Template" apiVersion: "v1" ... labels: template: "cakephp-mysql-example" 1 app: "${NAME}" 2
3.1.7.3. Writing template parameters
Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways:
- As a string value by placing values in the form ${PARAMETER_NAME} in any string field in the template.
- As a JSON or YAML value by placing values in the form ${{PARAMETER_NAME}} in place of any field in the template.
When using the ${PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://${PARAMETER_1}${PARAMETER_2}". Both parameter values are substituted and the resulting value is a quoted string.
When using the ${{PARAMETER_NAME}} syntax, only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string.
A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template.
A default value can be provided, which is used if you do not supply a different value:
The following is an example of setting an explicit value as the default value:
parameters:
- name: USERNAME
  description: "The user name for Joe"
  value: joe
Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value:
parameters:
- name: PASSWORD
  description: "The random user password"
  generate: expression
  from: "[a-zA-Z0-9]{12}"
In the previous example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers.
The syntax available is not a full regular expression syntax. However, you can use \w, \d, \a, and \A modifiers:
- [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10}.
- [\d]{10} produces 10 numbers. This is equal to [0-9]{10}.
- [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10}.
- [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#$%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10}.
Depending on whether the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. The following examples are equivalent:
Example YAML template with a modifier
parameters:
- name: singlequoted_example
  generate: expression
  from: '[\A]{10}'
- name: doublequoted_example
  generate: expression
  from: "[\\A]{10}"
Example JSON template with a modifier
{ "parameters": [ { "name": "json_example", "generate": "expression", "from": "[\\A]{10}" } ] }
Here is an example of a full template with parameter definitions and references:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: cakephp-mysql-example
    annotations:
      description: Defines how to build the application
  spec:
    source:
      type: Git
      git:
        uri: "${SOURCE_REPOSITORY_URL}" 1
        ref: "${SOURCE_REPOSITORY_REF}"
      contextDir: "${CONTEXT_DIR}"
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: frontend
  spec:
    replicas: "${{REPLICA_COUNT}}" 2
parameters:
- name: SOURCE_REPOSITORY_URL 3
  displayName: Source Repository URL 4
  description: The URL of the repository with your application source code 5
  value: https://github.com/sclorg/cakephp-ex.git 6
  required: true 7
- name: GITHUB_WEBHOOK_SECRET
  description: A secret string used to configure the GitHub webhook
  generate: expression 8
  from: "[a-zA-Z0-9]{40}" 9
- name: REPLICA_COUNT
  description: Number of replicas to run
  value: "2"
  required: true
message: "... The GitHub webhook secret is ${GITHUB_WEBHOOK_SECRET} ..." 10
- 1
- This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated.
- 2
- This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated.
- 3
- The name of the parameter. This value is used to reference the parameter within the template.
- 4
- The user-friendly name for the parameter. This is displayed to users.
- 5
- A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console’s text standards. Do not make this a duplicate of the display name.
- 6
- A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets.
- 7
- Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value.
- 8
- A parameter which has its value generated.
- 9
- The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters.
- 10
- Parameters can be included in the template message. This informs you about generated values.
3.1.7.4. Writing the template object list
The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier.
The following is an example of an object list:
kind: "Template"
apiVersion: "v1"
metadata:
name: my-template
objects:
- kind: "Service" 1
apiVersion: "v1"
metadata:
name: "cakephp-mysql-example"
annotations:
description: "Exposes and load balances the application pods"
spec:
ports:
- name: "web"
port: 8080
targetPort: 8080
selector:
name: "cakephp-mysql-example"
- 1
- The definition of a service, which is created by this template.
If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace.
3.1.7.5. Marking a template as bindable
The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service.
Procedure
Template authors can prevent end users from binding against services provisioned from a given template.
- Prevent end users from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template.
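A minimal sketch of a template with binding disabled follows; the template name is illustrative:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
  annotations:
    template.openshift.io/bindable: "false"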
3.1.7.6. Exposing template object fields
Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap, Secret, Service, and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker.
To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template.
Each annotation key, with its prefix removed, is passed through to become a key in a bind
response.
Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind
response.
Bind
response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name — beginning with a character A-Z
, a-z
, or _
, and being followed by zero or more characters A-Z
, a-z
, 0-9
, or _
.
Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as .
, @
, and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap
datum named my.key
, the required JSONPath expression would be {.data['my\.key']}
. Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}"
.
The following is an example of different objects' fields being exposed:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: ConfigMap
  apiVersion: v1
  metadata:
    name: my-template-config
    annotations:
      template.openshift.io/expose-username: "{.data['my\\.username']}"
  data:
    my.username: foo
- kind: Secret
  apiVersion: v1
  metadata:
    name: my-template-config-secret
    annotations:
      template.openshift.io/base64-expose-password: "{.data['password']}"
  stringData:
    password: <password>
- kind: Service
  apiVersion: v1
  metadata:
    name: my-template-service
    annotations:
      template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}"
  spec:
    ports:
    - name: "web"
      port: 8080
- kind: Route
  apiVersion: route.openshift.io/v1
  metadata:
    name: my-template-route
    annotations:
      template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}"
  spec:
    path: mypath
An example response to a bind operation given the above partial template follows:
{ "credentials": { "username": "foo", "password": "YmFy", "service_ip_port": "172.30.12.34:8080", "uri": "http://route-test.router.default.svc.cluster.local/mypath" } }
Procedure
- Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data.
- If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned.
3.1.7.7. Waiting for template readiness
Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or TemplateInstance API is considered complete.
To use this feature, mark one or more objects of kind Build, BuildConfig, Deployment, DeploymentConfig, Job, or StatefulSet in a template with the following annotation:
"template.alpha.openshift.io/wait-for-ready": "true"
Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails.
For the purposes of instantiation, readiness and failure of each object kind are defined as follows:
Kind | Readiness | Failure
---|---|---
Build | Object reports phase complete. | Object reports phase canceled, error, or failed.
BuildConfig | Latest associated build object reports phase complete. | Latest associated build object reports phase canceled, error, or failed.
Deployment | Object reports new replica set and deployment available. This honors readiness probes defined on the object. | Object reports progressing condition as false.
DeploymentConfig | Object reports new replication controller and deployment available. This honors readiness probes defined on the object. | Object reports progressing condition as false.
Job | Object reports completion. | Object reports that one or more failures have occurred.
StatefulSet | Object reports all replicas ready. This honors readiness probes defined on the object. | Not applicable.
The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the OpenShift Container Platform quick start templates.
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: ...
    annotations:
      # wait-for-ready used on BuildConfig ensures that template instantiation
      # will fail immediately if build fails
      template.alpha.openshift.io/wait-for-ready: "true"
  spec:
    ...
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: ...
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
  spec:
    ...
- kind: Service
  apiVersion: v1
  metadata:
    name: ...
  spec:
    ...
Additional recommendations
- Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly.
- Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag.
- A good template builds and deploys cleanly without requiring modifications after the template is deployed.
3.1.7.8. Creating a template from existing objects
Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML by adding parameters and other customizations to turn it into a template.
Procedure
Export objects in a project in YAML form:
$ oc get -o yaml all > <yaml_filename>
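If you prefer not to export everything with the all alias, a sketch that exports only selected resource types for a single application follows; the resource types and label selector are illustrative:
$ oc get deployment,service,route -l app=my-app -o yaml > my-app-objects.yaml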
You can also substitute a particular resource type or multiple resources instead of
all
. Runoc get -h
for more examples.The object types included in
oc get -o yaml all
are:-
BuildConfig
-
Build
-
DeploymentConfig
-
ImageStream
-
Pod
-
ReplicationController
-
Route
-
Service
-
Using the all
alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources.
3.2. Creating applications by using the Developer perspective
The Developer perspective in the web console provides you the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform:
Getting started resources: Use these resources to help you get started with the Developer Console. You can choose to hide the header using the Options menu.
- Creating applications using samples: Use existing code samples to get started with creating applications on the OpenShift Container Platform.
- Build with guided documentation: Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies.
- Explore new developer features: Explore the new features and resources within the Developer perspective.
Developer catalog: Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project.
- All Services: Browse the catalog to discover services across OpenShift Container Platform.
- Database: Select the required database service and add it to your application.
- Operator Backed: Select and deploy the required Operator-managed service.
- Helm chart: Select the required Helm chart to simplify deployment of applications and services.
- Devfile: Select a devfile from the Devfile registry to declaratively define a development environment.
Event Source: Select an event source to register interest in a class of events from a particular system.
Note: The Managed services option is also available if the RHOAS Operator is installed.
- Git repository: Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git, From Devfile, or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform.
- Container images: Use existing images from an image stream or registry to deploy them on OpenShift Container Platform.
- Pipelines: Use Tekton pipeline to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform.
Serverless: Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform.
- Channel: Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations.
- Samples: Explore the available sample applications to create, build, and deploy an application quickly.
- Quick Starts: Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks.
From Local Machine: Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily.
- Import YAML: Upload a YAML file to create and define resources for building and deploying applications.
- Upload JAR file: Upload a JAR file to build and deploy Java applications.
- Share my Project: Use this option to add or remove users to a project and provide accessibility options to them.
- Helm Chart repositories: Use this option to add Helm Chart repositories in a namespace.
- Re-ordering of resources: Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides.
Note that certain options, such as Pipelines, Event Source, and Import Virtual Machines, are displayed only when the OpenShift Pipelines Operator, OpenShift Serverless Operator, and OpenShift Virtualization Operator are installed, respectively.
3.2.1. Prerequisites
To create applications using the Developer perspective, ensure that:
- You have logged in to the web console.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
To create serverless applications, in addition to the preceding prerequisites, ensure that:
- You have installed the OpenShift Serverless Operator.
3.2.2. Creating sample applications
You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly.
Prerequisites
- You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.
Procedure
- In the +Add view, click the Samples tile to see the Samples page.
- On the Samples page, select one of the available sample applications to see the Create Sample Application form.
In the Create Sample Application Form:
- In the Name field, the deployment name is displayed by default. You can modify this name as required.
- In the Builder Image Version, a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list.
- A sample Git repository URL is added by default.
- Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application.
3.2.3. Creating applications by using Quick Starts
The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks.
Prerequisites
- You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.
Procedure
- In the +Add view, click the Getting Started resources → Build with guided documentation → View all quick starts link to view the Quick Starts page.
- In the Quick Starts page, click the tile for the quick start that you want to use.
- Click Start to begin the quick start.
- Perform the steps that are displayed.
3.2.4. Importing a codebase from Git to create an application
You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub.
The following procedure walks you through the From Git option in the Developer perspective to create an application.
Procedure
- In the +Add view, click From Git in the Git Repository tile to see the Import from git form.
- In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application: https://github.com/sclorg/nodejs-ex. The URL is then validated.
Optional: You can click Show Advanced Git Options to add details such as:
- Git Reference to point to code in a specific branch, tag, or commit to be used to build the application.
- Context Dir to specify the subdirectory for the application source code you want to use to build the application.
- Source Secret to create a Secret Name with credentials for pulling your source code from a private repository.
Optional: You can import a Devfile, a Dockerfile, a Builder Image, or a Serverless Function through your Git repository to further customize your deployment.
- If your Git repository contains a Devfile, a Dockerfile, a Builder Image, or a func.yaml, it is automatically detected and populated on the respective path fields.
- If a Devfile, a Dockerfile, or a Builder Image is detected in the same repository, the Devfile is selected by default.
- If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function.
- Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL.
- To edit the file import type and select a different strategy, click the Edit import strategy option.
- If multiple Devfiles, Dockerfiles, or Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory.
After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, the Node.js builder image is selected by default.
- Optional: Use the Builder Image Version drop-down to specify a version.
- Optional: Use the Edit import strategy to select a different strategy.
- Optional: For the Node.js builder image, use the Run command field to override the command to run the application.
In the General section:
- In the Application field, enter a unique name for the application grouping, for example, myapp. Ensure that the application name is unique in a namespace.
- The Name field, which identifies the resources created for this application, is automatically populated based on the Git repository URL if there are no existing applications. If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned.
Note: The resource name must be unique in a namespace. Modify the resource name if you get an error.
In the Resources section, select:
- Deployment, to create an application in plain Kubernetes style.
- Deployment Config, to create an OpenShift Container Platform style application.
- Serverless Deployment, to create a Knative service.
Note: To set the default resource preference for importing an application, go to User Preferences → Applications → Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation.
In the Pipelines section, select Add Pipeline, and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application.
Note: The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criteria are fulfilled:
- The OpenShift Pipelines Operator is installed
- pipelines-as-code is enabled
- A .tekton directory is detected in the Git repository
Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option:
- Go to Settings → Webhooks and click Add webhook.
- Set the Payload URL to the Pipelines as Code controller public URL.
- Select the content type as application/json.
- Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret, as shown in the example that follows these steps.
- Click Let me select individual events and select these events: Commit comments, Issue comments, Pull request, and Pushes.
- Click Add webhook.
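A minimal way to generate such a webhook secret, assuming openssl is available on your local machine (the byte length shown is only an example), is:
$ openssl rand -hex 20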
Optional: In the Advanced Options section, the Target port and the Create a route to the application options are selected by default so that you can access your application using a publicly available URL.
If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose.
Optional: You can use the following advanced options to further customize your application:
- Routing
By clicking the Routing link, you can perform the following actions:
- Customize the hostname for the route.
- Specify the path the router watches.
- Select the target port for the traffic from the drop-down list.
Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists.
Note: For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used.
- Domain mapping
If you are creating a Serverless Deployment, you can add a custom domain mapping to the Knative service during creation.
In the Advanced options section, click Show advanced Routing options.
- If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu.
- If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com, the Create option is Create "example.com".
- Health Checks
Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required.
To customize the health probes:
- Click Add Readiness Probe, if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe.
- Click Add Liveness Probe, if required, modify the parameters to check if a container is still running, and select the check mark to add the probe.
Click Add Startup Probe, if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe.
For each of the probes, you can specify the request type, HTTP GET, Container Command, or TCP Socket, from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, the number of seconds before performing the first probe after the container starts, the frequency of the probe, and the timeout value.
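As a rough equivalent of what the Health Checks form produces, a readiness probe in a container spec might look like the following sketch; the path, port, and threshold values are illustrative assumptions, not console defaults:
readinessProbe:
  httpGet:
    path: /healthz          # hypothetical health endpoint
    port: 8080              # container target port
  initialDelaySeconds: 5    # seconds to wait after the container starts
  periodSeconds: 10         # how often to run the probe
  timeoutSeconds: 1         # probe timeout
  successThreshold: 1       # consecutive successes required
  failureThreshold: 3       # consecutive failures before the container is marked not ready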
- Build Configuration and Deployment
Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables.
For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource.
- Scaling
Click the Scaling link to define the number of pods or instances of the application you want to deploy initially.
If you are creating a serverless deployment, you can also configure the following settings:
- Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting.
- Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting.
- Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time.
- Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time.
- Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic.
- Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled to zero if no requests are received during this window. The default duration for the autoscale window is 60s. This is also known as the stable window.
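For reference, these scaling fields correspond to Knative autoscaling settings on the service's revision template. The following sketch is illustrative only; the service name, image, and numeric values are assumptions rather than defaults:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service                              # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"       # Min Pods (minScale)
        autoscaling.knative.dev/max-scale: "10"      # Max Pods (maxScale)
        autoscaling.knative.dev/target: "100"        # Concurrency target
        autoscaling.knative.dev/window: "60s"        # Autoscale (stable) window
    spec:
      containerConcurrency: 50                       # Concurrency limit
      containers:
        - image: registry.example.com/example/app:latest   # hypothetical image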
- Resource Limit
- Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running.
- Labels
- Click the Labels link to add custom labels to your application.
- Click Create to create the application. A success notification is displayed, and you can see the build status of the application in the Topology view.
3.2.5. Creating applications by deploying container image
You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster.
Prerequisites
- You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.
Procedure
- In the +Add view, click Container images to view the Deploy Images page.
In the Image section:
- Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry.
- Select an icon for your image in the Runtime icon tab.
In the General section:
- In the Application name field, enter a unique name for the application grouping.
- In the Name field, enter a unique name to identify the resources created for this component.
In the Resource type section, select the resource type to generate:
- Select Deployment to enable declarative updates for Pod and ReplicaSet objects.
- Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources.
- Select Serverless Deployment to enable scaling to zero when idle.
- Click Create. You can view the build status of the application in the Topology view.
3.2.6. Deploying a Java application by uploading a JAR file
You can use the web console Developer perspective to upload a JAR file by using the following options:
- Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application.
- Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application.
- Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application.
Prerequisites
- The Cluster Samples Operator must be installed by a cluster administrator.
- You have access to the OpenShift Container Platform web console and are in the Developer perspective.
Procedure
- In the Topology view, right-click anywhere to view the Add to Project menu.
- Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view.
- In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form.
- The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list.
- Optional: In the Application Name field, enter a unique name for your application to use for resource labelling.
- In the Name field, enter a unique component name for the associated resources.
- Optional: Use the Resource type drop-down list to change the resource type.
- In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application.
- Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs.
If you attempt to close the browser tab while the build is running, a web alert is displayed.
After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view.
3.2.7. Using the Devfile registry to access devfiles
You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry. A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry, you can use a preconfigured devfile to create an application.
Procedure
- Navigate to Developer Perspective → +Add → Developer Catalog → All Services. A list of all the available services in the Developer Catalog is displayed.
- Under Type, click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using its name, tag, or description.
- Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile.
- Click Create to create an application and view the application in the Topology view.
3.2.8. Using the Developer Catalog to add services or components to your application
You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog.
Procedure
- In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog.
- Under All Services, select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service.
Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view.
Figure 3.1. MariaDB in Topology
3.2.9. Additional resources
- For more information about Knative routing settings for OpenShift Serverless, see Routing.
- For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service.
- For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling.
- For more information about adding a new user to a project, see Working with projects.
- For more information about creating a Helm Chart repository, see Creating Helm Chart repositories.
3.3. Creating applications from installed Operators
Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator.
This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console.
Additional resources
- See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform.
3.3.1. Creating an etcd cluster using an Operator
This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM).
Prerequisites
- Access to an OpenShift Container Platform 4.14 cluster.
- The etcd Operator already installed cluster-wide by an administrator.
Procedure
- Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd.
- Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator.
Tip: You can get this list from the CLI using:
$ oc get csv
On the Installed Operators page, click the etcd Operator to view more details and available actions.
As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet, but contain logic specific to managing etcd.
Create a new etcd cluster:
- In the etcd Cluster API box, click Create instance.
- The next page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster; a sketch of what such an object can look like is shown after this procedure. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster.
Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.
Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project.
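For illustration, a minimal EtcdCluster object of the kind this starting template is based on might look like the following; the name and values are assumptions for this example, not the exact template shown in the console:
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example              # hypothetical cluster name
  namespace: my-etcd         # the project created earlier in this procedure
spec:
  size: 3                    # number of etcd members
  version: 3.2.13            # etcd version managed by the Operator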
All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command:
$ oc policy add-role-to-user edit <user> -n <target_project>
You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications.
3.4. Creating applications by using the CLI
You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI.
The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates.
3.4.1. Creating an application from source code
With the new-app command you can create applications from source code in a local or remote Git repository.
The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image.
OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image.
3.4.1.1. Local
To create an application from a Git repository in a local directory:
$ oc new-app /<path to source code>
If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build.
3.4.1.2. Remote
To create an application from a remote Git repository:
$ oc new-app https://github.com/sclorg/cakephp-ex
To create an application from a private remote Git repository:
$ oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret
If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository.
You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory:
$ oc new-app https://github.com/sclorg/s2i-ruby-container.git \
    --context-dir=2.0/test/puma-test-app
Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL:
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
3.4.1.3. Build strategy detection
OpenShift Container Platform automatically determines which build strategy to use by detecting certain files:
- If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy.
Note: The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead.
- If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy.
- If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy.
Override the automatically detected build strategy by setting the --strategy flag to docker, pipeline, or source.
$ oc new-app /home/user/code/myapp --strategy=docker
The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v.
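For example, you can confirm that a usable remote is configured before running new-app; the repository URL and output below are purely illustrative:
$ git remote -v
origin  https://github.com/openshift/ruby-hello-world.git (fetch)
origin  https://github.com/openshift/ruby-hello-world.git (push)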
3.4.1.4. Language detection
If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository:
Language | Files
---|---
dotnet | project.json, *.csproj
jee | pom.xml
nodejs | app.json, package.json
perl | cpanfile, index.pl
php | composer.json, index.php
python | requirements.txt, setup.py
ruby | Gemfile, Rakefile, config.ru
scala | build.sbt
golang | Godeps, main.go
After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name.
You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out.
For example, to use the myproject/my-ruby image stream with the source in a remote repository:
$ oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git
To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository:
$ oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app
Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax.
The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available.
The -i <image> --code <repository> invocation requires that new-app clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image.
3.4.2. Creating an application from an image
You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server.
The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument.
If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes.
3.4.2.1. Docker Hub MySQL image
Create an application from the Docker Hub MySQL image, for example:
$ oc new-app mysql
3.4.2.2. Image in a private registry
To create an application using an image in a private registry, specify the full container image specification:
$ oc new-app myregistry:5000/example/myimage
3.4.2.3. Existing image stream and optional image stream tag
To create an application from an existing image stream and an optional image stream tag:
$ oc new-app my-stream:v1
3.4.3. Creating an application from a template
You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application.
Upload an application template to your current project’s template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json:
$ oc create -f examples/sample-app/application-template-stibuild.json
Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample:
$ oc new-app ruby-helloworld-sample
To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example:
$ oc new-app -f examples/sample-app/application-template-stibuild.json
3.4.3.1. Template parameters
When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template:
$ oc new-app ruby-helloworld-sample \
    -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword
You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=-. The following is an example file called helloworld.params:
ADMIN_USERNAME=admin
ADMIN_PASSWORD=mypassword
Reference the parameters in the file when instantiating a template:
$ oc new-app ruby-helloworld-sample --param-file=helloworld.params
3.4.4. Modifying application creation
The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior.
Object | Description
---|---
BuildConfig | A BuildConfig object is created for each source repository that is specified in the command line. It specifies the strategy to use, the source location, and the build output location.
ImageStreams | For the BuildConfig object, two image streams are usually created: one to represent the input image (the builder image with source builds, or the FROM image with docker builds) and another to represent the output image.
DeploymentConfig | A DeploymentConfig object is created either to deploy the output of a build, or a specified image.
Service | The new-app command attempts to detect exposed ports in input images and uses the lowest numeric exposed port to generate a service that exposes that port.
Other | Other objects can be generated when instantiating templates, according to the template.
3.4.4.1. Specifying environment variables
When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time:
$ oc new-app openshift/postgresql-92-centos7 \
    -e POSTGRESQL_USER=user \
    -e POSTGRESQL_DATABASE=db \
    -e POSTGRESQL_PASSWORD=password
The variables can also be read from a file using the --env-file argument. The following is an example file called postgresql.env:
POSTGRESQL_USER=user
POSTGRESQL_DATABASE=db
POSTGRESQL_PASSWORD=password
Read the variables from the file:
$ oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env
Additionally, environment variables can be given on standard input by using --env-file=-:
$ cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-
Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument.
3.4.4.2. Specifying build environment variables
When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time:
$ oc new-app openshift/ruby-23-centos7 \
    --build-env HTTP_PROXY=http://myproxy.net:1337/ \
    --build-env GEM_HOME=~/.gem
The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env:
HTTP_PROXY=http://myproxy.net:1337/
GEM_HOME=~/.gem
Read the variables from the file:
$ oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env
Additionally, environment variables can be given on standard input by using --build-env-file=-:
$ cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-
3.4.4.3. Specifying labels
When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application.
$ oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world
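After the objects are labeled, you can act on them as a group. For example, the following commands, shown for illustration, list and then delete every object that carries the label:
$ oc get all -l name=hello-world
$ oc delete all -l name=hello-world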
3.4.4.4. Viewing the output without creation
To see a dry run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects.
To output new-app artifacts to a file, run the following:
$ oc new-app https://github.com/openshift/ruby-hello-world \
    -o yaml > myapp.yaml
Edit the file:
$ vi myapp.yaml
Create a new application by referencing the file:
$ oc create -f myapp.yaml
3.4.4.5. Creating objects with different names
Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command:
$ oc new-app https://github.com/openshift/ruby-hello-world --name=myapp
3.4.4.6. Creating objects in a different project
Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument:
$ oc new-app https://github.com/openshift/ruby-hello-world -n myproject
3.4.4.7. Creating multiple objects
The new-app command allows creating multiple applications by specifying multiple parameters to new-app. Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images.
To create an application from a source repository and a Docker Hub image:
$ oc new-app https://github.com/openshift/ruby-hello-world mysql
If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator.
3.4.4.8. Grouping images and source in a single pod
The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group:
$ oc new-app ruby+mysql
To deploy an image built from source and an external image together:
$ oc new-app \
    ruby~https://github.com/openshift/ruby-hello-world \
    mysql \
    --group=ruby+mysql
3.4.4.9. Searching for images, templates, and other inputs
To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP:
$ oc new-app --search php
3.4.4.10. Setting the import mode
To set the import mode when using oc new-app, add the --import-mode flag. This flag can be appended with Legacy or PreserveOriginal, which provides users the option to create image streams using a single sub-manifest, or all manifests, respectively.
$ oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test
$ oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test
3.5. Creating applications using Ruby on Rails
Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform.
Go through the whole tutorial to have an overview of all the steps necessary to run your application on OpenShift Container Platform. If you experience a problem, try reading through the entire tutorial and then going back to your issue. It can also be useful to review your previous steps to ensure that all the steps were run correctly.
3.5.1. Prerequisites
- Basic Ruby and Rails knowledge.
- Locally installed version of Ruby 2.0.0+, Rubygems, Bundler.
- Basic Git knowledge.
- Running instance of OpenShift Container Platform 4.
- Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password.
3.5.2. Setting up the database
Rails applications are almost always used with a database. For local development, use the PostgreSQL database.
Procedure
Install the database:
$ sudo yum install -y postgresql postgresql-server postgresql-devel
Initialize the database:
$ sudo postgresql-setup initdb
This command creates the /var/lib/pgsql/data directory, in which the data is stored.
Start the database:
$ sudo systemctl start postgresql.service
When the database is running, create your rails user:
$ sudo -u postgres createuser -s rails
Note that the user created has no password.
3.5.3. Writing your application
If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application.
Procedure
Install the Rails gem:
$ gem install rails
Example output
Successfully installed rails-4.3.0
1 gem installed
After you install the Rails gem, create a new application with PostgreSQL as your database:
$ rails new rails-app --database=postgresql
Change into your new application directory:
$ cd rails-app
If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile. If not, edit your Gemfile by adding the gem:
gem 'pg'
Generate a new Gemfile.lock with all your dependencies:
$ bundle install
In addition to using the postgresql database with the pg gem, you also must ensure that config/database.yml is using the postgresql adapter.
Make sure you update the default section in the config/database.yml file, so it looks like this:
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  host: localhost
  username: rails
  password: <password>
Create your application’s development and test databases:
$ rake db:create
This creates the development and test databases in your PostgreSQL server.
3.5.3.1. Creating a welcome page
Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page.
To have a custom welcome page, you must do the following steps:
- Create a controller with an index action.
- Create a view page for the welcome controller index action.
- Create a route that serves the application's root page with the created controller and view.
Rails offers a generator that completes all necessary steps for you.
Procedure
Run Rails generator:
$ rails generate controller welcome index
All the necessary files are created.
Edit line 2 in the config/routes.rb file as follows:
root 'welcome#index'
Run the rails server to verify the page is available:
$ rails server
You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug.
3.5.3.2. Configuring application for OpenShift Container Platform
To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation.
Procedure
Edit the default section in your config/database.yml with pre-defined variables as follows:
Sample config/database.yml file
<% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %>
<% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %>
<% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %>

default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %>
  username: <%= user %>
  password: <%= password %>
  host: <%= ENV["#{db_service}_SERVICE_HOST"] %>
  port: <%= ENV["#{db_service}_SERVICE_PORT"] %>
  database: <%= ENV["POSTGRESQL_DATABASE"] %>
3.5.3.3. Storing your application in Git
Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it.
Prerequisites
- Install git.
Procedure
Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like:
$ ls -1
Example output
app
bin
config
config.ru
db
Gemfile
Gemfile.lock
lib
log
public
Rakefile
README.rdoc
test
tmp
vendor
Run the following commands in your Rails app directory to initialize and commit your code to git:
$ git init
$ git add .
$ git commit -m "initial commit"
After your application is committed, you must push it to a remote repository, for example a GitHub account in which you create a new repository.
Set the remote that points to your git repository:
$ git remote add origin git@github.com:<namespace/repository-name>.git
Push your application to your remote git repository.
$ git push
3.5.4. Deploying your application to OpenShift Container Platform
You can deploy your application to OpenShift Container Platform.
After creating the rails-app project, you are automatically switched to the new project namespace.
Deploying your application in OpenShift Container Platform involves three steps:
- Creating a database service from OpenShift Container Platform’s PostgreSQL image.
- Creating a frontend service from OpenShift Container Platform’s Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service.
- Creating a route for your application.
Procedure
To deploy your Ruby on Rails application, create a new project for the application:
$ oc new-project rails-app --description="My Rails application" --display-name="Rails Application"
3.5.4.1. Creating the database service
Your Rails application expects a running database service. For this service, use the PostgreSQL database image.
To create the database service, use the oc new-app command. To this command you must pass some necessary environment variables which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows:
- POSTGRESQL_DATABASE
- POSTGRESQL_USER
- POSTGRESQL_PASSWORD
Setting these variables ensures:
- A database exists with the specified name.
- A user exists with the specified name.
- The user can access the specified database with the specified password.
Procedure
Create the database service:
$ oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password
To also set the password for the database administrator, append the following to the previous command, as shown in the combined example at the end of this procedure:
-e POSTGRESQL_ADMIN_PASSWORD=admin_pw
Watch the progress:
$ oc get pods --watch
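Put together, a single command that creates the database service and also sets the administrator password might look like this; the values are the placeholders used in the steps above:
$ oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_ADMIN_PASSWORD=admin_pw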
3.5.4.2. Creating the frontend service
To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives.
Procedure
Create the frontend service and specify the database-related environment variables that were set up when creating the database service:
$ oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql
With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app.
Verify the environment variables have been added by viewing the JSON document of the rails-app deployment config:
$ oc get dc rails-app -o json
You should see the following section:
Example output
env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ],
Check the build process:
$ oc logs -f build/rails-app-1
After the build is complete, look at the running pods in OpenShift Container Platform:
$ oc get pods
You should see a line starting with myapp-<number>-<hash>, and that is your application running in OpenShift Container Platform.
Before your application is functional, you must initialize the database by running the database migration script. There are two ways you can do this:
Manually from the running frontend container:
Exec into the frontend container with the rsh command:
$ oc rsh <frontend_pod_id>
Run the migration from inside the container:
$ RAILS_ENV=production bundle exec rake db:migrate
If you are running your Rails application in a development or test environment, you do not have to specify the RAILS_ENV environment variable.
- By adding pre-deployment lifecycle hooks in your template.
3.5.4.3. Creating a route for your application
You can expose a service to create a route for your application.
Procedure
To expose a service by giving it an externally-reachable hostname like www.example.com, use an OpenShift Container Platform route. In this case, you need to expose the frontend service by typing:
$ oc expose service rails-app --hostname=www.example.com
Ensure the hostname you specify resolves into the IP address of the router.
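To verify that the route exists and to see the hostname it exposes, you can list the route; this is a quick optional check:
$ oc get route rails-app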
Chapter 4. Viewing application composition by using the Topology view
The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them.
4.1. Prerequisites
To view your applications in the Topology view and interact with them, ensure that:
- You have logged in to the web console.
- You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform.
- You have created and deployed an application on OpenShift Container Platform using the Developer perspective.
- You are in the Developer perspective.
4.2. Viewing the topology of your application
You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you deploy an application, you are directed automatically to the Graph view where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application.
The Topology view provides you the option to monitor your applications using the List view. Use the List view icon to see a list of all your applications and use the Graph view icon to switch back to the graph view.
You can customize the views as required using the following:
- Use the Find by name field to find the required components. Search results may appear outside of the visible area; click Fit to Screen from the lower-left toolbar to resize the Topology view to show all components.
Use the Display Options drop-down list to configure the Topology view of the various application groupings. The options are available depending on the types of components deployed in the project:
Expand group
- Virtual Machines: Toggle to show or hide the virtual machines.
- Application Groupings: Clear to condense the application groups into cards with an overview of an application group and alerts associated with it.
- Helm Releases: Clear to condense the components deployed as Helm Release into cards with an overview of a given release.
- Knative Services: Clear to condense the Knative Service components into cards with an overview of a given component.
- Operator Groupings: Clear to condense the components deployed with an Operator into cards with an overview of the given group.
Show elements based on Pod Count or Labels
- Pod Count: Select to show the number of pods of a component in the component icon.
- Labels: Toggle to show or hide the component labels.
The Topology view also provides you the Export application option to download your application in the ZIP file format. You can then import the downloaded application to another project or cluster. For more details, see Exporting an application to another project or cluster in the Additional resources section.
4.3. Interacting with applications and components
In the Topology view in the Developer perspective of the web console, the Graph view provides the following options to interact with applications and components:
- Click Open URL to see your application exposed by the route on a public URL.
Click Edit Source code to access your source code and modify it.
Note: This feature is available only when you create applications using the From Git, From Catalog, and the From Dockerfile options.
- Hover your cursor over the lower left icon on the pod to see the name of the latest build and its status. The status of the application build is indicated as New, Pending, Running, Completed, Failed, and Canceled.
The status or phase of the pod is indicated by different colors and tooltips as:
- Running: The pod is bound to a node and all of the containers are created. At least one container is still running or is in the process of starting or restarting.
- Not Ready: The pod is running multiple containers, but not all containers are ready.
- Warning: Containers in the pod are being terminated; however, termination did not succeed. Some containers might be in other states.
- Failed: All containers in the pod terminated, but at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
- Pending: The pod is accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network.
- Succeeded: All containers in the pod terminated successfully and will not be restarted.
- Terminating: When a pod is being deleted, it is shown as Terminating by some kubectl commands. Terminating status is not one of the pod phases. A pod is granted a graceful termination period, which defaults to 30 seconds.
- Unknown: The state of the pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the pod should be running.
After you create an application and an image is deployed, the status is shown as Pending. After the application is built, it is displayed as Running.
Figure 4.1. Application topology
The application resource name is appended with indicators for the different types of resource objects as follows:
- CJ: CronJob
- D: Deployment
- DC: DeploymentConfig
- DS: DaemonSet
- J: Job
- P: Pod
- SS: StatefulSet
- (Knative): A serverless application
Note: Serverless applications take some time to load and display on the Graph view. When you deploy a serverless application, it first creates a service resource and then a revision. After that, it is deployed and displayed on the Graph view. If it is the only workload, you might be redirected to the Add page. After the revision is deployed, the serverless application is displayed on the Graph view.
4.4. Scaling application pods and checking builds and routes
The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Details tabs to scale the application pods, check build status, services, and routes as follows:
Click on the component node to see the Overview panel to the right. Use the Details tab to:
- Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic.
- Check the Labels, Annotations, and Status of the application.
Click the Resources tab to:
- See the list of all the pods, view their status, access logs, and click on the pod to see the pod details.
- See the builds, their status, access logs, and start a new build if needed.
- See the services and routes used by the component.
For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component.
4.5. Adding components to an existing project
You can add components to a project.
Procedure
- Navigate to the +Add view.
- Click Add to Project next to the left navigation pane or press Ctrl+Space.
Search for the component and click the Start/Create/Install button or press Enter to add the component to the project and see it in the topology Graph view.
Figure 4.2. Adding component via quick search
Alternatively, you can also use the available options in the context menu, such as Import from Git, Container Image, Database, From Catalog, Operator Backed, Helm Charts, Samples, or Upload JAR file, by right-clicking in the topology Graph view to add a component to your project.
Figure 4.3. Context menu to add services

4.6. Grouping multiple components within an application
You can use the +Add view to add multiple components or services to your project and use the topology Graph view to group applications and resources within an application group.
Prerequisites
- You have created and deployed a minimum of two components on OpenShift Container Platform using the Developer perspective.
Procedure
To add a service to the existing application group, press Shift and drag it to the existing application group. Dragging a component and adding it to an application group adds the required labels to the component.
Figure 4.4. Application grouping
Alternatively, you can also add the component to an application as follows:
- Click the service pod to see the Overview panel to the right.
- Click the Actions drop-down menu and select Edit Application Grouping.
- In the Edit Application Grouping dialog box, click the Application drop-down list, and select an appropriate application group.
- Click Save to add the service to the application group.
You can remove a component from an application group by selecting the component and pressing Shift while dragging it out of the application group.
4.7. Adding services to your application
To add a service to your application, use the +Add actions from the context menu in the topology Graph view.
In addition to the context menu, you can add services by using the sidebar or hovering and dragging the dangling arrow from the application group.
Procedure
Right-click an application group in the topology Graph view to display the context menu.
Figure 4.5. Add resource context menu
- Use Add to Application to select a method for adding a service to the application group, such as From Git, Container Image, From Dockerfile, From Devfile, Upload JAR file, Event Source, Channel, or Broker.
- Complete the form for the method you choose and click Create. For example, to add a service based on the source code in your Git repository, choose the From Git method, fill in the Import from Git form, and click Create.
4.8. Removing services from your application
In the topology Graph view, remove a service from your application by using the context menu.
Procedure
- Right-click on a service in an application group in the topology Graph view to display the context menu.
- Select Delete Deployment to delete the service.
Figure 4.6. Deleting deployment option
4.9. Labels and annotations used for the Topology view
The Topology view uses the following labels and annotations:
- Icon displayed in the node: Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label. This matching is done using a predefined set of icons.
- Link to the source code editor or the source: The app.openshift.io/vcs-uri annotation is used to create links to the source code editor.
- Node Connector: The app.openshift.io/connects-to annotation is used to connect the nodes.
- App grouping: The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components.
For detailed information on the labels and annotations OpenShift Container Platform applications must use, see Guidelines for labels and annotations for OpenShift applications.
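For illustration, the following is a minimal sketch of how these labels and annotations might appear on a Deployment; the names my-app, my-application, my-backend, and the repository URL are placeholders rather than values used elsewhere in this guide:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app.kubernetes.io/part-of: my-application    # groups this workload into the "my-application" application
    app.kubernetes.io/name: my-app
    app.openshift.io/runtime: nodejs             # selects the runtime icon shown in the node
  annotations:
    app.openshift.io/vcs-uri: "https://github.com/example/my-app"   # link to the source code editor
    app.openshift.io/connects-to: "my-backend"   # draws a connector to the my-backend node
spec:
  # ...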
4.10. Additional resources
- See Importing a codebase from Git to create an application for more information on creating an application from Git.
- See Connecting an application to a service using the Developer perspective.
- See Exporting applications.
Chapter 5. Exporting applications
As a developer, you can export your application in the ZIP file format. Based on your needs, import the exported application to another project in the same cluster or a different cluster by using the Import YAML option in the +Add view. Exporting your application helps you reuse your application resources and saves time.
5.1. Prerequisites
- You have installed the gitops-primer Operator from the OperatorHub.
Note: The Export application option is disabled in the Topology view even after installing the gitops-primer Operator.
- You have created an application in the Topology view to enable Export application.
5.2. Procedure
In the Developer perspective, perform one of the following steps:
- Navigate to the +Add view and click Export application in the Application portability tile.
- Navigate to the Topology view and click Export application.
- Click OK in the Export Application dialog box. A notification opens to confirm that the export of resources from your project has started.
Optional steps that you might need to perform in the following scenarios:
- If you have started exporting an incorrect application, click Export application → Cancel Export.
- If your export is already in progress and you want to start a fresh export, click Export application → Restart Export.
- If you want to view logs associated with exporting an application, click Export application and then click the View Logs link.
- After a successful export, click Download in the dialog box to download application resources in ZIP format onto your machine.
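As a rough sketch of reusing the exported archive from the CLI (assuming the download is named export.zip and you have already switched to the target project; the Import YAML option in the web console achieves the same result), you might unpack it and apply the manifests:
$ unzip export.zip -d exported-app
$ oc apply -R -f exported-app/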
Chapter 6. Connecting applications to services
6.1. Release notes for Service Binding Operator
The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding
resource.
With Service Binding Operator, you can:
- Bind your workloads together with Operator-managed backing services.
- Automate configuration of binding data.
- Provide service operators a low-touch administrative experience to provision and manage access to services.
- Enrich development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments.
The custom resource definition (CRD) of the Service Binding Operator supports the following APIs:
- Service Binding with the binding.operators.coreos.com API group.
- Service Binding (Spec API) with the servicebinding.io API group.
6.1.1. Support matrix
Some features in the following table are in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
- TP: Technology Preview
- GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Service Binding Operator Version | binding.operators.coreos.com API group | servicebinding.io API group | OpenShift Versions |
---|---|---|---|
1.3.3 | GA | GA | 4.9-4.12 |
1.3.1 | GA | GA | 4.9-4.11 |
1.3 | GA | GA | 4.9-4.11 |
1.2 | GA | GA | 4.7-4.11 |
1.1.1 | GA | TP | 4.7-4.10 |
1.1 | GA | TP | 4.7-4.10 |
1.0.1 | GA | TP | 4.7-4.9 |
1.0 | GA | TP | 4.7-4.9 |
6.1.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see Red Hat CTO Chris Wright’s message.
6.1.3. Release notes for Service Binding Operator 1.3.3
Service Binding Operator 1.3.3 is now available on OpenShift Container Platform 4.9, 4.10, 4.11 and 4.12.
6.1.3.1. Fixed issues
- Before this update, a security vulnerability CVE-2022-41717 was noted for Service Binding Operator. This update fixes the CVE-2022-41717 error and updates the golang.org/x/net package from v0.0.0-20220906165146-f3363e06e74c to v0.4.0. APPSVC-1256
- Before this update, Provisioned Services were only detected if the respective resource had the "servicebinding.io/provisioned-service: true" annotation set, while other Provisioned Services were missed. With this update, the detection mechanism identifies all Provisioned Services correctly based on the "status.binding.name" attribute. APPSVC-1204
6.1.4. Release notes for Service Binding Operator 1.3.1
Service Binding Operator 1.3.1 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11.
6.1.4.1. Fixed issues
- Before this update, a security vulnerability CVE-2022-32149 was noted for Service Binding Operator. This update fixes the CVE-2022-32149 error and updates the golang.org/x/text package from v0.3.7 to v0.3.8. APPSVC-1220
6.1.5. Release notes for Service Binding Operator 1.3
Service Binding Operator 1.3 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11.
6.1.5.1. Removed functionality
- In Service Binding Operator 1.3, the Operator Lifecycle Manager (OLM) descriptor feature has been removed to improve resource utilization. As an alternative to OLM descriptors, you can use CRD annotations to declare binding data.
6.1.6. Release notes for Service Binding Operator 1.2
Service Binding Operator 1.2 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, 4.10, and 4.11.
6.1.6.1. New features
This section highlights what is new in Service Binding Operator 1.2:
- Enable Service Binding Operator to consider optional fields in the annotations by setting the optional flag value to true.
- Support for servicebinding.io/v1beta1 resources.
- Improvements to the discoverability of bindable services by exposing the relevant binding secret without requiring a workload to be present.
6.1.6.2. Known issues
- Currently, when you install Service Binding Operator on OpenShift Container Platform 4.11, the memory footprint of Service Binding Operator increases beyond expected limits. With low usage, however, the memory footprint stays within the expected ranges of your environment or scenarios. In comparison with OpenShift Container Platform 4.10, under stress, both the average and maximum memory footprint increase considerably. This issue is evident in the previous versions of Service Binding Operator as well. There is currently no workaround for this issue. APPSVC-1200
- By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as 0600. As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the /tmp directory and set the appropriate permissions. APPSVC-1127
- There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example:
Example error message
`postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"`
Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds.
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed:
Example: Role binding for Crunchy Postgres Operator
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-binding-crunchy-postgres-viewer
subjects:
- kind: ServiceAccount
  name: service-binding-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: service-binding-crunchy-postgres-viewer-role
- According to the specification, when you change the ClusterWorkloadResourceMapping resources, Service Binding Operator must use the previous version of the ClusterWorkloadResourceMapping resource to remove the binding data that was being projected until now. Currently, when you change the ClusterWorkloadResourceMapping resources, the Service Binding Operator uses the latest version of the ClusterWorkloadResourceMapping resource to remove the binding data. As a result, the Service Binding Operator might remove the binding data incorrectly. As a workaround, perform the following steps:
  - Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource.
  - Modify the ClusterWorkloadResourceMapping resource.
  - Re-apply the ServiceBinding resources that you previously removed in step 1.
6.1.7. Release notes for Service Binding Operator 1.1.1
Service Binding Operator 1.1.1 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10.
6.1.7.1. Fixed issues
- Before this update, a security vulnerability CVE-2021-38561 was noted for Service Binding Operator Helm chart. This update fixes the CVE-2021-38561 error and updates the golang.org/x/text package from v0.3.6 to v0.3.7. APPSVC-1124
- Before this update, users of the Developer Sandbox did not have sufficient permissions to read ClusterWorkloadResourceMapping resources. As a result, Service Binding Operator prevented all service bindings from being successful. With this update, the Service Binding Operator now includes the appropriate role-based access control (RBAC) rules for any authenticated subject including the Developer Sandbox users. These RBAC rules allow the Service Binding Operator to get, list, and watch the ClusterWorkloadResourceMapping resources for the Developer Sandbox users and to process service bindings successfully. APPSVC-1135
6.1.7.2. Known issues
There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example:
Example error message
`postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"`
Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds.
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed:
Example: Role binding for Crunchy Postgres Operator
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-binding-crunchy-postgres-viewer
subjects:
- kind: ServiceAccount
  name: service-binding-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: service-binding-crunchy-postgres-viewer-role
- Currently, when you modify the ClusterWorkloadResourceMapping resources, the Service Binding Operator does not implement correct behavior. As a workaround, perform the following steps:
  - Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource.
  - Modify the ClusterWorkloadResourceMapping resource.
  - Re-apply the ServiceBinding resources that you previously removed in step 1.
6.1.8. Release notes for Service Binding Operator 1.1
Service Binding Operator 1.1 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10.
6.1.8.1. New features
This section highlights what is new in Service Binding Operator 1.1:
Service Binding Options
- Workload resource mapping: Define exactly where binding data needs to be projected for the secondary workloads.
- Bind new workloads using a label selector.
6.1.8.2. Fixed issues
- Before this update, service bindings that used label selectors to pick up workloads did not project service binding data into the new workloads that matched the given label selectors. As a result, the Service Binding Operator could not periodically bind such new workloads. With this update, service bindings now project service binding data into the new workloads that match the given label selector. The Service Binding Operator now periodically attempts to find and bind such new workloads. APPSVC-1083
6.1.8.3. Known issues
There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example:
Example error message
`postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"`
Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds.
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed:
Example: Role binding for Crunchy Postgres Operator
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-binding-crunchy-postgres-viewer
subjects:
- kind: ServiceAccount
  name: service-binding-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: service-binding-crunchy-postgres-viewer-role
- Currently, when you modify the ClusterWorkloadResourceMapping resources, the Service Binding Operator does not implement correct behavior. As a workaround, perform the following steps:
  - Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource.
  - Modify the ClusterWorkloadResourceMapping resource.
  - Re-apply the ServiceBinding resources that you previously removed in step 1.
6.1.9. Release notes for Service Binding Operator 1.0.1
Service Binding Operator 1.0.1 is now available on OpenShift Container Platform 4.7, 4.8, and 4.9.
Service Binding Operator 1.0.1 supports OpenShift Container Platform 4.9 and later running on:
- IBM Power Systems
- IBM Z and LinuxONE
The custom resource definition (CRD) of the Service Binding Operator 1.0.1 supports the following APIs:
- Service Binding with the binding.operators.coreos.com API group.
- Service Binding (Spec API Tech Preview) with the servicebinding.io API group.
Important: Service Binding (Spec API Tech Preview) with the servicebinding.io API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
6.1.9.1. Support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
Technology Preview Features Support Scope
In the table below, features are marked with the following statuses:
- TP: Technology Preview
- GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Feature | Service Binding Operator 1.0.1 |
---|---|
Service Binding (binding.operators.coreos.com API group) | GA |
Service Binding Spec API (servicebinding.io API group) | TP |
6.1.9.2. Fixed issues
- Before this update, binding the data values from a Cluster custom resource (CR) of the postgresql.k8s.enterprisedb.io/v1 API collected the host binding value from the .metadata.name field of the CR. The collected binding value is an incorrect hostname and the correct hostname is available at the .status.writeService field. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to collect the host binding value from the .status.writeService field. The Service Binding Operator uses these modified annotations to project the correct hostname in the host and provider bindings. APPSVC-1040
- Before this update, when you bound a PostgresCluster CR of the postgres-operator.crunchydata.com/v1beta1 API, the binding data values did not include the values for the database certificates. As a result, the application failed to connect to the database. With this update, modifications to the annotations that the Service Binding Operator uses to expose the binding data from the backing service CR now include the database certificates. The Service Binding Operator uses these modified annotations to project the correct ca.crt, tls.crt, and tls.key certificate files. APPSVC-1045
- Before this update, when you bound a PerconaXtraDBCluster custom resource (CR) of the pxc.percona.com API, the binding data values did not include the port and database values. These binding values, along with the others already projected, are necessary for an application to successfully connect to the database service. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to project the additional port and database binding values. The Service Binding Operator uses these modified annotations to project the complete set of binding values that the application can use to successfully connect to the database service. APPSVC-1073
6.1.9.3. Known issues
Currently, when you install the Service Binding Operator in the single namespace installation mode, the absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. In addition, the following error message is generated:
Example error message
`postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"`
Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds.
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed:
Example: Role binding for Crunchy Postgres Operator
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-binding-crunchy-postgres-viewer
subjects:
- kind: ServiceAccount
  name: service-binding-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: service-binding-crunchy-postgres-viewer-role
6.1.10. Release notes for Service Binding Operator 1.0
Service Binding Operator 1.0 is now available on OpenShift Container Platform 4.7, 4.8, and 4.9.
The custom resource definition (CRD) of the Service Binding Operator 1.0 supports the following APIs:
- Service Binding with the binding.operators.coreos.com API group.
- Service Binding (Spec API Tech Preview) with the servicebinding.io API group.
Important: Service Binding (Spec API Tech Preview) with the servicebinding.io API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
6.1.10.1. Support matrix
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
Technology Preview Features Support Scope
In the table below, features are marked with the following statuses:
- TP: Technology Preview
- GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Feature | Service Binding Operator 1.0 |
---|---|
Service Binding (binding.operators.coreos.com API group) | GA |
Service Binding Spec API (servicebinding.io API group) | TP |
6.1.10.2. New features
Service Binding Operator 1.0 supports OpenShift Container Platform 4.9 and later running on:
- IBM Power Systems
- IBM Z and LinuxONE
This section highlights what is new in Service Binding Operator 1.0:
Exposal of binding data from services
- Based on annotations present in CRD, custom resources (CRs), or resources.
- Based on descriptors present in Operator Lifecycle Manager (OLM) descriptors.
- Support for provisioned services
Workload projection
- Projection of binding data as files, with volume mounts.
- Projection of binding data as environment variables.
Service Binding Options
- Bind backing services in a namespace that is different from the workload namespace.
- Project binding data into the specific container workloads.
- Auto-detection of the binding data from resources owned by the backing service CR.
- Compose custom binding data from the exposed binding data.
- Support for non-PodSpec compliant workload resources.
Security
- Support for role-based access control (RBAC).
6.1.11. Additional resources
6.2. Understanding Service Binding Operator
Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider suggests a different way to access their secrets and consume them in a workload. In addition, manual configuration and maintenance of this binding together of workloads and backing services make the process tedious, inefficient, and error-prone.
The Service Binding Operator enables application developers to easily bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection.
6.2.1. Service Binding terminology
This section summarizes the basic terms used in Service Binding.
Term | Description |
---|---|
Service binding | The representation of the action of providing information about a service to a workload. Examples include establishing the exchange of credentials between a Java application and a database that it requires. |
Backing service | Any service or software that the application consumes over the network as part of its normal operation. Examples include a database, a message broker, an application with REST endpoints, an event stream, an Application Performance Monitor (APM), or a Hardware Security Module (HSM). |
Workload (application) | Any process running within a container. Examples include a Spring Boot application, a NodeJS Express application, or a Ruby on Rails application. |
Binding data | Information about a service that you use to configure the behavior of other resources within the cluster. Examples include credentials, connection details, volume mounts, or secrets. |
Binding connection | Any connection that establishes an interaction between the connected components, such as a bindable backing service and an application requiring that backing service. |
6.2.2. About Service Binding Operator
The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding
resource.
As a result, the Service Binding Operator enables workloads to use backing services or external services by automatically collecting and sharing binding data with the workloads. The process involves making the backing service bindable and binding the workload and the service together.
6.2.2.1. Making an Operator-managed backing service bindable
To make a service bindable, as an Operator provider, you need to expose the binding data required by workloads to bind with the services provided by the Operator. You can provide the binding data either as annotations or as descriptors in the CRD of the Operator that manages the backing service.
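For illustration only, the annotation route might look like the following sketch on the CRD of a hypothetical backing service; the group example.com, the resource name databases, and the secret naming template are assumptions, and the annotation format itself is described in detail in "Exposing binding data from a service":
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
  annotations:
    # Expose every key of the secret named "<cr-name>-credentials" as binding data
    service.binding: 'path={.metadata.name}-credentials,objectType=Secret'
spec:
  # ...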
6.2.2.2. Binding a workload together with a backing service
By using the Service Binding Operator, as an application developer, you need to declare the intent of establishing a binding connection. You must create a ServiceBinding
CR that references the backing service. This action triggers the Service Binding Operator to project the exposed binding data into the workload. The Service Binding Operator receives the declared intent and binds the workload together with the backing service.
The CRD of the Service Binding Operator supports the following APIs:
- Service Binding with the binding.operators.coreos.com API group.
- Service Binding (Spec API) with the servicebinding.io API group.
With Service Binding Operator, you can:
- Bind your workloads to Operator-managed backing services.
- Automate configuration of binding data.
- Provide service operators with a low-touch administrative experience to provision and manage access to services.
- Enrich the development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments.
6.2.3. Key features
Exposal of binding data from services
- Based on annotations present in CRD, custom resources (CRs), or resources.
Workload projection
- Projection of binding data as files, with volume mounts.
- Projection of binding data as environment variables.
Service Binding Options
- Bind backing services in a namespace that is different from the workload namespace.
- Project binding data into the specific container workloads.
- Auto-detection of the binding data from resources owned by the backing service CR.
- Compose custom binding data from the exposed binding data.
- Support for non-PodSpec compliant workload resources.
Security
- Support for role-based access control (RBAC).
6.2.4. API differences
The CRD of the Service Binding Operator supports the following APIs:
- Service Binding with the binding.operators.coreos.com API group.
- Service Binding (Spec API) with the servicebinding.io API group.
Both of these API groups have similar features, but they are not completely identical. Here is the complete list of differences between these API groups:
Feature | Supported by the binding.operators.coreos.com API group | Supported by the servicebinding.io API group | Notes |
---|---|---|---|
Binding to provisioned services | Yes | Yes | Not applicable (N/A) |
Direct secret projection | Yes | Yes | Not applicable (N/A) |
Bind as files | Yes | Yes |
|
Bind as environment variables | Yes | Yes |
|
Selecting workload with a label selector | Yes | Yes | Not applicable (N/A) |
Detecting binding resources ( | Yes | No |
The |
Naming strategies | Yes | No |
There is no current mechanism within the |
Container path | Yes | Partial |
Because a service binding of the |
Container name filtering | No | Yes |
The |
Secret path | Yes | No |
The |
Alternative binding sources (for example, binding data from annotations) | Yes | Allowed by Service Binding Operator | The specification requires support for getting binding data from provisioned services and secrets. However, a strict reading of the specification suggests that support for other binding data sources is allowed. Using this fact, Service Binding Operator can pull the binding data from various sources (for example, pulling binding data from annotations). Service Binding Operator supports these sources on both the API groups. |
6.2.5. Additional resources
6.3. Installing Service Binding Operator
This guide walks cluster administrators through the process of installing the Service Binding Operator to an OpenShift Container Platform cluster.
You can install Service Binding Operator on OpenShift Container Platform 4.7 and later.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually.
6.3.1. Installing the Service Binding Operator using the web console
You can install Service Binding Operator using the OpenShift Container Platform OperatorHub. When you install the Service Binding Operator, the custom resources (CRs) required for the service binding configuration are automatically installed along with the Operator.
Procedure
- In the Administrator perspective of the web console, navigate to Operators → OperatorHub.
- Use the Filter by keyword box to search for Service Binding Operator in the catalog. Click the Service Binding Operator tile.
- Read the brief description about the Operator on the Service Binding Operator page. Click Install.
On the Install Operator page:
- Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster.
- Select Automatic for the Approval Strategy. This ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
- Select an Update Channel. By default, the stable channel enables installation of the latest stable and supported release of the Service Binding Operator.
Click Install.
Note: The Operator is installed automatically into the openshift-operators namespace.
- On the Installed Operator — ready for use pane, click View Operator. You will see the Operator listed on the Installed Operators page.
- Verify that the Status is set to Succeeded to confirm successful installation of Service Binding Operator.
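If you prefer to drive the installation from the CLI instead of the web console, the equivalent is to create a Subscription for the Operator. The following is a minimal sketch under the assumption that the Operator package is published as rh-service-binding-operator in the redhat-operators catalog source, as shown in the subscription listing later in this guide:
$ oc apply -f - << EOD
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rh-service-binding-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: rh-service-binding-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOD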
6.3.2. Additional resources
6.4. Getting started with service binding
The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the oc CLI.
- You have installed Service Binding Operator from OperatorHub.
- You have installed the 5.1.2 version of the Crunchy Postgres for Kubernetes Operator from OperatorHub using the v5 Update channel. The installed Operator is available in an appropriate namespace, such as the my-petclinic namespace.
Note: You can create the namespace using the oc create namespace my-petclinic command.
6.4.1. Creating a PostgreSQL database instance
To create a PostgreSQL database instance, you must create a PostgresCluster
custom resource (CR) and configure the database.
Procedure
Create the
PostgresCluster
CR in themy-petclinic
namespace by running the following command in shell:$ oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - "ReadWriteOnce" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - "ReadWriteOnce" resources: requests: storage: 1Gi EOD
The annotations added in this
PostgresCluster
CR enable the service binding connection and trigger the Operator reconciliation.The output verifies that the database instance is created:
Example output
postgrescluster.postgres-operator.crunchydata.com/hippo created
After you have created the database instance, ensure that all the pods in the
my-petclinic
namespace are running:$ oc get pods -n my-petclinic
The output, which takes a few minutes to display, verifies that the database is created and configured:
Example output
NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s
After the database is configured, you can deploy the sample application and connect it to the database service.
6.4.2. Deploying the Spring PetClinic sample application
To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application.
Procedure
Deploy the
spring-petclinic
application with thePostgresCluster
custom resource (CR) by running the following command in shell:$ oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD
The output verifies that the Spring PetClinic sample application is created and deployed:
Example output
deployment.apps/spring-petclinic created service/spring-petclinic created
Note: If you are deploying the application using Container images in the Developer perspective of the web console, you must enter the following environment variables under the Deployment section of the Advanced options:
- Name: SPRING_PROFILES_ACTIVE
- Value: postgres
Verify that the application is not yet connected to the database service by running the following command:
$ oc get pods -n my-petclinic
The output takes a few minutes to display the
CrashLoopBackOff
status:Example output
NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s
At this stage, the pod fails to start. If you try to interact with the application, it returns errors.
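To inspect why the pod is failing at this point, you might check its logs; this is an optional verification step, not part of the original procedure:
$ oc logs deployment/spring-petclinic -n my-petclinic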
Expose the service to create a route for your application:
$ oc expose service spring-petclinic -n my-petclinic
The output verifies that the
spring-petclinic
service is exposed and a route for the Spring PetClinic sample application is created:Example output
route.route.openshift.io/spring-petclinic exposed
You can now use the Service Binding Operator to connect the application to the database service.
6.4.3. Connecting the Spring PetClinic sample application to the PostgreSQL database service
To connect the sample application to the database service, you must create a ServiceBinding
custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application.
Procedure
Create a
ServiceBinding
CR to project the binding data:$ oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD
The output verifies that the
ServiceBinding
CR is created to project the binding data into the sample application.Example output
servicebinding.binding.operators.coreos.com/spring-petclinic created
Verify that the request for service binding is successful:
$ oc get servicebindings -n my-petclinic
Example output
NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s
By default, the values from the binding data of the database service are projected as files into the workload container that runs the sample application. For example, all the values from the Secret resource are projected into the
bindings/spring-petclinic-pgcluster
directory.NoteOptionally, you can also verify that the files in the application contain the projected binding data, by printing out the directory contents:
$ for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'$i' -exec echo -n {}:" " \; -exec cat {} \;'; echo; done
Example output: With all the values from the secret resource
/bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql
Set up the port forwarding from the application port to access the sample application from your local environment:
$ oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic
Example output
Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080
Access http://localhost:8080/petclinic.
You can now remotely access the Spring PetClinic sample application at localhost:8080 and see that the application is now connected to the database service.
6.4.4. Additional resources
6.5. Getting started with service binding on IBM Power, IBM Z, and IBM LinuxONE
The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the oc CLI.
- You have installed the Service Binding Operator from OperatorHub.
6.5.1. Deploying a PostgreSQL Operator
Procedure
- To deploy the Dev4Devs PostgreSQL Operator in the my-petclinic namespace, run the following command in shell:
$ oc apply -f - << EOD
---
apiVersion: v1
kind: Namespace
metadata:
name: my-petclinic
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: postgres-operator-group
namespace: my-petclinic
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: ibm-multiarch-catalog
namespace: openshift-marketplace
spec:
sourceType: grpc
image: quay.io/ibm/operator-registry-<architecture> 1
imagePullPolicy: IfNotPresent
displayName: ibm-multiarch-catalog
updateStrategy:
registryPoll:
interval: 30m
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: postgresql-operator-dev4devs-com
namespace: openshift-operators
spec:
channel: alpha
installPlanApproval: Automatic
name: postgresql-operator-dev4devs-com
source: ibm-multiarch-catalog
sourceNamespace: openshift-marketplace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: database-view
labels:
servicebinding.io/controller: "true"
rules:
- apiGroups:
- postgresql.dev4devs.com
resources:
- databases
verbs:
- get
- list
EOD
- 1
- The Operator image.
-
For IBM Power®:
quay.io/ibm/operator-registry-ppc64le:release-4.9
-
For IBM Z® and IBM® LinuxONE:
quay.io/ibm/operator-registry-s390x:release-4.8
-
For IBM Power®:
Verification
After the Operator is installed, list the Operator subscriptions in the openshift-operators namespace:
$ oc get subs -n openshift-operators
Example output
NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable
6.5.2. Creating a PostgreSQL database instance
To create a PostgreSQL database instance, you must create a Database
custom resource (CR) and configure the database.
Procedure
Create the
Database
CR in themy-petclinic
namespace by running the following command in shell:$ oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: "5432" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: "sampledb" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: "samplepwd" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: "sampleuser" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD
The annotations added in this
Database
CR enable the service binding connection and trigger the Operator reconciliation.The output verifies that the database instance is created:
Example output
database.postgresql.dev4devs.com/sampledatabase created
After you have created the database instance, ensure that all the pods in the
my-petclinic
namespace are running:$ oc get pods -n my-petclinic
The output, which takes a few minutes to display, verifies that the database is created and configured:
Example output
NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s
After the database is configured, you can deploy the sample application and connect it to the database service.
6.5.3. Deploying the Spring PetClinic sample application
To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application.
Procedure
Deploy the
spring-petclinic
application with thePostgresCluster
custom resource (CR) by running the following command in shell:$ oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: "true" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD
The output verifies that the Spring PetClinic sample application is created and deployed:
Example output
deployment.apps/spring-petclinic created service/spring-petclinic created
Note: If you are deploying the application using Container images in the Developer perspective of the web console, you must enter the following environment variables under the Deployment section of the Advanced options:
- Name: SPRING_PROFILES_ACTIVE
- Value: postgres
Verify that the application is not yet connected to the database service by running the following command:
$ oc get pods -n my-petclinic
It takes a few minutes until the
CrashLoopBackOff
status is displayed:Example output
NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s
At this stage, the pod fails to start. If you try to interact with the application, it returns errors.
You can now use the Service Binding Operator to connect the application to the database service.
6.5.4. Connecting the Spring PetClinic sample application to the PostgreSQL database service
To connect the sample application to the database service, you must create a ServiceBinding
custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application.
Procedure
Create a
ServiceBinding
CR to project the binding data:$ oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD
The output verifies that the
ServiceBinding
CR is created to project the binding data into the sample application.Example output
servicebinding.binding.operators.coreos.com/spring-petclinic created
Verify that the request for service binding is successful:
$ oc get servicebindings -n my-petclinic
Example output
NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 47m
By default, the values from the binding data of the database service are projected as files into the workload container that runs the sample application. For example, all the values from the Secret resource are projected into the
bindings/spring-petclinic-pgcluster
directory.
After the service binding is created, you can go to the Topology view to see the visual connection.
Figure 6.1. Connecting spring-petclinic to a sample database
Set up the port forwarding from the application port to access the sample application from your local environment:
$ oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic
Example output
Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080
Access http://localhost:8080.
You can now remotely access the Spring PetClinic sample application at localhost:8080 and see that the application is now connected to the database service.
6.5.5. Additional resources
6.6. Exposing binding data from a service
Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider requires a different way to access their secrets and consume them in a workload.
The Service Binding Operator enables application developers to easily bind workloads together with operator-managed backing services, without any manual procedures to configure the binding connection. For the Service Binding Operator to provide the binding data, as an Operator provider or user who creates backing services, you must expose the binding data to be automatically detected by the Service Binding Operator. Then, the Service Binding Operator automatically collects the binding data from the backing service and shares it with a workload to provide a consistent and predictable experience.
6.6.1. Methods of exposing binding data
This section describes the methods you can use to expose the binding data.
Ensure that you know and understand your workload requirements and environment, and how it works with the provided services.
Binding data is exposed under the following circumstances:
Backing service is available as a provisioned service resource.
The service you intend to connect to is compliant with the Service Binding specification. You must create a Secret resource with all the required binding data values and reference it in the backing service custom resource (CR). The detection of all the binding data values is automatic.
Backing service is not available as a provisioned service resource.
You must expose the binding data from the backing service. Depending on your workload requirements and environment, you can choose any of the following methods to expose the binding data:
- Direct secret reference
- Declaring binding data through custom resource definition (CRD) or CR annotations
- Detection of binding data through owned resources
6.6.1.1. Provisioned service
Provisioned service represents a backing service CR with a reference to a Secret
resource placed in the .status.binding.name
field of the backing service CR.
As an Operator provider or the user who creates backing services, you can use this method to be compliant with the Service Binding specification, by creating a Secret
resource and referencing it in the .status.binding.name
section of the backing service CR. This Secret
resource must provide all the binding data values required for a workload to connect to the backing service.
The following examples show an AccountService
CR that represents a backing service and a Secret
resource referenced from the CR.
Example: AccountService
CR
apiVersion: example.com/v1alpha1
kind: AccountService
name: prod-account-service
spec:
  # ...
status:
  binding:
    name: hippo-pguser-hippo
Example: Referenced Secret
resource
apiVersion: v1
kind: Secret
metadata:
  name: hippo-pguser-hippo
data:
  password: "<password>"
  user: "<username>"
# ...
When creating a service binding resource, you can directly give the details of the AccountService
resource in the ServiceBinding
specification as follows:
Example: ServiceBinding
resource
apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: # ... services: - group: "example.com" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments
Example: ServiceBinding
resource in Specification API
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: account-service
spec:
  # ...
  service:
    apiVersion: example.com/v1alpha1
    kind: AccountService
    name: prod-account-service
  workload:
    apiVersion: apps/v1
    kind: Deployment
    name: spring-petclinic
This method exposes all the keys in the hippo-pguser-hippo
referenced Secret
resource as binding data that is to be projected into the workload.
6.6.1.2. Direct secret reference
You can use this method, if all the required binding data values are available in a Secret
resource that you can reference in your Service Binding definition. In this method, a ServiceBinding
resource directly references a Secret
resource to connect to a service. All the keys in the Secret
resource are exposed as binding data.
Example: Specification with the binding.operators.coreos.com
API
apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: # ... services: - group: "" version: v1 kind: Secret name: hippo-pguser-hippo
Example: Specification that is compliant with the servicebinding.io
API
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: account-service
spec:
  # ...
  service:
    apiVersion: v1
    kind: Secret
    name: hippo-pguser-hippo
6.6.1.3. Declaring binding data through CRD or CR annotations
You can use this method to annotate the resources of the backing service to expose the binding data with specific annotations. Adding annotations under the metadata
section alters the CRs and CRDs of the backing services. Service Binding Operator detects the annotations added to the CRs and CRDs and then creates a Secret
resource with the values extracted based on the annotations.
The following examples show the annotations that are added under the metadata
section and a referenced ConfigMap
object from a resource:
Example: Exposing binding data from a Secret
object defined in the CR annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: my-petclinic
  annotations:
    service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'
# ...
The previous example places the name of the secret name in the {.metadata.name}-pguser-{.metadata.name}
template that resolves to hippo-pguser-hippo
. The template can contain multiple JSONPath expressions.
Example: Referenced Secret
object from a resource
apiVersion: v1
kind: Secret
metadata:
  name: hippo-pguser-hippo
data:
  password: "<password>"
  user: "<username>"
Example: Exposing binding data from a ConfigMap
object defined in the CR annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: my-petclinic
  annotations:
    service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'
# ...
The previous example places the name of the config map in the {.metadata.name}-config
template that resolves to hippo-config
. The template can contain multiple JSONPath expressions.
Example: Referenced ConfigMap
object from a resource
apiVersion: v1
kind: ConfigMap
metadata:
  name: hippo-config
data:
  db_timeout: "10s"
  user: "hippo"
6.6.1.4. Detection of binding data through owned resources
You can use this method if your backing service owns one or more Kubernetes resources such as route, service, config map, or secret that you can use to detect the binding data. In this method, the Service Binding Operator detects the binding data from resources owned by the backing service CR.
The following examples show the detectBindingResources
API option set to true
in the ServiceBinding
CR:
Example
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-detect-all
  namespace: my-petclinic
spec:
  detectBindingResources: true
  services:
  - group: postgres-operator.crunchydata.com
    version: v1beta1
    kind: PostgresCluster
    name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments
In the previous example, PostgresCluster
custom service resource owns one or more Kubernetes resources such as route, service, config map, or secret.
The Service Binding Operator automatically detects the binding data exposed on each of the owned resources.
6.6.2. Data model
The data model used in the annotations follows specific conventions.
Service binding annotations must use the following convention:
service.binding(/<NAME>)?: "<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)"
where:
- <NAME>: Specifies the name under which the binding value is to be exposed. You can exclude it only when the objectType parameter is set to Secret or ConfigMap.
- <VALUE>: Specifies the constant value exposed when no path is set.
The data model provides details on the allowed values and semantics for the path, elementType, objectType, sourceKey, and sourceValue parameters.

Parameter | Description | Default value |
---|---|---|
path | JSONPath template that consists of JSONPath expressions enclosed by curly braces {}. | N/A |
elementType | Specifies whether the value of the element referenced in the path parameter complies with any one of the following types: string, sliceOfStrings, or sliceOfMaps. | string |
objectType | Specifies whether the value of the element indicated in the path parameter refers to a ConfigMap, Secret, or plain string in the current namespace. | Secret, if elementType is non-string |
sourceKey | Specifies the key in the ConfigMap or Secret resource to be added to the binding data. Note: When used with elementType=sliceOfMaps, the sourceKey parameter specifies the key in the slice of maps whose value is used as the key in the binding data. | N/A |
sourceValue | Specifies the key in the slice of maps. Note: The value of this key is extracted and used as the value in the binding data. | N/A |
The sourceKey
and sourceValue
parameters are applicable only if the element indicated in the path
parameter refers to a ConfigMap
or Secret
resource.
6.6.3. Setting annotations mapping to be optional
You can have optional fields in the annotations. For example, a path to the credentials might not be present if the service endpoint does not require authentication. In such cases, a field might not exist in the target path of the annotations. As a result, the Service Binding Operator generates an error by default.
As a service provider, to indicate whether you require annotations mapping, you can set a value for the optional
flag in your annotations when enabling services. Service Binding Operator provides annotations mapping only if the target path is available. When the target path is not available, the Service Binding Operator skips the optional mapping and continues with the projection of the existing mappings without throwing any errors.
Procedure
To make a field in the annotations optional, set the
optional
flag value totrue
:Example
apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true # ...
-
If you set the
optional
flag value tofalse
and the Service Binding Operator is unable to find the target path, the Operator fails the annotations mapping. -
If the
optional
flag has no value set, the Service Binding Operator considers the value asfalse
by default and fails the annotations mapping.
6.6.4. RBAC requirements
To expose the backing service binding data using the Service Binding Operator, you require certain Role-based access control (RBAC) permissions. Specify certain verbs under the rules
field of the ClusterRole
resource to grant the RBAC permissions for the backing service resources. When you define these rules
, you allow the Service Binding Operator to read the binding data of the backing service resources throughout the cluster. If users do not have permissions to read binding data or to modify the application resource, the Service Binding Operator prevents such users from binding services to applications. Adhering to the RBAC requirements avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications.
The Service Binding Operator performs requests against the Kubernetes API using a dedicated service account. By default, this account has permissions to bind services to workloads, both represented by the following standard Kubernetes or OpenShift objects:
-
Deployments
-
DaemonSets
-
ReplicaSets
-
StatefulSets
-
DeploymentConfigs
The Operator service account is bound to an aggregated cluster role, allowing Operator providers or cluster administrators to enable binding custom service resources to workloads. To grant the required permissions within a ClusterRole
, label it with the servicebinding.io/controller
flag and set the flag value to true
. The following example shows how to allow the Service Binding Operator to get
, watch
, and list
the custom resources (CRs) of Crunchy PostgreSQL Operator:
Example: Enable binding to PostgreSQL database instances provisioned by Crunchy PostgreSQL Operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgrescluster-reader
  labels:
    servicebinding.io/controller: "true"
rules:
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - postgresclusters
  verbs:
  - get
  - watch
  - list
...
This cluster role can be deployed during the installation of the backing service Operator.
6.6.5. Categories of exposable binding data
The Service Binding Operator enables you to expose the binding data values from the backing service resources and custom resource definitions (CRDs).
This section provides examples to show how you can use the various categories of exposable binding data. You must modify these examples to suit your work environment and requirements.
6.6.5.1. Exposing a string from a resource
The following example shows how to expose the string from the metadata.name
field of the PostgresCluster
custom resource (CR) as a username:
Example
apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name} # ...
6.6.5.2. Exposing a constant value as the binding item
The following examples show how to expose a constant value from the PostgresCluster
custom resource (CR):
Example: Exposing a constant value
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: my-petclinic
  annotations:
    "service.binding/type": "postgresql" 1
- 1
- Binding
type
to be exposed with thepostgresql
value.
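For reference, the Service Binding Operator collects the exposed binding items into a Secret resource before projecting them into the workload, as described earlier in this section. The following is a minimal sketch of what that generated Secret might contain for the constant value above; the Secret name is a hypothetical placeholder, because the Operator chooses the actual name:

apiVersion: v1
kind: Secret
metadata:
  name: <generated-binding-secret> # hypothetical placeholder; the Operator chooses the actual name
stringData:
  type: postgresql # the constant value exposed by the service.binding/type annotation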
6.6.5.3. Exposing an entire config map or secret that is referenced from a resource
The following examples show how to expose an entire secret through annotations:
Example: Exposing an entire secret through annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'
Example: The referenced secret from the backing service resource
apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>"
6.6.5.4. Exposing a specific entry from a config map or secret that is referenced from a resource
The following examples show how to expose a specific entry from a config map through annotations:
Example: Exposing an entry from a config map through annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'
Example: The referenced config map from the backing service resource
Because the annotation specifies sourceKey=user, only the user entry of the referenced config map is added to the binding data:
apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: "10s" user: "hippo"
6.6.5.5. Exposing a resource definition value
The following example shows how to expose a resource definition value through annotations:
Example: Exposing a resource definition value through annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name} ...
6.6.5.6. Exposing entries of a collection with the key and value from each entry
The following example shows how to expose the entries of a collection with the key and value from each entry through annotations:
Example: Exposing the entries of a collection through annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: my-petclinic
  annotations:
    "service.binding/uri": "path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url"
spec:
# ...
status:
  connections:
  - type: primary
    url: primary.example.com
  - type: secondary
    url: secondary.example.com
  - type: '404'
    url: black-hole.example.com
The following example shows how the collection entries from the previous annotation are projected into the bound application.
Example: Binding data files
/bindings/<binding-name>/uri_primary => primary.example.com
/bindings/<binding-name>/uri_secondary => secondary.example.com
/bindings/<binding-name>/uri_404 => black-hole.example.com
Example: Configuration from a backing service resource
status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com
The previous example projects all of the url values, with keys such as primary, secondary, and 404.
6.6.5.7. Exposing items of a collection with one key per item
The following example shows how to expose the items of a collection with one key per item through annotations:
Example: Exposing the items of a collection through annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: my-petclinic
  annotations:
    "service.binding/tags": "path={.spec.tags},elementType=sliceOfStrings"
spec:
  tags:
  - knowledge
  - is
  - power
The following example shows how the collection items from the previous annotation are projected into the bound application.
Example: Binding data files
/bindings/<binding-name>/tags_0 => knowledge
/bindings/<binding-name>/tags_1 => is
/bindings/<binding-name>/tags_2 => power
Example: Configuration from a backing service resource
spec: tags: - knowledge - is - power
6.6.5.8. Exposing values of collection entries with one key per entry value
The following example shows how to expose the values of collection entries with one key per entry value through annotations:
Example: Exposing the values of collection entries through annotations
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: my-petclinic
  annotations:
    "service.binding/url": "path={.spec.connections},elementType=sliceOfStrings,sourceValue=url"
spec:
  connections:
  - type: primary
    url: primary.example.com
  - type: secondary
    url: secondary.example.com
  - type: '404'
    url: black-hole.example.com
The following example shows how the collection values from the previous annotation are projected into the bound application.
Example: Binding data files
/bindings/<binding-name>/url_0 => primary.example.com
/bindings/<binding-name>/url_1 => secondary.example.com
/bindings/<binding-name>/url_2 => black-hole.example.com
6.6.6. Additional resources
6.7. Projecting binding data
This section provides information on how you can consume the binding data.
6.7.1. Consumption of binding data
After the backing service exposes the binding data, you must project it into the workload so that the workload can access and consume it. The Service Binding Operator automatically projects this data into the workload by using the following methods:
- By default, as files.
- As environment variables, after you set the value of the .spec.bindAsFiles parameter to false in the ServiceBinding resource.
6.7.2. Configuration of the directory path to project the binding data inside workload container
By default, the Service Binding Operator mounts the binding data as files at a specific directory in your workload resource. You can configure the directory path by setting the SERVICE_BINDING_ROOT environment variable in the container where your workload runs. A minimal sketch of setting this variable in a workload follows the directory layout example below.
Example: Binding data mounted as files
$SERVICE_BINDING_ROOT 1
├── account-database 2
│   ├── type 3
│   ├── provider 4
│   ├── uri
│   ├── username
│   └── password
└── transaction-event-stream 5
    ├── type
    ├── connection-count
    ├── uri
    ├── certificates
    └── private-key
- 1
- Root directory.
- 2 5
- Directory that stores the binding data.
- 3
- Mandatory identifier that identifies the type of the binding data projected into the corresponding directory.
- 4
- Optional: Identifier to identify the provider so that the application can identify the type of backing service it can connect to.
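The SERVICE_BINDING_ROOT variable that controls this directory is a regular container environment variable. As a minimal sketch, assuming a hypothetical deployment name, image, and mount path, you might set it on the workload as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-petclinic # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: spring-petclinic
  template:
    metadata:
      labels:
        app: spring-petclinic
    spec:
      containers:
      - name: app
        image: quay.io/example/spring-petclinic:latest # hypothetical image
        env:
        - name: SERVICE_BINDING_ROOT # directory where the binding data files are mounted
          value: /bindings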
To consume the binding data as environment variables, use the built-in language feature of your programming language of choice that can read environment variables.
Example: Python client usage
import os

username = os.getenv("USERNAME")
password = os.getenv("PASSWORD")
Using the binding data directory name to look up the binding data
Service Binding Operator uses the ServiceBinding
resource name (.metadata.name
) as the binding data directory name. The spec also provides a way to override that name through the .spec.name
field. As a result, there is a chance for binding data name collision if there are multiple ServiceBinding
resources in the namespace. However, due to the nature of the volume mount in Kubernetes, the binding data directory will contain values from only one of the Secret
resources.
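The following is a minimal sketch, assuming the binding.operators.coreos.com API group and the resource names used elsewhere in this section, of overriding the binding data directory name through the .spec.name field; the directory name hippo-db is a hypothetical choice:

apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
  namespace: my-petclinic
spec:
  name: hippo-db # hypothetical override; used as the binding data directory name
  services:
  - group: postgres-operator.crunchydata.com
    version: v1beta1
    kind: PostgresCluster
    name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments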
6.7.2.1. Computation of the final path for projecting the binding data as files
The following table summarizes the configuration of how the final path for the binding data projection is computed when files are mounted at a specific directory:
SERVICE_BINDING_ROOT | Final path |
---|---|
Not available | /bindings/<ServiceBinding_ResourceName> |
dir/path/root | dir/path/root/<ServiceBinding_ResourceName> |
In the previous table, the <ServiceBinding_ResourceName>
entry specifies the name of the ServiceBinding
resource that you configure in the .metadata.name
section of the custom resource (CR).
By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as 0600
. As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the /tmp
directory and set the appropriate permissions.
To access and consume the binding data within the existing SERVICE_BINDING_ROOT
environment variable, use the built-in language feature of your programming language of choice that can read environment variables.
Example: Python client usage
from pyservicebinding import binding

try:
    sb = binding.ServiceBinding()
except binding.ServiceBindingRootMissingError as msg:
    # log the error message and retry/exit
    print("SERVICE_BINDING_ROOT env var not set")

sb = binding.ServiceBinding()
bindings_list = sb.bindings("postgresql")
In the previous example, the bindings_list
variable contains the binding data for the postgresql
database service type.
6.7.3. Projecting the binding data
Depending on your workload requirements and environment, you can choose to project the binding data either as files or environment variables.
Prerequisites
You understand the following concepts:
- Environment and requirements of your workload, and how it works with the provided services.
- Consumption of the binding data in your workload resource.
- Configuration of how the final path for data projection is computed for the default method.
- The binding data is exposed from the backing service.
Procedure
-
To project the binding data as files, determine the destination folder by ensuring that the existing
SERVICE_BINDING_ROOT
environment variable is present in the container where your workload runs. -
To project the binding data as environment variables, set the value for the
.spec.bindAsFiles
parameter tofalse
from theServiceBinding
resource in the custom resource (CR).
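For example, the following minimal sketch, reusing the PostgreSQL and Spring PetClinic names from the earlier examples, sets the .spec.bindAsFiles parameter to false so that the binding data is projected as environment variables instead of files:

apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
  namespace: my-petclinic
spec:
  bindAsFiles: false # project the binding data as environment variables
  services:
  - group: postgres-operator.crunchydata.com
    version: v1beta1
    kind: PostgresCluster
    name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments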
6.7.4. Additional resources
6.8. Binding workloads using Service Binding Operator
Application developers must bind a workload to one or more backing services by using a binding secret. This secret is generated for the purpose of storing information to be consumed by the workload.
As an example, consider that the service you want to connect to is already exposing the binding data. In this case, you would also need a workload to be used along with the ServiceBinding
custom resource (CR). By using this ServiceBinding
CR, the workload sends a binding request with the details of the services to bind with.
Example of ServiceBinding
CR
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
  namespace: my-petclinic
spec:
  services: 1
  - group: postgres-operator.crunchydata.com
    version: v1beta1
    kind: PostgresCluster
    name: hippo
  application: 2
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments
As shown in the previous example, you can also directly use a ConfigMap
or a Secret
itself as a service resource to be used as a source of binding data.
6.8.1. Naming strategies
Naming strategies are available only for the binding.operators.coreos.com
API group.
Naming strategies use Go templates to help you define custom binding names through the service binding request. Naming strategies apply for all attributes including the mappings in the ServiceBinding
custom resource (CR).
A backing service projects the binding names as files or environment variables into the workload. If a workload expects the projected binding names in a particular format, but the binding names to be projected from the backing service are not available in that format, then you can change the binding names using naming strategies.
Predefined post-processing functions
While using naming strategies, depending on the expectations or requirements of your workload, you can use the following predefined post-processing functions in any combination to convert the character strings:
-
upper
: Converts the character strings into capital or uppercase letters. -
lower
: Converts the character strings into lowercase letters. -
title
: Converts the character strings where the first letter of each word is capitalized except for certain minor words.
Predefined naming strategies
Binding names declared through annotations are processed for their name change before their projection into the workload according to the following predefined naming strategies:
none
: When applied, there are no changes in the binding names.Example
After the template compilation, the binding names take the
{{ .name }}
form.host: hippo-pgbouncer port: 5432
upper
: Applied when nonamingStrategy
is defined. When applied, converts all the character strings of the binding name key into capital or uppercase letters.Example
After the template compilation, the binding names take the
{{ .service.kind | upper}}_{{ .name | upper }}
form.DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432
If your workload requires a different format, you can define a custom naming strategy and change the binding name using a prefix and a separator, for example,
PORT_DATABASE
.
- When the binding names are projected as files, by default the predefined none naming strategy is applied, and the binding names do not change.
- When the binding names are projected as environment variables and no namingStrategy is defined, by default the predefined upper naming strategy is applied.
- You can override the predefined naming strategies by defining custom naming strategies using different combinations of custom binding names and predefined post-processing functions.
6.8.2. Advanced binding options
You can define the ServiceBinding
custom resource (CR) to use the following advanced binding options:
-
Changing binding names: This option is available only for the
binding.operators.coreos.com
API group. -
Composing custom binding data: This option is available only for the
binding.operators.coreos.com
API group. -
Binding workloads using label selectors: This option is available for both the
binding.operators.coreos.com
andservicebinding.io
API groups.
6.8.2.1. Changing the binding names before projecting them into the workload
You can specify the rules to change the binding names in the .spec.namingStrategy
attribute of the ServiceBinding
CR. For example, consider a Spring PetClinic sample application that connects to the PostgreSQL database. In this case, the PostgreSQL database service exposes the host
and port
fields of the database to use for binding. The Spring PetClinic sample application can access this exposed binding data through the binding names.
Example: Spring PetClinic sample application in the ServiceBinding
CR
# ... application: name: spring-petclinic group: apps version: v1 resource: deployments # ...
Example: PostgreSQL database service in the ServiceBinding
CR
# ... services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo # ...
If namingStrategy
is not defined and the binding names are projected as environment variables, then the host: hippo-pgbouncer
value in the backing service and the projected environment variable would appear as shown in the following example:
Example
DATABASE_HOST: hippo-pgbouncer
where:
- DATABASE: Specifies the kind of the backing service.
- HOST: Specifies the binding name.
After applying the POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV
naming strategy, the list of custom binding names prepared by the service binding request appears as shown in the following example:
Example
POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432
The following items describe the expressions defined in the POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV
naming strategy:
-
.name
: Refers to the binding name exposed by the backing service. In the previous example, the binding names areHOST
andPORT
. -
.service.kind
: Refers to the kind of service resource whose binding names are changed with the naming strategy. -
upper
: String function used to post-process the character string while compiling the Go template string. -
POSTGRESQL
: Prefix of the custom binding name. -
ENV
: Suffix of the custom binding name.
Similar to the previous example, you can define the string templates in namingStrategy
to define how each key of the binding names should be prepared by the service binding request.
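The following minimal sketch, reusing the names from the previous examples, shows where the naming strategy template is declared in the ServiceBinding CR:

apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
  namespace: my-petclinic
spec:
  namingStrategy: 'POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV' # Go template applied to each binding name
  services:
  - group: postgres-operator.crunchydata.com
    version: v1beta1
    kind: PostgresCluster
    name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments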
6.8.2.2. Composing custom binding data
As an application developer, you can compose custom binding data under the following circumstances:
- The backing service does not expose binding data.
- The values exposed are not available in the required format as expected by the workload.
For example, consider a case where the backing service CR exposes the host, port, and database user as binding data, but the workload requires that the binding data be consumed as a connection string. You can compose custom binding data using attributes in the Kubernetes resource representing the backing service.
Example
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
  namespace: my-petclinic
spec:
  services:
  - group: postgres-operator.crunchydata.com
    version: v1beta1
    kind: PostgresCluster
    name: hippo 1
    id: postgresDB 2
  - group: ""
    version: v1
    kind: Secret
    name: hippo-pguser-hippo
    id: postgresSecret
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments
  mappings:
    ## From the database service
    - name: JDBC_URL
      value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}'
    ## From both the services!
    - name: CREDENTIALS
      value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}'
    ## Generate JSON
    - name: DB_JSON 3
      value: {{ json .postgresDB.status }} 4
- 1
- Name of the backing service resource.
- 2
- Optional identifier.
- 3
- The JSON name that the Service Binding Operator generates. The Service Binding Operator projects this JSON name as the name of a file or environment variable.
- 4
- The JSON value that the Service Binding Operator generates. The Service Binding Operator projects this JSON value as a file or environment variable. The JSON value contains the attributes from your specified field of the backing service custom resource.
6.8.2.3. Binding workloads using a label selector
You can use a label selector to specify the workload to bind. If you declare a service binding using the label selectors to pick up workloads, the Service Binding Operator periodically attempts to find and bind new workloads that match the given label selector.
For example, as a cluster administrator, you can bind a service to every Deployment
in a namespace with the environment: production
label by setting an appropriate labelSelector
field in the ServiceBinding
CR. This enables the Service Binding Operator to bind each of these workloads with one ServiceBinding
CR.
Example ServiceBinding
CR in the binding.operators.coreos.com/v1alpha1
API
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: multi-application-binding
  namespace: service-binding-demo
spec:
  application:
    labelSelector: 1
      matchLabels:
        environment: production
    group: apps
    version: v1
    resource: deployments
  services:
  - group: ""
    version: v1
    kind: Secret
    name: super-secret-data
- 1
- Specifies the workload that is being bound.
Example ServiceBinding
CR in the servicebinding.io
API
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: multi-application-binding
  namespace: service-binding-demo
spec:
  workload:
    selector: 1
      matchLabels:
        environment: production
    apiVersion: apps/v1
    kind: Deployment
  service:
    apiVersion: v1
    kind: Secret
    name: super-secret-data
- 1
- Specifies the workload that is being bound.
If you define the following pairs of fields, Service Binding Operator refuses the binding operation and generates an error:
-
The
name
andlabelSelector
fields in thebinding.operators.coreos.com/v1alpha1
API. -
The
name
andselector
fields in theservicebinding.io
API (Spec API).
Understanding the rebinding behavior
Consider a case where, after a successful binding, you use the name
field to identify a workload. If you delete and recreate that workload, the ServiceBinding
reconciler does not rebind the workload, and the Operator cannot project the binding data to the workload. However, if you use the labelSelector
field to identify a workload, the ServiceBinding
reconciler rebinds the workload, and the Operator projects the binding data.
6.8.3. Binding secondary workloads that are not compliant with PodSpec
A typical scenario in service binding involves configuring the backing service, the workload (Deployment), and Service Binding Operator. Consider a scenario that involves a secondary workload (which can also be an application Operator) that is not compliant with PodSpec and is between the primary workload (Deployment) and Service Binding Operator.
For such secondary workload resources, the location of the container path is arbitrary. For service binding, if the secondary workload in a CR is not compliant with the PodSpec, you must specify the location of the container path. Doing so projects the binding data into the container path specified in the secondary workload of the ServiceBinding
custom resource (CR), for example, when you do not want the binding data inside a pod.
In Service Binding Operator, you can configure the path of where containers or secrets reside within a workload and bind these paths at a custom location.
6.8.3.1. Configuring the custom location of the container path
This custom location is available for the binding.operators.coreos.com
API group when Service Binding Operator projects the binding data as environment variables.
Consider a secondary workload CR, which is not compliant with the PodSpec and has containers located at the spec.containers
path:
Example: Secondary workload CR
apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080
Procedure
Configure the
spec.containers
path by specifying a value in theServiceBinding
CR and bind this path to aspec.application.bindingPath.containersPath
custom location:Example:
ServiceBinding
CR with thespec.containers
path in a custom locationapiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: "" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3
After you specify the location of the container path, Service Binding Operator generates the binding data, which becomes available in the container path specified in the secondary workload of the ServiceBinding
CR.
The following example shows the spec.containers
path with the envFrom
and secretRef
fields:
Example: Secondary workload CR with the envFrom
and secretRef
fields
apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: "31793" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}
6.8.3.2. Configuring the custom location of the secret path
This custom location is available for the binding.operators.coreos.com
API group when Service Binding Operator projects the binding data as environment variables.
Consider a secondary workload CR, which is not compliant with the PodSpec, with only the secret at the spec.secret
path:
Example: Secondary workload CR
apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: ""
Procedure
Configure the
spec.secret
path by specifying a value in theServiceBinding
CR and bind this path at aspec.application.bindingPath.secretPath
custom location:Example:
ServiceBinding
CR with thespec.secret
path in a custom locationapiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: ... application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2 ...
After you specify the location of the secret path, Service Binding Operator generates the binding data, which becomes available in the secret path specified in the secondary workload of the ServiceBinding
CR.
The following example shows the spec.secret
path with the binding-request
value:
Example: Secondary workload CR with the binding-request
value
...
apiVersion: "operator.sbo.com/v1"
kind: SecondaryWorkload
metadata:
  name: secondary-workload
spec:
  secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1
...
- 1
- The unique name of the
Secret
resource that Service Binding Operator generates.
6.8.3.3. Workload resource mapping
-
Workload resource mapping is available for the secondary workloads of the
ServiceBinding
custom resource (CR) for both the API groups:binding.operators.coreos.com
andservicebinding.io
. -
You must define
ClusterWorkloadResourceMapping
resources only under theservicebinding.io
API group. However, theClusterWorkloadResourceMapping
resources interact withServiceBinding
resources under both thebinding.operators.coreos.com
andservicebinding.io
API groups.
If you cannot configure custom path locations by using the configuration method for container path, you can define exactly where binding data needs to be projected. Specify where to project the binding data for a given workload kind by defining the ClusterWorkloadResourceMapping
resources in the servicebinding.io
API group.
The following example shows how to define a mapping for the CronJob.batch/v1
resources.
Example: Mapping for CronJob.batch/v1
resources
apiVersion: servicebinding.io/v1beta1
kind: ClusterWorkloadResourceMapping
metadata:
  name: cronjobs.batch 1
spec:
  versions:
  - version: "v1" 2
    annotations: .spec.jobTemplate.spec.template.metadata.annotations 3
    containers:
    - path: .spec.jobTemplate.spec.template.spec.containers[*] 4
    - path: .spec.jobTemplate.spec.template.spec.initContainers[*]
      name: .name 5
      env: .env 6
      volumeMounts: .volumeMounts 7
    volumes: .spec.jobTemplate.spec.template.spec.volumes 8
- 1
- Name of the
ClusterWorkloadResourceMapping
resource, which must be qualified as theplural.group
of the mapped workload resource. - 2
- Version of the resource that is being mapped. Any version that is not specified can be matched with the "*" wildcard.
- 3
- Optional: Identifier of the
.annotations
field in a pod, specified with a fixed JSONPath. The default value is.spec.template.spec.annotations
. - 4
- Identifier of the
.containers
and.initContainers
fields in a pod, specified with a JSONPath. If no entries under thecontainers
field are defined, the Service Binding Operator defaults to two paths:.spec.template.spec.containers[*]
and .spec.template.spec.initContainers[*]
, with all other fields set as their default. However, if you specify an entry, then you must define the.path
field. - 5
- Optional: Identifier of the
.name
field in a container, specified with a fixed JSONPath. The default value is.name
. - 6
- Optional: Identifier of the
.env
field in a container, specified with a fixed JSONPath. The default value is.env
. - 7
- Optional: Identifier of the
.volumeMounts
field in a container, specified with a fixed JSONPath. The default value is.volumeMounts
. - 8
- Optional: Identifier of the
.volumes
field in a pod, specified with a fixed JSONPath. The default value is.spec.template.spec.volumes
.
In this context, a fixed JSONPath is a subset of the JSONPath grammar that accepts only the following operations:
-
Field lookup:
.spec.template
-
Array indexing:
.spec['template']
All other operations are not accepted.
-
Field lookup:
-
Most of these fields are optional. When they are not specified, the Service Binding Operator assumes defaults compatible with
PodSpec
resources. -
The Service Binding Operator requires that each of these fields is structurally equivalent to the corresponding field in a pod deployment. For example, the contents of the
.env
field in a workload resource must be able to accept the same structure of data that the.env
field in a Pod resource would. Otherwise, projecting binding data into such a workload might result in unexpected behavior from the Service Binding Operator.
Behavior specific to the binding.operators.coreos.com
API group
You can expect the following behaviors when ClusterWorkloadResourceMapping
resources interact with ServiceBinding
resources under the binding.operators.coreos.com
API group:
-
If a
ServiceBinding
resource with thebindAsFiles: false
flag value is created together with one of these mappings, then environment variables are projected into the.envFrom
field underneath eachpath
field specified in the correspondingClusterWorkloadResourceMapping
resource. As a cluster administrator, you can specify both a
ClusterWorkloadResourceMapping
resource and the.spec.application.bindingPath.containersPath
field in aServiceBinding.bindings.coreos.com
resource for binding purposes.The Service Binding Operator attempts to project binding data into the locations specified in both a
ClusterWorkloadResourceMapping
resource and the.spec.application.bindingPath.containersPath
field. This behavior is equivalent to adding a container entry to the correspondingClusterWorkloadResourceMapping
resource with thepath: $containersPath
attribute, with all other values taking their default value.
6.8.4. Unbinding workloads from a backing service
You can unbind a workload from a backing service by using the oc
tool.
To unbind a workload from a backing service, delete the
ServiceBinding
custom resource (CR) linked to it:$ oc delete ServiceBinding <.metadata.name>
Example
$ oc delete ServiceBinding spring-petclinic-pgcluster
where:
spring-petclinic-pgcluster
Specifies the name of the
ServiceBinding
CR.
6.8.5. Additional resources
6.9. Connecting an application to a service using the Developer perspective
Use the Topology view for the following purposes:
- Grouping multiple components within an application.
- Connecting components with each other.
- Connecting multiple resources to services with labels.
You can either use a binding or a visual connector to connect components.
A binding connection between the components can be established only if the target node is an Operator-backed service. This is indicated by the Create a binding connector tool-tip, which appears when you drag an arrow to such a target node. When an application is connected to a service by using a binding connector, a ServiceBinding resource is created. Then, the Service Binding Operator controller projects the necessary binding data into the application deployment. After the request is successful, the application is redeployed, establishing an interaction between the connected components.
A visual connector establishes only a visual connection between the components, depicting an intent to connect. No interaction between the components is established. If the target node is not an Operator-backed service the Create a visual connector tool-tip is displayed when you drag an arrow to a target node.
6.9.1. Discovering and identifying Operator-backed bindable services
As a user, if you want to bind your application to a backing service, you must know which services are bindable. Bindable services are services that applications can consume easily because they expose their binding data, such as credentials, connection details, volume mounts, and secrets, in a standard way. The Developer perspective helps you discover and identify such bindable services.
Procedure
To discover and identify Operator-backed bindable services, consider the following alternative approaches:
- Click +Add → Developer Catalog → Operator Backed to see the Operator-backed tiles. Operator-backed services that support service binding features have a Bindable badge on the tiles.
On the left pane of the Operator Backed page, select Bindable.
TipClick the help icon next to Service binding to see more information about bindable services.
- Click +Add → Add and search for Operator-backed services. When you click the bindable service, you can view the Bindable badge in the side panel.
6.9.2. Creating a visual connection between components
You can depict an intent to connect application components by using the visual connector.
This procedure walks you through an example of creating a visual connection between a PostgreSQL Database service and a Spring PetClinic sample application.
Prerequisites
- You have created and deployed a Spring PetClinic sample application by using the Developer perspective.
-
You have created and deployed a Crunchy PostgreSQL database instance by using the Developer perspective. This instance has the following components:
hippo-backup
,hippo-instance
,hippo-repo-host
, andhippo-pgbouncer
.
Procedure
-
In the Developer perspective, switch to the relevant project, for example,
my-petclinic
. Hover over the Spring PetClinic sample application to see a dangling arrow on the node.
Figure 6.2. Visual connector
-
Click and drag the arrow towards the
hippo-pgbouncer
deployment to connect the Spring PetClinic sample application with it. -
Click the
spring-petclinic
deployment to see the Overview panel. Under the Details tab, click the edit icon in the Annotations section to see the Key =app.openshift.io/connects-to
and Value =[{"apiVersion":"apps/v1","kind":"Deployment","name":"hippo-pgbouncer"}]
annotation added to the deployment. Optional: You can repeat these steps to establish visual connections between other applications and components you create.
Figure 6.3. Connecting multiple applications
6.9.3. Creating a binding connection between components
You can create a binding connection with Operator-backed components, as demonstrated in the following example, which uses a PostgreSQL Database service and a Spring PetClinic sample application. To create a binding connection with a service that the PostgreSQL Database Operator backs, you must first add the Red Hat-provided PostgreSQL Database Operator to the OperatorHub, and then install the Operator. The PostgreSQL Database Operator then creates and manages the Database resource, which exposes the binding data in secrets, config maps, status, and spec attributes.
Prerequisites
- You created and deployed a Spring PetClinic sample application in the Developer perspective.
- You installed Service Binding Operator from the OperatorHub.
-
You installed the Crunchy Postgres for Kubernetes Operator from the OperatorHub in the
v5
Update channel. -
You created a PostgresCluster resource in the Developer perspective, which resulted in a Crunchy PostgreSQL database instance with the following components:
hippo-backup
,hippo-instance
,hippo-repo-host
, andhippo-pgbouncer
.
Procedure
-
In the Developer perspective, switch to the relevant project, for example,
my-petclinic
. - In the Topology view, hover over the Spring PetClinic sample application to see a dangling arrow on the node.
- Drag and drop the arrow onto the hippo database icon in the Postgres Cluster to make a binding connection with the Spring PetClinic sample application.
In the Create Service Binding dialog, keep the default name or add a different name for the service binding, and then click Create.
Figure 6.4. Service Binding dialog
- Optional: If there is difficulty in making a binding connection using the Topology view, go to +Add → YAML → Import YAML.
Optional: In the YAML editor, add the
ServiceBinding
resource:apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments
A service binding request is created and a binding connection is created through a
ServiceBinding
resource. When the database service connection request succeeds, the application is redeployed and the connection is established.Figure 6.5. Binding connector
TipYou can also use the context menu by dragging the dangling arrow to add and create a binding connection to an operator-backed service.
Figure 6.6. Context menu to create binding connection
- In the navigation menu, click Topology. The spring-petclinic deployment in the Topology view includes an Open URL link to view its web page.
- Click the Open URL link.
You can now view the Spring PetClinic sample application remotely to confirm that the application is now connected to the database service and that the data has been successfully projected to the application from the Crunchy PostgreSQL database service.
The Service Binding Operator has successfully created a working connection between the application and the database service.
6.9.4. Verifying the status of your service binding from the Topology view
The Developer perspective helps you verify the status of your service binding through the Topology view.
Procedure
If a service binding was successful, click the binding connector. A side panel appears displaying the Connected status under the Details tab.
Optionally, you can view the Connected status on the following pages from the Developer perspective:
- The ServiceBindings page.
- The ServiceBinding details page. In addition, the page title displays a Connected badge.
If a service binding was unsuccessful, the binding connector shows a red arrowhead and a red cross in the middle of the connection. Click this connector to view the Error status in the side panel under the Details tab. Optionally, click the Error status to view specific information about the underlying problem.
You can also view the Error status and a tooltip on the following pages from the Developer perspective:
- The ServiceBindings page.
- The ServiceBinding details page. In addition, the page title displays an Error badge.
In the ServiceBindings page, use the Filter dropdown to list the service bindings based on their status.
6.9.5. Visualizing the binding connections to resources
As a user, use Label Selector in the Topology view to visualize a service binding and simplify the process of binding applications to backing services. When creating ServiceBinding
resources, specify labels by using Label Selector to find and connect applications instead of using the name of the application. The Service Binding Operator then consumes these ServiceBinding
resources and specified labels to find the applications to create a service binding with.
To navigate to a list of all connected resources, click the label selector associated with the ServiceBinding
resource.
To view the Label Selector, consider the following approaches:
After you import a
ServiceBinding
resource, view the Label Selector associated with the service binding on the ServiceBinding details page.Figure 6.7. ServiceBinding details page
To use